HK1132394B - Network delay control - Google Patents
- Publication number: HK1132394B
- Authority: HK (Hong Kong)
Description
Cross reference to other applications
This application claims priority to U.S. provisional patent application 60/562,111, entitled "Network Flow Control," filed April 13, 2004, which is incorporated herein by reference.
Technical Field
The present invention relates generally to networking. More specifically, a delay control technique is disclosed.
Background
In computer networks, packets transmitted from one network node to the next may encounter varying delays. Packets are typically ordered and transmitted one at a time over a network path. When multiple packets are to be transmitted through the network, the resulting negotiation and ordering may introduce a random amount of transmission delay. For example, one packet may be forced to wait for another packet to complete transmission because the other packet began transmission before this packet became available, or because the other packet has a higher priority designation. Thus, the arrival delay of a packet may deviate from the expected transmission delay. This deviation from the expected delay is commonly referred to as jitter. For many network configurations, jitter has a probability distribution function (pdf) that is not a normal distribution. In addition, jitter is a non-stationary random process.
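To make the definition of jitter concrete, the sketch below computes the deviation of each inter-arrival gap from the expected period. The function name and sample values are illustrative, not taken from the patent.

```python
def jitter_samples(arrival_times, expected_period):
    """Deviation of each inter-arrival gap from the expected period.

    arrival_times: monotonically increasing timestamps in seconds.
    expected_period: nominal spacing between packets in seconds.
    """
    gaps = [b - a for a, b in zip(arrival_times, arrival_times[1:])]
    return [g - expected_period for g in gaps]

# Packets expected every 1.0 s; the third packet arrives 5 ms late,
# so one gap deviates by +5 ms and the next by -5 ms.
times = [0.0, 1.0, 2.005, 3.0]
print(jitter_samples(times, 1.0))
```

A distribution of such deviations collected over time is what the text refers to as the jitter pdf.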
Jitter in a network is particularly undesirable for time sensitive packets such as timing packets, voice over Internet Protocol (IP) packets, video streaming packets, or other protocol packets with stringent timing requirements. Jitter can cause significant performance degradation and reduce the utilization of network bandwidth. In addition, for packets that carry timing and frequency information, the non-stationary nature of network jitter makes such information difficult to propagate reliably. Therefore, a way to reduce jitter for time sensitive packets would be desirable. It would also be helpful if the utilization of network bandwidth could be improved. Furthermore, it would be helpful if network jitter could be made a stationary random process.
Drawings
In the following detailed description and the accompanying drawings, various embodiments of the present invention are disclosed.
Fig. 1 illustrates a network delay controller used in one embodiment.
Fig. 2A illustrates a packet processing procedure implemented by the NDC.
Fig. 2B shows a process for the normal mode timer.
Fig. 2C shows a process for a quiet mode timer.
Figure 3A provides a graphical representation of the transmission rate of the NTP during a 12 ms interval.
Figure 3B illustrates an embodiment in which a network delay controller is configured to estimate NTP internal buffer utilization.
Figure 4A illustrates the process of receiving and transmitting packets via NTP.
Fig. 4B illustrates the operation of a normal mode timer for determining when the normal operating mode shown in fig. 4A is active.
Figure 4C shows how the current buffer utilization in NTP is estimated by the propagation timer.
Fig. 5 shows a network configuration in which first, second and third nodes are coupled to transmit network packet data to a network flow controller.
Figures 6A-6D illustrate the operation of NDC to minimize arbitration jitter and NTP-related propagation jitter within a network flow controller.
Figure 7 shows a network configuration that includes a Link Bandwidth Analyzer (LBA) for automatically estimating the bandwidth and input buffer size of a given NTP.
Fig. 8 shows an example sequence of 13 packets transmitted by the second NDC.
Fig. 9 shows an example format of a PIP data packet.
Detailed Description
The invention can be implemented in numerous ways, including as a process, an apparatus, a system, a computer readable medium such as a computer readable storage medium or a computer network wherein program instructions are sent over optical or electronic communication links. In this specification, these implementations, or any other form that the invention may take, may be referred to as techniques. In general, the order of the steps of disclosed processes may be altered within the scope of the invention.
A detailed description of one or more embodiments of the invention is provided below along with accompanying figures that illustrate the principles of the invention. The invention is described in connection with such embodiments, but the invention is not limited to any embodiment. The scope of the invention is limited only by the claims and the invention encompasses numerous alternatives, modifications and equivalents. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. These details are provided as examples and the invention may be practiced according to the claims without some or all of these specific details. For the purpose of clarity, technical material that is known in the technical fields related to the invention has not been described in detail so that the invention is not unnecessarily obscured.
The transmission performance of packet-based networks (e.g., Ethernet, Asynchronous Transfer Mode (ATM), frame relay, etc.) can be improved with respect to delay characteristics. In some embodiments, the average propagation delay of a given packet through a packet-based network, and the variation in propagation delay across a series of such packets, is reduced or minimized. As used herein, the term "timing sensitive packet" refers to a packet for which it may be advantageous or desirable to reduce, minimize, or avoid propagation delay. One example of such a packet is a timing packet used to synchronize two or more processes or operations, e.g., at two different locations. Such timing packets may be used to provide a phase or frequency reference over a network to lock a Phase Locked Loop (PLL) or Frequency Locked Loop (FLL) at a remote location.
As used herein, a packet refers to a unit of data transmitted over a network, such as an ethernet packet, a data cell, or other unit of data according to various protocols and standards. The format of the packets and the amount of data stored in the packets may vary. For ease of illustration, examples using ethernet packets are discussed extensively throughout this specification, but the techniques are applicable to other packet types as well.
Fig. 1 illustrates a network delay controller used in one embodiment. In this example, a first node (101) transmits an information packet to a second node (102) over a network transmission path (104). Nodes 101 and 102 may comprise embedded systems, computer terminals, or entry/exit points to associated networks or network elements. A Network Delay Controller (NDC) (106) is used to transmit packets between the node (101) and a Network Transmission Path (NTP) (104). Virtually all packets received by the second node (102) are sent through the NDC (106). In fig. 1, the maximum data transmission rate from the first node (101) to the NDC (106) is represented as T1. The maximum data rate from the NDC (106) to the network transmission path (104) is denoted as T2. The maximum data rate of the network transmission path (104) from the port connected to the NDC (106) to the port connected to the second node (102) is denoted T3. In this example, assume that the maximum data rate T3 is less than the maximum data rate T1 or T2. The problem associated with this situation is referred to herein as "rate transition jitter", whereby it may be difficult for the receiving end to reliably predict the arrival time of a timing sensitive packet due, at least in part, to the different segments of the end-to-end path that the timing sensitive packet must traverse, which may have different data rates. The assumption that the maximum data rate T3 is less than the maximum data rate T1 or T2 implies that the first node (101) has a network interface with a higher transmission bandwidth than the network transmission path (104) itself. Thus, the first node (101) is capable of data transmission at a faster rate than the NTP (104). One example of such a network transmission path may include a Digital Subscriber Line (DSL) transmission path. The DSL modem may have a "10 base T" ethernet interface to the computer, for example, which is capable of transmitting 10 megabits per second (Mbps). 
The DSL connection itself may have a maximum transmission capacity of 1 Mbps. To work with higher speed transmitters such as the first node (101), the NTP (104) includes storage for packets that arrive from the transmitter in fast but limited bursts and propagate more slowly through the NTP (104). The NTP (104) may comprise a combination of commonly known network elements such as switches, routers, fiber optic media, analog subscriber lines, etc.
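The storage such a rate transition requires follows directly from the rate mismatch. The sketch below assumes cut-through forwarding (the NTP begins transmitting as soon as data arrives); the function name is illustrative.

```python
def backlog_bits(packet_bits, in_rate_bps, out_rate_bps):
    """Peak storage the transmission path needs for one packet that
    arrives at in_rate while draining at out_rate, assuming the path
    starts transmitting as soon as data is received (cut-through)."""
    recv_time = packet_bits / in_rate_bps   # time to receive the whole packet
    drained = out_rate_bps * recv_time      # bits already sent out in that time
    return packet_bits - drained

# A 1500-byte Ethernet packet arriving at 10 Mbps, draining at 1 Mbps
# over the DSL line, leaves roughly 10800 bits buffered in the path.
print(backlog_bits(1500 * 8, 10e6, 1e6))
```

This is the same 10800-bit figure derived in the detailed example later in the description.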
Designated timing sensitive packets are included in the set of packet types transmitted by the first node (101) to the second node (102). Timing sensitive packets are distinguished from other packets by one or more special fields within the packet structure. For the Ethernet protocol, the 16-bit EtherType field, which follows the 48-bit destination address and source address fields, may be used. Alternatively, the 48-bit destination address field or a subset of it, a portion of the source address field, the payload, a combination of various fields within a packet, or any other suitable field may be used. Factors such as the particular application, system configuration, overall efficiency, network protocol, and network medium may affect the choice of field used to distinguish timing sensitive packets. Packets received by the NDC (106) are monitored for the presence of designated timing sensitive packets.
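A classifier based on the EtherType field might look like the sketch below. The value 0x88F7 (the EtherType assigned to IEEE 1588 Precision Time Protocol) is used here only as an example of a timing-packet designation; the patent does not prescribe a specific value.

```python
# Example EtherType used to mark timing sensitive packets. 0x88F7 is the
# IEEE 1588 (PTP) EtherType; any agreed-upon value could serve here.
TIMING_ETHERTYPE = 0x88F7

def is_timing_sensitive(frame: bytes) -> bool:
    """Check the 16-bit EtherType that follows the two 48-bit
    (6-byte) destination and source address fields."""
    if len(frame) < 14:          # too short to hold an Ethernet header
        return False
    ethertype = int.from_bytes(frame[12:14], "big")
    return ethertype == TIMING_ETHERTYPE

# 6-byte destination, 6-byte source, then the EtherType, then payload.
frame = bytes(6) + bytes(6) + (0x88F7).to_bytes(2, "big") + b"payload"
print(is_timing_sensitive(frame))  # True
```

As the text notes, destination address bits, payload contents, or a combination of fields could be tested instead.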
In figure 1, the NDC (106) comprises a rate limiter (107) for limiting the data rate of packets received from the first node (101), such that packet transmissions to the NTP (104) are throttled according to the maximum data rate of the NTP (104). To limit the rate of packets forwarded to the network transmission path (104), the NDC may store some packets received from the first node (101). A non-timing sensitive packet is discarded if no storage space, or insufficient storage space, is available and the packet cannot be forwarded to the network transmission path (104) without exceeding the data rate capacity of the NTP (104). Preferably, all timing sensitive packets received by the NDC (106) are forwarded to the NTP (104). Because the NDC (106) rate limits data according to the maximum data rate of the network transmission path (104), the network transmission path (104) typically does not store more than one maximum size packet, depending on the particular rate limiting technique used in the NDC (106).
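The patent leaves the rate limiting technique open. One common choice, sketched below under that assumption, is a token bucket that accrues credit at the NTP's maximum rate; the class and parameter names are illustrative.

```python
class TokenBucketLimiter:
    """One possible rate limiter for the NDC: tokens (bits of credit)
    accrue at the NTP's maximum rate, and a packet may be forwarded
    only if enough tokens have accumulated."""

    def __init__(self, rate_bps, burst_bits):
        self.rate = rate_bps          # drain rate of the NTP, bits/s
        self.capacity = burst_bits    # largest burst ever allowed, bits
        self.tokens = burst_bits      # start with a full bucket
        self.last = 0.0               # timestamp of the last decision

    def allow(self, now, packet_bits):
        elapsed = now - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        self.last = now
        if packet_bits <= self.tokens:
            self.tokens -= packet_bits
            return True               # forward to the NTP
        return False                  # queue or discard instead

# 1 Mbps NTP; allow at most one 1500-byte (12000-bit) packet per burst.
lim = TokenBucketLimiter(rate_bps=1e6, burst_bits=12000)
print(lim.allow(0.0, 12000))    # True: bucket starts full
print(lim.allow(0.001, 12000))  # False: only ~1000 bits of credit accrued
```

With the burst size set to one maximum-size packet, this matches the text's observation that the NTP rarely holds more than one such packet.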
The NDC (106) is configured to identify a certain timing sensitive packet when it is output from the first node (101). Fig. 2A-2C illustrate a set of operations implemented by the NDC (106), according to some embodiments. Specifically, fig. 2A illustrates a packet processing procedure implemented by the NDC, and fig. 2B and 2C illustrate procedures of a normal mode timer and a quiet mode timer, respectively, used by the NDC as described below. Referring first to fig. 2A, a determination is made as to whether a designated timing sensitive packet is received or stored in a queue (206). In some embodiments, timing sensitive packets are always transmitted as they are received and are never stored in a queue for later transmission. In such embodiments, step 206 includes determining whether a timing sensitive packet has been received. In other embodiments, timing sensitive packets may be stored in a queue under prescribed circumstances, for example, where higher priority timing sensitive packets are received or expected to be received at or near the same time, or where NTP is unavailable at the time the timing sensitive packets are received. If a timing sensitive packet is received or in the queue and no higher priority packets are expected to arrive during the time of transmission of the timing sensitive packet through the NTP, the timing sensitive packet is forwarded to the NTP for transmission (225). If a timing sensitive packet has not been received and there are no timing sensitive packets in the queue, a determination is made as to whether non-timing sensitive packets are received or stored in the queue (208). If a non-timing sensitive packet has not been received and there are no non-timing sensitive packets in the queue, the process starts again and steps 206 and 208 are performed until a timing sensitive packet or a non-timing sensitive packet is received or found to be present in the queue. 
If it is determined in step 208 that a non-timing sensitive packet is received or present in the queue, then a determination is made as to whether quiet mode is active (210). In some embodiments, step 210 includes determining whether a quiet mode timer is set and has not expired or been reset (i.e., whether the timer is running). The quiet mode timer defines a time window during which incoming non-time sensitive packets are buffered to keep the network transmission path clear for transmission of timing sensitive packets that are expected to arrive.
Details of the quiet mode timer are discussed below in conjunction with fig. 2B and 2C. In some embodiments, the time window begins at the time the timing sensitive packet is expected to arrive. In other embodiments, the time window begins a predetermined amount of time before the expected arrival time of the timing sensitive packet, to allow non-timing sensitive packets on the NTP to clear before the timing sensitive packet arrives. In some embodiments, the time window may extend beyond the expected arrival time of the timing sensitive packet, e.g., to allow for a late-arriving timing sensitive packet. In some embodiments, the time window may initially be set to expire at the expected arrival time of the timing sensitive packet, but if the packet does not arrive at the expected time, the window may be extended, for example, by a specified amount or according to a specified algorithm.
If quiet mode is active (i.e., the quiet mode timer is set and has not expired) (210), and it is determined in step 208 that a non-timing sensitive packet is received, the received packet is stored in a queue or discarded when insufficient memory is available (270). If the quiet mode is active (i.e., the quiet mode timer is set and has not expired) (210) and a non-timing sensitive packet has not been received (i.e., the answer determined in step 208 is affirmative due to the presence of one or more non-timing sensitive packets in the queue, rather than due to the receipt of a non-timing packet that has not yet been stored in the queue), then no action is taken in step 270.
If the quiet mode is not active (i.e., the quiet mode timer has expired or has been reset, i.e., is not running) (210), a determination is made as to whether the rate limit threshold has been reached (215). Depending on the implementation of the NDC, various rate limiting techniques may be employed. In one embodiment, the rate is limited such that data is transmitted to the NTP (104) at an average rate that does not exceed the maximum data transmission capacity of the NTP. If the rate limit threshold is reached, the received non-timing sensitive packet (if any) is stored in a queue, or discarded if sufficient storage space is not available (270). If the rate limit threshold is not exceeded, a non-timing sensitive packet is transmitted (220). In some embodiments, in step 220, non-timing sensitive packets are transmitted on a first-in-first-out (FIFO) basis, such that packets stored in the queue (if any) are transmitted in the order in which they were received and before any subsequently received packets are transmitted. In some such embodiments, step 220 includes storing a later-received packet that has not been transmitted in some iteration of step 220 (e.g., because previously received packets are stored in the queue) and that has not yet been stored in the queue, in a queue position after all previously received packets that may be stored there. In other embodiments, a FIFO scheme may not be used, and a received packet that has not previously been stored in the queue may be transmitted before any packets are pulled from the queue for transmission. Once a packet is transmitted (220), the process begins again.
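The dispatch logic of fig. 2A (steps 206 through 270) can be sketched as a single iteration of a loop. The `ndc` object's method names below are illustrative assumptions standing in for the checks and actions the figure describes.

```python
def ndc_step(ndc):
    """One iteration of the fig. 2A packet processing loop.
    `ndc` is assumed to expose the hypothetical queries and actions
    named below; they mirror the numbered steps of the figure."""
    if ndc.timing_packet_available():          # step 206
        ndc.forward_timing_packet()            # step 225: send to the NTP
    elif ndc.non_timing_packet_available():    # step 208
        if ndc.quiet_mode_active():            # step 210
            ndc.store_or_discard_received()    # step 270: hold the line clear
        elif ndc.rate_limit_reached():         # step 215
            ndc.store_or_discard_received()    # step 270: over the threshold
        else:
            ndc.transmit_next_non_timing()     # step 220: FIFO transmit
    # otherwise: nothing received and queues empty; loop again
```

Timing sensitive packets always win the first check, which is what keeps them from waiting behind queued traffic.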
Referring to fig. 2B, a normal mode timer is set (250) whenever a designated timing sensitive packet is received (245). The normal mode timer is preferably set to a value equal to the time interval or specified time at which the next timing sensitive packet is expected to arrive, minus a time value that defines a window before that expected arrival time (referred to herein as "quiet mode"), during which non-timing sensitive packets are not transmitted to the NTP so that the NTP is available when the timing sensitive packet arrives. The normal mode timer expires after the set time value is reached. Once the normal mode timer is set (250), a running (i.e., set and not yet expired) quiet mode timer, if any, is reset (stopped) (252). In some embodiments, step 252 of fig. 2B is performed only upon determining that a timing sensitive packet is transmitted, to ensure that the quiet window remains in effect (i.e., no non-timing sensitive packets are transmitted) until the timing sensitive packet for which the quiet mode timer was set has been transmitted. In some embodiments, step 225 of fig. 2A includes resetting the quiet mode timer if it is set and has not expired, and in such embodiments, step 252 of fig. 2B is omitted.
Referring to fig. 2C, whenever the normal mode timer expires (255), the quiet mode timer is set (260). As described above, the quiet mode timer is set to a time value indicating an interval during which non-timing sensitive packets are not transmitted to the NTP. The quiet mode timer will expire after the quiet mode time value is reached.
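The timer handling of figs. 2B and 2C can be sketched as three small handlers over a shared state dictionary. The field and function names are illustrative assumptions, not terms from the patent.

```python
# Illustrative timer bookkeeping for figs. 2B/2C. A timer value of None
# means "not running"; a float is the interval the timer was armed with.

def on_timing_packet_received(state, normal_interval):
    """Fig. 2B: arm the normal mode timer (step 250) and stop any
    running quiet mode timer (step 252)."""
    state["normal_timer"] = normal_interval
    state["quiet_timer"] = None

def on_normal_timer_expired(state, quiet_interval):
    """Fig. 2C: when the normal mode timer expires (step 255),
    arm the quiet mode timer (step 260)."""
    state["normal_timer"] = None
    state["quiet_timer"] = quiet_interval

def quiet_mode_active(state):
    """Step 210 of fig. 2A: quiet mode holds while the quiet timer runs."""
    return state["quiet_timer"] is not None

state = {"normal_timer": None, "quiet_timer": None}
on_timing_packet_received(state, 0.9892)  # using the 989.2 ms example value
print(quiet_mode_active(state))           # False: normal mode
on_normal_timer_expired(state, 0.02)
print(quiet_mode_active(state))           # True: hold non-timing traffic
```

The 0.9892 s value matches the worked example in the detailed description; any period-minus-drain-time value would be used in practice.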
Preferably the normal mode timer anticipates the period of the designated timing sensitive packets. The quiet mode timer is used to control the flow of non-timing sensitive packets to the NTP (104) such that at the time the timing sensitive packets are expected to arrive, non-timing sensitive packets will not occupy the network transmission path. In some embodiments, the NDC (106) is configured to set the normal mode timer to some interval TN upon receipt of a timing sensitive packet; this interval coincides with the known periodic transmission interval of timing sensitive packets. For example, assume that the network transmission path (104) has a maximum transmission rate T3 of 1 Mbps, and that the interfaces between the first node (101) and the network transmission path (104), with rates T1 and T2, are Ethernet connections having a maximum transmission capacity of 10 Mbps. It is also assumed that the designated timing sensitive packets are transmitted at regular intervals of 1 second. For the remainder of this illustration, the designated timing sensitive packets will be referred to as "1 pps packets," but the techniques are also applicable to timing sensitive packets having different arrival rates.
Before the normal mode timer expires, all non-1 pps packets received by the NDC (106) are rate limited and forwarded to the NTP (104) as appropriate. Figure 3A provides a graphical representation of the transmission rate of the NTP (104) during a 12 ms interval, as adjusted by the NDC (106) in one embodiment. As shown in fig. 3A, a maximum NTP transmission rate of 12 kbits/12 ms, or 1 Mbps, is specified. Also in fig. 3A, a threshold transmission rate of 1200 bits/12 ms, or 100 Kbps, is specified. If the calculated NTP transmission rate exceeds the threshold transmission rate of 100 Kbps, packets received by the NDC (106) are not forwarded to the NTP (104). As shown in fig. 3A, at time T0, a 64 byte packet is forwarded to the NTP (104). Because of the relatively fast NDC/NTP interface, the instantaneous calculated transmission rate quickly rises to a maximum of 38.416 Kbps by the time the entire packet has been transmitted to the NTP (104). As the NTP (104) propagates the packet, the calculated transmission rate over the 12 ms window decays back to zero. At T1, a maximum size packet of 1500 bytes is transmitted to the NTP. When that packet has been transmitted to the NTP (104), the calculated peak transmission rate during the 12 ms interval is 900 Kbps. While the packet propagates through the NTP (104), at time T2, the NDC (106) receives another maximum size packet. However, this packet is not forwarded to the NTP until the calculated transmission rate over the 12 ms time window falls to 100 Kbps, e.g., at or near time T4 in the illustrated example. When this second packet is transmitted to the NTP (104), the calculated NTP (104) transmission rate peaks at 1 Mbps at the point when the packet has been completely transmitted.
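The "calculated transmission rate" of fig. 3A is a trailing-window measurement: bits handed to the NTP within the last 12 ms, divided by the window length. A minimal sketch under that reading (the patent does not prescribe this exact implementation):

```python
def windowed_rate_bps(events, now, window_s=0.012):
    """Calculated NTP transmission rate over a trailing window, as in
    fig. 3A. `events` is a list of (timestamp_s, bits) hand-over records."""
    bits = sum(b for t, b in events if now - window_s < t <= now)
    return bits / window_s

# A 1500-byte packet (12000 bits) handed to the NTP at t=0 drives the
# windowed rate to the 1 Mbps maximum; 13 ms later it has left the window.
events = [(0.0, 1500 * 8)]
print(windowed_rate_bps(events, 0.0))    # ~1 Mbps at the peak
print(windowed_rate_bps(events, 0.013))  # 0.0 once outside the window
```

The forwarding rule of the figure is then simply: hold a packet until this value falls to the 100 Kbps threshold.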
Similarly, a maximum size packet received later at T3 is stored and not forwarded to the NTP until the calculated transmission rate over the 12 ms time window falls to 100 Kbps, i.e., at or near time T5 in the illustrated example. Typically, the maximum size of an Ethernet packet is 1500 bytes. If the network is configured to accept packets of a maximum size of 1500 bytes, transmitting such a packet from the NDC (106) to the NTP (104) takes at most 1500 bytes × 8 bits/byte × 1 second/10 Mbits, or 1.2 milliseconds. If the NTP (104) starts transmitting as soon as data is received from the NDC (106), by the time the NTP (104) has received the entire maximum size packet, the NTP (104) will have transmitted approximately 1200 bits of the same packet at a rate of 1 Mbps. Thus, the NTP (104) would need to store about 12000 bits (1500 bytes × 8 bits/byte) minus 1200 bits, or 10800 bits, to accommodate the largest size packet. To continue the example, assume that the rate limiting function operates to ensure that the calculated NTP transmission rate does not exceed 100 Kbps before packets received by the NDC (106) are passed to the NTP (104). The normal mode timer should therefore be set so that, in the worst case, the maximum amount of data stored in the NTP (104) has propagated out of the NTP (104) by the time the next 1 pps packet is expected to arrive. For 10800 bits transmitted at 1 Mbps, the normal mode timer should be set to expire in 1 second minus 10.8 milliseconds, or 989.2 milliseconds. Upon expiration of the normal mode timer (255), the quiet timer is set (260). The quiet timer should be set to run at least from the expiration of the normal mode timer until the expected time of the next 1 pps packet. Preferably the quiet mode timer is set to run for a slightly longer duration to provide a margin for jitter in the arrival time of the next 1 pps packet.
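The worst-case timer arithmetic above can be reproduced in a few lines; the function name is illustrative.

```python
def normal_timer_value(period_s, max_packet_bytes, in_rate_bps, ntp_rate_bps):
    """Worst-case normal mode timer: the timing packet period minus the
    time needed to drain the residual NTP buffer left by one maximum
    size packet (assuming the NTP transmits while still receiving)."""
    packet_bits = max_packet_bytes * 8
    # Bits still buffered once the packet has been fully handed over:
    residual = packet_bits - ntp_rate_bps * (packet_bits / in_rate_bps)
    return period_s - residual / ntp_rate_bps

# 1 s period, 1500-byte packets, 10 Mbps interface, 1 Mbps NTP:
# residual = 12000 - 1200 = 10800 bits; timer = 1 s - 10.8 ms ≈ 0.9892 s
print(normal_timer_value(1.0, 1500, 10e6, 1e6))
```

This matches the 989.2 ms value in the text.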
After expiration of the normal mode timer and before expiration of the quiet window timer, non-time sensitive packets received from the first node (101) are not forwarded to the NTP (104) but are stored by the NDC (106) when storage space is available. Optionally, a smaller packet may still be transmitted to the NTP (104) if it is determined that the calculated transmission rate over the 12 ms window will have returned to zero before the next timing sensitive packet is expected. Otherwise, non-time sensitive packets that cannot be stored are discarded. Thus, the normal mode timer is preferably set to a value that coincides with the time interval at which the next time sensitive packet is expected, minus the time required to clear all stored data from the network transmission path (104). The quiet timer is preferably set to limit the length of time for which non-timing sensitive data transmissions on the network transmission path (104) are throttled. After the NDC (106) receives, detects, and forwards the timing sensitive packet, the quiet timer is cleared (252) and all other data traffic stored or received by the NDC (106) is rate limited and forwarded to the network transmission path (104). If the next expected timing sensitive packet is significantly delayed or never arrives and the quiet timer expires, then all data traffic stored or subsequently received by the NDC (106) is likewise rate limited and forwarded to the network transmission path (104).
Alternatively, the normal mode timer may be set to expire when the next designated timing sensitive packet is expected to arrive, and the quiet mode timer may be set at some predetermined time before the normal mode timer is set to expire.
In the example shown in figs. 2A-2C, a rate transition function that receives data at 10 Mbps and outputs data at 1 Mbps may be incorporated within the NDC (106). Such a functional configuration or rearrangement does not require changes to the operation of the NDC (106), the normal mode timer, or the quiet timer with respect to processing designated timing sensitive packets. The division of functionality in the examples described in this disclosure is chosen largely for clarity. Other logical and physical groupings and arrangements of network elements are possible depending on the implementation or environment.
In the embodiments shown in figures 1-2C and described above, data transmission to the NTP (104) is throttled according to a worst-case assumption about buffer utilization within the NTP (104), based on the particular rate limiting method used. That is, data is throttled on the assumption that the data buffer within the NTP (104) is no larger than that required to accommodate a maximum size packet, and the NDC (106) limits its rate according to the maximum rate of the NTP (104) and the transmission time of a maximum size packet. Alternatively, the NDC may monitor NTP buffer utilization. This is advantageous when an NTP can store more than one maximum size packet, as it makes better use of the storage space available in the NTP. In some embodiments, the NDC tracks buffer utilization within the network transmission path and forwards incoming non-timing sensitive packets based on the size of the incoming packet, the estimated current amount of data in the NTP buffer, and the time remaining until the next time sensitive packet is expected to arrive.
Figure 3B illustrates an embodiment in which the network delay controller is configured to estimate buffer utilization within NTP, as implied in the preceding paragraph. The network configuration (300) includes an NDC (320) that controls the flow of packets into the NTP (330), in part, by using a buffer utilization estimator (325). In this example, assume that the maximum data rate T3 through NTP is less than the maximum data rates of both T1 and T2. Upon transmission of a packet from the NDC (320) to the NTP (330), the buffer utilization estimator (325) records the amount of packet data stored in the NTP (330) as the packet propagates through the NTP (330). The NDC (320) operates to ensure that no data is stored in the NTP (330) buffer at the expected arrival time of the next time-sensitive packet, taking into account the current reading of the buffer utilization estimator (325), the transmission speed of the NTP (330), and the time remaining until the expected arrival of the specified time-sensitive packet.
Figures 4A-4C illustrate a set of processes such as may be implemented on an NDC to estimate NTP buffer utilization and control packet flows based on the estimation. Figure 4A illustrates a process by which packets are received and transmitted via an NTP based on estimated NTP buffer utilization and expected arrival times of timing sensitive packets. Fig. 4B illustrates the operation of the normal mode timer for determining when the normal operating mode shown in fig. 4A is active. Figure 4C shows how the current buffer utilization in NTP is estimated by the propagation timer.
In one embodiment, NDC 320 of FIG. 3B implements the processes of FIGS. 4A-4C. In operation, the NDC may forward packets to the NTP and maintain an estimate of NTP buffer utilization. In the following description, NDC does not necessarily take into account the actual capacity of the NTP buffer. In addition, when the capacity of the NTP is the minimum required for the maximum size packet, the NTP buffer utilization algorithm becomes equivalent to the rate limiting algorithm. In the following example, it is assumed that the NDC includes design parameters that account for the NTP buffer size and NTP transmission speed. The instantaneous buffer utilization of the NTP can be estimated by the current reading of the propagation timer. When the NDC transmits a packet to the NTP, a propagation timer is set according to the size of the transmitted packet. The propagation timer, after being set, begins to count down according to the transmission rate of the NTP. As the NDC transmits additional packets, the size of each packet is added to the current contents of the propagation timer. In this way, the NDC maintains a continuous estimate of the amount of quiet time required to empty the NTP buffer (i.e., the length of the interval over which no other packets are transmitted to the NTP), for example, to ensure that the buffer is empty when the next timing sensitive packet arrives.
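The propagation timer described above can be sketched as a small class: the estimate is increased by each transmitted packet's size and counted down at the NTP's transmission rate. The class and method names are illustrative assumptions.

```python
class PropagationTimer:
    """Running estimate of the bits still queued in the NTP buffer
    (the fig. 4C idea): incremented on each hand-over, drained at the
    NTP transmission rate."""

    def __init__(self, ntp_rate_bps):
        self.rate = ntp_rate_bps
        self.bits = 0.0     # estimated bits currently in the NTP buffer
        self.last = 0.0     # time of the last update

    def _drain(self, now):
        self.bits = max(0.0, self.bits - (now - self.last) * self.rate)
        self.last = now

    def on_transmit(self, now, packet_bits):
        """Called when the NDC hands a packet to the NTP: the packet
        size is added to the current timer contents."""
        self._drain(now)
        self.bits += packet_bits

    def estimate(self, now):
        """Current reading: Pt, in bits, in the comparisons of fig. 4A."""
        self._drain(now)
        return self.bits

pt = PropagationTimer(1e6)        # 1 Mbps NTP
pt.on_transmit(0.0, 12000)        # hand over a 1500-byte packet
print(pt.estimate(0.006))         # ~6000 bits still draining after 6 ms
```

Dividing the estimate by the NTP rate gives the quiet time needed to empty the buffer, which is the quantity compared against Te.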
Referring to fig. 4A, the NDC determines whether a timing sensitive packet is received or stored in a queue (430). If a timing sensitive packet is received or in the queue, it is transmitted (470). If no timing sensitive packet is received and none is in the queue, the NDC checks whether a non-timing sensitive packet is received or present in the queue (435). If not, the NDC continues to check whether a timing sensitive or non-timing sensitive packet is received or present in the queue (430 and 435). If a non-timing sensitive packet is received or present in the queue, the NDC checks whether the packet can be transmitted through the NTP before the next timing sensitive packet arrives, without overflowing the NTP buffer (440). To this end, the NDC compares (440) the sum of the packet size (Ps) of the next non-timing sensitive packet to be transmitted and the contents (Pt) of the propagation timer (described more fully below in conjunction with fig. 4C) to the buffer size (Bsize) of the NTP. This comparison determines whether the size of the received packet (Ps), added to the amount of data (Pt) estimated by the propagation timer to be in the NTP buffer, would exceed the capacity of the buffer. The NDC also compares the quiet time required to empty the buffer (i.e., Ps plus Pt, divided by the transmission rate of the NTP) to the time (Te) until the next timing sensitive packet is expected to arrive (440). In some embodiments, Te is equal to the time remaining on the normal mode timer, the operation of which is shown in fig. 4B. In some embodiments, the normal mode timer expires at a time other than the expected arrival time of the next timing sensitive packet, e.g., some time before or after the expected arrival time.
In some such embodiments, the time Te until the next timing sensitive packet is expected to arrive is derived from the time remaining before the normal mode timer expires, for example, by adding a specified offset to, or subtracting a specified offset from, that remaining time. In other embodiments, a separate timer may be used to track the time Te until the expected arrival of the next timing sensitive packet. If the sum of Ps and Pt is less than or equal to Bsize (440) and the corresponding quiet time is less than Te, the non-timing sensitive packet is forwarded to the NTP (450). As described above in connection with step 220 of fig. 2A, the algorithm used in step 450 to determine which non-timing sensitive packet to transmit when more than one packet is available may differ between configurations and/or implementations. If Ps plus Pt is greater than Bsize, or the corresponding quiet time is greater than or equal to Te (440), and a positive result was obtained in step 435 because a non-timing sensitive packet was received (i.e., not because one or more such packets were already in the queue), then the received packet is stored in the queue if storage space is available, or discarded if no storage space is available (460).
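The two-part test of step 440 can be expressed compactly. This is a hedged sketch under the assumptions above; the function and parameter names (`ps_bits`, `pt_bits`, `bsize_bits`, `te_s`) are chosen for illustration, not drawn from the disclosure.

```python
def may_forward_non_timing_packet(ps_bits, pt_bits, bsize_bits, te_s, ntp_rate_bps):
    """Return True if a non-timing-sensitive packet of size Ps may be
    forwarded to the NTP, i.e., it both fits in the NTP buffer
    (Ps + Pt <= Bsize) and will drain before the next timing sensitive
    packet is expected ((Ps + Pt) / rate < Te)."""
    fits_in_buffer = ps_bits + pt_bits <= bsize_bits          # step 440, part 1
    drains_in_time = (ps_bits + pt_bits) / ntp_rate_bps < te_s  # step 440, part 2
    return fits_in_buffer and drains_in_time
```

A packet that fails either part of the test is held back (stored or discarded per step 460) rather than forwarded.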
Fig. 4B shows a process for controlling the start of the normal mode timer. In step 445, it is determined whether a timing sensitive packet has been received. Each time such a packet is received, the normal mode timer is started (455).
The timer settings and/or decision criteria employed in the NDC are preferably selected to allow some variance in the arrival time of expected timing sensitive packets. If the normal mode timer expires, this condition indicates, in the example shown in fig. 3B and figs. 4A-4C, that the next timing sensitive packet did not actually arrive at the expected time, and thus the timer expiration is equivalent to a timeout. In one embodiment, the timeout results in a default condition in which Te is assumed to be greater than the sum of Ps and Pt until the normal mode timer is restarted, e.g., by receipt of a subsequent timing sensitive packet, so that, as long as the input buffer of the NTP is not too full, transmission of non-timing sensitive packets received or present in the queue continues through the operations of steps 435, 440 and 450. Preferably, in the example shown in figs. 3B-4C, the normal mode timer is set to a value slightly greater than the expected arrival time of the next timing sensitive packet to allow for residual network jitter. In some embodiments, if a timeout occurs (i.e., the normal mode timer expires before the expected timing sensitive packet arrives), the normal mode timer or a supplemental timer may be started based on the expected arrival time of the next timing sensitive packet, e.g., based on a known or expected period of arrival of timing sensitive packets. In such embodiments, the normal mode timer is restarted, or an auxiliary timer is started, to resume normal operation in anticipation of the timing sensitive packets expected to arrive after the one for which the normal mode timer was initially set. Such a configuration enables normal operation to resume even if the expected timing sensitive packets never arrive, for example, because they were dropped or lost in the upstream network.
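One way to realize the timeout default described above is sketched below; the class name, the explicit `now` parameter, and the jitter allowance are illustrative assumptions, not a definitive implementation.

```python
import time

class NormalModeTimer:
    """Sketch of the normal mode timer with the timeout default: after
    expiry, Te is treated as effectively unbounded (so Te > Ps + Pt holds)
    until receipt of a timing sensitive packet restarts the timer."""

    def __init__(self, interval_s, jitter_allowance_s=0.0):
        # Set slightly greater than the expected arrival interval
        # to allow for residual network jitter.
        self.interval_s = interval_s + jitter_allowance_s
        self.deadline = None

    def start(self, now=None):
        # Started each time a timing sensitive packet is received (455).
        now = time.monotonic() if now is None else now
        self.deadline = now + self.interval_s

    def te(self, now=None):
        # Time until the next timing sensitive packet is expected (Te).
        now = time.monotonic() if now is None else now
        if self.deadline is None or now >= self.deadline:
            return float("inf")  # timeout default: assume Te > Ps + Pt
        return self.deadline - now
```

With an unbounded Te after timeout, the step-440 test degenerates to the buffer-overflow check alone, so queued non-timing packets keep flowing.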
Fig. 4C illustrates the operation of the propagation timer in the example shown in figs. 4A-4C. When a packet is forwarded to the NTP (480), as in steps 450 and/or 470 of fig. 4A, the number of bits in the packet is added to the contents of the propagation timer (495). When the contents of the propagation timer are greater than zero (485), the propagation timer is decremented at the same rate as the NTP maximum transmission rate (490). The contents of the propagation timer represent the amount of data currently buffered in and propagating through the NTP. The contents of the propagation timer, divided by the transmission rate of the NTP, represent the time remaining until all data will have propagated through the NTP.
Some network configurations may transport multiple designated timing sensitive packet streams. Preferably, when the NDC specifies and identifies more than one type of timing sensitive packet, the different types of timing sensitive packets are prioritized with respect to each other. Further, in embodiments such as those of figs. 1-2C that use a normal mode timer and a quiet mode timer, a separate normal mode timer and a separate quiet mode timer are maintained for each designated timing sensitive packet type identified by the NDC. When the quiet window of a lower priority timing sensitive packet coincides with the quiet window of a higher priority timing sensitive packet, the lower priority packet will be stored by the NDC and sent after the higher priority packet is received and forwarded to the NTP, or after the higher priority quiet mode timer expires. If the NDC maintains an estimate of the NTP's buffer utilization, lower priority timing sensitive packets are stored only if the NDC determines that they will not propagate through the NTP before the higher priority packets are expected to arrive.
Some network protocols include provisions for modifying the rate of incoming traffic according to the capacity of the destination network path and congestion parameters. For ethernet, this can be achieved by using pause packets. When a receiving node within an ethernet network detects a condition in which it will not be able to process received data in the near future, the node may send a pause packet to the node that originated the data packets. A time interval specified in the pause packet indicates a duration during which no packets are to be sent to the receiving node. Pause packets can be used to more efficiently utilize data buffers throughout the various nodes in a network. Generally, in the embodiment shown in fig. 1, if the pause packets are timed such that the first node (101) will not be paused when it is expected to transmit designated timing sensitive packets, the NDC (106) may transmit such pause packets to the first node (101) without adversely affecting the network latency of the designated timing sensitive packets.
The above embodiments have primarily focused on solving the problem of transmission rate jitter and its impact on timing sensitive packets. However, the network delay controller may additionally and/or instead be used to virtually eliminate arbitration delays for timing sensitive packets, which arise when multiple network traffic sources attempt to transmit through a common network port. As used herein, the term "arbitration delay" refers to the delay in transmitting data sent by a first sending node through an intermediate node or common network port because the intermediate node or common network port is busy forwarding data sent by a second (or further) sending node through the same intermediate node or common port. Traditionally, such arbitration delays for selected packets are minimized by a prioritization scheme in which a high priority packet type is always placed at the top of the queue of packets to be transmitted from a given network port. For example, ethernet can provide such prioritization by using Virtual Local Area Network (VLAN) tagging. However, even with a prioritization method, high priority timing sensitive packets still experience random delays depending on the arbitration status of a given network output port. Such random delays, caused by potentially varying arbitration delays, are referred to herein as "arbitration jitter". In particular, if a high priority packet becomes available for transmission by a network port while another packet transmission is already in progress, the high priority packet will be delayed.
Referring to fig. 5, a network configuration (500) is shown in which first, second and third nodes (501, 502 and 503, respectively) are coupled to transmit network packet data to a network flow controller (510). The network flow controller (510) includes network functions typically found in network devices such as hubs, switches, and routers. In fig. 5, the network flow controller (510) manages the flow of network packets transmitted by the first, second, and third nodes (501, 502, and 503, respectively) to an output network port coupled to a network transmission path (520). Thus, in fig. 5, the first, second and third nodes (501, 502 and 503) are all capable of transmitting network packets to the fourth node (530) through the network flow controller (510) and the NTP (520). Depending on its particular type, a network flow controller may direct some network traffic from some ports to other ports. For example, ethernet hubs typically do not direct traffic; instead, all network traffic received at one port of the hub is simply output at all other ports of the hub. Other network flow controllers, such as switches and routers, typically "learn" to direct data traffic addressed to a certain destination address to the port at which packets bearing that address as a source address were previously received. Many different methods of directing traffic are used in actual network devices. The network flow controller (510) includes a routing logic function (515) that may include any arbitration method for directing network traffic. The network flow controller (510) also includes a network delay controller (540) to minimize arbitration jitter for the transmission of designated timing sensitive packets. The routing logic selectively couples traffic from the first, second and third nodes (501, 502 and 503, respectively) to the NDC (540). The particular functional division shown in the network flow controller (510) is not unique.
In fact, many alternative architectures of the network flow controller (510) including the NDC (540) are possible. In operation, substantially random packet traffic is generated by the first, second and third nodes (501, 502 and 503, respectively) and transmitted through the network flow controller (510) and NTP (520) to the fourth node (530). The NDC (540) detects designated timing sensitive packets among the packets received by the network flow controller (510). Upon detection of a timing sensitive packet, the packet is placed at the beginning of the packet queue to be transmitted to the fourth node (530) through the NTP (520). At about the same time, the normal mode timer is set to expire according to the time interval after which the next corresponding designated timing sensitive packet is expected to be available for transmission by the network flow controller (510). Using the normal mode timer, the NDC (540) ensures that no packet transmission to the NTP (520) and fourth node (530) will be initiated or in progress upon arrival of the next timing sensitive packet. The NDC may also compensate for jitter caused by a reduction in the transmission rate along the network path if the transmission rate of the NTP (520) from the network flow controller (510) to the fourth node (530) is less than the transmission rate of the network flow controller (510).
Figures 6A-6D illustrate the operation of the NDC (540) to minimize arbitration jitter within the network flow controller (510) and propagation jitter associated with the NTP (520). As described above, the propagation jitter associated with an NTP may manifest when the transmission rate of the NTP is less than the transmission rate of the network flow controller to the NTP. Figure 6A illustrates a process of adding a packet received from one of a plurality of input ports (e.g., from nodes 1, 2, 3, etc. of figure 5) to a packet queue to be transmitted via NTP. Fig. 6B illustrates the operation of the normal mode timer for determining when the normal operating mode shown in fig. 6A and 6C is active. Figure 6C illustrates the process of pulling a packet from a queue for transmission via NTP. Figure 6D shows how the current level of buffer utilization in NTP is estimated by the propagation timer.
As shown in fig. 6A, the NDC detects whether a packet is received (605). If the received packet is a timing packet (610), the packet is placed at the top of the packet queue to be forwarded to the NTP (615), and the normal mode timer is set (620 and 625 of figure 6B). Note that if the transmission rate of the NTP is equal to or greater than the transmission rate of the network flow controller interface to the NTP, the propagation timer has no effect in the NDC and need not be included. If the received packet is not a timing packet (610), the received packet is stored in a queue or discarded when the queue is full (660).
Fig. 6C shows a process of pulling a packet from a queue and transmitting the packet in the examples shown in fig. 5 and fig. 6A-6B. In the process of fig. 6C, which runs in parallel with the processes of fig. 6A and 6B, the transmit queue is checked to determine if it is empty (645). If it is not empty, NDC determines if the top packet in the queue is a timing sensitive packet (647). If the top packet is a timing sensitive packet, the top packet is transmitted via the NTP (655). If the top packet is not a timing sensitive packet, a determination is made as to whether the packet in the queue can propagate completely through the NTP without overflowing the NTP's buffer before the next expected timing sensitive packet arrives (650). If so, the non-timing sensitive packet in the queue is transmitted (655), and the transmit queue is again checked to determine if it is empty (645). If it is determined that the packet in the queue cannot propagate completely through the NTP without overflowing the NTP's buffer before the next expected timing sensitive packet arrives (650), then the packet is not transmitted at that time.
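One pass of the fig. 6C-style service loop can be sketched as below. The helper callables (`is_timing_sensitive`, `can_propagate_in_time`, `transmit`) are assumed hooks for illustration; the patent does not define them.

```python
from collections import deque

def serve_queue_once(queue, is_timing_sensitive, can_propagate_in_time, transmit):
    """Attempt to transmit one packet from the head of the queue,
    following the decision steps of fig. 6C. Returns True if a packet
    was transmitted."""
    if not queue:                          # (645) is the queue empty?
        return False
    head = queue[0]
    if is_timing_sensitive(head):          # (647) timing packets go first
        transmit(queue.popleft())          # (655)
        return True
    if can_propagate_in_time(head):        # (650) will it clear the NTP in time?
        transmit(queue.popleft())          # (655)
        return True
    return False                           # hold the packet for now
```

Calling this in a loop reproduces the behavior described above: timing sensitive packets are sent immediately, while non-timing packets are sent only when they can fully propagate before the next expected timing sensitive packet.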
Fig. 6D illustrates the operation of the propagation timer in the example shown in figs. 5 and 6A-6C. When a packet is forwarded to the NTP (680), as in step 655 of fig. 6C, the number of bits in the packet is added to the contents of the propagation timer (695). When the contents of the propagation timer are greater than zero (685), the propagation timer is decremented at the same rate as the NTP maximum transmission rate (690). The contents of the propagation timer represent the amount of data currently buffered in and propagating through the NTP. The contents of the propagation timer, divided by the transmission rate of the NTP, represent the time remaining until all data will have propagated through the NTP.
For a network delay controller to efficiently utilize the available bandwidth of a given network transmission path, it is important that the NDC have a reasonably accurate estimate of the maximum throughput capacity of the network transmission path. Further gains in transmission efficiency may be obtained when the NDC also has a reasonably accurate estimate of the available memory in the NTP. In practice, however, the maximum transmission rate and buffer capacity of a given network transmission path may be arbitrary and cannot be assumed in advance for general and widespread deployment of network delay controllers across various network configurations.
Figure 7 shows a network configuration (700) that includes a Link Bandwidth Analyzer (LBA) for automatically estimating the bandwidth and input buffer size of a given NTP. The network configuration (700) includes a first node (705) coupled to a first NDC (710) and a second node (720) coupled to a second NDC (740). As shown in fig. 7, the second NDC (740) includes an LBA (742). During analysis of the NTP (760), virtually all network traffic from node 1 and node 2 (705 and 720, respectively) is prevented from propagating through the NTP (760). The LBA (742) begins operation by having the second NDC (740) network interface transmit a sequence of packets at its maximum transmission rate through the NTP (760) to the first NDC (710). Fig. 8 shows an example sequence (800) of 13 packets transmitted by the second NDC (740). Preferably, to improve the accuracy of the NTP (760) buffer estimate, the transmitted packets are small and equal in size. The sequence of packets is preferably long enough to ensure that all buffers associated with the NTP (760) will overflow when the transmission rate of the NTP (760) is lower than the transmission rate of the NDC (740). Each transmitted packet includes a sequence number indicating the position of that packet within the series of packets transmitted by the second NDC (740). The LBA (712) checks the sequence number of each packet received from the second NDC (740). If no sequence numbers are missing from the transmitted packets, the bandwidth of the NTP (760) is assumed to be at least the maximum transmission rate of the second NDC (740) network interface. If the LBA (712) detects missing sequence numbers, the maximum transmission rate of the NTP (760) is assumed to be less than the maximum transmission rate of the second NDC (740) network interface. In fig. 8, the packets with sequence numbers 5 (820), 8 (830), and 11 (840) are shown shaded to indicate that they were not received by the first NDC (710).
Upon finding a missing sequence number, the LBA (712) of the first NDC determines the number of packets received before the first missing sequence number, designated N_R in the sequence of packets (800). As shown in fig. 8, the total number of packets transmitted by the second NDC (740), minus the number of packets received before the first missing sequence number, is designated N_T. By dividing N_T by the number of packets in that subset successfully transmitted to the far end (i.e., N_T minus the number of missing packets N_M), the LBA (712) of the first NDC (710) obtains an approximate ratio P_R of the maximum transmission rate R_NDC of the second NDC (740) to the maximum transmission rate R_NTP of the NTP (760); thus, P_R = N_T / (N_T - N_M). In the example shown in fig. 8, N_T = 9, of which 3 packets did not reach the far end (i.e., N_M = 3); thus, P_R = 9/(9-3) = 1.5. That is, the maximum transmission rate of the NDC is 1.5 times the maximum transmission rate of the NTP: once the NTP input buffer has filled and the first packet of the series has been dropped, the NTP can transmit only two packets for every three packets the NDC sends to it. The ratio P_R is used for rate limiting in the NDC when the NTP (760) has a relatively low maximum transmission rate, and for estimating the propagation time of packets forwarded to the NTP (760).
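The rate-ratio computation above can be sketched directly from the received sequence numbers; the function name and the 1-based numbering convention are illustrative assumptions.

```python
def estimate_rate_ratio(received_seq_nums, total_sent):
    """Estimate P_R = N_T / (N_T - N_M) from the sequence numbers that
    reached the receiver. Sequence numbers are assumed to run 1..total_sent."""
    received = set(received_seq_nums)
    missing = [n for n in range(1, total_sent + 1) if n not in received]
    if not missing:
        # No loss observed: the NTP supports at least the NDC's rate.
        return 1.0
    first_missing = missing[0]
    n_t = total_sent - (first_missing - 1)  # packets from the first loss onward
    n_m = len(missing)                       # packets that never arrived
    return n_t / (n_t - n_m)
```

For the fig. 8 example (13 packets sent, sequence numbers 5, 8, and 11 lost), this yields P_R = 9/(9-3) = 1.5, matching the derivation above.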
The approximate input buffer size of the NTP (760) is equal to [N_R - (N_R/P_R)] multiplied by the average packet size of the transmitted sequence. This equation can be derived by setting the approximate buffer size equal to the amount of data sent into the NTP before the buffer becomes full, as indicated by the first missing packet (i.e., N_R multiplied by the average packet size of the transmitted sequence, AVG SIZE), minus the amount of data transmitted through the NTP during the period T_R in which the buffer is being filled (i.e., T_R multiplied by the maximum transmission rate R_NTP of the NTP). This yields the equation Bsize = N_R * AVG SIZE - T_R * R_NTP. The period T_R is found by multiplying the number of packets N_R by the average packet size, to obtain the approximate amount of data the NDC sends to the NTP while the buffer fills, and dividing the result by the maximum transmission rate of the NDC; i.e., T_R = N_R * AVG SIZE / R_NDC. Tolerances for other network parameters, such as inter-packet delay, may also be taken into account. Substituting the equation for T_R into the equation for buffer size yields Bsize = N_R * AVG SIZE - (N_R * AVG SIZE / R_NDC) * R_NTP; substituting 1/P_R for R_NTP/R_NDC, this becomes Bsize = N_R * AVG SIZE - (N_R * AVG SIZE)/P_R, which simplifies to Bsize = [N_R - (N_R/P_R)] * AVG SIZE. For the example packet sequence shown in fig. 8, N_R is equal to 4, the ratio P_R is equal to 1.5, and the approximate input buffer size of the NTP (760) is equal to 1.333 multiplied by the average packet size of the transmitted sequence. The estimated ratio P_R and the estimated buffer size are provided to the second NDC (740).
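Under these assumptions, the buffer estimate reduces to a one-line computation; the function name is illustrative.

```python
def estimate_buffer_size(n_r, p_r, avg_packet_size_bits):
    """Approximate NTP input buffer size per the derivation above:
    Bsize = [N_R - (N_R / P_R)] * AVG SIZE."""
    return (n_r - n_r / p_r) * avg_packet_size_bits
```

For the fig. 8 example (N_R = 4, P_R = 1.5), the estimate is 1.333 times the average packet size, agreeing with the text.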
Thus, using the results of the link bandwidth analyzer, the NDC is provided with estimated network parameters that can be used in network delay control methods such as rate limiting type methods (such as those shown in figures 1-2C, by way of example and not limitation) or NTP buffer management type methods (such as those shown in figures 3-4C, by way of example and not limitation).
A generic and dynamic method of specifying timing sensitive packets is provided by the Packet Interval Protocol (PIP). With PIP, the designated timing sensitive packets need not be periodic, nor need they be defined prior to deployment of a network that includes network delay control. Fig. 9 shows an example format of a PIP data packet. The PIP packet format (900) shown in fig. 9 includes an address information field (902) that may include identification data regarding the destination and origin of the data packet. Also included in the PIP packet format (900) is a PIP type field (904) for distinguishing PIP packets from other packets. The PIP identifier field (906) is used to distinguish PIP packets associated with a particular function or application from other PIP packets. A priority field (908) included in the PIP packet format (900) is used to prioritize network delay control among various PIP packets. The packet interval field (910) indicates the time period within which another PIP packet with the same PIP identifier will be transmitted. The data or payload of the packet may be included in a data field (912). In operation, these PIP fields will typically be managed such that, for example, the PIP identifier assigned to packets of one application does not coincide with the PIP identifier of packets associated with another application on the same network. A minimum interval is preferably specified in the PIP so that a given timing sensitive packet stream is unlikely to consume too large a percentage of the available bandwidth of a given network path.
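The fields of the fig. 9 PIP packet format can be modeled as a simple record; the field names, types, and microsecond unit here are illustrative assumptions, not part of the protocol as disclosed.

```python
from dataclasses import dataclass

@dataclass
class PipPacket:
    """Sketch of the PIP packet fields shown in fig. 9."""
    address_info: bytes      # (902) destination/origin identification
    pip_type: int            # (904) distinguishes PIP packets from others
    pip_identifier: int      # (906) ties the packet to one function/application
    priority: int            # (908) relative delay-control priority
    packet_interval_us: int  # (910) interval until the next packet with this ID
    payload: bytes = b""     # (912) optional data field
```

An NDC receiving such a packet could read `packet_interval_us` to set its normal mode timer for the next packet carrying the same `pip_identifier`.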
The various elements and embodiments may be combined and incorporated into network flow controllers (e.g., switches, hubs, routers, and gateways) to provide NDC capability for both arbitrated and general network applications. In general, the NDC operates to ensure that, after a first designated timing sensitive packet is detected, extraneous network traffic that might otherwise be stored in and propagated through a port or network transmission path will not appreciably delay subsequent designated timing sensitive packets. Thus, the NDC provides a short window of time through which timing sensitive packets can propagate unimpeded, with minimal and predictable propagation delay. With the NDC, the measured average delay and the measured average jitter of timing sensitive packets through a network transmission path remain virtually constant even as extraneous network traffic varies from 0% to 100% of the transmission capacity of the series combination of the NDC and the network transmission path.
Disclosed herein are techniques for detecting and anticipating timing sensitive packets and thereby controlling packet flow to NTPs through the use of timers. The packet output from a network flow controller may be throttled and stopped in anticipation of the next designated timing sensitive packet. For example, the normal mode timer may be set to expire slightly later than the expected arrival time of the next designated timing sensitive packet, or may be used in conjunction with another timer, such as a quiet timer. In some embodiments, every packet to be transmitted that is not a designated timing sensitive packet is evaluated with respect to the status of the timer, the size of the packet to be transmitted, and the estimated packet propagation time before the expiration of the normal mode timer or quiet timer in use. Further, a rate-limiting based NDC may employ buffer utilization estimation to improve the efficiency or throughput of the corresponding NTP.
Although the foregoing embodiments have been described in some detail for purposes of clarity of understanding, the invention is not limited to the details provided. There are many alternative ways of implementing the invention. The disclosed embodiments are presented for purposes of illustration and not limitation. Numerous variations, modifications and substitutions will occur to those skilled in the art without departing from the scope of the invention as defined by the appended claims.
Claims (48)
1. A method of controlling network traffic, comprising:
monitoring a plurality of packets to be transmitted over a network transmission path;
predicting a time at which a timing sensitive packet will become available for transmission over the network transmission path; and
controlling the plurality of packets according to the projected time such that the network transmission path will not be occupied by packets other than the timing sensitive packet at a time associated with the projected time;
wherein controlling the flow of packets to the network transmission path comprises sending a plurality of test packets over the network transmission path.
2. A method of controlling network traffic as recited in claim 1, further comprising rate limiting the plurality of packets to be transmitted according to an optimal data rate for the network transmission path.
3. A method of controlling network traffic as recited in claim 1, wherein the timing sensitive packets comprise timing packets.
4. A method of controlling network traffic as recited in claim 1, wherein the time associated with the projected time comprises a time window determined based on the projected time.
5. A method of controlling network traffic according to claim 4 further comprising extending the time window if the timing sensitive packet does not arrive at the expected time.
6. A method of controlling network traffic as recited in claim 1, wherein controlling the flow of packets to the network transmission path comprises flushing memory associated with the network transmission path before the projected time.
7. A method of controlling network traffic as recited in claim 1, wherein controlling the flow of packets to the network transmission path comprises flushing a memory associated with the network transmission path prior to the projected time such that packets of a particular type are transmitted via the network transmission path without buffering.
8. A method of controlling network traffic as recited in claim 1, wherein controlling packet flow to the network transmission path comprises inhibiting transmission of packets other than the timing sensitive packet during a time window associated with the projected time.
9. A method of controlling network traffic as recited in claim 8, wherein the time window is associated with a timer.
10. A method of controlling network traffic as recited in claim 8, wherein the beginning of the time window is associated with setting a timer.
11. A method of controlling network traffic as recited in claim 8, wherein the end of the time window is associated with the expiration of a timer.
12. A method of controlling network traffic as recited in claim 1, wherein controlling packet flow to the network transmission path comprises storing packets other than the timing sensitive packets in a memory.
13. A method of controlling network traffic as recited in claim 1, wherein controlling the flow of packets to the network transmission path comprises determining a bandwidth of the network transmission path.
14. A method of controlling network traffic as recited in claim 1, wherein controlling the flow of packets to the network transmission path comprises determining a bandwidth of the network transmission path and adjusting the sending of data to the network transmission path to avoid significantly exceeding the network transmission path bandwidth.
15. A method of controlling network traffic as recited in claim 1, wherein controlling the flow of packets to the network transmission path comprises estimating a maximum data transmission rate associated with the network transmission path.
16. A method of controlling network traffic as recited in claim 15, wherein estimating a maximum data transmission rate associated with the network transmission path comprises:
sending a series of test packets at a known rate from a sending node to the network transmission path for transmission over the network transmission path to a receiving node;
determining a number of packets that arrive at the receiving node among the packets comprising the series of test packets; and
estimating the maximum data transmission rate associated with the network transmission path based at least in part on data associated with any packets that did not reach the receiving node.
17. A method of controlling network traffic as recited in claim 16, wherein estimating the maximum data transmission rate associated with the network transmission path based at least in part on data associated with packets that did not reach the receiving node comprises estimating the maximum data transmission rate associated with the network transmission path based at least in part on a number of packets that did not arrive.
18. A method of controlling network traffic as recited in claim 16, wherein estimating the maximum data transmission rate associated with the network transmission path based at least in part on data associated with packets that did not reach the receiving node comprises estimating the maximum data transmission rate associated with the network transmission path based at least in part on an identifier associated with packets that did not arrive.
19. A method of controlling network traffic as recited in claim 16, wherein estimating the maximum data transmission rate associated with the network transmission path based at least in part on data associated with packets that did not reach the receiving node comprises:
identifying the first packet in the series missing from those received by the receiving node;
determining a number of packets in a subset comprising the first missing packet and packets in the series that follow the first missing packet in order; and
comparing the total number of packets in the subset with the number of packets in the subset that arrive at the receiving node.
20. A method of controlling network traffic as recited in claim 19, wherein estimating the maximum data transmission rate associated with the network transmission path based at least in part on data associated with packets that did not reach the receiving node further comprises dividing the number of packets in the subset that reach the receiving node by the total number of packets in the subset and multiplying the result by the known rate.
21. A method of controlling network traffic as recited in claim 16, wherein the known rate comprises a maximum data transmission rate associated with the sending node.
22. A method of controlling network traffic as recited in claim 16, wherein the maximum data transmission rate associated with the network transmission path is determined to be equal to or exceed the known rate.
23. A method of controlling network traffic as recited in claim 1, wherein controlling the flow of packets to the network transmission path comprises determining a buffer size of a buffer associated with the network transmission path.
24. A method of controlling network traffic as recited in claim 23, wherein determining a buffer size comprises:
sending a series of test packets at a known rate from a sending node to the network transmission path for transmission over the network transmission path to a receiving node;
determining whether all packets comprising the series of test packets arrive at the receiving node; and
if it is determined that not all packets comprising the series arrive at the receiving node:
identifying a first missing packet in the series that has not arrived;
determining a number of packets in the series that precede the first missing packet; and
estimating the buffer size based at least in part on a number of packets in the series that precede the first missing packet.
25. A method of controlling network traffic as recited in claim 24, wherein estimating the buffer size further comprises estimating the buffer size based at least in part on a maximum data transmission rate associated with the network transmission path.
26. A method of controlling network traffic as recited in claim 24, wherein estimating the buffer size further comprises estimating the buffer size based at least in part on an average packet size associated with the series.
27. A method of controlling network traffic as recited in claim 24, wherein estimating the buffer size based at least in part on the number of packets in the series that precede the first missing packet comprises subtracting an approximate amount of data transmitted by the network transmission path to the receiving node during transmission of packets in the series that precede the first missing packet from an amount of data associated with packets in the series that precede the first missing packet.
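The buffer-size estimate of claims 24-27 (data sent before the first loss, minus the data the path drained toward the receiver in the same interval) can be sketched as follows. Illustrative Python only; names are hypothetical and rates are assumed to be in consistent byte/s units:

```python
def estimate_buffer_size(packet_sizes, first_missing_index, known_rate, path_rate):
    """Estimate the bottleneck buffer size (claims 24-27).

    packet_sizes: size in bytes of each test packet, in send order.
    first_missing_index: index of the first packet that never arrived.
    known_rate: sending rate in bytes/s.
    path_rate: estimated maximum drain rate of the path in bytes/s.
    """
    # Amount of data in the packets that precede the first missing packet (claim 24).
    data_sent = sum(packet_sizes[:first_missing_index])
    # Time taken to send that data at the known rate.
    send_time = data_sent / known_rate
    # Approximate data the path transmitted to the receiver during that time.
    data_drained = path_rate * send_time
    # The first drop marks the point where the excess filled the buffer (claim 27).
    return data_sent - data_drained

# Example: 100 packets of 1500 bytes sent at 12.5 MB/s over a path that
# drains 6.25 MB/s; the packet at index 80 is the first to be dropped.
size = estimate_buffer_size([1500] * 100, 80, 12.5e6, 6.25e6)  # ~60000 bytes
```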
28. A method of controlling network traffic as recited in claim 1, wherein controlling the flow of packets to the network transmission path comprises determining a buffer size of a buffer associated with the network transmission path and setting a time window based at least in part on the buffer size.
29. A method of controlling network traffic as recited in claim 1, wherein controlling the flow of packets to the network transmission path comprises determining a buffer size of a buffer associated with the network transmission path and dynamically adjusting a time window based on a length of incoming packets and an amount of data currently in the buffer.
30. A method of controlling network traffic as recited in claim 1, wherein controlling the flow of packets to the network transmission path comprises continually estimating an amount of data occupying the network transmission path at any given time.
31. A method of controlling network traffic as recited in claim 30, wherein continually estimating the amount of data occupying the network transmission path at any given time comprises increasing a propagation timer each time a packet is sent to the network transmission path and decreasing the propagation timer over time based on a data transmission rate associated with the network transmission path.
32. A method of controlling network traffic as recited in claim 30, wherein controlling the flow of packets to the network transmission path further comprises preventing sending of non-timing sensitive packets when adding the non-timing sensitive packets to data occupying the network transmission path would result in exceeding a capacity of a buffer size associated with a buffer associated with the network transmission path.
33. A method of controlling network traffic as recited in claim 32, wherein controlling the flow of packets to the network transmission path further comprises preventing sending of non-timing sensitive packets when adding the non-timing sensitive packets to data occupying the network transmission path would cause the network transmission path to be unavailable to transmit the timing sensitive packets with substantially minimal delay at a time associated with the predicted time.
34. A method of controlling network traffic as recited in claim 30, wherein controlling the flow of packets to the network transmission path further comprises preventing sending of non-timing sensitive packets when adding the non-timing sensitive packets to data occupying the network transmission path would cause the network transmission path to be unavailable to transmit the timing sensitive packets with substantially minimal delay at a time associated with the predicted time.
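Claims 30-34 together describe an occupancy estimator (a "propagation timer" that grows on each send and drains at the path rate) used to gate non-timing-sensitive traffic. A minimal sketch, assuming byte/s rates and a monotonic clock; the class and method names are illustrative, not part of the claims:

```python
import time

class PathOccupancyEstimator:
    """Continually estimate data occupying the path and gate sends (claims 30-34)."""

    def __init__(self, path_rate, buffer_size, clock=time.monotonic):
        self.path_rate = path_rate      # bytes/s the path can drain
        self.buffer_size = buffer_size  # estimated bottleneck buffer, bytes
        self.clock = clock
        self.occupancy = 0.0            # estimated bytes in flight or buffered
        self.last_update = clock()

    def _drain(self):
        # Decrease the estimate over time at the path's data rate (claim 31).
        now = self.clock()
        self.occupancy = max(0.0, self.occupancy - self.path_rate * (now - self.last_update))
        self.last_update = now

    def record_send(self, nbytes):
        # Increase the estimate each time a packet is sent (claim 31).
        self._drain()
        self.occupancy += nbytes

    def may_send(self, nbytes, next_timing_packet_at=None):
        """Decide whether a non-timing-sensitive packet may be sent now."""
        self._drain()
        # Would this packet overflow the bottleneck buffer? (claim 32)
        if self.occupancy + nbytes > self.buffer_size:
            return False
        # Would the path still be busy at the predicted time of the next
        # timing-sensitive packet? (claims 33-34)
        if next_timing_packet_at is not None:
            drain_done = self.clock() + (self.occupancy + nbytes) / self.path_rate
            if drain_done > next_timing_packet_at:
                return False
        return True
```

Injecting the clock makes the estimator deterministic to test; in use, the default monotonic clock drains the estimate in real time.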
35. A method of controlling network traffic as recited in claim 1, wherein the timing sensitive packet includes a field indicating a time when a next timing sensitive packet will become available.
36. A method of controlling network traffic as recited in claim 1, wherein the timing sensitive packet comprises a field for distinguishing between the timing sensitive packet and a non-timing sensitive packet.
37. A method of controlling network traffic as recited in claim 1, wherein the timing sensitive packet is associated with a particular function and comprises a field for distinguishing the timing sensitive packet from timing sensitive packets not associated with the function.
38. A method of controlling network traffic as recited in claim 1, wherein the timing sensitive packet comprises a field indicating a priority associated with the timing sensitive packet.
39. A method of controlling network traffic as recited in claim 1, wherein the timing sensitive packet includes synchronization information.
40. A method of controlling network traffic as recited in claim 1, wherein the timing sensitive packet includes frequency information for frequency locking a receiving node.
41. A method of controlling network traffic as recited in claim 1, wherein the timing sensitive packets comprise wireless protocol packets.
42. A method of controlling network traffic as recited in claim 1, wherein the timing sensitive packets comprise voice data.
43. A method of controlling network traffic as recited in claim 1, wherein the timing sensitive packets comprise streaming video data.
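Claims 35-38 specify information carried by a timing-sensitive packet rather than a wire format. A hypothetical header layout illustrating those fields (names and types are assumptions, not claimed):

```python
from dataclasses import dataclass

@dataclass
class TimingPacketHeader:
    """Illustrative header for a timing-sensitive packet (claims 35-38)."""
    is_timing_sensitive: bool  # distinguishes timing-sensitive packets (claim 36)
    function_id: int           # distinguishes packets of a particular function (claim 37)
    priority: int              # priority of this timing-sensitive packet (claim 38)
    next_packet_time: float    # when the next timing-sensitive packet
                               # will become available (claim 35)

# Example: a timing packet announcing that the next one arrives in 125 ms.
hdr = TimingPacketHeader(is_timing_sensitive=True, function_id=1,
                         priority=0, next_packet_time=0.125)
```

The claim-35 field is what lets the delay controller of claim 44 predict when the path must be clear, without needing an out-of-band schedule.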
44. A system for controlling network traffic, comprising:
an interface for transmitting a plurality of packets via a network transmission path; and
a network delay controller coupled to the interface and configured to:
monitor a plurality of packets to be transmitted over the network transmission path;
predict a time at which a timing sensitive packet will become available for transmission over the network transmission path; and
control packet flow to the network transmission path in accordance with the predicted time such that the network transmission path will not be occupied by packets other than the timing sensitive packet at a time associated with the predicted time;
wherein controlling the flow of packets to the network transmission path includes sending a plurality of test packets over the network transmission path.
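The monitor/predict/control loop of the claim-44 controller can be sketched as below. All interfaces here are hypothetical: it assumes packets expose a timing-sensitive flag, a size, and a next-availability time (as in claims 35-36), and an occupancy estimator with `record_send`/`may_send` methods:

```python
def delay_controller(outgoing_queue, estimator, send):
    """Illustrative delay-control loop (claim 44); not the claimed wording.

    Timing-sensitive packets are sent immediately; other packets are held
    back whenever sending them would leave the path occupied at the
    predicted time of the next timing-sensitive packet.
    """
    next_ts_time = None  # predicted availability of the next timing-sensitive packet
    for pkt in outgoing_queue:
        if pkt.is_timing_sensitive:
            send(pkt)                     # the path was kept clear for this packet
            estimator.record_send(pkt.size)
            next_ts_time = pkt.next_packet_time  # prediction for the next one (claim 35)
        elif estimator.may_send(pkt.size, next_ts_time):
            send(pkt)
            estimator.record_send(pkt.size)
        # else: the packet is held back (e.g. requeued) so the path is idle
        # when the next timing-sensitive packet becomes available
```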
45. A method of estimating a maximum data transmission rate associated with a network transmission path, comprising:
sending a series of test packets at a known rate from a sending node to the network transmission path for transmission over the network transmission path to a receiving node;
determining whether all packets comprising the series of test packets arrive at the receiving node; and
estimating the maximum data transmission rate associated with the network transmission path based at least in part on data associated with packets, if any, that do not reach the receiving node.
46. A system for estimating a maximum data transmission rate associated with a network transmission path, comprising:
a transmit interface configured to transmit a series of test packets at a known rate from a transmitting node to the network transmission path for transmission via the network transmission path to a receiving node; and
a processor configured to determine whether all packets comprising the series of test packets arrive at the receiving node, and estimate the maximum data transmission rate associated with the network transmission path based at least in part on data associated with packets, if any, that do not arrive at the receiving node.
47. A method of estimating a buffer size associated with a buffer associated with a network transmission path, comprising:
sending a series of test packets at a known rate from a sending node to the network transmission path for transmission over the network transmission path to a receiving node;
determining whether all packets comprising the series of test packets arrive at the receiving node; and
if it is determined that not all packets comprising the series arrive at the receiving node:
identifying a first missing packet in the series that has not arrived;
determining a number of packets in the series that precede the first missing packet; and
estimating the buffer size based at least in part on a number of packets in the series that precede the first missing packet.
48. A system for estimating a buffer size associated with a buffer associated with a network transmission path, comprising:
a transmit interface configured to transmit a series of test packets at a known rate from a transmitting node to the network transmission path for transmission via the network transmission path to a receiving node; and
a processor configured to determine whether all packets comprising the series of test packets arrive at the receiving node, and in the event that it is determined that not all packets comprising the series arrive at the receiving node:
identifying a first missing packet in the series that has not arrived;
determining a number of packets in the series that precede the first missing packet; and
estimating the buffer size based at least in part on a number of packets in the series that precede the first missing packet.
Applications Claiming Priority (5)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US56211104P | 2004-04-13 | 2004-04-13 | |
| US60/562,111 | 2004-04-13 | ||
| US11/054,345 US7499402B2 (en) | 2004-04-13 | 2005-02-08 | Network delay control |
| US11/054,345 | 2005-02-08 | ||
| PCT/US2005/012487 WO2005101744A2 (en) | 2004-04-13 | 2005-04-12 | Network delay control |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| HK1132394A1 HK1132394A1 (en) | 2010-02-19 |
| HK1132394B true HK1132394B (en) | 2012-04-20 |