
CN111211996B - Flow scheduling method - Google Patents

Flow scheduling method

Info

Publication number
CN111211996B
CN111211996B
Authority
CN
China
Prior art keywords
path
flow
data
stream
priority
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911375677.2A
Other languages
Chinese (zh)
Other versions
CN111211996A (en)
Inventor
刘志勇
石翔
张法
王林
周碧玉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Computing Technology of CAS
Original Assignee
Institute of Computing Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Computing Technology of CAS
Priority to CN201911375677.2A
Publication of CN111211996A
Application granted
Publication of CN111211996B
Legal status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00 Routing or path finding of packets in data switching networks
    • H04L45/12 Shortest path evaluation
    • H04L45/121 Shortest path evaluation by minimising delays
    • H04L45/124 Shortest path evaluation using a combination of metrics
    • H04L45/125 Shortest path evaluation based on throughput or bandwidth
    • H04L45/24 Multipath
    • H04L47/00 Traffic control in data switching networks
    • H04L47/10 Flow control; Congestion control
    • H04L47/24 Traffic characterised by specific attributes, e.g. priority or QoS
    • H04L47/2425 Traffic characterised by specific attributes for supporting services specification, e.g. SLA
    • H04L47/2433 Allocation of priorities to traffic types
    • H04L47/2441 Traffic characterised by specific attributes relying on flow classification, e.g. using integrated services [IntServ]
    • H04L47/70 Admission control; Resource allocation
    • H04L47/80 Actions related to the user profile or the type of traffic
    • H04L47/805 QOS or priority aware

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract


A flow scheduling method, comprising the following steps: for a request/response, a corresponding flow is initiated, carrying priority information and the size of the corresponding web page element; according to the priority of the current flow and the priorities of the flows already allocated to each path, the bandwidth the current flow can obtain on each path is calculated; according to the bandwidth available to the current flow on each path and the one-way delay of each path, the data volume of the current flow is distributed across one or more paths in a balanced manner; each path then sends the data allocated to it. Combining flow priority with the scheduling method ensures that when a critical flow shares a path with other flows, its bandwidth resources are not excessively preempted. In a multipath scenario, multiple paths can be used simultaneously from the very start of a flow's transmission, which reduces the probability that burst traffic on a single path causes out-of-order arrival at the receiver.


Description

Flow scheduling method
Technical Field
The present invention relates to the field of communications technologies, and in particular to a method for scheduling transport-layer streams during data transmission.
Background
The World Wide Web (WWW), a system of interconnected hypertext documents and the most important internet application, has become the main medium for information dissemination on the internet. According to statistics, by December 2017 the number of internet users in China had reached 772 million, an internet penetration rate of 55.8%. The web page is the basic information unit of the world wide web and is composed of text, pictures, sound, video, and other media, collectively called web page elements. As web content has grown richer, the number and distribution range of the elements making up a page have increased greatly, and page loading delay now seriously affects the user's service experience and determines how many users a network service retains. For delay-sensitive network services, even a small delay causes a large loss: for the e-commerce giant Amazon, a 100 ms (millisecond) delay causes a 1% drop in sales, and when Google's search page load time increases by 0.5 s, its traffic drops by 20%. For network applications such as online games, video chat, real-time payment, and financial transactions, network delay is one of the most important indicators of service quality.
To reduce network delay and improve service quality, the prior art has optimized network protocols at different layers. This optimization mainly targets two problems.
First, the head-of-line blocking problem.
At the application layer, head-of-line blocking means that HTTP requests or responses must follow a fixed order: if the previous request or response has not completed, the next one cannot proceed.
For example, under the HTTP/1.0 protocol, if a browser needs to request multiple web page elements from a server, it must establish a new TCP connection for each element and cannot issue the next request until the corresponding element has been received. Because most web page elements are small, each transfer is short, yet every new TCP connection must first go through slow start to probe the available bandwidth, so this approach is very inefficient.
The HTTP/1.1 protocol keeps the TCP connection open across consecutive requests, and the next element can be requested without waiting for the previous response, a technique known as "pipelining". HTTP/1.1 still has a problem, however: although requests can be issued continuously on one TCP connection, all responses must be received in the order in which the requests were sent, so a single slow response blocks every response behind it. To solve this, SPDY (the predecessor of the HTTP/2 protocol) adopted multiplexing: multiple requests/responses share one TCP connection and are transmitted in parallel, so there is no head-of-line blocking at the application layer.
At the transport layer, head-of-line blocking means that, because the SPDY protocol multiplexes over a single TCP connection and TCP only supports in-order delivery, missing data from one request delays the delivery of data belonging to other requests.
Second, the priority setting of web content. If all requests/responses are transmitted indiscriminately, key requests can be blocked when bandwidth is limited. In the application layer, the SPDY protocol therefore introduced request priorities, so that high-priority requests/responses obtain more bandwidth resources and the delay of key requests is specifically optimized. There are also transport-layer protocol improvements targeting content priority: MulTCP achieves proportional resource sharing by changing TCP congestion-control parameters, and pTCP provides priority levels by controlling the number of micro-flows, each of which behaves like a TCP connection. Implementing priorities at the TCP level, however, burdens the application layer with managing connections and allocating resources. As a multiplexed protocol, QUIC can manage multiple streams and allocate resources among them at the transport layer, which makes it possible to prioritize different streams there.
To reduce web page loading delay and improve user experience, Google introduced a new transport-layer protocol, QUIC (Quick UDP Internet Connections). The QUIC protocol multiplexes at the transport layer: after the basic connection between client and server is established, each web page element is transmitted on its own "Stream". Opening and closing a stream is lightweight and does not affect the underlying connection; streams are independent of one another and do not interfere with each other's transmission. Building on QUIC, researchers extended the single physical link used in transmission to multiple physical links, forming the MPQUIC (Multipath QUIC) protocol. In the prior art, there are two main ways of scheduling streams with the MPQUIC protocol during transmission: scheduling with an LRF (Lowest-RTT-First) scheduler, and scheduling with an SA-ECF scheduler. The scheduling process of each is described below.
Scheduling with the LRF (Lowest-RTT-First) scheduler: when transmitting data, the LRF scheduler treats all flows as equals, packs their data indiscriminately in round-robin fashion, and for each packed data packet selects the link with the smallest current RTT (round-trip time). In a multipath scenario, LRF first uses the path with the smallest RTT until its send window is full, then uses the path with the second-smallest RTT, and so on.
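The LRF selection rule just described can be sketched as follows; the `Path` class, its fields, and the example values are illustrative assumptions rather than part of any real MPQUIC implementation.

```python
# Illustrative sketch of Lowest-RTT-First (LRF) path selection.
class Path:
    def __init__(self, name, rtt_ms, cwnd_bytes):
        self.name = name
        self.rtt_ms = rtt_ms          # current smoothed round-trip time
        self.cwnd_bytes = cwnd_bytes  # remaining send-window space

def lrf_pick(paths, packet_size):
    """Return the lowest-RTT path with window space, or None if all are full."""
    candidates = [p for p in paths if p.cwnd_bytes >= packet_size]
    if not candidates:
        return None  # every window is full: wait for acknowledgements
    return min(candidates, key=lambda p: p.rtt_ms)

wifi = Path("wifi", rtt_ms=20, cwnd_bytes=0)     # fastest path, window full
lte = Path("lte", rtt_ms=80, cwnd_bytes=14600)   # slower path, space left
print(lrf_pick([wifi, lte], packet_size=1460).name)  # → lte
```

As the example shows, once the fast path's window fills, every subsequent packet spills onto the slower path, which is exactly the behaviour that produces out-of-order arrivals when path RTTs differ widely.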
Scheduling with the SA-ECF scheduler: when transmitting data, the SA-ECF scheduler selects two candidates from the existing paths: the path with the smallest RTT overall (which may have no free transmission space), and the path with the smallest RTT among those that do have free transmission space. For each data packet, SA-ECF estimates the transmission time of the corresponding flow on each candidate path according to the flow's remaining data volume and the path parameters, and sends the packet on the path with the shortest estimate. This avoids transmitting the current packet on a path that might delay the flow's completion time.
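The per-packet estimate that SA-ECF relies on can be sketched like this; the parameter layout and the numbers are assumptions for illustration, not the scheduler's actual code.

```python
# Sketch of SA-ECF's core decision: estimate, for the flow's remaining
# bytes, the finish time on each candidate path and take the shortest.
def estimated_finish(remaining_bytes, bandwidth_bps, one_way_delay_s):
    return one_way_delay_s + remaining_bytes * 8 / bandwidth_bps

def sa_ecf_pick(candidates, remaining_bytes):
    """candidates: list of (name, bandwidth_bps, one_way_delay_s)."""
    return min(candidates,
               key=lambda c: estimated_finish(remaining_bytes, c[1], c[2]))[0]

candidates = [("wifi", 50e6, 0.010), ("lte", 10e6, 0.040)]
print(sa_ecf_pick(candidates, remaining_bytes=1_000_000))  # → wifi
```

Because the estimate is recomputed per packet from the remaining data only, the choice can flip mid-stream; this is the flow-level blindness the disadvantages below criticize.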
However, both schedulers have corresponding disadvantages when scheduling streams:
The LRF scheduler does not take the HTTP/2 priority setting into account when scheduling flows; treating all flows equally in this way is likely to increase the completion time of critical flows. Furthermore, different paths may have very different characteristics, such as delay and bandwidth. For example, large RTT differences between WiFi and MBB (Mobile Broadband) network paths are common. When path RTTs differ greatly, the LRF scheduling method causes data sent on different paths to arrive at the receiver out of order, so the receiver needs a larger buffer and more time to wait for data on the slow path.
When the SA-ECF scheduler schedules flows, different flows first compete for the fast path and only use a slow path once the fast path's send window is full, which can cause the same receiver-side reordering problem as LRF. In addition, during web page transmission each web page element corresponds to one stream in the MPQUIC protocol, and different types of web page elements prefer different paths: HTML documents are suited to paths with lower RTT, while images are suited to paths with higher bandwidth. The SA-ECF scheduling method, however, only considers the effect each packet's scheduling may have on the completion time of the current flow, estimated from the amount of data currently remaining; it does not consider resource allocation from the perspective of the flow, so a flow may easily end up transmitted on an unsuitable path.
Disclosure of Invention
Therefore, the present invention aims to overcome the above drawbacks of the prior art by providing a flow scheduling method and a flow scheduler that schedules flows based on the method.
The purpose of the invention is realized by the following technical scheme:
according to a first aspect of the present invention, there is provided a stream scheduling method, comprising the steps of:
s1, for the request/response, initiating a corresponding stream with priority information and corresponding web page element size information.
S2, calculate the bandwidth available to the current flow on each path according to the priority of the current flow and the priorities of the flows already allocated to each path. Flows are handled first-come-first-served; flows that arrive at the same time are processed in order of priority, from high to low. Calculating the bandwidth available to the current flow on each path comprises:
S21, for each path, calculating the sum of the priorities of the flows whose data has been allocated to that path and the priority of the current flow;
S22, calculating the bandwidth available to the current flow on the path from the proportion of the current flow's priority in that sum: the available bandwidth's share of the path's total bandwidth equals the priority's share of the priority sum.
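Steps S21-S22 amount to a priority-proportional split of each path's bandwidth. A minimal sketch, with assumed function and parameter names:

```python
# Priority-proportional bandwidth share (steps S21-S22).
def available_bandwidth(path_bandwidth, allocated_priorities, flow_priority):
    """Bandwidth the current flow would obtain on this path."""
    # S21: sum of the priorities already on the path plus the current flow's
    total = sum(allocated_priorities) + flow_priority
    # S22: the flow's share of the path equals its share of the priority sum
    return path_bandwidth * flow_priority / total

# A priority-4 flow joining a 100 Mbit/s path that already carries two
# priority-2 flows obtains half of the path's bandwidth.
print(available_bandwidth(100e6, [2, 2], 4))  # → 50000000.0
```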
S3, according to the bandwidth available to the current flow on each path and the one-way delay of each path, distribute the data volume of the current flow across one or more paths in a balanced manner, so that the total transmission time of the data of the same flow distributed over different paths is minimized. Step S3 specifically comprises the following steps:
S31, obtain the one-way delays of all paths and order the paths by one-way delay;
S32, following the order from step S31 and starting from the path with the smallest one-way delay, compute the one-way delay difference between each pair of adjacent paths in turn, and perform one round of data distribution per computed difference: in each round, part of the current flow's data is distributed to the path with the smaller one-way delay in the current pair and to every other path whose one-way delay is smaller still, and the amount distributed to each such path is the product of the delay difference computed in this round and the bandwidth available to the current flow on that path;
S33, if after some round the remaining data of the current flow is insufficient to complete a full round of distribution, distribute the remaining data proportionally, according to the bandwidth available to the current flow on each path, to the path with the smaller one-way delay in the current pair and to every other path whose one-way delay is smaller still.
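Steps S31-S33 can be sketched as the following allocation routine. It is a sketch under assumed units (one-way delays in milliseconds, available bandwidths in bytes per millisecond), not the patent's reference implementation.

```python
# Delay-balanced data allocation across paths (steps S31-S33).
def allocate(flow_bytes, paths):
    """paths: list of (one_way_delay_ms, avail_bandwidth_bytes_per_ms).
    Returns the number of bytes of the flow assigned to each path."""
    # S31: order path indices by one-way delay, smallest first
    order = sorted(range(len(paths)), key=lambda i: paths[i][0])
    alloc = [0.0] * len(paths)
    remaining = float(flow_bytes)
    for k in range(1, len(order)):
        # S32: delay gap between the k-th path and the next-faster one
        delta = paths[order[k]][0] - paths[order[k - 1]][0]
        active = order[:k]  # every path faster than the k-th
        round_total = sum(delta * paths[i][1] for i in active)
        if round_total >= remaining:
            # S33: not enough data for a full round; split the remainder
            # proportionally to each active path's available bandwidth
            bw_sum = sum(paths[i][1] for i in active)
            for i in active:
                alloc[i] += remaining * paths[i][1] / bw_sum
            return alloc
        for i in active:
            alloc[i] += delta * paths[i][1]
        remaining -= round_total
    # all delay gaps closed: share what is left across every path
    bw_sum = sum(bw for _, bw in paths)
    for i in range(len(paths)):
        alloc[i] += remaining * paths[i][1] / bw_sum
    return alloc

# A small flow stays on the fast path; a large one spills onto both.
print(allocate(1_500, [(1, 1000), (3, 2000)]))  # → [1500.0, 0.0]
print(allocate(8_000, [(1, 1000), (3, 2000)]))  # → [4000.0, 4000.0]
```

With the large flow, both paths finish at the same time (1 + 4000/1000 = 3 + 4000/2000 = 5 ms), which is the balance the method aims for.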
S4, each path sends the data allocated to it based on the distribution result. Preferably, the transmission of all streams is completed over multiple transmission rounds, in which each path sends data packets in turn in each round.
Before sending, each path encapsulates the data packets it will send; the number of packets encapsulated equals the number of streams on the path.
Preferably, encapsulating each data packet comprises: selecting one stream from all the streams whose data has been allocated to the path, and packing part of that stream's data on the path into a data packet, up to the number of bytes the packet allows.
The stream is selected from all streams whose data has been allocated to the path by random selection according to probability: the probability of each stream being selected is its priority divided by the sum of the priorities of all streams on the path.
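The probability-weighted selection just described maps directly onto the standard library's weighted random choice; the stream identifiers here are illustrative.

```python
import random

# Pick the stream whose data fills the next packet: each stream is
# chosen with probability priority / (sum of priorities on the path).
def pick_stream(streams):
    """streams: list of (stream_id, priority) pairs on one path."""
    return random.choices([s for s, _ in streams],
                          weights=[p for _, p in streams], k=1)[0]

random.seed(0)  # deterministic demo
counts = {"a": 0, "b": 0}
for _ in range(10_000):
    counts[pick_stream([("a", 3), ("b", 1)])] += 1
print(counts)  # stream "a" fills roughly 3 packets out of every 4
```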
According to a second aspect of the present invention, there is provided a flow scheduler comprising a memory having stored therein a computer program and a processor executing the computer program to implement the steps of the flow scheduling method according to the first aspect of the present invention.
Compared with the prior art, the invention has the advantages that:
First, in the present invention, the flows on each path share bandwidth according to priority. This ensures that a high-priority flow sharing a path with other flows obtains more bandwidth resources and is not excessively preempted, effectively preventing high-priority critical flows from being blocked. Second, the invention allocates a flow with a small data volume to only one path, so that its completion time is not slowed by other paths during transmission, while a flow with a large data volume is distributed over multiple paths so that it can aggregate their bandwidth and shorten its completion time. Third, the invention performs path allocation before a flow starts transmitting, so that multiple paths can be used simultaneously from the start of transmission; this avoids the single-path burst traffic that arises when a flow needing multiple paths contends for only one path for a period of time, and thus reduces the probability of reordering at the receiver.
Drawings
Embodiments of the invention are further described below with reference to the accompanying drawings, in which:
fig. 1 is a flowchart illustrating a flow scheduling method according to an embodiment of the present invention;
fig. 2 is a schematic network structure diagram of a flow scheduling method according to an embodiment of the present invention;
fig. 3 is a diagram comparing transmission completion times of stream A under the conditions of example one when different methods are used, according to an embodiment of the present invention;
fig. 4 is a diagram comparing transmission completion times of stream B under the conditions of example one when different methods are used, according to an embodiment of the present invention;
fig. 5 is a diagram comparing transmission completion times of stream A under the conditions of example two when different methods are used, according to an embodiment of the present invention;
fig. 6 is a diagram comparing transmission completion times of stream B under the conditions of example two when different methods are used, according to an embodiment of the present invention;
fig. 7 is a diagram comparing transmission completion times of stream A under the conditions of example three when different methods are used, according to an embodiment of the present invention;
fig. 8 is a diagram comparing transmission completion times of stream B under the conditions of example three when different methods are used, according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail by embodiments with reference to the accompanying drawings. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The block diagrams shown in the figures are only examples of functional entities and do not necessarily correspond to physically separate entities, i.e. the functional entities may be implemented in software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The flow charts shown in the drawings are also merely illustrative and do not necessarily include all of the contents and operations/steps, nor do they necessarily have to be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
The present invention optimizes the completion time of streams based on the MPQUIC protocol. While studying stream scheduling methods for the MPQUIC protocol, the inventors found that different scheduling methods have a large influence on network delay. The investigation shows that existing stream scheduling methods have two problems in multipath scenarios: first, they occupy the fast path first and only then use the slow path, which causes reordering at the receiver and affects stream completion time; second, they do not allocate paths according to the application-layer characteristics of different streams (the size of the stream and the priority of the stream), so a stream may be transmitted on an unsuitable path. The inventors therefore propose a new stream scheduling method that allocates paths according to the application-layer characteristics of streams, thereby reducing network delay.
When designing the flow scheduling method of the present invention, the inventors start from the following four aspects:
(1) Setting and meaning of the transport-layer priority: the setting of transport-layer priority matters because streams of different priorities must be scheduled at the transport layer. To solve this problem, the inventors use the HTTP/2 priority setting as the priority reference for the transport layer. HTTP/2 priority is an application-level priority, typically set by the browser, whose value characterizes how critical the corresponding requested web page element is. All mainstream browsers, such as Chrome, Firefox, and Safari, currently support HTTP/2. In the present invention, the transport-layer priority value represents the web page element's delay requirement: the larger the value, the higher the element's priority and the lower the delay it requires.
(2) How the priority of a flow is embodied in scheduling: to optimize flow completion time based on priority, both the scheduling order and the resource allocation proportions in the flow scheduling method take priority into account, fully considering the bandwidth and delay requirements of flows of different priorities.
(3) How to optimize completion time for flows of different sizes simultaneously: the method designs an optimization model based on the size of the flow and the characteristic parameters of the paths, with flow completion time as the optimization target. The model ensures that a flow with a small data volume avoids paths that could slow its completion, while a flow with a large data volume can aggregate the bandwidth of multiple paths, thereby optimizing its transmission time.
(4) How to avoid burst traffic on a single path: based on the optimization model, the invention designs a corresponding algorithm that allocates paths to flows in real time. The flow scheduling method performs path allocation before a flow starts transmitting and ensures the allocated paths can be used simultaneously from the start of transmission, avoiding the burst traffic generated when a flow that needs multiple paths contends for only one path for a period of time.
To optimize stream completion time during transmission, the inventors provide a stream scheduling method that reduces stream completion time by optimizing how streams are scheduled. In summary, the transmission process of a stream includes the following steps: (1) receiving a web page element request initiated by a client; (2) for each requested response, the transport layer initiates a corresponding Stream carrying the priority and the size of the corresponding web page element; (3) the scheduler allocates paths for the stream's data according to the parameters of the existing paths combined with the information of the current stream to be allocated; the resulting allocation specifies the amount of data the stream must transmit on each path; (4) after all streams have been allocated, they are transmitted according to the allocation result.
The present invention will be described in detail below with reference to the accompanying drawings and examples.
According to an embodiment of the present invention, there is provided a flow scheduling method for scheduling flows carrying priority information. As shown in fig. 1, the priority-based flow scheduling method comprises the following steps: A1, A2, A3, and A4. Each step is described in detail below.
A1, for a request/response, initiate a corresponding stream carrying priority information and the size of the corresponding web page element.
Within a given period of time, several users may initiate requests to the server. For these requests, the transport layer initiates corresponding flows, each carrying information identifying its priority and the size of its corresponding web page element. In the present invention, the transport-layer priority value represents the web page element's delay requirement: the larger the value, the higher the element's priority and the lower the delay it requires.
A2, calculate the bandwidth of the flow on each path. Specifically, the bandwidth available to the current flow on each path is calculated from the priority of the current flow and the priorities of the flows currently allocated to each path. Flows are handled first-come-first-served, and flows arriving at the same time are processed in order of priority, from high to low.
Because some requests/responses arrive in sequence while others occur at the same time, the invention distinguishes between these cases when scheduling: flows arriving at different times are handled first-come-first-served, and flows arriving at the same time are allocated in order of priority, from high to low.
Since the overall completion time of a flow is determined by its slowest path, each flow should be allocated across the heterogeneous paths so as to balance its completion time. For a stream s, let n be the number of paths currently transporting stream s, let D be the total data volume of stream s, let T be the overall completion time of stream s, let t_i be the estimated completion time of stream s on the ith path, and let d_i be the amount of data of stream s allocated to the ith path, where n and i are positive integers. Then t_i can be calculated by the following formula:
t_i = o_i + d_i / bs_i
where o_i is the one-way delay of the ith path and bs_i is the bandwidth that stream s would obtain if it were allocated to the ith path at the current time.
In the present invention, the flows on each path share bandwidth according to priority. This ensures that a high-priority flow sharing a path with other flows obtains more bandwidth resources and is not excessively preempted by them, effectively preventing high-priority critical flows from being blocked.
According to one embodiment of the invention, calculating the available bandwidth of the current flow allocated to each path comprises:
A21, for each path, calculating the sum of the priorities of the flows whose data is allocated to the path and the priority of the current flow.
And A22, calculating the available bandwidth of the current flow on the path according to the ratio of the current flow's priority to that priority sum, the available bandwidth's share of the path's total bandwidth being equal to that ratio.
In order to meet the principle that the streams on the same path share the bandwidth according to the priority, the invention designs a mechanism for sharing the bandwidth of the streams on the same path, which comprises the following steps:
For example, let sum_i be the sum of the priority values of all streams already allocated on the ith path, and let pr be the priority value of stream s. Then the bandwidth bs_i available to stream s if it is allocated to the ith path can be calculated by the following formula:
bs_i = (pr / (sum_i + pr)) × b_i
where b_i is the total bandwidth of the ith path itself.
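The priority-proportional share bs_i = pr / (sum_i + pr) × b_i can be sketched as follows (names are illustrative):

```python
def available_bandwidth(path_total_bw, priorities_on_path, pr):
    """bs_i = pr / (sum_i + pr) * b_i: priority-proportional bandwidth share."""
    return path_total_bw * pr / (sum(priorities_on_path) + pr)

# A priority-255 HTML flow joining a path that already carries a
# priority-8 image flow receives 255/263 of the path's 10 Mbps.
share = available_bandwidth(10.0, [8], 255)
```

A flow that has a path to itself (empty priority list) receives the full path bandwidth, consistent with the formula.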
Based on the above formula, a transmission completion time model of the stream s is constructed, and the overall completion time T of the stream s can be expressed as:
(P1)  min T

s.t.  t_i = o_i + d_i / bs_i,  i ∈ [1, 2, ..., n]

      Σ_{i=1}^{n} d_i = D,  d_i ≥ 0

      T = max{ t_i | i ∈ [1, 2, ..., n] }
a3, performing path allocation on the data contained in the stream. And according to the available bandwidth of the current flow on each path and the one-way delay of each path, the data volume of the current flow is uniformly distributed to one or more paths, so that the total transmission time of the data volume of the same flow distributed on different paths is minimized.
The amount of data contained in a stream strongly affects its transmission time: under otherwise identical conditions, a stream with a large data volume takes longer to transmit than one with a small data volume. To prevent the transmission time of a large stream from becoming excessive, such a stream is allocated to multiple paths, shortening its completion time through parallel transmission; a stream with a small data volume is allocated to only one path, so that its completion time is not increased by the one-way-delay differences between paths.
According to one embodiment of the invention, the distribution of the data of the stream is achieved by steps a31 and a32 and/or a 33.
And A31, acquiring the one-way time delays of all paths, and performing ascending or descending arrangement on each path according to the corresponding one-way time delay.
A32, following the order from step A31 and starting from the path with the smallest one-way delay, the one-way delay differences of adjacent paths are calculated pairwise in turn. Each time a delay difference is calculated, one round of data allocation is performed: part of the data of the current stream is allocated to the path with the smaller one-way delay in the current comparison and to every other path whose one-way delay is smaller still, and the amount of data allocated to each such path is the product of the calculated delay difference and the available bandwidth of the current stream on that path.
Because the lower limit of the transmission time on each path is the one-way time delay of the path, all paths are sequenced according to the sequence of the one-way time delay from low to high, and then the data contained in the current flow is distributed from the path with the lowest one-way time delay. For a better understanding of the present embodiment, the following description is given in conjunction with formulas and text as follows:
for example, all paths are first sorted in ascending order, and when data included in the current stream is allocated, the path is usedxTo indicate a path, wherein the subscript x is a positive integer for indicating the order of the path after the ascending order, oxTo represent the one-way delay of each path. First, the path with the first small one-way delay1And path with the second smallest one-way delay2To begin with, first put the path1One-way delay o1And path2One-way delay o2In contrast, will (o)2-o1)×bs1The amount of data of the size is allocated to the path1To balance the path1And path2The transmission time of (c). Then the path with the second smallest one-way delay is used2And the third smallest path3Comparing, respectively, the (o)3-o2)×bs1And (o)3-o2)×bs2Is assigned to path1And path2Thus balancing the path3And path1、path2The transmission time of (c). And repeating the process until the residual data of the current flow is not enough to finish the data distribution after the unidirectional time delay difference is calculated for a certain time or the finishing time of all paths for the current flow is equal. If all paths are sorted in descending order, when data included in the current stream is distributed, the data is distributed from the path arranged at the last, that is, from the path with the smallest one-way delay.
In this process, if the data volume of the current stream is less than (o_2 − o_1) × bs_1, all of its data is allocated to path_1, the path with the smallest one-way delay; if the data volume is greater than (o_2 − o_1) × bs_1, the data is distributed over multiple paths including path_1. The method of the invention thus selects suitable paths for a stream according to its data volume: a stream with little data is allocated to only one path, so that its completion time is not dragged down by other paths during transmission, while a stream with a large amount of data is allocated to multiple paths, so that it can aggregate bandwidth across them and shorten its completion time.
A33, if, after some delay-difference calculation, the remaining data of the current flow is insufficient to complete that round of allocation, the remaining data is distributed, in proportion to the bandwidth available to the current flow on each path, among the path with the smaller one-way delay in the current comparison and every other path whose one-way delay is smaller still.
Specifically, suppose that after calculating a one-way delay difference it is found that the remaining data of the current stream is insufficient to complete the allocation; that is, although some paths have not yet received data of the current stream, the amount of remaining data is smaller than the amount this round would require under the allocation rule above. For example, suppose the data volume of the current stream is greater than (o_2 − o_1) × bs_1 but less than (o_2 − o_1) × bs_1 + (o_3 − o_2) × bs_1 + (o_3 − o_2) × bs_2. After the first delay-difference calculation, (o_2 − o_1) × bs_1 worth of data is allocated to path_1; after the second calculation, the remaining data is found to be less than (o_3 − o_2) × bs_1 + (o_3 − o_2) × bs_2. If the bandwidth available to the current flow on path_1 is 1 Mbps and on path_2 is 2 Mbps, the remaining data is distributed between path_1 and path_2 in the ratio 1:2.
When the transmission times of all paths are equal, the data allocation for the current stream has already made use of every path. Ideally the data of the current stream is exactly exhausted at this point, but in practice this rarely happens; the far more common case is that the current stream still has unallocated data. In that case, the method of the invention distributes the remaining data of the current flow among the paths in proportion to the flow's available bandwidth on each path; allocating the remainder across all paths in this way lets the flow aggregate bandwidth over all of them simultaneously and shortens the completion time. For example, suppose the remaining data volume of stream s is ds, the total number of paths n is 5, and the bandwidths occupied by stream s on the five paths are bs_1, bs_2, bs_3, bs_4 and bs_5 respectively. Then in this allocation, the amount of data assigned to the ith path (i = 1, ..., 5) is

ds × bs_i / (bs_1 + bs_2 + bs_3 + bs_4 + bs_5)

In summary, in this allocation the data assigned to the paths follows the proportion bs_1 : bs_2 : bs_3 : bs_4 : bs_5.
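The proportional split of the leftover data can be sketched as follows (names are illustrative):

```python
def split_remaining(remaining, bandwidths):
    """Distribute leftover data across paths in proportion to bs_i."""
    total = sum(bandwidths)
    return [remaining * bw / total for bw in bandwidths]

# 900 units left over, five paths with bandwidths 1..5:
# the shares follow the proportion 1:2:3:4:5.
shares = split_remaining(900.0, [1.0, 2.0, 3.0, 4.0, 5.0])
```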
According to an embodiment of the present invention, the present invention designs an algorithm for path allocation of data contained in a current stream, and the pseudo code of the algorithm is as follows:
procedure SCHEDULESTREAM
sortedList ← sort(paths, by one-way delay, ascending)
(All paths are sorted in order of one-way delay from small to large, and the sorted result is recorded in sortedList.)
initialize d_i ← 0, i ∈ [1, 2, ..., n]
(Initialize the amount of data of stream s allocated on every path to 0.)
D′ ← D
for path i in sortedList[1, ..., n−1] do
    ogap ← o_{i+1} − o_i
    (Starting from the path with the lowest one-way delay, calculate the one-way delay differences between adjacent paths pairwise in turn.)
    if D′ ≥ Σ_{j=1}^{i} ogap × bs_j then
        d_j ← d_j + ogap × bs_j for j ∈ [1, ..., i];  D′ ← D′ − Σ_{j=1}^{i} ogap × bs_j
        (If the remaining data of stream s is at least Σ_{j=1}^{i} ogap × bs_j, the data is sufficient to level the current difference in transmission time: each of paths 1 through i is allocated the specified amount ogap × bs_j of data.)
    else
        d_j ← d_j + D′ × bs_j / Σ_{k=1}^{i} bs_k for j ∈ [1, ..., i];  D′ ← 0
        (If the remaining data of stream s is less than Σ_{j=1}^{i} ogap × bs_j, the data is not sufficient to level the current difference: paths 1 through i are allocated data proportionally, each receiving the specified amount D′ × bs_j / Σ_{k=1}^{i} bs_k.)
if D′ > 0 then
    d_j ← d_j + D′ × bs_j / Σ_{k=1}^{n} bs_k for j ∈ [1, ..., n]
    (If unallocated data remains after the loop ends, the remaining data is allocated to each path in proportion to the bandwidth available to the current flow on that path, each path receiving D′ × bs_j / Σ_{k=1}^{n} bs_k.)
end procedure
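A minimal Python sketch of the SCHEDULESTREAM procedure above; function and variable names are illustrative, and paths are assumed to be pre-sorted by ascending one-way delay:

```python
def schedule_stream(total_data, delays, bandwidths):
    """Water-filling allocation: delays[i] is o_i, bandwidths[i] is bs_i.

    Returns d, the amount of data assigned to each path.
    """
    n = len(delays)
    d = [0.0] * n
    remaining = total_data
    for i in range(n - 1):
        ogap = delays[i + 1] - delays[i]
        need = sum(ogap * bandwidths[j] for j in range(i + 1))
        if remaining >= need:
            # Enough data to level paths 1..i+1 up to the next path's delay.
            for j in range(i + 1):
                d[j] += ogap * bandwidths[j]
            remaining -= need
        else:
            # Not enough: split what is left in proportion to bandwidth.
            bw_sum = sum(bandwidths[: i + 1])
            for j in range(i + 1):
                d[j] += remaining * bandwidths[j] / bw_sum
            return d
    # All delay gaps levelled; spread the remainder over every path.
    bw_sum = sum(bandwidths)
    for j in range(n):
        d[j] += remaining * bandwidths[j] / bw_sum
    return d
```

With two equal-bandwidth paths of delays 1 and 3, a small flow (1 unit) stays entirely on the low-delay path, while a larger flow (4 units) is split so that both paths finish at the same time.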
A4, each path transmits data thereon based on the assignment result.
Compared with the LRF scheduler, which always selects the path with the smallest RTT, and the SA-ECF scheduler, which always selects the path with the smallest estimated transmission time and transmits until the good path's send window is filled, the method of the invention allocates paths before a stream begins transmitting and can use multiple paths from the initial stage of transmission. This avoids the single-path burst traffic that arises when a stream that should be carried on several paths contends for only one path during part of the transmission.
According to an embodiment of the present invention, in step a4, the transmission of all streams is completed through multiple transmission rounds, wherein each path transmits data packets in turn in each transmission round.
After confirming that all simultaneously arriving flows have been allocated paths, the invention transmits the data contained in the flows over multiple transmission rounds. Because the number of flows allocated to each path may differ, the invention provides that within one transmission round each path transmits in turn, and the rounds are repeated until all flows have been transmitted.
According to an embodiment of the present invention, each path sends the data allocated to it based on the allocation result, and before sending, each path encapsulates data packets, the number of encapsulated data packets being equal to the number of streams on the path. That is, before a path sends data it first packs the data into data packets, and the path then sends the packets so formed.
According to one embodiment of the present invention, encapsulating each packet comprises: selecting one stream from all the streams whose data is allocated to the path, and packing part or all of that stream's data on the path into a data packet, within the limit of the packet's allowed byte count. This process is repeated until the number of packets on the path equals the number of streams whose data is allocated to the path; thus in one transmission each path sends as many packets as there are streams on it. The stream is chosen by random selection according to probability: the probability of each flow being selected is its priority divided by the sum of the priorities of all flows on the path.
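The priority-weighted random selection of a stream can be sketched as follows (names are illustrative):

```python
import random

def pick_stream(streams, rng=random):
    """Pick one stream with probability priority / sum(all priorities).

    streams: list of (name, priority) pairs for the flows on the path.
    """
    names = [name for name, _ in streams]
    weights = [pr for _, pr in streams]
    return rng.choices(names, weights=weights, k=1)[0]
```

Over many draws, a priority-255 flow sharing a path with a priority-8 flow is selected about 255/263 of the time, so it fills a corresponding share of the path's packets.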
Across multiple transmission rounds, when packing the last data packet of a stream, the ideal case is that the remaining bytes exactly fill the packet; but because of mechanisms such as the protocol's congestion control, the packet size is not constant, so this state is hard to achieve. To improve transmission efficiency, the prior art packs the remaining bytes of several streams into a single data packet, but with that approach the loss of one packet can simultaneously affect the transmission of every stream it carries.
The probability of each flow being selected is determined by its priority: the higher a flow's priority, the more likely it is to be selected, and this probability is proportional to its priority value. The actual bandwidth a flow occupies on the path is therefore approximately its selection probability multiplied by the path's bandwidth, so that data on the same path shares bandwidth in proportion to the priorities of the flows it belongs to.
With the stream scheduling method of the invention, path allocation is performed before transmission begins. Against the technical background of the MPQUIC protocol, the receiving end divides the buffer space originally shared by multiple streams among the individual streams, i.e., it provides a separate receive buffer for each stream. Consequently, while all streams are transmitted over multiple transmission rounds, even if one stream fills its buffer and its packets become blocked, the transmission of the other streams is unaffected, solving the problem of one blocked stream blocking all streams.
To verify the practical effect of the invention, the inventors designed the following three examples. In each example, PStream denotes the proposed priority-based stream scheduling method; LRF denotes the prior-art method that shortens transmission time by always selecting the path with the smallest RTT, i.e., the method adopted by the LRF scheduler; and SA-ECF denotes the prior-art method that performs path allocation by estimating only the transmission time of a flow, without considering the flow's characteristics, i.e., the method adopted by the SA-ECF scheduler.
In comparing the three methods, first, a network as shown in fig. 2 is constructed, which has two paths and a performance difference between the two paths. The setting can simulate the network characteristics of mobile application of the mobile phone end for connecting WiFi and data traffic at the same time. Then, it is set to transmit two streams having different application characteristics at the same time. The two streams represent two different web page elements, respectively, with their respective priority settings referring to the settings in the Safari browser. The first stream A is an HTML document, 26KB in size, 255 in priority; the second stream B is an image file of size 900KB and priority 8.
According to example one of the present invention, the method is used for comparing the completion time of two streams in three different stream scheduling methods in the case that there is a bandwidth difference between the two paths.
In this example, the bandwidth of path 1 is set to 1Mbps (megabits per second), the bandwidth of path 2 is set to 1Mbps, 20Mbps, 40Mbps, 60Mbps, 80Mbps, and 100Mbps in this order, and the RTTs of both paths are set to 1ms (millisecond) to increase the bandwidth heterogeneity of both paths, resulting in the results shown in fig. 3 and 4.
Based on the results shown in fig. 3 and 4, the priority-based flow scheduling method of the invention is, in general, superior to the other two methods for the transmission of critical flows. The method adopted by the LRF scheduler does not consider the bandwidth heterogeneity of the paths; since the RTTs of the two paths are the same in this example, packets are distributed evenly over both paths, which greatly penalizes the high-priority critical flow A: when A's data from the two paths is aggregated at the receiving end, the final completion time is limited by the low-bandwidth path 1. The method adopted by the SA-ECF scheduler first fills the congestion window of the better path (path 2) and leaves path 1 idle for a while before transmitting packets on it; the burst traffic on path 2 not only forces the receiving end to use a larger buffer but also causes more severe packet reordering, prolonging the completion time of the stream. In the priority-based method of the invention, streams are pre-allocated to paths and use the scheduled paths from the very start of transmission, reducing burst traffic and shortening completion time; and because priority is considered during scheduling, the completion time of critical flow A is clearly faster than with the other two methods, a notable advance over the prior art in critical-flow transmission.
According to the second example of the present invention, the method is used to compare the completion times of two flows in three different flow scheduling methods in the case that there is an RTT difference between the two paths.
In this example, the bandwidths of the two paths are both set to 1Mbps (megabit per second), the RTT of path 1 is set to 1ms, and the RTT of path 2 is set to 1ms, 80ms, 160ms, 240ms, 320ms, and 400ms in sequence, so as to increase the RTT heterogeneity of the two paths, and obtain the results shown in fig. 5 and fig. 6.
Based on the results shown in fig. 5 and fig. 6, the performance difference among the three scheduling methods is generally less significant on flow B than on flow A, because under RTT-heterogeneous paths the scheduling decision has a greater effect on the completion time of small flows. For flow A, the invention has a clear advantage in shortening completion time, and this advantage gradually narrows as the RTT difference between the paths grows, because the method adopted by the SA-ECF scheduler uses the worse path less once one path has a clear advantage. For stream B, the performance difference is not significant: the bandwidth difference between the paths is small, so the SA-ECF scheduler's tendency to cause burst traffic does not show.
According to example three of the present invention, the method is used to compare the completion times of two flows in three different flow scheduling methods in the case where there is a difference between RTT and bandwidth in the two paths.
In this example, the bandwidth and RTT of path 1 are set to 1 Mbps and 1 ms respectively, and the bandwidth and RTT of path 2 are set in sequence to 1 Mbps and 1 ms, 20 Mbps and 80 ms, 40 Mbps and 160 ms, 60 Mbps and 240 ms, 80 Mbps and 320 ms, and 100 Mbps and 400 ms, giving the results shown in fig. 7 and 8.
Based on the results shown in fig. 7 and 8, the priority-based flow scheduling method of the invention improves the completion time of both flows very significantly compared with the other two methods. This is because it matches the characteristics of a flow to the characteristics of the paths: flow A, with high priority and a small data volume, prefers the low-RTT path, while flow B, with low priority and a large data volume, prefers the high-bandwidth path. In this case, the two flows are transmitted mostly on different paths, which reduces resource contention between them on the same path and effectively shortens both completion times at once. Under this degree of path heterogeneity, the other two scheduling methods lose their advantage because they do not consider the match between flow type and path characteristics, causing unnecessary inter-flow blocking when flows compete for the same path.
In summary, compared with the prior art, which does not consider priority, the invention takes into account the priority requirements of the application-layer content carried by each stream and schedules accordingly at the transport layer. Combining flow priority with the scheduling method has two effects: first, high-priority critical flows precede other flows in the scheduling order; second, when a critical flow shares a path with other flows, its bandwidth resources are not excessively preempted. In addition, in a multipath scenario, the priority-based method can use multiple paths simultaneously from the initial stage of transmission, reducing the probability of receiver-side reordering caused by burst traffic on a single path.
It should be noted that, although the steps are described in a specific order, the steps are not necessarily performed in the specific order, and in fact, some of the steps may be performed concurrently or even in a changed order as long as the required functions are achieved.
The present invention may be a system, method and/or computer program product. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied therewith for causing a processor to implement various aspects of the present invention.
The computer readable storage medium may be a tangible device that retains and stores instructions for use by an instruction execution device. The computer readable storage medium may include, for example, but is not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing.
Having described embodiments of the present invention, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (9)

1. A method for flow scheduling, comprising:
s1, for the request/response, initiating a corresponding stream with priority information and corresponding webpage element size information;
s2, calculating the available bandwidth of the current flow allocated to each path according to the priority of the current flow and the priority of the flows already allocated to each path, wherein flows are handled in a first-come-first-served manner, and a plurality of flows arriving at the same time are processed in order of priority from high to low; wherein calculating the available bandwidth of the current flow allocated to each path comprises: s21, for each path, calculating the sum of the priorities of the flows whose data is allocated to the path and the priority of the current flow; s22, calculating the available bandwidth of the current flow on the path according to the ratio of the current flow's priority to the priority sum, the available bandwidth's share of the path's total bandwidth being equal to that ratio;
s3, according to the available bandwidth of the current flow on each path and the one-way time delay of each path, the data volume of the current flow is distributed to one or more paths in a balanced way, so that the total transmission time of the data volume of the same flow distributed on different paths is minimum;
s4, each path transmits data thereon based on the assignment result.
2. The flow scheduling method according to claim 1, wherein the step S3 includes:
s31, acquiring the one-way time delays of all paths, and arranging each path according to the corresponding one-way time delay;
and S32, sequentially computing the unidirectional delay differences of the adjacent paths pairwise from the path with the minimum unidirectional delay, performing data distribution once when the unidirectional delay difference is computed, distributing partial data contained in the current flow to the path with smaller unidirectional delay in current computation and other paths with all unidirectional delays smaller than the path during data distribution, wherein the distributed data volume of each path is determined by the product of the calculated unidirectional delay difference and the available bandwidth of the current flow on the path.
3. The flow scheduling method according to claim 2, wherein the step S3 further comprises:
and S33, after the unidirectional delay difference is calculated for a certain time, if the residual data of the current flow is not enough to finish the data distribution, proportionally distributing the residual data to the path with smaller unidirectional delay in the current calculation and other paths with all unidirectional delays smaller than the path according to the bandwidth available on each path of the current flow.
4. The method according to claim 1, wherein in step S4, the transmission of all streams is completed through multiple transmission rounds, wherein each path transmits data packets in turn in each transmission round.
5. The stream scheduling method of claim 4, wherein before sending the data packet in each path, encapsulating the data packet, and the number of encapsulated data packets is equal to the number of streams in the path.
6. The flow scheduling method of claim 5, wherein encapsulating each packet comprises: and selecting one stream from all the streams to which the data allocated to the path belongs, and selecting partial data or all data of the stream on the path to be packaged into a data packet within the range of the allowed number of bytes of the data packet.
7. The stream scheduling method according to claim 6, wherein one stream is selected from all streams to which the data allocated to the path belongs, in a manner of random selection according to probability; the probability of each flow being selected is the priority of the flow divided by the sum of the priorities of all the flows on the path.
8. A stream scheduler comprising a memory in which a computer program is stored and a processor that executes the computer program to implement the steps of the method of any of claims 1 to 7.
9. A non-volatile storage medium, characterized in that a computer program is stored, which program is adapted to carry out the method of any one of claims 1 to 7 when executed.
CN201911375677.2A 2019-12-27 2019-12-27 Flow scheduling method Active CN111211996B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911375677.2A CN111211996B (en) 2019-12-27 2019-12-27 Flow scheduling method


Publications (2)

Publication Number Publication Date
CN111211996A CN111211996A (en) 2020-05-29
CN111211996B true CN111211996B (en) 2022-03-18

Family

ID=70787697


Country Status (1)

Country Link
CN (1) CN111211996B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112672190B (en) * 2020-12-22 2023-04-25 中国科学院计算技术研究所 A minimum tariff MPQUIC packet scheduling method and system
CN113783942B (en) * 2021-08-24 2023-01-31 中国科学院计算技术研究所 MPQUIC data packet fast transmission method and system based on priority classification queue
CN114390043A (en) * 2021-12-29 2022-04-22 华南理工大学 Method for achieving multi-path QUIC transmission optimization based on enterprise-level cloud storage system
CN118474041A (en) * 2024-04-30 2024-08-09 新华三技术有限公司 Multi-stream bandwidth dynamic control method, device and equipment based on user datagram
CN119211161B (en) * 2024-10-16 2025-04-04 杭州映云科技有限公司 Method, system, device and medium for dispatching bandwidth of transmission stream
CN119449693B (en) * 2025-01-10 2025-04-08 深圳方圆宝信息科技服务有限公司 Multipath optimization method, device and system for network route

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103329608A (en) * 2011-02-01 2013-09-25 华为技术有限公司 Method and apparatus for scheduling transmission of data streams over a common communication link

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100531134C (en) * 2006-05-17 2009-08-19 华为技术有限公司 Method, device and system for implementing multi-route transmission
US9860179B2 (en) * 2014-10-23 2018-01-02 Vivint, Inc. Data-connection aggregation
US10432540B2 (en) * 2015-11-03 2019-10-01 Comcast Cable Communications, Llc Determining quality information for a route
CN107770875B (en) * 2017-10-20 2021-07-16 中国航空无线电电子研究所 Method for mixing MAC protocol of aviation ad hoc network

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103329608A (en) * 2011-02-01 2013-09-25 华为技术有限公司 Method and apparatus for scheduling transmission of data streams over a common communication link

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
"A Multipath QUIC Scheduler for Mobile HTTP/2"; Jing Wang et al.; 《Proceedings of the 3rd Asia-Pacific Workshop on Networking (APNet)》; 2019-08-18; entire document *
"FStream: Flexible Stream Scheduling and Prioritizing in Multipath-QUIC"; Xiang Shi et al.; 《2019 IEEE 25th International Conference on Parallel and Distributed Systems (ICPADS)》; 2019-12-06; Section 2 *
"Design of Multipath Concurrent Transmission Scheduling and Retransmission Algorithms in Heterogeneous Wireless Networks"; Huang Qi; 《China Master's Theses Full-text Database, Information Science and Technology》; 2015-08-31; pp. 22-27 *
Xiang Shi et al. "FStream: Flexible Stream Scheduling and Prioritizing in Multipath-QUIC". 《2019 IEEE 25th International Conference on Parallel and Distributed Systems (ICPADS)》. 2019, Section 2. *

Also Published As

Publication number Publication date
CN111211996A (en) 2020-05-29

Similar Documents

Publication Publication Date Title
CN111211996B (en) Flow scheduling method
Shi et al. Pstream: Priority-based stream scheduling for heterogeneous paths in multipath-quic
Rabitsch et al. A stream-aware multipath QUIC scheduler for heterogeneous paths
CN105191209B (en) Method and device for scheduling video-on-demand stream and best-effort stream in same frequency band
Høiland-Jørgensen et al. Ending the anomaly: Achieving low latency and airtime fairness in WiFi
Vamanan et al. Deadline-aware datacenter tcp (d2tcp)
Blanquer et al. Fair queuing for aggregated multiple links
US20030223428A1 (en) Method and apparatus for scheduling aggregated resources
CN106685515B (en) Method and device for allocating satellite resources in space information network
WO2013182122A1 (en) Multiflow service simultaneous-transmission control method and device
Xing et al. A stream-aware MPQUIC scheduler for HTTP traffic in mobile networks
Oh et al. Constraint-based proactive scheduling for MPTCP in wireless networks
CN113162789A (en) Method, device, equipment, system and storage medium for adjusting service level
Irazabal et al. Dynamic buffer sizing and pacing as enablers of 5G low-latency services
US9509620B2 (en) Deadline-aware network protocol
CN101436996B (en) Method for scheduling packet feedback based on short time fairness
US20180115498A1 (en) Systems and methods for adaptive credit-based flow
CN109614215A (en) Stream scheduling method, apparatus, device and medium based on deep reinforcement learning
CN110535705B (en) A Service Function Chain Construction Method for Adaptive User Delay Requirements
CN114980345A (en) Non-ground network service priority calculation method, scheduling method, device and medium
CN118509374A (en) Multipath congestion control method, device, chip, network interface card and equipment
CN116761211B (en) Data packet scheduling method, device, equipment and storage medium based on multipath transmission
CN106992944A (en) A Resource Mapping Method in Wireless Virtual Network
CN117714381A (en) Fair congestion control method and equipment with flow awareness under SDN data center network
Shi et al. PriorityBucket: a multipath-QUIC scheduler on accelerating first rendering time in page loading

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant