
CN107786371B - A data acceleration method, device and storage medium - Google Patents


Info

Publication number
CN107786371B
Authority
CN
China
Prior art keywords
network node
target network
acceleration
bandwidth
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710931628.7A
Other languages
Chinese (zh)
Other versions
CN107786371A (en)
Inventor
袁松翔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
MIGU Video Technology Co Ltd
Original Assignee
MIGU Video Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by MIGU Video Technology Co Ltd filed Critical MIGU Video Technology Co Ltd
Priority to CN201710931628.7A priority Critical patent/CN107786371B/en
Publication of CN107786371A publication Critical patent/CN107786371A/en
Application granted granted Critical
Publication of CN107786371B publication Critical patent/CN107786371B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08Configuration management of networks or network elements
    • H04L41/0803Configuration setting
    • H04L41/0823Configuration setting characterised by the purposes of a change of settings, e.g. optimising configuration for enhancing reliability
    • H04L41/083Configuration setting characterised by the purposes of a change of settings, e.g. optimising configuration for enhancing reliability for increasing network speed
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L43/00Arrangements for monitoring or testing data switching networks
    • H04L43/08Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
    • H04L43/0876Network utilisation, e.g. volume of load or congestion level

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Environmental & Geological Engineering (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention discloses a data acceleration method. The method includes: determining the bandwidth full-load rate of a target network node; and, when the bandwidth full-load rate reaches a preset load threshold of the target network node, sending the target network node a notification message for adjusting the acceleration strategy, the notification message including at least an acceleration mode. The invention also discloses a data acceleration apparatus and a storage medium.


The invention discloses a data acceleration method. The method includes: determining a bandwidth full-load rate of a target network node; and, when the bandwidth full-load rate reaches a preset load threshold of the target network node, sending the target network node a notification message for adjusting the acceleration strategy, where the notification message includes at least an acceleration mode. The invention also discloses a data acceleration device and a storage medium.


Description

Data acceleration method, device, and storage medium
Technical Field
The invention relates to a data processing technology, in particular to a data acceleration method, a data acceleration device and a storage medium.
Background
Nowadays, the Internet mainly uses the Transmission Control Protocol (TCP) and/or the Internet Protocol (IP) for network transmission. Because the two protocols were designed very early (1983), their flow-control and congestion-control algorithms no longer match the current network environment, which results in wasted network resources.
TCP acceleration techniques include loss-based TCP acceleration and delay-based TCP acceleration.
Loss-based TCP acceleration, the approach used by mainstream TCP, judges congestion from packet loss and adjusts the sending rate accordingly; this often causes large numbers of packets to be dropped, which can aggravate node congestion and block transmission.
Delay-based TCP acceleration judges the degree of congestion from delay variation and adjusts the sending rate accordingly; it cannot cope with network paths whose delay varies widely, so the sending rate is reduced unnecessarily.
Therefore, how to improve TCP transmission efficiency and increase network resource utilization is a problem the Internet urgently needs to solve.
Disclosure of Invention
In order to solve the existing technical problem, embodiments of the present invention are expected to provide a data acceleration method, apparatus, and storage medium, which can solve the problem in the prior art that the TCP transmission efficiency cannot be improved under the condition of full bandwidth data.
The technical scheme of the embodiment of the invention is realized as follows:
according to an aspect of the embodiments of the present invention, there is provided a method for accelerating data, the method including:
determining the bandwidth full load rate of a target network node;
and when the bandwidth full load rate reaches a preset load threshold value of the target network node, sending a notification message for adjusting an acceleration strategy to the target network node, wherein the notification message at least comprises an acceleration mode.
In the above scheme, the bandwidth full-load rate of the target network node is determined by the following formula:
Rv = (Tn × A) / (V × α)
wherein Rv represents the bandwidth full-load rate, Tn represents the current bandwidth of the target network node, A represents the value corresponding to the availability of the target network node, V represents the peak bandwidth capacity of the target network node, and α represents the value corresponding to the bandwidth availability factor of the target network node.
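As an illustrative sketch (the function and variable names are assumptions, not from the patent), the formula can be written as:

```python
def bandwidth_full_load_rate(tn: float, a: int, v: float, alpha: float) -> float:
    """Rv = (Tn * A) / (V * alpha).

    tn    -- current bandwidth Tn of the node (dynamic value)
    a     -- availability A: Boolean, 1 for normal, 0 for abnormal
    v     -- capacity peak V of the node
    alpha -- bandwidth availability factor, 0 < alpha < 1
    """
    return (tn * a) / (v * alpha)

# A node serving 400 units of a 1000-unit link with alpha = 0.8:
print(bandwidth_full_load_rate(tn=400.0, a=1, v=1000.0, alpha=0.8))  # 0.5
```

Because A is Boolean, a faulty node (A = 0) gives Rv = 0, which matches the fault check the scheduling center applies later in the text.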
According to another aspect of the embodiments of the present invention, there is provided a method for accelerating data, the method including:
receiving a notification message for adjusting an acceleration strategy, which is sent by a dispatching center, wherein the notification message at least comprises an acceleration mode;
monitoring a first network parameter of a target network node according to the notification message, wherein the first network parameter at least comprises real-time bandwidth data, network resource utilization data and Transmission Control Protocol (TCP) connection data of the target network node;
and according to the first network parameter and the acceleration mode, performing parameter configuration on the target network node to adjust the acceleration strategy.
In the foregoing scheme, the real-time bandwidth data includes at least one of: the actual network card throughput traffic, the actual network card packet throughput, and the number of requests scheduled by the load balancing device (SLB) of the target network node;
the network resource utilization data includes at least one of: the TCP connection count, the TCP half-open connection count, and the connection queue length of the target network node;
the TCP connection data includes at least one of: the sliding window size, the congestion window size, the TCP packet retransmission proportion, and the transmission rate of the current connection of the target network node;
correspondingly, according to the first network parameter and the acceleration mode, performing parameter configuration on the target network node includes:
calculating a first acceleration factor for parameter configuration of the target network node from the actual network card throughput traffic, the actual network card packet throughput, the number of requests scheduled by the load balancing device (SLB), the TCP connection count, the TCP half-open connection count, and the connection queue length, by using the following formula:
first acceleration factor = [pat-value, bw-value, ct-value]
wherein pat-value represents the value corresponding to the acceleration mode;
bw-value = (1 − network card traffic utilization) × (1 − network card packet throughput rate) × (1 − SLB request count / designed request capacity) / (network card traffic utilization × network card packet throughput rate × (SLB request count / designed request capacity));
ct-value = (1 − (TCP connection count + 0.5 × TCP half-open connection count) / connection queue length) / ((TCP connection count + 0.5 × TCP half-open connection count) / connection queue length);
calculating a second acceleration factor for parameter configuration of the target network node from the sliding window size, the congestion window size, the TCP packet retransmission proportion, and the transmission rate, by using the following formula:
second acceleration factor = [S-win, TCP-ret, Bitrate]^T
wherein S-win represents the sliding window size, TCP-ret represents the number of retransmitted TCP packets divided by the number of TCP packets transmitted to completion, Bitrate represents the transmission rate, and ^T denotes the transpose;
and according to the first acceleration factor and the second acceleration factor, performing parameter configuration on the target network node.
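The two factor computations above can be sketched as follows; the function names, argument names, and the use of plain lists for the vectors are illustrative assumptions, not part of the patent:

```python
def first_acceleration_factor(pat_value: float, nic_util: float,
                              pkt_rate: float, slb_ratio: float,
                              tcp_conn: int, half_open: int,
                              queue_len: int) -> list:
    """[pat-value, bw-value, ct-value] for a node.

    nic_util  -- network card traffic utilization (0..1)
    pkt_rate  -- network card packet throughput rate (0..1)
    slb_ratio -- SLB request count / designed request capacity (0..1)
    """
    # bw-value: product of the three remaining-capacity terms over the
    # product of the three used-capacity terms
    bw_value = ((1 - nic_util) * (1 - pkt_rate) * (1 - slb_ratio)) / (
        nic_util * pkt_rate * slb_ratio)
    # ct-value: remaining vs. used share of the connection queue;
    # half-open connections are weighted by 0.5
    used = (tcp_conn + 0.5 * half_open) / queue_len
    ct_value = (1 - used) / used
    return [pat_value, bw_value, ct_value]


def second_acceleration_factor(s_win: float, tcp_ret: float,
                               bitrate: float) -> list:
    """[S-win, TCP-ret, Bitrate]^T, kept here as a flat list."""
    return [s_win, tcp_ret, bitrate]
```

With every utilization term at 0.5 and the queue half full, both bw-value and ct-value come out as 1, i.e. remaining and used capacity are balanced.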
In the foregoing solution, performing parameter configuration on the target network node according to the first network parameter and the acceleration mode further includes:
comparing the sliding window size with a first preset threshold to generate a first comparison result, and determining, according to the first comparison result, a second network parameter for increasing or decreasing the sliding window size;
or, comparing the TCP packet retransmission proportion with a second preset threshold to generate a second comparison result, and determining, according to the second comparison result, a second network parameter for adjusting the TCP packet retransmission proportion;
or, detecting whether the transmission rate reaches a preset target transmission rate and generating a detection result, and, when the detection result indicates that the transmission rate reaches the preset target transmission rate, determining a second network parameter for adjusting the transmission rate;
and performing parameter configuration on the target network node according to the second network parameter.
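A minimal sketch of the three checks above; the comparison directions (e.g. growing the window when it is below the threshold) are assumptions for illustration, since the text only says a comparison result is generated and a second network parameter determined:

```python
def choose_second_network_parameter(s_win: float, first_threshold: float,
                                    tcp_ret: float, second_threshold: float,
                                    bitrate: float, target_rate: float):
    """Pick which parameter the second network parameter should adjust."""
    if s_win < first_threshold:
        # first comparison result: window smaller than threshold
        return ("sliding_window", "increase")
    if tcp_ret > second_threshold:
        # second comparison result: too many retransmissions
        return ("retransmission_proportion", "reduce")
    if bitrate >= target_rate:
        # detection result: target transmission rate reached
        return ("transmission_rate", "adjust")
    return None  # no adjustment needed
```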
According to another aspect of the embodiments of the present invention, there is also provided an apparatus for accelerating data, the apparatus including:
the determining unit is used for determining the bandwidth full load rate of the target network node;
a sending unit, configured to send a notification message for adjusting an acceleration policy to the target network node when the bandwidth full load rate reaches a preset load threshold of the target network node, where the notification message at least includes an acceleration mode.
In the foregoing solution, the determining unit specifically determines the bandwidth full-load rate of the target network node by using the following formula:
Rv = (Tn × A) / (V × α)
wherein Rv represents the bandwidth full-load rate, Tn represents the current bandwidth of the target network node, A represents the value corresponding to the availability of the target network node, V represents the peak bandwidth capacity of the target network node, and α represents the value corresponding to the bandwidth availability factor of the target network node.
According to another aspect of the embodiments of the present invention, there is also provided an apparatus for accelerating data, the apparatus including:
a receiving unit, configured to receive a notification message sent by a scheduling center for adjusting an acceleration policy, where the notification message at least includes an acceleration mode;
a monitoring unit, configured to monitor a first network parameter of a target network node according to the notification message, where the first network parameter at least includes real-time bandwidth data, network resource utilization data, and TCP connection data of the target network node;
a configuration unit, configured to perform parameter configuration on the target network node according to the first network parameter and the acceleration mode, so as to adjust the acceleration policy.
According to another aspect of the embodiments of the present invention, there is also provided an apparatus for accelerating data, the apparatus including: a memory and a processor;
wherein the memory is to store a computer program operable on the processor;
the processor, when executing the computer program, is adapted to perform the steps of the method of any of claims 1 to 5.
According to another aspect of embodiments of the present invention, there is also provided a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, carries out the steps of the method according to any one of claims 1 to 5.
According to the data acceleration method, apparatus, and storage medium provided by the embodiments of the present invention, for a schedulable CDN node, the scheduling system can monitor the node's traffic changes and adjust the TCP acceleration strategy and algorithm rules according to the monitoring result, thereby improving TCP transmission efficiency and increasing network resource utilization.
Drawings
FIG. 1 is a flow chart illustrating a data acceleration method according to an embodiment of the present invention;
FIG. 2 is a flow chart illustrating another data acceleration method according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a data acceleration apparatus according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of another data acceleration apparatus according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a data acceleration apparatus according to another embodiment of the present invention.
Detailed Description
The following detailed description of embodiments of the invention refers to the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating the present invention, are given by way of illustration and explanation only, not limitation.
FIG. 1 is a flow chart illustrating a data acceleration method according to an embodiment of the present invention; as shown in fig. 1, the method includes:
step 101, determining the bandwidth full load rate of a target network node;
in the embodiment of the invention, the method is mainly applied to a scheduling center deployed in a Content Delivery Network (CDN) system architecture, the scheduling center maintains a CDN whole-Network flow load condition table and a scheduling information table, and updates the flow load condition table and the scheduling information table in real time according to flow conditions reported by Network nodes and scheduling conditions of the scheduling center.
Specifically, in the scheduling center, a combination condition of two or more level thresholds needs to be set in advance, when contents in a traffic load condition table and a scheduling information table change, whether the contents in the traffic load condition table and the scheduling information table reach the threshold of the combination condition is checked, and when the contents in the traffic load condition table and the scheduling information table reach the threshold of the combination condition, a notification message for adjusting an acceleration policy is sent to a target network node that needs to adjust a data rate in a message interface manner, where the notification message at least includes an acceleration mode.
Specifically, the information in the traffic load table includes:
1. Network node ID: an identification code that uniquely identifies the target network node. For example, it may be a Network Identification (NID).
2. Capacity peak (V) of each network node: identifies the bandwidth the network node can provide, typically the smaller of the node's egress bandwidth and the server capacity bandwidth.
3. Bandwidth availability factor (α) of each network node: 0 < α < 1, a parameter determined by factors such as the price of the node's bandwidth and the quality of its egress.
4. Current bandwidth of each network node (Tn): a dynamic value, computed by collecting the current bandwidth of each server network card in the node.
5. Availability of each network node (A): Boolean; 1 for normal, 0 for abnormal.
In the embodiment of the present invention, the information in the scheduling information table specifically includes:
1. Area ID: identifies the area to which this entry in the scheduling information table belongs. It is analogous to a process ID (PID) in an operating system.
2. Predicted bandwidth peak (Ve) of the area: the most recent likely bandwidth peak of the area, calculated from the area's historical data over a certain number of days.
3. Operator ID: identifies the operator to which the area belongs; also known as a Group ID (GID), a unique identifier for a user group.
Here, users of the same type are placed in the same group; for example, all system administrators can be placed in an admin group, which makes it easy to assign permissions, such as marking important files as readable and writable by all admin-group users.
Each user has a unique user ID, and each user group has a unique group ID.
4. Scheduling node ID (MIDi): the ID(s) of the node(s) scheduled for the area; there may be more than one.
5. Backup scheduling node ID (BIDj): the ID of the backup scheduling node used when the main scheduling node fails.
6. Scheduling weight (Eij): the proportion of traffic to be dispatched to node MIDi or BIDj, updated periodically according to parameters such as Ve and Rv.
7. Actual scheduling weight (Pij): the proportion of traffic currently dispatched to node MIDi or BIDj.
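The two tables above can be sketched as simple records; the field names and types are illustrative assumptions, not part of the patent:

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class TrafficLoadEntry:
    nid: str       # 1. network node ID (NID)
    v: float       # 2. capacity peak V
    alpha: float   # 3. bandwidth availability factor, 0 < alpha < 1
    tn: float      # 4. current bandwidth Tn (dynamic)
    a: int         # 5. availability A: 1 normal, 0 abnormal


@dataclass
class SchedulingInfoEntry:
    area_id: str   # 1. area ID
    ve: float      # 2. predicted bandwidth peak Ve for the area
    gid: str       # 3. operator / group ID (GID)
    mid: List[str] = field(default_factory=list)         # 4. scheduling node IDs (MIDi)
    bid: List[str] = field(default_factory=list)         # 5. backup node IDs (BIDj)
    eij: Dict[str, float] = field(default_factory=dict)  # 6. scheduling weights Eij
    pij: Dict[str, float] = field(default_factory=dict)  # 7. actual scheduling weights Pij
```

A TrafficLoadEntry carries everything needed to evaluate the Rv formula for one node.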
In the embodiment of the invention, the dispatching center determines the bandwidth full-load rate of the target network node through the following formula:
Rv = (Tn × A) / (V × α)
wherein Rv represents the bandwidth full-load rate, Tn represents the current bandwidth of the target network node, A represents the value corresponding to the availability of the target network node, V represents the peak bandwidth capacity of the target network node, and α represents the value corresponding to the bandwidth availability factor of the target network node.
And step 102, when the bandwidth full load rate reaches a preset load threshold of the target network node, sending a notification message for adjusting an acceleration strategy to the target network node, wherein the notification message at least comprises an acceleration mode.
In the embodiment of the present invention, the scheduling center traverses the area IDs and performs bandwidth detection on each area that has a scheduling policy for node N. When the detection result shows that the Rv of node N is 0, the node is faulty: the scheduling information table is traversed and the node's Pij is set to 0. The actual scheduling weight of node N is then assigned to the other scheduling nodes in the area according to a chosen algorithm (e.g. even distribution); if the Rv of those nodes is greater than or equal to 1, the weight is assigned to the backup scheduling node instead. The adjusted Tc value of each receiving node (updated as the scheduling changes) is incremented by: (weight scheduled to this node) × Ve.
When the scheduling center traverses the area IDs and performs bandwidth detection on each area that has a scheduling policy for node N, and the detection result shows that the Rv of node N is greater than or equal to 1, the node is full: the scheduling information table is traversed and the node's Pij is reduced by 10%. The weight removed from node N is distributed to the other scheduling nodes in the area according to a chosen algorithm (e.g. even distribution); if the Rv of those nodes is greater than or equal to 1, the weight is assigned to the backup scheduling node instead. The adjusted Tc value of each receiving node is incremented by: (weight scheduled to this node) × Ve.
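A sketch of the fault-handling step, assuming an even redistribution (the text offers even distribution only as one example) and illustrative names:

```python
def handle_faulty_node(pij: dict, faulty: str, rv: dict, backup: str) -> dict:
    """Zero the faulty node's actual scheduling weight (Pij) and spread it
    evenly over the other, non-full scheduling nodes; if every other node
    is full (Rv >= 1), hand the freed weight to the backup scheduling node."""
    weights = dict(pij)
    freed = weights[faulty]
    weights[faulty] = 0.0
    # nodes that still have headroom (Rv < 1) can absorb the freed weight
    targets = [n for n in weights if n != faulty and rv.get(n, 1.0) < 1.0]
    if targets:
        share = freed / len(targets)
        for n in targets:
            weights[n] += share
    else:
        weights[backup] = weights.get(backup, 0.0) + freed
    return weights
```

For a full (rather than faulty) node, the text reduces Pij by 10% instead of zeroing it, but the redistribution of the removed weight follows the same pattern.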
In the embodiment of the present invention, load thresholds are defined for the target network node, for example 0.5 for light load and 0.8 for high load. When the Rv value of node N increases or decreases across a load threshold, the Pij value of the target network node is adjusted according to the corresponding load condition, the scheduling policies of the target network node and its corresponding area are adjusted, and the Tc value is incremented or decremented by: (weight scheduled to the node) × Ve.
After the scheduling center has calculated the Rv values of all network nodes, it adjusts the scheduling mode of the target network node. Specifically, after the Tc value is set, an acceleration adjustment parameter Tu = Tc / (V × α) is defined for each network node, representing the degree to which the scheduling adjustment affects the node's bandwidth. An acceleration strategy mode is then issued according to the values of Rv and Tu, as shown in Table 1:
Rv (bandwidth full-load rate) | Tu (scheduling impact) | Acceleration mode
0 < Rv < 0.5 | Tu < 0.5 | aggressive acceleration
0 < Rv < 0.5 | Tu > 0.5 | normal acceleration
0.5 < Rv < 0.8 | Tu < 0 | aggressive acceleration
0.5 < Rv < 0.8 | 0 < Tu < 0.3 | normal acceleration
0.5 < Rv < 0.8 | Tu > 0.3 | congestion-prevention mode
0.8 < Rv < 1 | Tu < −0.3 | aggressive acceleration
0.8 < Rv < 1 | −0.3 < Tu < 0 | normal acceleration
0.8 < Rv < 1 | Tu > 0 | congestion-prevention mode
TABLE 1
Specifically, when the bandwidth full load rate Rv of the target network node is 0< Rv <0.5 and the influence degree Tu on the bandwidth of the target network node is Tu <0.5, the notification message for adjusting the acceleration policy, sent by the scheduling center to the target network node, is an aggressive acceleration mode; and when the Tu is greater than 0.5, the notification message for adjusting the acceleration strategy, sent by the scheduling center to the target network node, is in a normal acceleration mode.
When the bandwidth full load rate Rv of the target network node is 0.5< Rv <0.8 and the influence degree Tu on the bandwidth of the target network node is Tu <0, the notification message for adjusting the acceleration policy, sent by the scheduling center to the target network node, is an aggressive acceleration mode; when the influence degree Tu on the bandwidth of the target network node is 0< Tu <0.3, the notification message for adjusting the acceleration strategy, sent by the scheduling center to the target network node, is in a normal acceleration mode; when the influence degree Tu on the bandwidth of the target network node is Tu >0.3, the notification message for adjusting the acceleration strategy, sent by the scheduling center to the target network node, is a congestion prevention mode.
When the bandwidth full load rate Rv of the target network node is 0.8< Rv <1 and the influence degree Tu on the bandwidth of the target network node is Tu < -0.3, the notification message for adjusting the acceleration strategy sent by the scheduling center to the target network node is an aggressive acceleration mode; when the influence degree Tu on the bandwidth of the target network node is-0.3 < Tu <0, the notification message for adjusting the acceleration strategy, sent by the scheduling center to the target network node, is in a normal acceleration mode; and when the influence degree Tu on the bandwidth of the target network node is Tu >0, the notification message for adjusting the acceleration strategy, sent by the scheduling center to the target network node, is a congestion prevention mode.
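The Rv/Tu mode selection described above can be sketched as follows; the handling of boundary values, which the text leaves unspecified, is an assumption:

```python
def acceleration_mode(rv: float, tu: float) -> str:
    """Map (Rv, Tu) to an acceleration mode per the thresholds above."""
    if 0 < rv < 0.5:
        return "aggressive" if tu < 0.5 else "normal"
    if 0.5 <= rv < 0.8:
        if tu < 0:
            return "aggressive"
        return "normal" if tu < 0.3 else "congestion-prevention"
    if 0.8 <= rv < 1:
        if tu < -0.3:
            return "aggressive"
        return "normal" if tu < 0 else "congestion-prevention"
    return "normal"  # outside the documented ranges (assumption)
```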
Fig. 2 is a schematic flow chart of another data acceleration method in an embodiment of the present invention, where the method includes:
step 201, receiving a notification message for adjusting an acceleration strategy sent by a scheduling center, where the notification message at least includes an acceleration mode.
In the embodiment of the invention, the method is mainly applied to edge network nodes deployed in a CDN system architecture. Each edge network node includes one or more servers. The CDN system publishes website content to the edge nodes closest to users so that users can obtain the required content nearby, which improves the response speed and success rate of access, avoids as far as possible the bottlenecks and links on the Internet that may affect data transmission speed and stability, and makes content delivery faster and more stable.
In the embodiment of the present invention, the CDN system architecture further includes a scheduling center, and the scheduling center also maintains a table of traffic load conditions and a table of scheduling information of the CDN entire network, and the scheduling center updates the table of traffic load conditions and the table of scheduling information in real time according to the traffic conditions reported by each edge network node and the scheduling conditions of the scheduling center.
Specifically, in the scheduling center, a combination condition of two or more level thresholds needs to be set in advance, when contents in a traffic load condition table and a scheduling information table change, whether the contents in the traffic load condition table and the scheduling information table reach the threshold of the combination condition is checked, and when the contents in the traffic load condition table and the scheduling information table reach the threshold of the combination condition, a notification message for adjusting an acceleration policy is sent to an edge network node that needs to adjust a data rate in a message interface manner, where the notification message at least includes an acceleration mode.
Specifically, the information in the traffic load table includes:
1. Network node ID: an identification code that uniquely identifies the target network node. For example, it may be a Network Identification (NID).
2. Capacity peak (V) of each network node: identifies the bandwidth the network node can provide, typically the smaller of the node's egress bandwidth and the server capacity bandwidth.
3. Bandwidth availability factor (α) of each network node: 0 < α < 1, a parameter determined by factors such as the price of the node's bandwidth and the quality of its egress.
4. Current bandwidth of each network node (Tn): a dynamic value, computed by collecting the current bandwidth of each server network card in the node.
5. Availability of each network node (A): Boolean; 1 for normal, 0 for abnormal.
In the embodiment of the present invention, the information in the scheduling information table specifically includes:
1. Area ID: identifies the area to which this entry in the scheduling information table belongs. It is analogous to a process ID (PID) in an operating system.
2. Predicted bandwidth peak (Ve) of the area: the most recent likely bandwidth peak of the area, calculated from the area's historical data over a certain number of days.
3. Operator ID: identifies the operator to which the area belongs; also known as a Group ID (GID), a unique identifier for a user group.
Here, users of the same type are placed in the same group; for example, all system administrators can be placed in an admin group, which makes it easy to assign permissions, such as marking important files as readable and writable by all admin-group users.
Each user has a unique user ID, and each user group has a unique group ID.
4. Scheduling node ID (MIDi): the ID(s) of the node(s) scheduled for the area; there may be more than one.
5. Backup scheduling node ID (BIDj): the ID of the backup scheduling node used when the main scheduling node fails.
6. Scheduling weight (Eij): the proportion of traffic to be dispatched to node MIDi or BIDj, updated periodically according to parameters such as Ve and Rv.
7. Actual scheduling weight (Pij): the proportion of traffic currently dispatched to node MIDi or BIDj.
In the embodiment of the present invention, receiving a notification message for adjusting an acceleration policy sent by a scheduling center includes:
and when the dispatching center determines that the bandwidth full load rate of the target network node reaches a preset load threshold value of the target network node, receiving the notification message sent by the dispatching center.
In the embodiment of the invention, the full-load rate Rv of each node is calculated once per minute from the information in the traffic load table, using the following formula:
Rv = (Tn × A) / (V × α)
wherein Rv represents the bandwidth full-load rate, Tn represents the current bandwidth of the target network node, A represents the value corresponding to the availability of the target network node, V represents the peak bandwidth capacity of the target network node, and α represents the value corresponding to the bandwidth availability factor of the target network node.
In the embodiment of the present invention, when the scheduling center traverses the area IDs and performs bandwidth detection on each area whose scheduling policy involves node N, a detection result in which the Rv of node N is 0 indicates that the node is faulty; the scheduling information table is traversed and the Pij of the node is set to 0. The actual scheduling weight of node N is then assigned to the other scheduling nodes in the area according to a certain algorithm (e.g., average assignment); if the Rv value of another scheduling node is greater than or equal to 1, that weight is assigned to the backup scheduling node instead. The increment of each adjusted node's Tc value (the change produced by this adjustment) is set as: + (weight scheduled to this node) × Ve.
When the scheduling center traverses the area IDs and performs bandwidth detection on each area whose scheduling policy involves node N, a detection result in which the Rv of node N is greater than or equal to 1 indicates that the node is full; the scheduling information table is traversed and the Pij of the node is reduced by 10%. The weight removed from node N is distributed to the other scheduling nodes in the area according to a certain algorithm (e.g., average distribution); if the Rv value of another scheduling node is greater than or equal to 1, that weight is distributed to the backup scheduling node instead. The increment of each adjusted node's Tc value is set as: + (weight scheduled to this node) × Ve.
In the embodiment of the present invention, load thresholds of the target network node are defined, for example, 0.5 for light load and 0.8 for high load. When the Rv value of node N increases or decreases across a load threshold, the Pij value of the target network node is adjusted according to the corresponding load condition, the scheduling policies of the target network node and its corresponding areas are adjusted, and the increase or decrease of the Tc value is set as: ± (weight scheduled to the node) × Ve.
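A minimal sketch of the weight-reassignment rules described above (the dictionary layout with "rv"/"pij" keys, the function name, and the use of average assignment are assumptions for illustration):

```python
def adjust_node_weight(node, peers, backup):
    """Adjust the actual scheduling weight Pij of a node based on its Rv.

    Rv == 0: node faulty, its whole Pij is freed and reassigned.
    Rv >= 1: node full, its Pij is reduced by 10% and the difference reassigned.
    Freed weight is split evenly over peers with headroom (Rv < 1);
    if none has headroom, it goes to the backup scheduling node.
    """
    if node["rv"] == 0:                 # faulty node
        freed = node["pij"]
        node["pij"] = 0.0
    elif node["rv"] >= 1:               # full node
        freed = node["pij"] * 0.10      # reduce Pij by 10%
        node["pij"] -= freed
    else:
        return                          # load normal, nothing to do

    candidates = [p for p in peers if p["rv"] < 1]
    if candidates:
        share = freed / len(candidates)  # average assignment
        for p in candidates:
            p["pij"] += share
    else:
        backup["pij"] += freed           # all peers full: use backup node
```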
After the scheduling center calculates the Rv values of all the network nodes, it adjusts the scheduling mode of the network nodes. Specifically, after the Tc value is set, an acceleration adjustment parameter Tu = Tc/(V × α) is defined for each network node, representing the degree of influence of the scheduling adjustment on the node bandwidth. An acceleration policy mode is then issued according to the values of Rv and Tu, as specifically shown in Table 1.
Specifically, when the bandwidth full load rate Rv of the target network node is 0< Rv <0.5 and the influence degree Tu on the bandwidth of the target network node is Tu <0.5, the notification message for adjusting the acceleration policy, sent by the scheduling center to the target network node, is an aggressive acceleration mode; and when the Tu is greater than 0.5, the notification message for adjusting the acceleration strategy, sent by the scheduling center to the target network node, is in a normal acceleration mode.
When the bandwidth full load rate Rv of the target network node is 0.5< Rv <0.8 and the influence degree Tu on the bandwidth of the target network node is Tu <0, the notification message for adjusting the acceleration policy, sent by the scheduling center to the target network node, is an aggressive acceleration mode; when the influence degree Tu on the bandwidth of the target network node is 0< Tu <0.3, the notification message for adjusting the acceleration strategy, sent by the scheduling center to the target network node, is in a normal acceleration mode; when the influence degree Tu on the bandwidth of the target network node is Tu >0.3, the notification message for adjusting the acceleration strategy, sent by the scheduling center to the target network node, is a congestion prevention mode.
When the bandwidth full load rate Rv of the target network node is 0.8< Rv <1 and the influence degree Tu on the bandwidth of the target network node is Tu < -0.3, the notification message for adjusting the acceleration strategy sent by the scheduling center to the target network node is an aggressive acceleration mode; when the influence degree Tu on the bandwidth of the target network node is-0.3 < Tu <0, the notification message for adjusting the acceleration strategy, sent by the scheduling center to the target network node, is in a normal acceleration mode; and when the influence degree Tu on the bandwidth of the target network node is Tu >0, the notification message for adjusting the acceleration strategy, sent by the scheduling center to the target network node, is a congestion prevention mode.
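The Rv/Tu decision rules of the three preceding paragraphs can be sketched as follows (the function name, the mode labels, and the handling of exact boundary values are assumptions, since the source only gives open intervals):

```python
def acceleration_mode(rv, tu):
    """Select the acceleration mode from full-load rate Rv and adjustment influence Tu."""
    if 0 < rv < 0.5:
        # lightly loaded node: aggressive unless scheduling pushes much traffic in
        return "aggressive" if tu < 0.5 else "normal"
    if 0.5 <= rv < 0.8:
        if tu < 0:
            return "aggressive"
        return "normal" if tu < 0.3 else "congestion-prevention"
    if 0.8 <= rv < 1:
        if tu < -0.3:
            return "aggressive"
        return "normal" if tu < 0 else "congestion-prevention"
    # Rv >= 1 (full) or Rv == 0 (faulty): be conservative
    return "congestion-prevention"
```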
Step 202, according to the notification message, monitoring a first network parameter of the target network node, where the first network parameter includes at least one of: real-time bandwidth data, network resource utilization data and Transmission Control Protocol (TCP) connection data of the target network node.
In this embodiment of the present invention, the real-time bandwidth data includes at least one of the following: network card actual throughput flow, network card actual throughput data packet quantity and request number scheduled by load balancing equipment SLB of the target network node;
the network resource utilization data includes at least one of: the TCP connection number, the TCP half-open connection number and the connection number queue length of the target network node;
the TCP connection data includes at least one of: the size of a sliding window, the size of a congestion window, the retransmission proportion of a TCP packet and the transmission rate of the current connection of the target network node.
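A hypothetical container for the monitored first network parameter, grouping the three categories listed above (all field names are assumptions for illustration):

```python
from dataclasses import dataclass

@dataclass
class FirstNetworkParameter:
    """Monitored values of a target network node."""
    # real-time bandwidth data
    nic_throughput_bps: float      # network card actual throughput traffic
    nic_throughput_pps: float      # network card actual throughput packets
    slb_scheduled_requests: int    # requests scheduled by the SLB
    # network resource utilization data
    tcp_connections: int
    tcp_half_open: int
    queue_length: int              # connection-number queue length
    # TCP connection data of the current connection
    sliding_window: int
    congestion_window: int
    retransmit_ratio: float        # TCP packet retransmission proportion
    bitrate: float                 # transmission rate
```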
Step 203, according to the first network parameter and the acceleration mode, performing parameter configuration on the target network node to adjust the network acceleration policy.
Specifically, each edge node deployed in the CDN system architecture that needs TCP acceleration maintains a self-learning state machine. For each user's TCP connection, there is a state machine that controls its TCP acceleration policy. When the target network node monitors the first network parameter of the target network node according to the notification message, it determines a first acceleration factor and a second acceleration factor according to the first network parameter, and then performs parameter configuration on the target network node according to the first acceleration factor and the second acceleration factor to adjust the acceleration strategy.
Specifically, according to the actual throughput traffic of the network card, the actual number of data packets throughput by the network card, the number of requests scheduled by the load balancing device SLB, the number of TCP connections, the number of TCP half-open connections, and the connection queue length, a first acceleration factor for performing parameter configuration on the target network node is calculated by the following matrix formula:
first acceleration factor A = [pat-value, bw-value, ct-value];
wherein pat-value represents the value corresponding to the acceleration mode; bw-value = (1 - network card traffic utilization) × (1 - network card packet throughput rate) × (1 - SLB request number / request number design capacity) / (network card traffic utilization × network card packet throughput rate × (SLB request number / request number design capacity)); and ct-value = (1 - (TCP connection number + TCP half-open connection number × 0.5) / connection queue length) / ((TCP connection number + TCP half-open connection number × 0.5) / connection queue length);
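A sketch of the bw-value and ct-value computations under the reading above (the function and argument names are assumptions; the utilization arguments are fractions in (0, 1)):

```python
def first_acceleration_factor(pat_value, nic_util, pkt_util,
                              slb_requests, slb_capacity,
                              tcp_conn, tcp_half_open, queue_len):
    """A = [pat-value, bw-value, ct-value].

    bw-value compares remaining bandwidth headroom against current usage;
    ct-value compares remaining connection-queue headroom against current load,
    where each half-open connection counts as 0.5 of a connection.
    """
    req_util = slb_requests / slb_capacity
    bw_value = ((1 - nic_util) * (1 - pkt_util) * (1 - req_util)) / (
        nic_util * pkt_util * req_util)
    load = (tcp_conn + 0.5 * tcp_half_open) / queue_len
    ct_value = (1 - load) / load
    return [pat_value, bw_value, ct_value]
```

With all utilizations at exactly one half, both ratios come out to 1, i.e., headroom equals usage.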
Specifically, according to the size of the sliding window, the size of the congestion window, the TCP packet retransmission ratio and the transmission rate, a second acceleration factor for parameter configuration of the target network node is calculated by the following matrix formula:
second acceleration factor B = [S-win, TCP-ret, Bitrate]^T;
wherein S-win represents the size of the sliding window, TCP-ret represents the number of retransmitted TCP packets divided by the number of TCP packets whose transmission completed, Bitrate represents the transmission rate, and the superscript T indicates transposition.
In the embodiment of the present invention, performing parameter configuration on the target network node according to the first network parameter and the acceleration mode further includes:
comparing the size of the sliding window with a first preset threshold to generate a first comparison result, and determining, according to the first comparison result, a second network parameter for increasing or decreasing the size of the sliding window; or, comparing the TCP packet retransmission ratio data with a second preset threshold to generate a second comparison result, and determining, according to the second comparison result, the second network parameter for adjusting the TCP packet retransmission ratio; or, detecting whether the transmission rate reaches a preset target transmission rate and generating a detection result, and determining the second network parameter for adjusting the transmission rate when the detection result indicates that the transmission rate has not yet reached the preset target transmission rate; and performing parameter configuration on the target network node according to the second network parameter.
For example, when the first acceleration factor indicates that the network is lightly loaded and that traffic will not increase greatly in the short term, and the second acceleration factor indicates that the current TCP connection is in a normal state, the sliding window of the current connection can be enlarged to its maximum, the congestion window enlarged to its maximum, the TCP packet retransmission ratio raised to its maximum, and so on.
Then, an adjustment parameter C is calculated as C = A × B = [S-win-g, TCP-ret-g, Bitrate-g], and the sliding window is gradually increased or decreased according to S-win-g (the target sliding window size, compared with a set threshold); the TCP packet sending ratio is gradually adjusted according to TCP-ret-g (compared with a set threshold); Bitrate-g is the target transmission rate, and after this rate is reached, the adjustment is stopped and the next round of acceleration logic is awaited.
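Reading A × B element-wise (an assumption, since the source writes the result as the three-component vector [S-win-g, TCP-ret-g, Bitrate-g] rather than a scalar inner product), the adjustment parameter can be sketched as:

```python
def adjustment_params(a_factor, b_factor):
    """C = A x B, read element-wise: each target is the product of the
    corresponding components of the first and second acceleration factors.

    Returns (s_win_g, tcp_ret_g, bitrate_g): target sliding-window size,
    target TCP retransmission ratio, and target transmission rate.
    """
    s_win_g, tcp_ret_g, bitrate_g = (x * y for x, y in zip(a_factor, b_factor))
    return s_win_g, tcp_ret_g, bitrate_g
```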
In the embodiment of the invention, TCP is a sliding-window protocol: how much data the sending end of a TCP connection can send at a given moment is controlled by a sliding window. The size of the sliding window is actually determined by two windows together. One is the advertised window of the receiving end; this value is carried in the TCP header and sent to the sending end along with the ACK packets for the data. It indicates how much space remains in the TCP buffer of the receiving end, and the sending end must ensure that the data it sends does not exceed this remaining space, so as to avoid buffer overflow. This window is used by the receiving end for flow limitation, and during transmission its size is related to how quickly the receiving process takes data out of the buffer.
The other window is the congestion window (Congestion Window) of the sender. The sender maintains this value, which is not carried in the protocol header. The size of the sliding window is the smaller of the advertised window and the congestion window, so the congestion window can be regarded as the window used by the sender for flow control.
Movement of the right edge of the sliding window to the right is called opening the window, and occurs when the receiving process takes data out of the protocol buffer of the receiving end. As the sending end continuously receives ACK packets for the transmitted data, the sliding window keeps closing and opening according to the acknowledgement sequence number and the advertised window size in the ACK packets, which forms the forward sliding of the window. If the receiving process never takes data out, a zero-window condition occurs: the left and right edges of the sliding window coincide, the window size is 0, and no more data can be sent.
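The key relationship in the passage above, that the usable send window is the smaller of the advertised window and the congestion window, can be sketched as:

```python
def effective_send_window(advertised_window, congestion_window):
    """The sender may have at most min(advertised window, cwnd) bytes in flight."""
    return min(advertised_window, congestion_window)

# Congestion-limited: cwnd caps the window
effective_send_window(65535, 14600)   # 14600
# Zero advertised window: sending stalls regardless of cwnd
effective_send_window(0, 65535)       # 0
```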
Here, the bandwidth refers to the "highest data rate" that can be passed from the transmitting end to the receiving end per unit time, and is a hardware limitation. The data transmission rate of the TCP sender and receiver cannot exceed the bandwidth limit between the two points.
By placing node servers at various locations in the network, a layer of intelligent virtual network is formed on top of the existing Internet, and the CDN system can redirect a user's request in real time to the service node closest to the user according to comprehensive information such as network traffic, the connections and load conditions of each node, the distance to the user, and response time. Users can thus obtain the required content nearby, which relieves Internet congestion and improves the response speed of website access.
Fig. 3 is a schematic structural composition diagram of a data acceleration device in an embodiment of the present invention, and as shown in fig. 3, the device includes:
a determining unit 301, configured to determine a bandwidth full rate of a target network node;
a sending unit 302, configured to send a notification message for adjusting an acceleration policy to the target network node when the bandwidth full load reaches a preset load threshold of the target network node, where the notification message at least includes an acceleration mode.
In the embodiment of the present invention, the device may specifically be a scheduling center in a CDN system architecture, where the scheduling center maintains a table of traffic load conditions of the CDN whole network and a table of scheduling information. And updating the traffic load condition table and the scheduling information table in real time according to the traffic condition reported by each network node and the scheduling condition of the scheduling center.
Specifically, in the scheduling center, a combination condition of two or more level thresholds needs to be set in advance, when the contents in the traffic load condition table and the scheduling information table change, whether the contents in the traffic load condition table and the scheduling information table reach the threshold of the combination condition is checked, and when the contents in the traffic load condition table and the scheduling information table reach the threshold of the combination condition, the sending unit 302 is triggered to send a notification message for adjusting an acceleration policy to a target network node that needs to adjust a data rate in a message interface manner, where the notification message at least includes an acceleration mode.
Specifically, the information in the traffic load table specifically includes:
1. a network node ID, the network node ID being an identification code that uniquely identifies the target network node information. For example, it may be a Network Identification (NID).
2. Capacity peak (V) of each network node: the bandwidth that the network node is capable of providing is identified, typically taking the smaller of the bandwidth of the network node egress and the server capacity bandwidth.
3. Bandwidth availability factor (α) for each network node: 0< alpha <1, which is a parameter determined according to the price of the bandwidth of the network node, the quality of the bandwidth outlet and other factors.
4. Current bandwidth per network node (Tn): and the dynamic value is used for calculating the current bandwidth value of the network node by collecting the current bandwidth value of each server network card in the network node.
5. Availability of each network node (a): boolean type, normal is 1 and abnormal is 0.
In the embodiment of the present invention, the information in the scheduling information table specifically includes:
1. Area ID: identifies the area to which this piece of data in the scheduling information table belongs. It is analogous to a process ID (PID, ProcessID) in an operating system.
2. Bandwidth predicted peak (Ve) for this region: and calculating the latest possible bandwidth peak value of the area according to historical data of the area within a certain number of days.
3. Operator ID: identifies the operator to which the area belongs. It is also known as a Group ID (GID), a unique identifier used to identify a subscriber group.
Here, users of the same type are placed in the same group; for example, all system administrators can be placed in the admin group, and some important files can be set so that all admin group users can read and write them, which facilitates permission assignment.
Each user has a unique user id and each user group has a unique group id.
4. Scheduling node ID (MIDi): the ID(s) of the node(s) scheduled for this region; there may be more than one.
5. Backup scheduling node ID (BIDj): the ID of the backup scheduling node used when a main scheduling node fails.
6. Scheduling weight (Eij): identifies what proportion of traffic should be dispatched to the MIDi or BIDj node; updated periodically according to parameters such as Ve and Rv.
7. Actual scheduling weight (Pij): identifies what proportion of traffic is currently being dispatched to the MIDi or BIDj node.
In the embodiment of the present invention, the dispatch center triggers the determining unit 301 to determine the bandwidth full load rate of the target network node according to the following formula:
Rv = (Tn × a) / (V × α)
wherein Rv represents the bandwidth full load rate, Tn represents the current bandwidth of the target network node, a represents the value corresponding to the availability of the target network node, V represents the bandwidth capacity peak of the target network node, and α represents the value corresponding to the bandwidth availability factor of the target network node.
In this embodiment of the present invention, when the determining unit 301 determines that the bandwidth full load rate reaches the preset load threshold of the target network node, the sending unit 302 is triggered to send a notification message for adjusting an acceleration policy to the target network node, where the notification message at least includes an acceleration mode.
Specifically, when the scheduling center traverses the area IDs and performs bandwidth detection on each area whose scheduling policy involves node N, and the detection result indicates that the Rv of node N is 0, the determining unit 301 determines that the node is faulty, traverses the scheduling information table, and sets the Pij of the node to 0. The actual scheduling weight of node N is then assigned to the other scheduling nodes in the area according to a certain algorithm (e.g., average assignment); if the Rv value of another scheduling node is greater than or equal to 1, that weight is assigned to the backup scheduling node instead. The increment of each adjusted node's Tc value (the change produced by this adjustment) is set as: + (weight scheduled to this node) × Ve.
When the scheduling center traverses the area IDs and performs bandwidth detection on each area whose scheduling policy involves node N, and the detection result indicates that the Rv of node N is greater than or equal to 1, the determining unit 301 determines that the node is full, traverses the scheduling information table, and reduces the Pij of the node by 10%. The weight removed from node N is distributed to the other scheduling nodes in the area according to a certain algorithm (e.g., average distribution); if the Rv value of another scheduling node is greater than or equal to 1, that weight is distributed to the backup scheduling node instead. The increment of each adjusted node's Tc value is set as: + (weight scheduled to this node) × Ve.
In the embodiment of the present invention, load thresholds of the target network node are defined, for example, 0.5 for light load and 0.8 for high load. When the Rv value of node N increases or decreases across a load threshold, the Pij value of the target network node is adjusted according to the corresponding load condition, the scheduling policies of the target network node and its corresponding areas are adjusted, and the increase or decrease of the Tc value is set as: ± (weight scheduled to the node) × Ve.
After the Rv values of all the network nodes are calculated, the scheduling modes of the network nodes are adjusted. Specifically, after the Tc value is set, an acceleration adjustment parameter Tu = Tc/(V × α) is defined for each network node, representing the degree of influence of the scheduling adjustment on the node bandwidth. An acceleration policy mode is then issued according to the values of Rv and Tu, as specifically shown in Table 1.
Specifically, when the bandwidth full rate Rv of the target network node is 0< Rv <0.5 and the influence degree Tu on the bandwidth of the target network node is Tu <0.5, the notification message for adjusting the acceleration policy, which is sent to the target network node by the sending unit 302, is triggered to be the aggressive acceleration mode; when Tu >0.5, the notification message for adjusting the acceleration policy, which is sent to the target network node by the sending unit 302, is triggered to be in the normal acceleration mode.
When the bandwidth full load rate Rv of the target network node is 0.5< Rv <0.8 and the influence degree Tu on the bandwidth of the target network node is Tu <0, triggering the notification message for adjusting the acceleration policy, which is sent to the target network node by the sending unit 302, to be an aggressive acceleration mode; when the influence degree Tu on the bandwidth of the target network node is 0< Tu <0.3, triggering the notification message for adjusting the acceleration policy, which is sent to the target network node by the sending unit 302, to be the normal acceleration mode; when the influence degree Tu on the bandwidth of the target network node is Tu >0.3, the notification message for adjusting the acceleration policy, which is sent to the target network node by the sending unit 302, is triggered to be in the congestion prevention mode.
When the bandwidth full load rate Rv of the target network node is 0.8< Rv <1 and the influence degree Tu on the bandwidth of the target network node is Tu < -0.3, triggering the notification message for adjusting the acceleration policy, which is sent to the target network node by the sending unit 302, to be an aggressive acceleration mode; when the influence degree Tu on the bandwidth of the target network node is-0.3 < Tu <0, triggering the notification message for adjusting the acceleration policy, which is sent to the target network node by the sending unit 302, to be the normal acceleration mode; when the influence degree Tu on the bandwidth of the target network node is Tu >0, the notification message for adjusting the acceleration policy, which is sent to the target network node by the sending unit 302, is triggered to be the congestion prevention mode.
Fig. 4 is a schematic structural diagram of another data acceleration apparatus according to an embodiment of the present invention, as shown in fig. 4: the device comprises: a receiving unit 401, a monitoring unit 402 and a configuration unit 403;
the receiving unit 401 is configured to receive a notification message sent by a scheduling center for adjusting an acceleration policy, where the notification message at least includes an acceleration mode;
the monitoring unit 402 is configured to monitor a first network parameter of a target network node according to the notification message, where the first network parameter at least includes real-time bandwidth data, network resource utilization data, and TCP connection data of the target network node;
the configuring unit 403 is configured to perform parameter configuration on the target network node according to the first network parameter and the acceleration mode, so as to adjust the acceleration policy.
In the embodiment of the present invention, the device may specifically be an edge network node deployed in a CDN system architecture, each edge network node including one or more servers. The CDN system publishes website content to the edge nodes closest to users, so that users can obtain the required content nearby, which improves the response speed and success rate of access, avoids as far as possible the bottlenecks and links on the Internet that may affect data transmission speed and stability, and makes content delivery faster and more stable.
In the embodiment of the present invention, the CDN system architecture further includes a scheduling center, and the scheduling center also maintains a table of traffic load conditions and a table of scheduling information of the CDN entire network, and the scheduling center updates the table of traffic load conditions and the table of scheduling information in real time according to the traffic conditions reported by each edge network node and the scheduling conditions of the scheduling center.
Specifically, in the scheduling center, a combination condition of two or more level thresholds needs to be set in advance, when contents in a traffic load condition table and a scheduling information table change, whether the contents in the traffic load condition table and the scheduling information table reach the threshold of the combination condition is checked, and when the contents in the traffic load condition table and the scheduling information table reach the threshold of the combination condition, a notification message for adjusting an acceleration policy is sent to an edge network node that needs to adjust a data rate in a message interface manner, where the notification message at least includes an acceleration mode.
The edge network node that needs to adjust the data rate triggers the receiving unit 401 to receive the notification message sent by the dispatch center.
Specifically, the information in the traffic load table specifically includes:
1. a network node ID, the network node ID being an identification code that uniquely identifies the target network node information. For example, it may be referred to as NID.
2. Capacity peak (V) of each network node: the bandwidth that the network node is capable of providing is identified, typically taking the smaller of the bandwidth of the network node egress and the server capacity bandwidth.
3. Bandwidth availability factor (α) for each network node: 0< alpha <1, which is a parameter determined according to the price of the bandwidth of the network node, the quality of the bandwidth outlet and other factors.
4. Current bandwidth per network node (Tn): and the dynamic value is used for calculating the current bandwidth value of the network node by collecting the current bandwidth value of each server network card in the network node.
5. Availability of each network node (a): boolean type, normal is 1 and abnormal is 0.
In the embodiment of the present invention, the information in the scheduling information table specifically includes:
1. Area ID: identifies the area to which this piece of data in the scheduling information table belongs. It is analogous to a process ID (PID) in an operating system.
2. Bandwidth predicted peak (Ve) for this region: and calculating the latest possible bandwidth peak value of the area according to historical data of the area within a certain number of days.
3. Operator ID: a unique identifier identifying the operator to which the area belongs, also known as a Group ID (GID), which identifies a user group.
Here, users of the same type are placed in the same group; for example, all system administrators can be placed in the admin group, and some important files can be set so that all admin group users can read and write them, which facilitates permission assignment.
Each user has a unique user id and each user group has a unique group id.
4. Scheduling node ID (MIDi): the ID(s) of the node(s) scheduled for this region; there may be more than one.
5. Backup scheduling node ID (BIDj): the ID of the backup scheduling node used when a main scheduling node fails.
6. Scheduling weight (Eij): identifies what proportion of traffic should be dispatched to the MIDi or BIDj node; updated periodically according to parameters such as Ve and Rv.
7. Actual scheduling weight (Pij): identifies what proportion of traffic is currently being dispatched to the MIDi or BIDj node.
Specifically, the receiving unit 401 receives the notification message sent by the scheduling center when the scheduling center determines that the bandwidth full load rate of the target network node reaches the preset load threshold of the target network node.
In the embodiment of the invention, the full load rate Rv of each node is calculated every minute from the information in the traffic load table by the following formula:
Rv = (Tn × a) / (V × α)
wherein Rv represents the bandwidth full load rate, Tn represents the current bandwidth of the target network node, a represents the value corresponding to the availability of the target network node, V represents the bandwidth capacity peak of the target network node, and α represents the value corresponding to the bandwidth availability factor of the target network node.
Specifically, when the scheduling center traverses the area IDs and performs bandwidth detection on each area whose scheduling policy involves node N, a detection result in which the Rv of node N is 0 indicates that the node is faulty; the scheduling information table is traversed and the Pij of the node is set to 0. The actual scheduling weight of node N is then assigned to the other scheduling nodes in the area according to a certain algorithm (e.g., average assignment); if the Rv value of another scheduling node is greater than or equal to 1, that weight is assigned to the backup scheduling node instead. The increment of each adjusted node's Tc value (the change produced by this adjustment) is set as: + (weight scheduled to this node) × Ve.
When the scheduling center traverses the area IDs and performs bandwidth detection on each area whose scheduling policy involves node N, a detection result in which the Rv of node N is greater than or equal to 1 indicates that the node is full; the scheduling information table is traversed and the Pij of the node is reduced by 10%. The weight removed from node N is distributed to the other scheduling nodes in the area according to a certain algorithm (e.g., average distribution); if the Rv value of another scheduling node is greater than or equal to 1, that weight is distributed to the backup scheduling node instead. The increment of each adjusted node's Tc value is set as: + (weight scheduled to this node) × Ve.
In the embodiment of the present invention, load thresholds of the target network node are defined, for example, 0.5 for light load and 0.8 for high load. When the Rv value of node N increases or decreases across a load threshold, the Pij value of the target network node is adjusted according to the corresponding load condition, the scheduling policies of the target network node and its corresponding areas are adjusted, and the increase or decrease of the Tc value is set as: ± (weight scheduled to the node) × Ve.
After the scheduling center calculates the Rv values of all the network nodes, it adjusts the scheduling mode of the target network node. Specifically, after the scheduling center sets the Tc value, an acceleration adjustment parameter Tu = Tc/(V × α) is defined for each network node, representing the degree of influence of the scheduling adjustment on the node bandwidth. An acceleration policy mode is then issued according to the values of Rv and Tu, as specifically shown in Table 1.
Specifically, when the bandwidth full load rate of the target network node satisfies 0 < Rv < 0.5: if the impact degree Tu on the node bandwidth satisfies Tu < 0.5, the notification message for adjusting the acceleration policy sent by the scheduling center to the target network node indicates the aggressive acceleration mode; if Tu > 0.5, it indicates the normal acceleration mode.
When 0.5 < Rv < 0.8: if Tu < 0, the notification message indicates the aggressive acceleration mode; if 0 < Tu < 0.3, the normal acceleration mode; and if Tu > 0.3, the congestion-prevention mode.
When 0.8 < Rv < 1: if Tu < -0.3, the notification message indicates the aggressive acceleration mode; if -0.3 < Tu < 0, the normal acceleration mode; and if Tu > 0, the congestion-prevention mode.
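The mode-selection rules above (the content of Table 1) can be sketched as a small lookup function. The function name, mode labels and the handling of boundary values are illustrative assumptions, since the patent only specifies the open intervals:

```python
def select_acceleration_mode(rv: float, tu: float) -> str:
    """Map bandwidth full-load rate Rv and adjustment impact Tu to an
    acceleration mode, following the Rv/Tu thresholds in the text."""
    if 0 < rv < 0.5:
        return "aggressive" if tu < 0.5 else "normal"
    if 0.5 <= rv < 0.8:
        if tu < 0:
            return "aggressive"
        return "normal" if tu < 0.3 else "congestion-prevention"
    if 0.8 <= rv < 1:
        if tu < -0.3:
            return "aggressive"
        return "normal" if tu < 0 else "congestion-prevention"
    # fully loaded or out-of-range input: assumed most conservative mode
    return "congestion-prevention"
```

A lightly loaded node whose bandwidth is barely affected by the adjustment (small Rv, small Tu) thus receives the aggressive mode, while a nearly full node whose bandwidth would grow further receives the congestion-prevention mode.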
In this embodiment of the present invention, the real-time bandwidth data includes at least one of the following: network card actual throughput flow, network card actual throughput data packet quantity and request number scheduled by load balancing equipment SLB of the target network node;
the network resource utilization data includes at least one of: the TCP connection number, the TCP half-open connection number and the connection number queue length of the target network node;
the TCP connection data includes at least one of: the size of a sliding window, the size of a congestion window, the retransmission proportion of a TCP packet and the transmission rate of the current connection of the target network node.
The configuration unit 403 performs parameter configuration on the target network node according to the first network parameter and the acceleration mode, so as to adjust the acceleration policy of the network, specifically as follows.
Specifically, each edge node requiring TCP acceleration deployed in the CDN system architecture maintains a self-learning state machine; for each user's TCP connection there is a state machine controlling its TCP acceleration policy. When the monitoring unit 402 monitors the first network parameter of the target network node according to the notification message, the target network node determines a first acceleration factor and a second acceleration factor from the first network parameter, and the configuration unit 403 then performs parameter configuration on the target network node according to the first acceleration factor and the second acceleration factor.
Specifically, the target network node calculates a first acceleration factor for performing parameter configuration on the target network node according to the actual network card throughput traffic, the actual network card packet throughput, the number of requests scheduled by the load balancing device SLB, the number of TCP connections, the number of TCP half-open connections and the connection queue length, using the following matrix formula:
first acceleration factor = [pat-value, bw-value, ct-value];
wherein pat-value represents the value corresponding to the acceleration mode; bw-value represents (1 − network card traffic utilization) × (1 − network card packet throughput rate) × (1 − SLB request count/designed request capacity) / (network card traffic utilization × network card packet throughput rate × (SLB request count/designed request capacity)); and ct-value represents (1 − (TCP connection count + TCP half-open connection count × 0.5)/connection queue length) / ((TCP connection count + TCP half-open connection count × 0.5)/connection queue length);
the target network node calculates a second acceleration factor for performing parameter configuration on the target network node according to the sliding window size, the congestion window size, the TCP packet retransmission ratio and the transmission rate, using the following matrix formula:
second acceleration factor = [S-win, TCP-ret, Bitrate]^T;
wherein S-win represents the sliding window size, TCP-ret represents the number of retransmitted TCP packets divided by the number of TCP packets transmitted successfully, Bitrate represents the transmission rate, and [·]^T represents transposition.
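The two factor formulas above can be sketched directly. The function names, argument names and the numeric encoding of pat-value are assumptions; the bw-value and ct-value expressions follow the text:

```python
def first_acceleration_factor(pat_value, util, pkt_rate, slb_ratio,
                              tcp_conn, tcp_half_open, queue_len):
    """[pat-value, bw-value, ct-value] as defined in the text."""
    # bw-value: (1-u)(1-p)(1-s) / (u*p*s) for traffic utilization u,
    # packet throughput rate p and SLB request ratio s
    bw = ((1 - util) * (1 - pkt_rate) * (1 - slb_ratio)
          / (util * pkt_rate * slb_ratio))
    # ct-value: half-open connections are weighted by 0.5
    load = (tcp_conn + 0.5 * tcp_half_open) / queue_len
    ct = (1 - load) / load
    return [pat_value, bw, ct]

def second_acceleration_factor(s_win, tcp_ret, tcp_done, bitrate):
    """Column vector [S-win, TCP-ret, Bitrate]^T, with TCP-ret the ratio
    of retransmitted to successfully transmitted packets."""
    return [[s_win], [tcp_ret / tcp_done], [bitrate]]
```

Both bw-value and ct-value are of the form (1 − x)/x, so they grow large when the node has abundant headroom and tend toward zero as the node approaches saturation.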
In an embodiment of the present invention, the configuration unit 403 performing parameter configuration on the target network node according to the first network parameter and the acceleration mode further includes the following. The target network node compares the sliding window size with a first preset threshold to generate a first comparison result, and determines, according to the first comparison result, the second network parameter for increasing or decreasing the sliding window size; or it compares the TCP packet retransmission ratio data with a second preset threshold to generate a second comparison result, and determines, according to the second comparison result, the second network parameter for adjusting the TCP packet transmission ratio; or it detects whether the transmission rate reaches a preset target transmission rate and generates a detection result, and determines the second network parameter for adjusting the transmission rate when it is determined from the detection result that the transmission rate reaches the preset target transmission rate. The configuration unit 403 is then triggered to perform parameter configuration on the target network node according to the second network parameter.
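The three comparison rules above can be sketched as one helper that derives a second-network-parameter adjustment from the current connection state. The threshold values, function name and the increase/decrease encoding are illustrative assumptions:

```python
def derive_adjustment(s_win, ret_ratio, bitrate,
                      s_win_threshold=65535, ret_threshold=0.05,
                      target_bitrate=2.0):
    """Derive per-connection adjustment directions from TCP state."""
    adjustments = {}
    # rule 1: grow the sliding window while it is below the threshold
    adjustments["sliding_window"] = ("increase" if s_win < s_win_threshold
                                     else "decrease")
    # rule 2: back off the packet sending ratio when retransmissions
    # exceed the threshold
    adjustments["tcp_send_ratio"] = ("decrease" if ret_ratio > ret_threshold
                                     else "increase")
    # rule 3: stop adjusting once the target transmission rate is reached
    adjustments["done"] = bitrate >= target_bitrate
    return adjustments
```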
For example, when the first acceleration factor indicates that the network is lightly loaded and that traffic will not increase greatly in the short term, and the second acceleration factor indicates that the current TCP connection is in good condition, the sliding window size of the current connection can be adjusted toward its maximum, the congestion window size toward its maximum, the TCP packet retransmission ratio toward its maximum, and so on.
Then, an adjustment parameter C is calculated as C = A × B = [S-win-g, TCP-ret-g, Bitrate-g]: the sliding window is gradually increased or decreased toward S-win-g (the target sliding window size, compared against a set threshold); the TCP packet sending ratio is gradually adjusted toward TCP-ret-g (compared against a set threshold); and Bitrate-g is the target transmission rate; once that rate is reached, adjustment stops and the next acceleration cycle is awaited.
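The product C = A × B is under-specified in the text (a 1×3 row times a 3×1 column would give a scalar, not three targets). One plausible reading, used here purely as an assumption, collapses the first factor into a single load score that scales each connection parameter, with the gradual stepping applied afterwards:

```python
def adjustment_targets(first_factor, second_factor):
    """Assumed interpretation: scale each TCP parameter in the second
    factor by a combined node-load score from the first factor."""
    pat, bw, ct = first_factor
    score = pat * bw * ct  # combined load score (assumption)
    # targets: S-win-g, TCP-ret-g, Bitrate-g
    return [score * row[0] for row in second_factor]

def step_toward(current, target, step):
    """Gradually move a parameter toward its target, as described above,
    stopping exactly on the target once within one step of it."""
    if abs(target - current) <= step:
        return target
    return current + step if target > current else current - step
```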
In an embodiment of the present invention, TCP is a sliding window protocol: how much data the sending end of a TCP connection may send at any moment is controlled by a sliding window, whose size is jointly determined by two windows.

The first is the advertised window of the receiving end. Its value is carried in the TCP header and sent to the sending end along with the ACK packets for received data; it indicates how much space remains in the receiving end's TCP buffer. The sending end must ensure that the data it sends does not exceed this remaining space, so as to avoid buffer overflow. This window is therefore used by the receiving end for flow control, and during transmission its size depends on how quickly the receiving process takes data out of the buffer.

The second is the congestion window of the sending end. Its value is maintained by the sender and does not appear in the protocol header. The effective sliding window size is the smaller of the advertised window and the congestion window, so the congestion window can be regarded as the window the sender uses for flow control.

Moving the right edge of the sliding window to the right is called opening the window, which occurs when the receiving process takes data out of the receiving end's protocol buffer. As the sending end keeps receiving ACK packets for transmitted data, the sliding window closes and opens according to the acknowledgment sequence number and the advertised window size carried in each ACK, so the window slides forward. If the receiving process never takes data out, a zero-window condition occurs: the left and right edges of the sliding window coincide, the window size is 0, and no more data can be sent.
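The window interaction described above reduces to one rule: the sender's usable window is the minimum of the receiver's advertised window and the sender's congestion window. A minimal illustrative helper (the function name is not from the patent):

```python
def usable_window(advertised_window: int, congestion_window: int) -> int:
    """Effective TCP sliding window size. A result of 0 corresponds to
    the zero-window condition, in which the sender must stop sending."""
    return min(advertised_window, congestion_window)
```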
Here, the bandwidth refers to the "highest data rate" that can be passed from the sending end to the receiving end in a unit time, and is a hardware limitation. The data transmission rate of the TCP sender and receiver cannot exceed the bandwidth limit between the two points.
By placing node servers at various locations in the network, a CDN system forms a layer of intelligent virtual network on top of the existing Internet. It can redirect a user's request in real time to the service node closest to that user, based on comprehensive information such as network traffic, the connections and load of each node, the distance to the user and the response time. The user obtains the required content from a nearby node, which alleviates Internet congestion and improves the response speed of access to the website.
According to another embodiment of the present invention, there is also provided an apparatus for accelerating data, the apparatus including: a memory and a processor;
wherein the memory is to store a computer program operable on the processor;
the processor is configured to, when running the computer program, determine the bandwidth full load rate of a target network node;
and when the bandwidth full load rate reaches a preset load threshold value of the target network node, sending a notification message for adjusting an acceleration strategy to the target network node, wherein the notification message at least comprises an acceleration mode.
The processor is configured to, when running the computer program, determine the bandwidth full load rate of the target network node by the following formula:
(formula rendered as an image in the original: Rv expressed in terms of Tn, A, V and α)
wherein Rv represents a bandwidth full load rate, Tn represents a current bandwidth of the target network node, and a represents a value corresponding to the bandwidth availability of the target network node; v represents the peak value of the bandwidth capacity of the target network node, and alpha represents the value corresponding to the bandwidth availability factor of the target network node.
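The Rv formula itself appears only as an image in the original. Based on the symbol definitions above, one plausible reconstruction, stated here as an assumption rather than the patent's authoritative form, divides the current bandwidth by the usable capacity:

```python
def bandwidth_full_load_rate(tn, a_avail, v_peak, alpha):
    """Assumed form Rv = Tn / (A * V * alpha): current bandwidth Tn over
    the peak capacity V discounted by availability A and coefficient
    alpha. All names are illustrative."""
    return tn / (a_avail * v_peak * alpha)
```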
According to another embodiment of the present invention, there is also provided an apparatus for accelerating data, the apparatus including: a memory and a processor;
wherein the memory is to store a computer program operable on the processor;
the processor is configured to, when running the computer program, receive a notification message for adjusting an acceleration policy sent by a scheduling center, the notification message at least comprising an acceleration mode;
monitoring a first network parameter of a target network node according to the notification message, wherein the first network parameter at least comprises real-time bandwidth data, network resource utilization data and Transmission Control Protocol (TCP) connection data of the target network node;
and according to the first network parameter and the acceleration mode, performing parameter configuration on the target network node to adjust the acceleration strategy.
The processor, when executing the computer program, is further configured such that the real-time bandwidth data includes at least one of the following: the actual network card throughput traffic, the actual network card packet throughput and the number of requests scheduled by the load balancing device SLB of the target network node;
the network resource utilization data includes at least one of: the TCP connection number, the TCP half-open connection number and the connection number queue length of the target network node;
the TCP connection data includes at least one of: the size of a sliding window, the size of a congestion window, the retransmission proportion of a TCP (transmission control protocol) packet and the transmission rate of the current connection of the target network node;
correspondingly, according to the first network parameter and the acceleration mode, performing parameter configuration on the target network node includes:
calculating, according to the actual network card throughput traffic, the actual network card packet throughput, the number of requests scheduled by the load balancing device SLB, the number of TCP connections, the number of TCP half-open connections and the connection queue length, a first acceleration factor for performing parameter configuration on the target network node by the following matrix formula:
first acceleration factor = [pat-value, bw-value, ct-value];
wherein pat-value represents the value corresponding to the acceleration mode, bw-value represents (1 − network card traffic utilization) × (1 − network card packet throughput rate) × (1 − SLB request count/designed request capacity) / (network card traffic utilization × network card packet throughput rate × (SLB request count/designed request capacity)), and ct-value represents (1 − (TCP connection count + TCP half-open connection count × 0.5)/connection queue length) / ((TCP connection count + TCP half-open connection count × 0.5)/connection queue length);
calculating, according to the sliding window size, the congestion window size, the TCP packet retransmission ratio and the transmission rate, a second acceleration factor for performing parameter configuration on the target network node by the following matrix formula:
second acceleration factor = [S-win, TCP-ret, Bitrate]^T;
wherein S-win represents the sliding window size, TCP-ret represents the number of retransmitted TCP packets divided by the number of TCP packets transmitted successfully, Bitrate represents the transmission rate, and [·]^T represents transposition;
and according to the first acceleration factor and the second acceleration factor, performing parameter configuration on the target network node.
The processor is configured to, when the computer program is run, further perform comparison between the size of the sliding window and a first preset threshold, and generate a first comparison result; determining the second network parameter for increasing or decreasing the size of the sliding window according to the first comparison result;
or, comparing the TCP packet retransmission ratio data with a second preset threshold value to generate a second comparison result; determining the second network parameter for adjusting the transmission ratio of the TCP packet according to the second comparison result;
or, detecting whether the transmission rate reaches a preset target transmission rate; generating a detection result; determining the second network parameter for adjusting the transmission rate when determining that the transmission rate reaches a preset target transmission rate according to the detection result;
and performing parameter configuration on the target network node according to the second network parameter.
Fig. 5 is a schematic structural diagram of a data acceleration apparatus according to another embodiment of the present invention, and the data acceleration apparatus 500 may be a mobile phone, a computer, a digital broadcast terminal, an information transceiver device, a game console, a tablet device, a personal digital assistant, an information push server, a content server, and the like. The data acceleration apparatus 500 shown in fig. 5 includes: at least one processor 501, memory 502, at least one network interface 504, and a user interface 503. The various components of the data acceleration device 500 are coupled together by a bus system 505. It is understood that the bus system 505 is used to enable connection communications between these components. The bus system 505 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 505 in FIG. 5.
The user interface 503 may include a display, a keyboard, a mouse, a trackball, a click wheel, a key, a button, a touch pad, a touch screen, or the like, among others.
It will be appreciated that the memory 502 can be either volatile memory or nonvolatile memory, and can include both volatile and nonvolatile memory. The nonvolatile memory may be a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Ferroelectric Random Access Memory (FRAM), a Flash Memory, a magnetic surface memory, an optical disc, or a Compact Disc Read-Only Memory (CD-ROM); the magnetic surface memory may be a magnetic disk memory or a magnetic tape memory. The volatile memory may be a Random Access Memory (RAM), which acts as an external cache. By way of illustration and not limitation, many forms of RAM are available, such as Static Random Access Memory (SRAM), Synchronous Static Random Access Memory (SSRAM), Dynamic Random Access Memory (DRAM), Synchronous Dynamic Random Access Memory (SDRAM), Double Data Rate Synchronous Dynamic Random Access Memory (DDR SDRAM), Enhanced Synchronous Dynamic Random Access Memory (ESDRAM), SyncLink Dynamic Random Access Memory (SLDRAM) and Direct Rambus Random Access Memory (DRRAM). The memory 502 described in the embodiments of the present invention is intended to comprise, without being limited to, these and any other suitable types of memory.
The memory 502 in the embodiment of the present invention is used to store various types of data to support the operation of the data acceleration apparatus 500. Examples of such data include: any computer program for operating on the data acceleration apparatus 500, such as an operating system 5021 and application programs 5022; music data; animation data; book information; video; and so on. The operating system 5021 contains various system programs, such as a framework layer, a core library layer and a driver layer, for implementing various basic services and processing hardware-based tasks. The application programs 5022 may contain various applications, such as a Media Player and a Browser, for implementing various application services. A program implementing the method of the embodiments of the present invention may be included in the application programs 5022.
The method disclosed by the above-mentioned embodiments of the present invention may be applied to the processor 501, or implemented by the processor 501. The processor 501 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or instructions in the form of software in the processor 501. The Processor 501 may be a general purpose Processor, a Digital Signal Processor (DSP), or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, etc. Processor 501 may implement or perform the methods, steps, and logic blocks disclosed in embodiments of the present invention. A general purpose processor may be a microprocessor or any conventional processor or the like. The steps of the method disclosed by the embodiment of the invention can be directly implemented by a hardware decoding processor, or can be implemented by combining hardware and software modules in the decoding processor. The software modules may be located in a storage medium located in the memory 502, and the processor 501 reads the information in the memory 502 and performs the steps of the aforementioned methods in conjunction with its hardware.
In an exemplary embodiment, the data acceleration apparatus 500 may be implemented by one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field Programmable Gate Arrays (FPGAs), general purpose processors, controllers, microcontrollers (MCUs), microprocessors, or other electronic components, for performing the aforementioned methods.
In an exemplary embodiment, the present invention further provides a computer readable storage medium, such as a memory 502 including a computer program, which can be executed by a processor 501 of a data acceleration apparatus 500 to perform the steps of the foregoing method. The computer readable storage medium can be Memory such as FRAM, ROM, PROM, EPROM, EEPROM, Flash Memory, magnetic surface Memory, optical disk, or CD-ROM; or may be a variety of devices including one or any combination of the above memories, such as a mobile phone, computer, tablet device, personal digital assistant, etc.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, performs: determining the bandwidth full load rate of a target network node;
and when the bandwidth full load rate reaches a preset load threshold value of the target network node, sending a notification message for adjusting an acceleration strategy to the target network node, wherein the notification message at least comprises an acceleration mode.
The computer program, when executed by the processor, further determines the bandwidth full load rate of the target network node by the following formula:
(formula rendered as an image in the original: Rv expressed in terms of Tn, A, V and α)
wherein Rv represents a bandwidth full load rate, Tn represents a current bandwidth of the target network node, and a represents a value corresponding to the bandwidth availability of the target network node; v represents the peak value of the bandwidth capacity of the target network node, and alpha represents the value corresponding to the bandwidth availability factor of the target network node.
According to another embodiment of the present invention, there is also provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs: receiving a notification message for adjusting an acceleration policy sent by a scheduling center, the notification message at least comprising an acceleration mode;
monitoring a first network parameter of a target network node according to the notification message, wherein the first network parameter at least comprises real-time bandwidth data, network resource utilization data and Transmission Control Protocol (TCP) connection data of the target network node;
and according to the first network parameter and the acceleration mode, performing parameter configuration on the target network node to adjust the acceleration strategy.
The computer program, when executed by the processor, further operates with real-time bandwidth data including at least one of the following: the actual network card throughput traffic, the actual network card packet throughput and the number of requests scheduled by the load balancing device SLB of the target network node;
the network resource utilization data includes at least one of: the TCP connection number, the TCP half-open connection number and the connection number queue length of the target network node;
the TCP connection data includes at least one of: the size of a sliding window, the size of a congestion window, the retransmission proportion of a TCP (transmission control protocol) packet and the transmission rate of the current connection of the target network node;
correspondingly, according to the first network parameter and the acceleration mode, performing parameter configuration on the target network node includes:
calculating, according to the actual network card throughput traffic, the actual network card packet throughput, the number of requests scheduled by the load balancing device SLB, the number of TCP connections, the number of TCP half-open connections and the connection queue length, a first acceleration factor for performing parameter configuration on the target network node by the following matrix formula:
first acceleration factor = [pat-value, bw-value, ct-value];
wherein pat-value represents the value corresponding to the acceleration mode, bw-value represents (1 − network card traffic utilization) × (1 − network card packet throughput rate) × (1 − SLB request count/designed request capacity) / (network card traffic utilization × network card packet throughput rate × (SLB request count/designed request capacity)), and ct-value represents (1 − (TCP connection count + TCP half-open connection count × 0.5)/connection queue length) / ((TCP connection count + TCP half-open connection count × 0.5)/connection queue length);
calculating, according to the sliding window size, the congestion window size, the TCP packet retransmission ratio and the transmission rate, a second acceleration factor for performing parameter configuration on the target network node by the following matrix formula:
second acceleration factor = [S-win, TCP-ret, Bitrate]^T;
wherein S-win represents the sliding window size, TCP-ret represents the number of retransmitted TCP packets divided by the number of TCP packets transmitted successfully, Bitrate represents the transmission rate, and [·]^T represents transposition;
and according to the first acceleration factor and the second acceleration factor, performing parameter configuration on the target network node.
When the computer program is run by the processor, the size of the sliding window is compared with a first preset threshold value to generate a first comparison result; determining the second network parameter for increasing or decreasing the size of the sliding window according to the first comparison result;
or, comparing the TCP packet retransmission ratio data with a second preset threshold value to generate a second comparison result; determining the second network parameter for adjusting the transmission ratio of the TCP packet according to the second comparison result;
or, detecting whether the transmission rate reaches a preset target transmission rate; generating a detection result; determining the second network parameter for adjusting the transmission rate when determining that the transmission rate reaches a preset target transmission rate according to the detection result;
and performing parameter configuration on the target network node according to the second network parameter.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention.

Claims (10)

1. A data acceleration method, characterized in that the method comprises:
determining a bandwidth full load rate of a target network node;
when the bandwidth full load rate reaches a preset load threshold of the target network node, sending a notification message for adjusting an acceleration policy to the target network node, the notification message at least comprising an acceleration mode;
wherein the notification message is used for instructing monitoring of a first network parameter of the target network node, and the first network parameter at least comprises real-time bandwidth data, network resource utilization data and Transmission Control Protocol (TCP) connection data of the target network node.
2. The method according to claim 1, characterized in that the bandwidth full load rate of the target network node is determined by the following formula:
(formula rendered as an image in the original: Rv expressed in terms of Tn, A, V and α)
wherein Rv represents the bandwidth full load rate, Tn represents the current bandwidth of the target network node, A represents the value corresponding to the bandwidth availability of the target network node, V represents the peak bandwidth capacity of the target network node, and α represents the value corresponding to the bandwidth availability coefficient of the target network node.
3.一种数据的加速方法,其特征在于,所述方法包括:3. A data acceleration method, wherein the method comprises: 接收调度中心发送的调整加速策略的通知消息,所述通知消息中至少包括加速模式;receiving a notification message for adjusting the acceleration strategy sent by the dispatch center, where the notification message at least includes the acceleration mode; 根据所述通知消息,监测目标网络节点的第一网络参数,所述第一网络参数中至少包括所述目标网络节点的实时带宽数据、网络资源利用数据和传输控制协议TCP连接数据;According to the notification message, monitor the first network parameter of the target network node, where the first network parameter at least includes real-time bandwidth data, network resource utilization data and transmission control protocol TCP connection data of the target network node; 根据所述第一网络参数和所述加速模式,对所述目标网络节点进行参数配置,以调整所述加速策略;According to the first network parameter and the acceleration mode, parameter configuration is performed on the target network node to adjust the acceleration strategy; 其中,所述接收调度中心发送的调整加速策略的通知消息包括:Wherein, the notification message for adjusting the acceleration policy sent by the receiving dispatch center includes: 在所述目标网络节点的带宽满载率达到所述目标网络节点的预设负载阈值时,接收所述调度中心发送的调整加速策略的通知消息。When the bandwidth full load rate of the target network node reaches the preset load threshold of the target network node, a notification message for adjusting the acceleration policy sent by the dispatch center is received. 4.根据权利要求3所述的方法,其特征在于,所述实时带宽数据包括下述至少一种:所述目标网络节点的网卡实际吞吐流量、网卡实际吞吐数据包量和负载均衡设备SLB调度的请求数;4. 
4. The method according to claim 3, characterized in that the real-time bandwidth data comprises at least one of the following: actual network-interface-card (NIC) throughput traffic of the target network node, actual NIC packet throughput, and the number of requests scheduled by a Server Load Balancing (SLB) device;
the network resource utilization data comprises at least one of the following: the number of TCP connections, the number of TCP half-open connections, and the connection-queue length of the target network node;
the TCP connection data comprises at least one of the following: the sliding-window size, congestion-window size, TCP packet retransmission ratio and transmission rate of the current connections of the target network node;
correspondingly, configuring parameters of the target network node according to the first network parameter and the acceleration mode comprises:
computing, from the actual NIC throughput traffic, the actual NIC packet throughput, the number of requests scheduled by the SLB device, the number of TCP connections, the number of TCP half-open connections and the connection-queue length, a first acceleration factor for configuring the target network node by the following matrix formula:
first acceleration factor = [pat-valve, bw-value, ct-value];
wherein pat-valve denotes the value corresponding to the acceleration mode; bw-value denotes (1 − NIC traffic utilization) × (1 − NIC packet throughput rate) × (1 − SLB request count / designed request capacity) / (NIC traffic utilization × NIC packet throughput rate × (SLB request count / designed request capacity)); and ct-value denotes (1 − (TCP connection count + TCP half-open connection count × 0.5) / connection-queue length) / ((TCP connection count + TCP half-open connection count × 0.5) / connection-queue length);
computing, from the sliding-window size, the congestion-window size, the TCP packet retransmission ratio and the transmission rate, a second acceleration factor for configuring the target network node by the following matrix formula:
second acceleration factor = [S-win, TCP-ret, Bitrate]^T;
wherein S-win denotes the sliding-window size, TCP-ret denotes the number of retransmitted TCP packets divided by the number of TCP packets whose transmission completed, Bitrate denotes the transmission rate, and [·]^T denotes transposition;
configuring parameters of the target network node according to the first acceleration factor and the second acceleration factor.
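The bw-value and ct-value expressions of claim 4 can be transcribed directly into code. A minimal sketch follows; function and parameter names are illustrative (they do not appear in the patent), and inputs are assumed nonzero since the claim gives no guidance on degenerate values.

```python
# Sketch of the two acceleration factors of claim 4, transcribed from the
# formulas given in the claim text. The first factor is a row vector, the
# second a column vector ([...]^T in the claim).

def first_acceleration_factor(pat_valve, nic_util, pkt_throughput_rate,
                              slb_requests, slb_capacity,
                              tcp_conns, half_open_conns, queue_len):
    # bw-value: (1-u)(1-p)(1-s) / (u * p * s), with s = SLB requests over
    # designed request capacity. Denominators assumed nonzero.
    slb_ratio = slb_requests / slb_capacity
    bw_value = ((1 - nic_util) * (1 - pkt_throughput_rate) * (1 - slb_ratio)
                / (nic_util * pkt_throughput_rate * slb_ratio))
    # ct-value: (1 - r) / r, with r weighting half-open connections by 0.5
    # against the connection-queue length.
    conn_ratio = (tcp_conns + half_open_conns * 0.5) / queue_len
    ct_value = (1 - conn_ratio) / conn_ratio
    return [pat_valve, bw_value, ct_value]          # row vector

def second_acceleration_factor(s_win, tcp_retransmitted, tcp_completed, bitrate):
    # TCP-ret: retransmitted packet count over completed packet count.
    tcp_ret = tcp_retransmitted / tcp_completed
    return [[s_win], [tcp_ret], [bitrate]]          # column vector
```

When every utilization ratio sits at 0.5, both bw-value and ct-value equal 1, which makes the formulas easy to sanity-check by hand.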
5. The method according to claim 4, characterized in that configuring parameters of the target network node according to the first network parameter and the acceleration mode further comprises:
comparing the sliding-window size with a first preset threshold to generate a first comparison result, and determining, according to the first comparison result, a second network parameter for increasing or decreasing the sliding-window size;
or, comparing the TCP packet retransmission ratio with a second preset threshold to generate a second comparison result, and determining, according to the second comparison result, the second network parameter for adjusting the sending ratio of TCP packets;
or, detecting whether the transmission rate reaches a preset target transmission rate to generate a detection result, and when the detection result indicates that the transmission rate reaches the preset target transmission rate, determining the second network parameter for adjusting the transmission rate;
configuring parameters of the target network node according to the second network parameter.
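The three alternative comparisons of claim 5 can be sketched as a single selection routine. The threshold semantics and the returned parameter names below are illustrative assumptions; the claim only says a second network parameter is determined from each comparison, not what it is.

```python
# Hypothetical sketch of claim 5: pick a "second network parameter" from one
# of the three comparisons. Return values are placeholder labels, not values
# defined by the patent.

def second_network_parameter(sliding_window, first_threshold,
                             retrans_ratio, second_threshold,
                             rate, target_rate):
    # Comparison 1: sliding-window size vs. first preset threshold;
    # the result selects a parameter that grows or shrinks the window.
    if sliding_window < first_threshold:
        return "increase_sliding_window"
    if sliding_window > first_threshold:
        return "decrease_sliding_window"
    # Comparison 2: TCP packet retransmission ratio vs. second threshold;
    # too many retransmissions selects a parameter that adjusts send ratio.
    if retrans_ratio > second_threshold:
        return "reduce_send_ratio"
    # Comparison 3: has the transmission rate reached the preset target?
    if rate >= target_rate:
        return "adjust_transmission_rate"
    return None
```

The claim joins the branches with "or", so a real implementation might run only one comparison per adjustment cycle; the sequential fall-through here is one possible reading.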
6. A data acceleration apparatus, characterized in that the apparatus comprises:
a determining unit, configured to determine a bandwidth full-load rate of a target network node;
a sending unit, configured to, when the bandwidth full-load rate reaches a preset load threshold of the target network node, send to the target network node a notification message for adjusting an acceleration policy, the notification message including at least an acceleration mode;
the notification message being used to instruct monitoring of a first network parameter of the target network node, wherein the first network parameter includes at least real-time bandwidth data, network resource utilization data and Transmission Control Protocol (TCP) connection data of the target network node.
7. The apparatus according to claim 6, characterized in that the determining unit determines the bandwidth full-load rate of the target network node by the following formula:
[Formula image FDA0002961857150000041; the variables of the formula are defined below.]
wherein Rv denotes the bandwidth full-load rate, Tn denotes the current bandwidth of the target network node, A denotes the value corresponding to the bandwidth availability of the target network node, V denotes the peak bandwidth capability of the target network node, and α denotes the value corresponding to the bandwidth availability coefficient of the target network node.
8. A data acceleration apparatus, characterized in that the apparatus comprises:
a receiving unit, configured to receive a notification message for adjusting an acceleration policy sent by a dispatch center, the notification message including at least an acceleration mode;
a monitoring unit, configured to monitor, according to the notification message, a first network parameter of a target network node, the first network parameter including at least real-time bandwidth data, network resource utilization data and Transmission Control Protocol (TCP) connection data of the target network node;
a configuration unit, configured to configure parameters of the target network node according to the first network parameter and the acceleration mode, so as to adjust the acceleration policy;
wherein receiving the notification message for adjusting the acceleration policy sent by the dispatch center comprises:
when the bandwidth full-load rate of the target network node reaches a preset load threshold of the target network node, receiving the notification message for adjusting the acceleration policy sent by the dispatch center.
9. A data acceleration apparatus, characterized in that the apparatus comprises a memory and a processor;
wherein the memory is configured to store a computer program executable on the processor;
and the processor is configured to execute the steps of the method according to any one of claims 1 to 5 when running the computer program.
10. A computer-readable storage medium storing a computer program, characterized in that, when the computer program is executed by a processor, the steps of the method according to any one of claims 1 to 5 are implemented.
CN201710931628.7A 2017-10-09 2017-10-09 A data acceleration method, device and storage medium Active CN107786371B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710931628.7A CN107786371B (en) 2017-10-09 2017-10-09 A data acceleration method, device and storage medium

Publications (2)

Publication Number Publication Date
CN107786371A CN107786371A (en) 2018-03-09
CN107786371B true CN107786371B (en) 2021-06-29

Family

ID=61434218

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710931628.7A Active CN107786371B (en) 2017-10-09 2017-10-09 A data acceleration method, device and storage medium

Country Status (1)

Country Link
CN (1) CN107786371B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113098782B (en) * 2021-03-22 2022-08-30 武汉大学 Network congestion control method and computer equipment
CN113391985A (en) * 2021-06-09 2021-09-14 北京猿力未来科技有限公司 Resource allocation method and device
CN114500663B (en) * 2021-12-28 2024-04-12 网宿科技股份有限公司 Scheduling method, device, equipment and storage medium of content distribution network equipment

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101668005A (en) * 2009-09-25 2010-03-10 东南大学 Data transmission accelerating engine method based on multiple access passages of transmitting end
CN107172179A (en) * 2017-06-05 2017-09-15 网宿科技股份有限公司 A kind of bilateral acceleration transmission method and system

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102104908B (en) * 2011-01-18 2014-05-07 华为技术有限公司 Data transmission control method and equipment
US9386127B2 (en) * 2011-09-28 2016-07-05 Open Text S.A. System and method for data transfer, including protocols for use in data transfer
CN102546832B (en) * 2012-02-29 2014-09-24 北京快网科技有限公司 Message transmission method based on transmission control protocol (TCP)
CN103391585B (en) * 2012-05-07 2019-06-18 中兴通讯股份有限公司 The method of adjustment and device of bandwidth
CN102891804B (en) * 2012-10-16 2018-08-10 南京中兴新软件有限责任公司 The method of adjustment and system of control strategy

Similar Documents

Publication Publication Date Title
US7388839B2 (en) Methods, apparatus and computer programs for managing performance and resource utilization within cluster-based systems
US9596281B2 (en) Transport accelerator implementing request manager and connection manager functionality
JP7743629B2 (en) Method, device and readable storage medium for analyzing model transmission status in a subscription network
KR101240143B1 (en) Non-blocking admission control
WO2009138000A1 (en) Method, device and system for controlling network flow
WO2010129275A2 (en) Adaptive rate control based on overload signals
CN105933234A (en) Node management method and system in CDN network
US8341265B2 (en) Hybrid server overload control scheme for maximizing server throughput
CN109660467B (en) Method and apparatus for controlling flow
Devkota et al. Performance of quantized congestion notification in TCP incast scenarios of data centers
CN107800574B (en) Storage QOS adjustment method, system, device and computer readable memory
CN107786371B (en) A data acceleration method, device and storage medium
CN114430397B (en) Deterministic service forwarding method and device
CN102916906B (en) One realizes the adaptive method of application performance, Apparatus and system
CN118827563A (en) Flow control method, device, equipment, storage medium and program product
Zhang et al. MoWIE: toward systematic, adaptive network information exposure as an enabling technique for cloud-based applications over 5G and beyond
CN111786901B (en) Transmission parameter self-adaptive adjustment method and acceleration service system
CN119697126B (en) SIP signaling transmission and flow control method, device and computer equipment
WO2020036079A1 (en) Network control device, network control method, and program
CN110858844A (en) Service request processing method, control method, device, system and electronic equipment
He et al. Accurate and fast congestion feedback in MEC-enabled RDMA datacenters
CN113949670B (en) Method, device, system, equipment and storage medium for allocating bandwidth of uplink channel
CN111611068B (en) Data writing method, server and client in distributed system
CN119629062B (en) Data processing method, device and cloud service system
CN119629680B (en) Method, device and readable storage medium for smoothing message peaks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant