
HK40084958A - Data transmission method, apparatus, device, and medium - Google Patents


Info

Publication number
HK40084958A
HK40084958A (Application No. HK42023073412.1A)
Authority
HK
Hong Kong
Prior art keywords
data stream
transmission
data
target
transmission channel
Prior art date
Application number
HK42023073412.1A
Other languages
Chinese (zh)
Other versions
HK40084958B (en)
Inventor
吴波
Original Assignee
腾讯科技(深圳)有限公司
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司
Publication of HK40084958A publication Critical patent/HK40084958A/en
Publication of HK40084958B publication Critical patent/HK40084958B/en

Description

Data transmission method, apparatus, device, and medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a data transmission method, apparatus, device, and medium.
Background
With the rapid development of computer technology, traffic transmission has become an important factor affecting the quality of computer network communication. Good traffic transmission performance ensures the integrity and timeliness of end-to-end traffic transmission.
However, current computer network communications are prone to network congestion. For example, multiple terminals in the same live room of a live streaming service often request traffic from a live server; if too much traffic is requested, network congestion may occur between the live server and the terminals, causing network packet loss and degrading traffic transmission performance. Therefore, reducing network packet loss and improving traffic transmission performance has become a research hotspot in the field of computer network communication.
Disclosure of Invention
The embodiments of the present application provide a data transmission method, an apparatus, a device, and a medium, which can improve traffic transmission performance in network communication.
In one aspect, an embodiment of the present application provides a data transmission method, where the method includes:
when a first data stream to be transmitted is generated in a transmission channel, acquiring the network state of the transmission channel; a transmission channel refers to a data path from a source address of a first data stream to a destination address of the first data stream;
if the network state indicates that the transmission channel has network congestion, acquiring at least one second data stream existing in the transmission channel;
screening the target data stream from the at least one second data stream;
and sharing the transmission resources occupied by the target data stream to the first data stream, and transmitting the first data stream by adopting the shared transmission resources.
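The four steps above can be sketched in code. The following is a minimal, hypothetical Python illustration; the names `Stream` and `transmit_new_stream`, and the 50% sharing ratio, are assumptions of this sketch, not part of the claims:

```python
from dataclasses import dataclass

@dataclass
class Stream:
    name: str
    resources: float  # abstract transmission resources, e.g. a send rate


def transmit_new_stream(congested: bool, second_streams: list,
                        first: Stream, share_ratio: float = 0.5) -> Stream:
    """Check congestion, collect existing streams, screen a target,
    and share part of the target's resources with the new stream."""
    if not congested or not second_streams:
        return first  # no congestion, or nothing to borrow from
    # Screening rule (one of several claimed alternatives): the stream
    # occupying the most transmission resources becomes the target.
    target = max(second_streams, key=lambda s: s.resources)
    shared = target.resources * share_ratio   # target variable resource amount
    target.resources -= shared                # target keeps a reduced allocation
    first.resources += shared                 # first stream uses the shared part
    return first
```

The sketch keeps both streams transmitting: the target gives up only a fraction of its allocation rather than being dropped.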
In another aspect, an embodiment of the present application provides a data transmission apparatus, including:
an obtaining unit, configured to obtain a network state of a transmission channel when a first data stream to be transmitted is generated in the transmission channel; a transmission channel refers to a data path from a source address of a first data stream to a destination address of the first data stream;
the processing unit is used for acquiring at least one second data stream existing in the transmission channel if the network state indicates that the transmission channel has network congestion;
a processing unit for screening the target data stream from the at least one second data stream;
the processing unit is further configured to share transmission resources occupied by the target data stream to the first data stream, and transmit the first data stream using the shared transmission resources.
In one implementation, the at least one second data stream existing in association with the transmission channel includes:
an existing data stream in the transmission channel, where the source address of the existing data stream is the same as the source address of the first data stream, and its destination address is the same as the destination address of the first data stream;
or, an existing data stream in another transmission channel that shares a link with the transmission channel, where the source address of the existing data stream in the other transmission channel is the same as the source address of the first data stream, and the destination address of the existing data stream and the destination address of the first data stream belong to the same object group.
In one implementation, one data stream corresponds to one service; the processing unit is used for screening the target data stream from the at least one second data stream, and is specifically used for:
screening a target data stream from at least one second data stream existing in association with the transmission channel according to the data stream screening rule;
wherein, the data flow screening rule includes: taking a second data stream occupying transmission resources larger than a resource threshold value in the at least one second data stream as a target data stream; or, taking the second data stream occupying the most transmission resources in at least one second data stream as a target data stream; or, taking the second data stream with the longest duration in the at least one second data stream as a target data stream; or, the second data stream with the service level of the corresponding service lower than the level threshold value in the at least one second data stream is taken as the target data stream.
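The four alternative screening rules could be read as follows in Python; the rule names and the stream fields (`resources`, `duration`, `level`) are chosen for this illustration, not taken from the claims:

```python
def screen_target(second_streams, rule,
                  resource_threshold=None, level_threshold=None):
    """Select the target data stream by one of four alternative rules."""
    if rule == "above_resource_threshold":
        # any stream occupying more transmission resources than the threshold
        return next(s for s in second_streams
                    if s["resources"] > resource_threshold)
    if rule == "most_resources":
        return max(second_streams, key=lambda s: s["resources"])
    if rule == "longest_duration":
        return max(second_streams, key=lambda s: s["duration"])
    if rule == "low_service_level":
        # a stream whose corresponding service level is below the threshold
        return next(s for s in second_streams
                    if s["level"] < level_threshold)
    raise ValueError(f"unknown screening rule: {rule}")
```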
In one implementation, the data transmission method is applied to a data sending end; the first data stream to be transmitted in the transmission channel is generated based on a traffic request message sent by an object terminal; the processing unit is configured to, when acquiring the network state of the transmission channel, specifically:
acquire network parameters of the transmission channel, where the network parameters are periodically collected by the data sending end according to a statistics period;
and if a network parameter is greater than or equal to a parameter threshold, determine that the network state of the transmission channel indicates network congestion in the transmission channel.
In one implementation, the network parameters include at least a maximum available bandwidth and an amount of in-transit data; the maximum available bandwidth refers to the transmission rate required by the first data stream during transmission, and the data volume in transit refers to the used transmission window in the transmission channel;
when the network parameter is the maximum available bandwidth, the network parameter being greater than or equal to the parameter threshold value means that the maximum available bandwidth is greater than or equal to the bandwidth threshold value;
when the network parameter is the data volume in transit, the network parameter being equal to or greater than the parameter threshold means that the data volume in transit is equal to or greater than the data volume threshold.
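As a sketch, the per-parameter threshold check at the data sending end might look as follows; treating the channel as congested when either parameter reaches its threshold is an assumption of this example:

```python
def channel_congested(max_bandwidth: float, in_transit: int,
                      bandwidth_threshold: float, in_transit_threshold: int) -> bool:
    """Flag congestion when a network parameter reaches its parameter threshold:
    maximum available bandwidth vs. the bandwidth threshold, or the amount of
    in-transit data vs. the data-volume threshold."""
    return (max_bandwidth >= bandwidth_threshold
            or in_transit >= in_transit_threshold)
```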
In one implementation, the processing unit is configured to, when sharing transmission resources occupied by the target data stream to the first data stream, specifically:
acquiring transmission resources occupied by a target data stream;
reducing the transmission resources occupied by the target data stream by a target variable resource amount to obtain new transmission resources occupied by the target data stream;
and using the target variable resource amount reduced from the target data stream as the transmission resource of the first data stream.
In one implementation, when the network parameter is the maximum available bandwidth, the transmission resource is the sending rate, and the target variable resource amount is the target variable rate amount; when the network parameter is the data volume in transit, the transmission resource is a sending window, and the target variable resource volume is a target variable window volume;
the target variable resource amount is determined based on a preset resource parameter proportion.
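The reallocation in this implementation reduces to simple arithmetic; the proportion value used in the example below is illustrative:

```python
def reallocate(target_occupied: float, proportion: float):
    """Carve the target variable resource amount (occupied resources times a
    preset resource parameter proportion) out of the target data stream and
    hand it to the first data stream."""
    delta = target_occupied * proportion   # target variable resource amount
    new_target = target_occupied - delta   # new allocation of the target stream
    return new_target, delta               # delta becomes the first stream's resource
```

The same arithmetic applies whether the resource is a sending rate (bandwidth case) or a sending window (in-transit data case).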
In one implementation, the processing unit is configured to, when transmitting the first data stream using the shared transmission resource, specifically:
according to the transmission resource shared by the target data stream to the first data stream, the first data stream is sent to the target terminal;
the processing unit is further used for: and sending the target data stream to the object group to which the object terminal belongs according to the new transmission resource occupied by the target data stream.
In one implementation, the data transmission method is applied to an intermediate routing node; the first data stream is forwarded by the data sending end; the processing unit is configured to, when acquiring the network state of the transmission channel, specifically:
determining an object group corresponding to the first data stream based on the destination address of the first data stream; the object group comprises one or more object terminals for transmitting data streams through all or part of links of the transmission channel, and the destination address of the first data stream points to one object terminal in the object group;
acquiring a forwarding queue corresponding to the object group, where the forwarding queue stores, in order of message reception, the messages of the second data streams corresponding to the object terminals in the object group;
calculating the usage rate of the forwarding queue according to the number of messages of the second data stream corresponding to each object terminal;
if the usage rate is greater than or equal to the usage threshold, determining that the network state of the transmission channel indicates that the transmission channel has network congestion.
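At an intermediate routing node, the usage-rate check might look like this sketch; defining the usage rate as queued messages divided by queue capacity is an assumption of the example:

```python
def forwarding_queue_congested(queue_messages: list, capacity: int,
                               usage_threshold: float) -> bool:
    """Usage rate = queued messages / queue capacity; congestion is indicated
    when the usage rate reaches the usage threshold."""
    usage = len(queue_messages) / capacity
    return usage >= usage_threshold
```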
In one implementation, the target data stream is a second data stream occupying transmission resources greater than a resource threshold, or the target data stream is a second data stream occupying the most transmission resources among at least one second data stream; the more messages the second data stream occupies, the more transmission resources the second data stream occupies; the processing unit is used for screening the target data stream from the at least one second data stream, and is specifically used for:
acquiring a statistical list corresponding to the forwarding queue, wherein the statistical list is stored with flow identifiers and message numbers of second data flows in the forwarding queue in an associated manner; the statistical list is dynamically updated along with the forwarding queue;
and screening out the target data stream based on the message number of each second data stream recorded in the statistical list.
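A statistical list that stays in sync with the forwarding queue can be kept as a flow-identifier to message-count table; the class below is one possible sketch, not the claimed data structure:

```python
from collections import Counter

class StatsList:
    """Flow identifier -> message count, updated as the forwarding queue changes."""
    def __init__(self):
        self.counts = Counter()

    def on_enqueue(self, flow_id: str) -> None:
        self.counts[flow_id] += 1

    def on_dequeue(self, flow_id: str) -> None:
        self.counts[flow_id] -= 1
        if self.counts[flow_id] <= 0:
            del self.counts[flow_id]   # drop streams with no queued messages

    def heaviest_flow(self) -> str:
        """The second data stream with the largest message number."""
        return max(self.counts, key=self.counts.get)
```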
In one implementation manner, the processing unit is configured to, when screening out the target data stream based on the number of packets of each second data stream recorded in the statistics list, specifically:
screening a second data stream with the largest message number from the statistical list;
and taking the screened second data stream as a target data stream.
In one implementation manner, the processing unit is configured to, when screening out the target data stream based on the number of packets of each second data stream recorded in the statistics list, specifically:
starting from the tail message stored in the forwarding queue, sequentially looking up, for the second data stream corresponding to each message, the number of messages recorded in the statistical list, until the number of messages recorded in the statistical list for the second data stream corresponding to a target message is greater than or equal to a number threshold;
and taking the second data stream corresponding to the target message as a target data stream.
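The tail-first scan can be sketched as below, with the queue modeled as a list of message dictionaries and the statistical list as a plain dict mapping flow id to message count (field names assumed):

```python
def find_target_from_tail(forwarding_queue, stats, number_threshold):
    """Starting from the tail of the forwarding queue, return the flow id of
    the first message whose stream has a recorded message count >= threshold."""
    for message in reversed(forwarding_queue):
        flow_id = message["flow_id"]
        if stats.get(flow_id, 0) >= number_threshold:
            return flow_id   # target message found -> its stream is the target
    return None              # no stream meets the threshold
```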
In one implementation, the transmission resource is a memory space occupied by the message; the processing unit is configured to, when sharing transmission resources occupied by the target data stream to the first data stream, specifically:
removing K messages of the target data flow from the forwarding queue; the number of messages contained in the target data stream is greater than or equal to K, wherein K is a positive integer;
K messages of the first data flow are added into a forwarding queue, and an updated forwarding queue is obtained;
the processing unit is configured to, when transmitting the first data stream using the shared transmission resource, specifically:
and sending the first data stream to the target terminal according to the updated forwarding queue.
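Swapping K messages in the forwarding queue might look like the following sketch; representing messages as dictionaries with a `flow_id` field is an assumption of the example:

```python
from collections import deque

def swap_k_messages(queue: deque, target_flow: str, new_messages: list, k: int):
    """Remove up to K messages of the target data stream from the forwarding
    queue, then append K messages of the first data stream."""
    removed, kept = [], deque()
    for msg in queue:
        if msg["flow_id"] == target_flow and len(removed) < k:
            removed.append(msg)        # memory space freed by the target stream
        else:
            kept.append(msg)
    kept.extend(new_messages[:k])      # first data stream takes the freed space
    return kept, removed
```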
In one implementation, the processing unit is further configured to:
deleting the K removed messages of the target data stream, and sending a packet loss feedback message to the data sending end, where the packet loss feedback message indicates that the K messages of the target data stream are lost;
or, forwarding the K messages removed from the target data stream to the previous routing node of the intermediate routing node;
and when detecting that the usage rate of the forwarding queue is less than the usage threshold, receiving the K removed messages of the target data stream from the previous routing node and forwarding them.
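The two alternative dispositions for the removed messages could be expressed as a small dispatch; the callback-style interface here is purely illustrative:

```python
def handle_removed_messages(removed, mode, notify_sender, forward_upstream):
    """Either drop the K removed messages and send packet-loss feedback to the
    data sending end, or park them at the previous routing node to be
    re-forwarded once the queue usage rate falls below the usage threshold."""
    if mode == "drop_and_feedback":
        notify_sender(len(removed))    # packet-loss feedback message
        return []
    if mode == "park_at_previous_node":
        forward_upstream(removed)      # previous node buffers the messages
        return removed
    raise ValueError(f"unknown mode: {mode}")
```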
In another aspect, embodiments of the present application provide a computer device, the device comprising:
a processor for loading and executing the computer program;
a computer readable storage medium having a computer program stored therein, which, when executed by the processor, implements the above-described data transmission method.
In another aspect, embodiments of the present application provide a computer readable storage medium storing a computer program adapted to be loaded by a processor and to perform the above-described data transmission method.
In another aspect, embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer-readable storage medium. The processor of the computer device reads the computer instructions from the computer readable storage medium, which when executed by the processor implement the data transmission method described above.
In the embodiments of the present application, when a new first data stream to be transmitted (i.e., new traffic) is generated in a transmission channel and the network state of the transmission channel is detected to indicate network congestion, at least one existing second data stream associated with the transmission channel may be acquired. A target data stream is then screened from the at least one second data stream; for example, the target data stream may be the second data stream with the largest data amount or the longest duration, or the second data stream with the lowest correlation with a target service (such as the service corresponding to the first data stream, or a service related to that service). Because the data amount of the target data stream is large, its duration is long, or its correlation with the target service is low, losing a small number of its packets or slowing its transmission rate has little impact on the service corresponding to the target data stream; the embodiments of the present application therefore support sharing a part of the transmission resources occupied by the target data stream to the first data stream to be transmitted. In this way, the first data stream is not lost under network congestion (i.e., it can still be transmitted normally) while the transmission performance of the target data stream is affected only slightly, which effectively relieves network congestion during network communication and improves overall traffic transmission efficiency.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of a network communication for transmitting multiple data streams over a transmission channel according to an exemplary embodiment of the present application;
fig. 2a is a schematic diagram of network congestion at a data sending end according to an exemplary embodiment of the present application;
FIG. 2b is a schematic diagram of an intermediate routing node having network congestion provided in accordance with an exemplary embodiment of the present application;
fig. 3 is a schematic diagram of a differential congestion control manner provided in an exemplary embodiment of the present application;
fig. 4 is a flowchart of a data transmission method according to an exemplary embodiment of the present application;
FIG. 5 is a schematic diagram of acquiring at least one second data stream provided by an exemplary embodiment of the present application;
FIG. 6 is a schematic diagram of another acquisition of at least one second data stream provided by an exemplary embodiment of the present application;
FIG. 7 is a flow chart of another method for data transmission according to an exemplary embodiment of the present application;
FIG. 8 is a schematic diagram of a plurality of object groups corresponding to a data sending end according to an exemplary embodiment of the present application;
FIG. 9 is a schematic diagram of resource sharing implemented at a data sending end according to an exemplary embodiment of the present application;
fig. 10 is a flowchart of yet another data transmission method according to an exemplary embodiment of the present application;
FIG. 11 is a schematic diagram of a statistics list corresponding to a forwarding queue according to an exemplary embodiment of the present application;
FIG. 12 is a schematic diagram of an intermediate routing node screening target data flow provided in an exemplary embodiment of the present application;
FIG. 13 is a schematic diagram of another intermediate routing node screening target data flow provided in accordance with an exemplary embodiment of the present application;
FIG. 14 is a schematic diagram of K packets of a target data flow removed and K packets of a first data flow added to a forwarding queue according to one exemplary embodiment of the present application;
fig. 15 is a schematic structural view of a data transmission device according to an exemplary embodiment of the present application;
Fig. 16 is a schematic structural diagram of a computer device according to an exemplary embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
The embodiments of the present application relate to computer network communication (or simply network communication), which refers to connecting isolated workstations or hosts with physical links to form a transmission channel (a data path for transmitting data or information), through which a data stream can be transferred from one computer device (such as a workstation or host) to another, so as to achieve communication between computer devices. Notably: (1) resource sharing and communication between workstations or hosts during data stream transmission depend on a network communication protocol; communication and information exchange are possible only when the computer devices sending and receiving the data streams use the same network communication protocol. Common network communication protocols may include, but are not limited to: the Transmission Control Protocol (TCP), the User Datagram Protocol (UDP), the UDP-based low-latency transport protocol QUIC (Quick UDP Internet Connections), the Internet Protocol (IP), and so on. (2) The data stream mentioned above may also be called network traffic (or simply traffic, or a flow), and may refer to the amount of data transmitted over a computer network (e.g., data transmitted from a data sending end to an object terminal), in particular the amount of data transmitted through a transmission channel.
The data stream has a source address, which may be an internet protocol (Internet Protocol, IP) address of a computer device (e.g., cloud server) that generated the data stream, and a destination address, which may be an IP address of a computer device (e.g., user terminal) that receives the data stream; that is, each data stream is transmitted from its own source address to its destination address, and has streaming properties. Further, a transmission channel for transmitting a data stream may refer to a section of a data path from a source address of the data stream to a destination address of the data stream.
It should be appreciated that there are often phenomena in computer networks in which multiple data streams share a transmission resource (alternatively referred to as a network resource) to enable the transmission of multiple data streams in a transmission channel. For example, the destination addresses of the multiple data streams may be the same address (e.g., IP address of a certain target terminal); in a live broadcast scene, the object terminal always keeps communication connection with a service server (such as a live broadcast server), so that the service server can continuously send one or more data streams related to a live broadcast service to the object terminal through a transmission channel, and the object terminal can continuously output live broadcast audio and video; in this implementation manner, the transmission channels used for transmitting the data streams are the same, that is, different data streams issued to the same object terminal are transmitted by using the same data channel. 
For another example, the destination addresses of the plurality of data streams may be different addresses (e.g., the IP addresses of a plurality of object terminals); in a live broadcast scene, a plurality of object terminals in the same live room request traffic from a service server (such as a live server), so that the service server sends data streams to the plurality of object terminals in the same live room through transmission channels, and the users of all object terminals can watch the live broadcast simultaneously. In this implementation, the transmission channels used for transmitting the data streams are not identical but share part of the data path; for example, each data stream is sent from the live server to a router node, and the router node then forwards each data stream to a different object terminal. The data paths of the data streams between the live server and the router node are identical, while the data paths between the router node and the individual object terminals differ.
An exemplary schematic diagram of network communications for transmitting a plurality of data streams over a transmission channel can be seen in fig. 1; as shown in fig. 1, the computer network includes a server cluster, a gateway cluster and an object cluster. Wherein, the server cluster may refer to a cluster constructed by a plurality of servers responsible for the same service, the gateway cluster may refer to a cluster including a plurality of routers (routers), and the object cluster may refer to a cluster composed of object terminals held by objects connected to the server cluster; the number and naming of the computer devices included in each cluster are not limited in the embodiments of the present application, and are also described herein. Wherein:
(1) The server cluster comprises one or more business servers; a business server may refer to a server that carries the functions of a business service, primarily for computing and processing business related transactions. The gateway cluster includes one or more routers (or referred to as routing servers). The router can be understood as transfer equipment responsible for data transfer between the service server and the object terminal, and plays a role of a gateway; the service server only needs to send the data to the router according to the network communication protocol negotiated in advance, and the router is used as proxy equipment to forward the data to the destination address (such as the object terminal).
The servers mentioned above (such as a service server or a routing server) may be independent physical servers, a server cluster or distributed system formed by a plurality of physical servers, or cloud servers that provide basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs, big data, and artificial intelligence platforms. Cloud computing is an important branch of the cloud technology field; cloud technology is a hosting technology that unifies hardware, software, network, and other resources in a wide area network or local area network to realize the computation, storage, processing, and sharing of data. It is a general term for the network technology, information technology, integration technology, management platform technology, application technology, and the like applied under the cloud computing business model; these resources can form a resource pool that is used on demand, flexibly and conveniently. Cloud computing is an important support for cloud technology: background services of technical network systems, such as video websites, picture websites, and other portal websites, require large amounts of computing and storage resources. With the continued development of the internet industry, each article may in the future carry its own identification mark that must be transmitted to a background system for logical processing; data of different levels will be processed separately, and all kinds of industry data require strong backing systems, which can only be realized through cloud computing.
(2) The object cluster comprises one or more object terminals which are in communication connection with the server cluster; the target terminal is terminal equipment held by a user and is used for carrying out service butt joint with a service server through a router, for example, acquiring service data from the service server and outputting the service data. The terminal device may include, but is not limited to: smart phones (e.g., smart phones deploying Android systems, or smart phones deploying internet operating systems (Internetworking Operating System, IOS)), tablet computers, portable personal computers, mobile internet devices (Mobile Internet Devices, MID), vehicle devices, headsets, smart speakers, smartwatches, and desktop computers, among others, but are not limited thereto. The service server, the router and the object terminal mentioned above may be directly or indirectly connected through a wired or wireless communication manner, and the specific communication connection manner between each computer device is not limited in the embodiment of the present application; for example, the service server and the router can be connected through a network cable, and the router and the object terminal can be connected through wireless (such as wifi) and the like.
In more detail, the embodiments of the present application support dividing the object terminals in the object cluster into different object groups according to a certain division rule; in this way, the data streams corresponding to a plurality of object terminals belonging to the same object group can be forwarded to the corresponding object terminals through the same routing node (i.e., the router mentioned above), which saves the resource cost of data stream transmission to a certain extent and improves traffic transmission efficiency. The division rules may include, but are not limited to, region division rules and object attribute division rules (such as age group or gender). For ease of explanation, the embodiments of the present application take the region division rule as an example, where region division may refer to dividing the object terminals into a plurality of object groups (User Groups, UG) according to the province, city, operator, park, or access mode of the IP address of each object terminal, so that object terminals in similar regions (or object terminals that are close to each other) are divided into the same object group. For example, suppose the object cluster includes object terminal 1 through object terminal 6; if the distance between object terminal 1 and object terminal 2 is 10 meters and the distances between object terminal 1 and the other object terminals are all greater than 10 meters, object terminal 1 and object terminal 2 may be divided into the same object group 1 (i.e., user group 1 shown in fig. 1); similarly, object group 2 and object group 3 can be obtained.
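Region-based grouping of object terminals can be sketched as follows; the per-terminal region key is an assumption of the example (in practice it would be derived from an IP geolocation or operator lookup):

```python
def group_terminals_by_region(terminals):
    """Divide object terminals into object groups keyed by region,
    so terminals in similar regions share one group (and one routing path)."""
    groups = {}
    for terminal in terminals:
        groups.setdefault(terminal["region"], []).append(terminal["id"])
    return groups
```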
With continued reference to fig. 1, for each object terminal (e.g., object terminal 5 and object terminal 6) in user group 1, the service server may forward the data stream corresponding to each object terminal to that terminal sequentially via router R1 → router R2 → router R3; for example, data stream 1 corresponding to object terminal 5 is forwarded to object terminal 5 via router R1 → router R2 → router R3, and data stream 2 corresponding to object terminal 6 is forwarded to object terminal 6 along the same path. Therefore, the source addresses of the data streams corresponding to the object terminals in user group 1 are the same and all point to the service server, while the destination addresses differ: the destination address of data stream 1 points to object terminal 5, and the destination address of data stream 2 points to object terminal 6. Although the destination addresses of the data streams are different, the transmission channels for transmitting them have the same (or common, shared) links; as shown in fig. 1, these transmission channels share link 1, link 2, and link 3.
As can be seen from the above description, the transmission channels corresponding to the data streams have common links, so in practical applications one transmission channel may carry multiple data streams. However, because the transmission resources of the transmission channel are limited (for example, the buffer space of a routing node is limited, or the bandwidth of the transmission channel is limited), network congestion occurs when too many data streams are transmitted in the transmission channel. Network congestion can be simply understood as a network state in which the transmission channel is continuously overloaded; specifically, the amount of data carried by the transmission channel exceeds the upper limit that the channel or a routing node can process, which degrades traffic transmission performance. For example, when network congestion occurs, the network speed of the object terminal becomes slow and data reception is delayed (e.g., stuttering in a video scene); a Round Trip Time (RTT) indicator may be used to measure this delay in a computer network, where RTT represents the total delay from the moment the data sending end (e.g., the service server) sends data until it receives the acknowledgement returned by the data receiving end (the acknowledgement being sent immediately after the receiving end receives the data). Network congestion may also cause data packet loss (e.g., loss of video frames in a video scene). Therefore, maintaining good traffic transmission performance in the computer network can effectively guarantee normal transmission of data streams and the service experience of users.
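An RTT-based congestion check can be sketched as follows. This is a hedged illustration: the TCP-style smoothing weight `alpha = 0.125` and the `1.5x` inflation factor over the minimum RTT are assumptions, not values taken from this disclosure.

```python
# Illustrative sketch: flag likely congestion when the smoothed RTT (EWMA)
# inflates well beyond the baseline (minimum observed) RTT.
def is_congested(rtt_samples_ms, inflation=1.5, alpha=0.125):
    if not rtt_samples_ms:
        return False
    srtt = rtt_samples_ms[0]
    for sample in rtt_samples_ms[1:]:
        srtt = (1 - alpha) * srtt + alpha * sample  # exponential smoothing
    return srtt > inflation * min(rtt_samples_ms)
```

A steady 20 ms RTT trace is judged uncongested, while a trace whose samples jump to 200 ms is flagged, matching the intuition that queueing delay inflates RTT during congestion.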
In order to relieve network congestion in a computer network and guarantee traffic transmission performance, the embodiment of the application provides a data transmission scheme based on resource sharing. The scheme starts from the cause of network congestion (such as congestion caused by data streams that occupy transmission resources for a long time or carry a large amount of data, which may be called "large streams" in the embodiments of the application) and proposes to relieve network congestion through effective interaction (namely "resource sharing") among multiple data streams; specifically, part of the transmission resources occupied by an existing data stream in the transmission channel is "shared" or "lent" to the data stream to be transmitted. In this way, the data stream whose resources are borrowed can still be transmitted normally, and the new data stream to be transmitted will not suffer packet loss due to failed resource preemption, thereby guaranteeing the efficiency and performance of traffic transmission.
The general flow of the data transmission scheme provided in the embodiment of the application may include the following. Assuming that a new first data stream to be transmitted is generated for the transmission channel, the network state of the transmission channel, which is the data path for transmitting the first data stream, may first be acquired (or detected). If the network state indicates that the transmission channel has network congestion, it is determined that the first data stream to be transmitted may need to occupy transmission resources held by second data streams already existing in the transmission channel, or that the first data stream may be lost because it cannot be stored in the forwarding queue of a routing node. At this time, the existing at least one second data stream associated with the transmission channel may be acquired, i.e., the data streams being transmitted through the transmission channel. A target data stream is then screened from the at least one second data stream; the target data stream may be a data stream whose service is only slightly affected even after part of its transmission resources is shared (e.g., a "large stream", a data stream with a lower service level, etc.). Finally, the transmission resources occupied by the target data stream are shared with the first data stream (specifically, part of those transmission resources), and the first data stream can be transmitted using the resources shared from the target data stream.
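The general flow above can be sketched end-to-end. This is a minimal model under stated assumptions: transmission resources are abstracted as bandwidth units, `occupied` maps each existing second stream to its held units, the target is chosen as the largest holder, and the `share_ratio` of 0.5 (share part, not all) is invented for illustration.

```python
# Sketch of the S401-S404 flow: detect congestion, screen a target stream,
# and share part of its resources with the new first stream.
def admit_stream(first_stream_need, occupied, capacity, share_ratio=0.5):
    used = sum(occupied.values())
    if used + first_stream_need <= capacity:
        return first_stream_need, None        # no congestion: admit as-is
    target = max(occupied, key=occupied.get)  # screen the "large stream"
    shared = occupied[target] * share_ratio   # lend only part of its resources
    occupied[target] -= shared
    return min(first_stream_need, shared), target
```

For example, with streams holding 2 and 6 units on an 8-unit channel, a new 2-unit stream triggers sharing from the 6-unit "large stream", which keeps 3 units while the newcomer is granted its full 2.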
Therefore, compared with conventional indiscriminate transmission-parameter configuration and packet-loss strategies (such as transmitting every data stream with default transmission parameters, or simply discarding new data streams that cannot obtain transmission resources), the embodiment of the application fully considers that transmission resources can be shared among data streams. Through effective interaction of transmission resources among data streams, stable transmission of the data stream that lends its resources can be guaranteed, newly generated data streams to be transmitted are prevented from being lost because they cannot obtain transmission resources, network congestion is effectively relieved, the traffic transmission performance of the transmission channel is guaranteed, and the service experience of the object is improved.
It should be noted that the specific implementation of the data transmission scheme provided in the embodiments of the present application differs according to where network congestion occurs in the transmission channel. The main locations where network congestion may occur during the transmission of data streams over the transmission channel include:
1) The data sending end (e.g., a computer device that generates and begins sending a data stream). As shown in fig. 2a, at the beginning of traffic transmission only data stream 1 and data stream 2 exist between the cloud server and the object terminals; assume that data stream 1 is a "small stream" (as opposed to a "large stream", i.e., a data stream that occupies transmission resources for a shorter time or carries a smaller amount of data) and data stream 2 is a "large stream", and that the sum of the transmission rates of data stream 1 and data stream 2 equals the maximum available bandwidth of the computer network. When object terminal 3 then sends a traffic request message to the cloud server, the cloud server performs network transmission of data stream 3 according to default initial transmission parameters. In practice, however, this amounts to "forcing" data stream 3 to occupy the transmission resources of data stream 1 and data stream 2 (as shown in fig. 2a, data stream 3 occupies part of the transmission resources of each); during this preemption, network packet loss occurs to different extents in the newly added data stream 3 as well as in the original data stream 1 and data stream 2.
2) An intermediate routing node (e.g., any routing node on the transmission channel). As shown in fig. 2b, the forwarding queue of intermediate routing node R2 (used to buffer data streams waiting to be forwarded) contains data stream 1 (e.g., a "small stream") and data stream 2 (e.g., a "large stream"), and the forwarding queue is already fully loaded. When another data stream 3 enters intermediate routing node R2, the traffic messages of data stream 3 are discarded because no storage space remains in the forwarding queue, i.e., intermediate routing node R2 will not forward data stream 3. Obviously, dropping packets when the forwarding queue is full is unfair to the three data streams: the large stream (data stream 2) occupies most of the transmission resources, while the small stream (data stream 3), which merely arrived at the intermediate routing node later than the large stream, is discarded.
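The tail-drop behaviour described for router R2 can be sketched as a bounded FIFO. The queue capacity and packet labels here are illustrative assumptions; the point is that the late-arriving small-stream packet is discarded without any differentiation.

```python
from collections import deque

# Sketch of an undifferentiated (tail-drop) forwarding queue: once full,
# every newly arriving packet is dropped regardless of which flow it
# belongs to, reproducing the unfairness described in the text.
class TailDropQueue:
    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()
        self.dropped = []

    def enqueue(self, packet):
        if len(self.queue) >= self.capacity:
            self.dropped.append(packet)  # newcomer loses, however small
            return False
        self.queue.append(packet)
        return True
```

With a capacity of 2, packets of streams 1 and 2 fill the queue, and the first packet of stream 3 is dropped on arrival.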
Therefore, the embodiment of the application provides, from the perspectives of the data sending end and the intermediate routing node respectively, a differentiated congestion control mode based on data stream resource sharing for these different types of network congestion, so as to effectively relieve network congestion and improve traffic transmission performance. Wherein:
1) As shown in fig. 3, from the perspective of the data sending end, the principle of the differentiated congestion control mode provided in the embodiment of the present application may be summarized as follows: according to the current network state of the transmission channel, the data sending end configures transmission parameters for the data stream to be transmitted by sharing the transmission resources of a target data stream (such as a "large stream"), so as to ensure that the target data stream does not suffer packet loss because the data stream to be transmitted preempts its transmission resources. That is, the transmission resources of the target data stream are borrowed to configure transmission parameters for the newly added data stream (i.e., the small stream); in this way, neither the newly added data stream nor the lending target data stream loses a large number of messages because of resource preemption, the packet loss of both large and small streams caused by small streams contending for transmission resources in the initial stage is reduced, and the service experience of the object terminal is improved.
2) As shown in fig. 3, from the perspective of the intermediate routing node, the principle of the differentiated congestion control mode provided in the embodiments of the present application may be summarized as follows: the intermediate routing node identifies the data streams in the forwarding queue and screens out the target data stream (such as a "large stream"); when the forwarding queue is or is about to be congested, the messages of the target data stream are preferentially discarded, which improves the forwarding efficiency of the data stream to be transmitted and avoids the low sending rate of new data streams and the poor user experience caused by the target data stream occupying transmission resources for a long time. That is, when the intermediate routing node detects that the forwarding queue is full or about to be full, the transmission resources of the target data stream are shared with the data stream to be transmitted, which reduces the packet loss rate of the data stream to be transmitted and improves its transmission efficiency; when the target data stream is a large stream, occasional packet loss in the large stream does not have an overly serious impact on the object experience.
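The differentiated drop policy at the intermediate routing node can be sketched as follows. Modelling the queue as a list of `(flow_id, payload)` tuples and identifying the "large stream" as the flow holding the most queue slots are simplifying assumptions for illustration only.

```python
from collections import Counter

# Sketch of preferential large-flow dropping: when the forwarding queue is
# full, evict one buffered packet of the flow occupying the most slots so
# the newly arriving stream's packet can still be forwarded.
def enqueue_with_eviction(queue, capacity, packet):
    if len(queue) < capacity:
        queue.append(packet)
        return None
    large_flow = Counter(flow for flow, _ in queue).most_common(1)[0][0]
    evicted = None
    for i, (flow, _) in enumerate(queue):
        if flow == large_flow:
            evicted = queue.pop(i)   # preferentially drop the large flow
            break
    queue.append(packet)
    return evicted
```

In a full three-slot queue holding two "big"-flow packets and one "small"-flow packet, the arriving packet of a new stream displaces a "big"-flow packet instead of being discarded itself.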
Therefore, compared with traditional per-flow network feedback for relieving network congestion, the resource-sharing scheme among multiple data streams designed in the embodiment of the application can better guarantee the network transmission efficiency of small streams: transmission resources are shared from large streams both at the data sending end and on the forwarding side (namely the intermediate routing node), which avoids problems such as a high small-stream packet loss rate and unstable transmission rates caused by large streams occupying resources for a long time, and helps improve the service experience of users.
It should be further noted that the data transmission scheme provided in the embodiments of the present application may be implemented by a computer device, which may be the service server/cloud server mentioned above, or an intermediate routing node; the embodiments of the present application do not limit the specific type of computer device. For example, the computer device performing the data transmission scheme may also be any device deployed with a Content Delivery Network (CDN), in which case the data transmission scheme provided by the embodiment of the application is provided by the content distribution network. The content distribution network can be understood as a tool for optimizing network transmission and access speed, helping to avoid network blocking and keeping the network smooth at all times. In practical applications, the cloud server may rely on the content distribution network to provide efficient cloud traffic transmission performance, so as to attract more content service provider clients (such as live broadcast service providers, short video service providers, game service providers, social service providers, friend-making service providers, music service providers, etc., different service providers providing different services) and thereby obtain more benefits; similarly, content service providers are more inclined to select a high-performance content distribution network to provide traffic transmission services for themselves, in the expectation of improving the business experience of users and thus obtaining more benefits.
As can be seen from the above description, when this data transmission scheme, which enables traffic transmission to always maintain high performance, is applied to a content distribution network, the transmission performance and transmission efficiency of the content distribution network can be improved, thereby promoting effective adoption of the content distribution network.
Based on the above described data transmission scheme, the embodiments of the present application propose a more detailed data transmission method, and the data transmission method proposed by the embodiments of the present application is described in detail below with reference to the accompanying drawings.
Fig. 4 is a flowchart of a data transmission method according to an exemplary embodiment of the present application; the data transmission method shown in fig. 4 may be performed by the aforementioned computer device, for example, the computer device is a service server (i.e., a data transmitting end) or an intermediate routing node, and the data transmission method may include, but is not limited to, steps S401-S404:
s401: when a first data stream to be transmitted is generated in the transmission channel, the network state of the transmission channel is acquired.
In the data transmission method provided by the embodiment of the application, the manner in which the computer device acquires the first data stream to be transmitted differs. Optionally, when the computer device is the data sending end, the first data stream to be transmitted is a data stream generated by the data sending end based on a traffic request message sent by the object terminal. Optionally, when the computer device is an intermediate routing node, the first data stream to be transmitted is received by the intermediate routing node from a previous node (such as the data sending end or another routing node); it should be noted that although the intermediate routing node acquires the first data stream from the previous node, the first data stream, traced back to its origin, is still generated by the data sending end.
In a specific implementation, when determining that there is a first data stream to be transmitted in the transmission channel (for example, when the intermediate routing node receives the first data stream forwarded by the previous node), the computer device may acquire the network state of the transmission channel, which indicates whether network congestion is currently occurring or about to occur in the transmission channel. In this way, the computer device can further determine, based on the current network state, whether to transmit the first data stream in a conventional manner or according to the present scheme. Compared with performing traffic transmission without judging the network state, transmitting the first data stream by resource sharing only when network congestion exists in the transmission channel improves the intelligence of traffic transmission, guarantees the utilization rate of transmission resources, and improves traffic transmission performance.
It should be appreciated that the network state of the transmission channel is not constant but varies with the data streams being transmitted; therefore, the embodiment of the application supports detecting the network state of the transmission channel once each time a first data stream to be transmitted is acquired, so as to ensure that the detected network state is up to date.
S402: if the network status indicates that the transmission channel has network congestion, acquiring at least one existing second data stream associated with the transmission channel.
The network state indicating that the transmission channel has network congestion means that the transmission resources of the transmission channel have been, or are about to be, fully used by the existing data streams; at this time, the existing at least one second data stream associated with the transmission channel can be acquired, where any second data stream is created earlier than the first data stream to be transmitted. Considering that the transmission channels corresponding to multiple data streams may have shared links, network congestion may occur among them; thus, the transmission channel of the at least one second data stream mentioned in the embodiments of the present application may be the same as the transmission channel of the first data stream to be transmitted, or the two may be different channels that have a common link (e.g., different data streams flowing to the same object group). The manner of determining the second data stream is described below with reference to the accompanying drawings, in which:
(1) As shown in fig. 5, assume that when object 1 plays a game through a game application running on object terminal 1, object terminal 1 (specifically, the game application on object terminal 1) sends traffic request message 1 to the service server corresponding to the target application at a first time, and the service server responds to traffic request message 1 by issuing data stream 1 to object terminal 1. If object terminal 1 then sends traffic request message 2 to the service server corresponding to the target application at a second time (such as any time later than the first time), the service server responds to traffic request message 2 by issuing data stream 2 to object terminal 1. Therefore, when many data streams flow to the same object terminal, network congestion can occur in the transmission channel.
Based on this, the existing at least one second data stream associated with the transmission channel mentioned in the embodiment of the present application may include: the data streams already existing in the transmission channel corresponding to the first data stream, where the source address of each existing data stream is the same as the source address of the first data stream, and its destination address is the same as the destination address of the first data stream. That is, the at least one second data stream may be data streams sent from the data sending end that generates the first data stream and flowing, along the transmission path of the first data stream, to the object terminal that is to receive the first data stream; i.e., the transmission path of the second data stream is exactly the same as that of the first data stream. When the same transmission channel transmits multiple data streams, network congestion may occur in it, so the embodiment of the application supports sharing the transmission resources of second data streams existing in the transmission channel with the first data stream to be transmitted in that channel; in short, transmission resources are shared among multiple data streams flowing to a single object terminal (e.g., when one user requests traffic multiple times). Sharing transmission resources among multiple data streams of the same object terminal does not affect the traffic transmission of other object terminals.
(2) As shown in fig. 6, assume that the object group includes object terminal 1 and object terminal 2. Object terminal 1 sends traffic request message 1 to the service server at a first time, and when the service server responds to traffic request message 1 by issuing data stream 1 to object terminal 1, the transmission channel of data stream 1 comprises: service server → router R1 → router R2 → router R3 → object terminal 1. If object terminal 2 sends traffic request message 2 to the service server at a second time (different from or the same as the first time), and the service server responds to traffic request message 2 by issuing data stream 2 to object terminal 2, the transmission channel of data stream 2 comprises: service server → router R1 → router R2 → router R3 → object terminal 2. It can be seen that the transmission channels of different data streams flowing to the same object group have common links, so that when many data streams flow to the same object group, network congestion occurs in the transmission channels.
Based on this, the existing at least one second data stream associated with the transmission channel mentioned in the embodiment of the present application may include: data streams already existing in other transmission channels that share links with the transmission channel corresponding to the first data stream, where the source address of each existing data stream in those other channels is the same as the source address of the first data stream, and its destination address and the destination address of the first data stream belong to the same object group. As can be seen from the foregoing description, the embodiments of the present application support dividing objects into different object groups according to a division rule, and the transmission channels of the data streams that the object terminals in an object group receive from the same data sending end have common links; network congestion may therefore occur in the transmission channel when different object terminals in the same object group receive data streams. Accordingly, the embodiment of the application supports sharing the transmission resources of second data streams existing within the same object group with the first data stream to be transmitted to that object group, without concern for which object terminal in the object group each data stream (such as the second data streams and the first data stream) points to; in short, transmission resources are shared among the multiple data streams of the multiple object terminals included in an object group. By sharing transmission resources among multiple data streams within the same object group, transmission resources can be allocated according to actual traffic demand, realizing reasonable division and use of transmission resources.
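The two cases above can be combined into a single candidate check. This sketch makes several illustrative assumptions: a stream is a dict with `src`, `dst`, and `path` keys, a transmission channel is modelled as its ordered list of hops, and `group_of` maps each destination terminal to its object group.

```python
# Sketch of cases (1) and (2): a candidate second stream either uses exactly
# the first stream's channel (same source and destination), or uses a channel
# that shares links with it while its destination is in the same object group.
def candidate_second_streams(first, existing, group_of):
    candidates = []
    for s in existing:
        same_path = s["src"] == first["src"] and s["dst"] == first["dst"]
        shared_link = (s["src"] == first["src"]
                       and group_of[s["dst"]] == group_of[first["dst"]]
                       and set(s["path"]) & set(first["path"]))
        if same_path or shared_link:
            candidates.append(s)
    return candidates
```

A stream to the same terminal over the same path qualifies under case (1); a stream to a different terminal of the same group over routers R1 and R2 qualifies under case (2); a stream to a terminal of another group is excluded.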
The above description takes the receivers of the second data streams and the first data stream being the same object terminal or the same object group as examples. It should be understood that the second data streams in the at least one second data stream associated with the transmission channel may belong to different object terminals of the same object group; alternatively, some of the at least one second data stream may belong to the same object terminal of the same object group; the specific source of the at least one second data stream is not limited in this embodiment.
For convenience of explanation, the subsequent embodiments of the present application are described taking the case that transmission resources are shared among multiple data streams of the same object group, that is, the existing at least one second data stream associated with the transmission channel of the first data stream to be transmitted belongs to the same object group, as an example; this is specifically noted here.
S403: the target data stream is screened from the at least one second data stream.
After obtaining the at least one second data stream associated with the transmission channel based on the foregoing steps, the embodiments of the present application further support screening a target data stream from the at least one second data stream, so as to share part of the transmission resources of the target data stream with the first data stream to be transmitted and ensure that the first data stream can be transmitted normally, thereby improving traffic transmission performance. In specific implementations, embodiments of the present application support screening the target data stream from the existing at least one second data stream associated with the transmission channel according to data stream screening rules. The embodiment of the application formulates the data stream screening rules mainly from two dimensions, as follows:
(1) The dimension of the amount of transmission resources occupied by the data stream. Specifically, the causes of network congestion include congestion caused by "large streams" occupying many transmission resources in the transmission channel. These "large streams" may refer to data streams that contain a large amount of data (e.g., more than 100 kilobytes (KB)), or to data streams with a long duration (e.g., the time from the creation of the data stream to the current time); conversely, a "small stream" may refer to a data stream that contains a smaller amount of data or has a shorter duration, and a "large stream" is gradually accumulated from a "small stream". However, in various service scenarios (such as on-demand (e.g., short video) and live broadcast (e.g., live e-commerce)), most data streams are "small streams"; after the computer device acquires a "small stream", the existing "large streams" in the transmission channel often already occupy most of the transmission resources, so the "small stream" cannot obtain corresponding transmission resources through competition and packet loss occurs. That is, "small streams" have the characteristics of short duration, little traffic data, a high packet loss rate, and so on.
Based on this, the embodiment of the application supports formulating a data stream screening rule from the dimension of the amount of transmission resources occupied by the data stream, in which case the screening principle is: take, from the at least one second data stream, a second data stream that occupies many transmission resources as the target data stream. Data stream screening rules formulated based on this principle may include, but are not limited to: taking a second data stream whose occupied transmission resources exceed a resource threshold as a target data stream; or taking the second data stream that occupies the most transmission resources as the target data stream; or, since the longer the duration of a data stream, the more transmission resources it occupies, taking the second data stream with the longest duration as the target data stream; and so on. Further, after the target data stream is screened out using such a rule, considering that slower transmission or slight packet loss of a large stream has little influence on the service, the transmission resources of the target data stream occupying a large amount of resources can be shared with the small stream to be transmitted (namely the first data stream), ensuring that the first data stream is neither lost nor left without a transmission response for a long time, and improving the traffic transmission performance of both large and small streams in the transmission channel.
(2) The dimension of the service class of the service corresponding to the data stream. Specifically, the causes of network congestion also include: data streams with a lower service class crowding out data streams with a higher service class, so that the latter cannot be transmitted in time. One data stream corresponds to one service (e.g., a live service corresponds to a live data stream, a game service corresponds to a game data stream, etc.), and one service may include multiple data streams. The service class may be assigned according to the urgency of the service: the more urgent the service, the higher its service class, and vice versa. The urgency of a service may in turn be determined by the user's requirements for the service itself; for example, assuming the user is using the object terminal to play a game while playing music, the game service is more urgent than the music service when network congestion occurs, i.e., the game service has the higher service class. Further, the service class of each service may be preset; for example, a class list recording the service class of each service may be stored in the computer device, and when the computer device needs to compare service classes, the service class of the service corresponding to a data stream can be determined from the class list. Of course, the embodiment of the application does not limit the specific manner of determining the service class of the service corresponding to a data stream.
Based on this, the embodiment of the present application supports formulating a data stream screening rule from the dimension of the service class of the service corresponding to the data stream, in which case the screening principle is: take a second data stream with a lower service class as the target data stream. Data stream screening rules formulated based on this principle may include, but are not limited to: taking a second data stream whose corresponding service class is lower than a class threshold as a target data stream, where the class threshold may be a preset fixed value or may change dynamically. For example, if the class threshold is the service class of the first data stream to be transmitted, the threshold differs each time the first data stream differs; in this way, the transmission resources of target data streams with a service class lower than that of the first data stream are always shared with the first data stream, ensuring that the first data stream with the higher service class is transmitted preferentially. Alternatively, the second data stream whose corresponding service has the lowest service class may be taken as the target data stream; and so on. Further, after the target data stream is screened out using such a rule, considering that slower transmission or slight packet loss of a data stream with a lower service class has little influence on the service, the transmission resources of the lower-class target data stream can be shared with the higher-class first data stream, so that the first data stream is transmitted preferentially, the service with the higher class is executed preferentially, and the service experience of the user is improved.
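Both screening dimensions can be sketched as simple selection functions. The field names (`occupied`, `duration_s`, `level`), the fallback behaviour when no stream crosses the threshold, and the "higher number = higher service class" convention are all illustrative assumptions, not details fixed by this disclosure.

```python
# Sketch of rule dimension (1): target streams whose occupied transmission
# resources exceed a threshold; if none qualify, fall back to the single
# longest-lived second stream (duration as a proxy for resource usage).
def screen_by_resources(second_streams, resource_threshold):
    targets = [s for s in second_streams if s["occupied"] > resource_threshold]
    if targets:
        return targets
    return [max(second_streams, key=lambda s: s["duration_s"])]

# Sketch of rule dimension (2): with the dynamic threshold taken as the
# first stream's own service class, only second streams with strictly lower
# classes become targets; otherwise fall back to the lowest-class stream.
def screen_by_service_level(second_streams, first_stream_level):
    targets = [s for s in second_streams if s["level"] < first_stream_level]
    if targets:
        return targets
    return [min(second_streams, key=lambda s: s["level"])]
```

For instance, with a resource threshold of 100 units, a stream holding 200 units is screened as the "large stream" target; with a first stream of class 2, only a class-1 second stream is screened under the service-class rule.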
Based on the above description of data stream screening rules, it should also be noted that the embodiment of the present application does not limit the specific rule content of the data stream screening rule; the foregoing are merely a few exemplary data stream screening rules given in the embodiment of the present application. Furthermore, the embodiment of the present application does not limit the number of target data streams; that is, the embodiment of the present application supports screening one or more target data streams from the at least one existing second data stream associated with the transmission channel. The number of target data streams may be determined by comprehensively considering the amount of transmission resources required by the first data stream to be transmitted, the amount of transmission resources occupied by each second data stream, and the service level of each second data stream. For example, suppose the at least one second data stream includes a second data stream 1 and a second data stream 2, where the second data stream 1 is a large stream whose service level is lower than that of the first data stream, and the second data stream 2 is a small stream whose service level is higher than that of the second data stream 1 but lower than that of the first data stream. Considering that the second data stream 2 is a small stream with a higher service level, the transmission resources occupied by the second data stream 1 may be shared to the first data stream first; further, if the transmission resources occupied by the second data stream 1 are fewer than the transmission resources required by the first data stream, then, considering that the service level of the first data stream is higher than that of the second data stream 2, part of the transmission resources of the second data stream 2 may also be shared to the first data stream. That is, the first data stream "borrows" the transmission resources of both the second data stream 1 and the second data stream 2 for traffic transmission.
In this way, the computer device screens one or more target data streams from the at least one second data stream according to the actual requirement of the first data stream to be transmitted, to share their transmission resources, which facilitates the rapid transmission of the first data stream and ensures traffic transmission performance. For convenience of explanation, the subsequent embodiments of the present application are described taking the case where the number of target data streams screened from the at least one second data stream is 1.
S404: share the transmission resources occupied by the target data stream with the first data stream, and transmit the first data stream using the shared transmission resources.
After the target data stream capable of sharing transmission resources is determined based on the above steps, part of the transmission resources of the target data stream can be shared to the first data stream, so that the shared transmission resources can be used to transmit the first data stream; this ensures the normal transmission of the first data stream while having little influence on the traffic transmission of the target data stream.
In the embodiment of the present application, when a new first data stream to be transmitted (i.e., new traffic) is generated in the transmission channel, and it is detected that the network state of the transmission channel indicates that the transmission channel has network congestion, at least one existing second data stream associated with the transmission channel may be acquired. Then, a target data stream is screened from the at least one second data stream; for example, the target data stream may be the second data stream with the largest data amount or the longest duration among the at least one second data stream, or the second data stream with lower correlation with the target service (such as the service corresponding to the first data stream or a service related to it), or the like. Considering that a target data stream whose data amount is large, whose duration is long, or whose correlation with the target service is low is only slightly affected by a small amount of packet loss or a reduced transmission rate, the embodiment of the present application supports sharing part of the transmission resources occupied by the target data stream to the first data stream to be transmitted. In this way, the first data stream is not lost (i.e., can still be transmitted normally) under network congestion, the influence on the transmission performance of the target data stream is small, the network congestion in the network communication process is effectively alleviated, and the transmission efficiency of the overall traffic is improved.
The embodiment shown in fig. 4 does not limit the computer device that performs the data transmission method, which may specifically be the data transmitting end or an intermediate routing node; and the specific implementation of the data transmission method differs depending on whether the computer device is the data transmitting end or an intermediate routing node. For example, a schematic flowchart of the data transmission method applied to the data transmitting end may be seen in fig. 7; as shown in fig. 7, the data transmission method performed by the data transmitting end may include, but is not limited to, steps S701 to S705:
S701: when a first data stream to be transmitted is generated in the transmission channel, acquire network parameters of the transmission channel.
Specifically, when the object terminal has a traffic request requirement, the object terminal may generate a traffic request message for requesting the first data stream; the object terminal then sends the traffic request message to the data transmitting end, so that the data transmitting end generates the corresponding first data stream after receiving the traffic request message from the object terminal. That is, when the computer device is the data transmitting end, the first data stream generated in the transmission channel is generated based on the traffic request message sent by the object terminal. Further, after the first data stream to be transmitted is generated, the method supports querying the statistical information of the object terminal, so as to perform subsequent transmission of the first data stream according to the statistical information. Considering that the embodiment of the present application supports sharing transmission resources within the object group where the object terminal is located, in particular sharing transmission resources among multiple data streams flowing from the data transmitting end to the object group, the above-mentioned querying of the statistical information of the object terminal may specifically be querying the statistical information of the object group where the object terminal is located.
Further, the statistical information of the object group may be obtained by the data transmitting end through statistics at intervals (the statistical period is denoted T); in other words, the data transmitting end may count the information in each object group in advance according to the statistical period, so as to obtain the statistical information of each object group. The statistical period T may be preconfigured by an administrator and recorded in a configuration file, for example, T = 10 seconds by default; of course, the statistical periods of different object groups may be the same or different, which is not limited in the embodiment of the present application.
Further, the statistical information counted by the data transmitting end for each object group may include, but is not limited to: the stream information of each data stream in the object group (such as the data amount or duration of each data stream), and network quality information (also called network parameters) obtained by the data transmitting end through periodic statistics according to the statistical period. The network parameters may include at least: the maximum available bandwidth BW_max, the in-transit data amount Inflight, and other information. The maximum available bandwidth BW_max refers to the transmission rate required when the first data stream is transmitted (e.g., a transmission rate of 1 Mbps represents transmitting 1000000 bits per second), and the in-transit data amount Inflight refers to the transmission window already used in the transmission channel (the limit of the sequence-number queue of frames that have been transmitted by the sender but not yet acknowledged), in bytes. As shown in fig. 8, the object groups corresponding to the data transmitting end (such as a cloud server) include: object group 1, object group 2, object group 3, and object group 4; the statistical information contained in the statistical list obtained by the data transmitting end for object group i (i = 1, 2, 3, 4) includes: the data amount of each data stream (also called the size of the traffic), the maximum available bandwidth BW_max of the network path to object group i, and the current in-transit data amount Inflight. Each data stream has a stream identifier (or simply stream ID) for uniquely identifying the data stream, where the stream identifier of a data stream may be calculated based on the source address src, destination address dst, source port sport, destination port dport, and protocol number protocol of the data stream; the calculation formula is as follows:
ID = Hash(src || dst || sport || dport || protocol)    (1)
Where Hash () represents a Hash operation, and the symbol "||" represents a concatenation operation.
As can be seen from the above description, after the data transmitting end receives the traffic request message from the object terminal, it generates the first data stream to be transmitted in response to the traffic request message, and may then acquire the network parameters of the transmission channel from the statistical information of the object group where the object terminal is located; the network parameters include at least the maximum available bandwidth and the in-transit data amount mentioned above. Since the data transmitting end periodically counts the object groups, it can grasp the network parameters of the transmission channel and the stream information of each data stream in real time, and can thus acquire the statistical information quickly and in a timely manner to determine the network congestion condition when a newly generated data stream needs to be transmitted.
S702: judging whether the network parameter is larger than a parameter threshold; if the network parameter is greater than the parameter threshold, determining that the network status indicates that the transmission channel has network congestion.
S703: if the network status indicates that the transmission channel has network congestion, acquiring at least one existing second data stream associated with the transmission channel.
S704: the target data stream is screened from the at least one second data stream.
S705: and sharing the transmission resources occupied by the target data stream to the first data stream, and transmitting the first data stream by adopting the shared transmission resources.
In steps S702 to S705, after the network parameters of the transmission channel are obtained based on the foregoing step, whether the transmission channel is in (or about to be in) network congestion may be determined according to the network parameters, so as to decide whether to use the resource sharing method provided in the embodiment of the present application to perform traffic transmission of the first data stream to be transmitted. Specifically, each network parameter may be compared with its corresponding parameter threshold; if the network parameter is greater than or equal to the parameter threshold, it is determined that the network state of the transmission channel indicates that the transmission channel has network congestion; if the network parameter is less than the parameter threshold, it is determined that the network state of the transmission channel indicates that the transmission channel has no network congestion.
When the network parameter is the maximum available bandwidth BW_max, the network parameter being greater than or equal to the parameter threshold means that the maximum available bandwidth BW_max is greater than or equal to the bandwidth threshold BW_thres; for example, if the maximum available bandwidth is 10 gigabytes (GB) and the bandwidth threshold is 9.5 GB, it is determined that the transmission channel has network congestion. When the network parameter is the in-transit data amount, the network parameter being greater than or equal to the parameter threshold means that the in-transit data amount is greater than or equal to the data amount threshold, in which case it is likewise determined that the transmission channel has network congestion.
In the embodiment of the present application, when any of the above network parameters is determined to be greater than or equal to its corresponding parameter threshold, it may be determined that the transmission channel has network congestion; at this time, the step of acquiring at least one second data stream associated with the transmission channel (i.e., step S703) and the step of screening the target data stream from the at least one second data stream (i.e., step S704) may be performed. It should be noted that the implementation process shown in steps S703-S704 is similar to that shown in steps S402-S403 of the embodiment shown in fig. 4; reference may be made to the related descriptions of steps S402-S403, which are not repeated here.
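The threshold check of steps S702-S703 can be sketched as follows. The function name and the boolean OR over the two parameters are assumptions of this sketch, consistent with the description that congestion is declared when any network parameter reaches its threshold.

```python
def has_network_congestion(bw_max: float, inflight: float,
                           bw_thres: float, inflight_thres: float) -> bool:
    """The channel is treated as congested when any network parameter is
    greater than or equal to its corresponding parameter threshold."""
    return bw_max >= bw_thres or inflight >= inflight_thres

# The worked example above: BW_max = 10 GB against a 9.5 GB threshold.
congested = has_network_congestion(10.0, 0.0, 9.5, 100.0)
```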
Further, when it is determined that the network parameter is greater than or equal to the parameter threshold, that is, when it is determined that the transmission channel has network congestion, the operation of sharing the transmission resources occupied by the target data stream to the first data stream may be performed. This operation may specifically include: first, acquiring the transmission resources occupied by the target data stream; then, reducing the transmission resources occupied by the target data stream by a target variable resource amount to obtain the new transmission resources occupied by the target data stream, where the target variable resource amount is determined based on a preset resource parameter ratio; and finally, taking the target variable resource amount by which the target data stream was reduced as the transmission resources of the first data stream. Through the above process, part of the transmission resources occupied by the target data stream is lent to the first data stream, ensuring that the first data stream can be transmitted normally.
Furthermore, the specific implementation of sharing the transmission resources occupied by the target data stream differs according to the network parameter concerned; a detailed procedure for resource sharing of the target data stream when both network parameters ("maximum available bandwidth" and "in-transit data amount") are greater than or equal to their corresponding parameter thresholds is given below in connection with fig. 9. As shown in fig. 9, the data transmitting end may identify the stream information (such as stream size) of each data stream in the object group and count the network parameters of the object group to obtain the statistical information. After the data transmitting end receives a traffic request message sent by a certain object terminal in the object group, it may generate a new first data stream in response to the traffic request message, and then determine the object group where the object terminal is located according to the object terminal pointed to by the destination address of the new first data stream. Then, before transmitting the first data stream, the data transmitting end acquires the network parameters from the statistical information and compares each network parameter with its corresponding parameter threshold, thereby determining whether the transmission channel has network congestion. Finally, when a network parameter is greater than or equal to its parameter threshold, the transmission resources of the target data stream may be shared to the first data stream, and the shared transmission resources are used to transmit the first data stream; when the network parameter is less than the parameter threshold, default network parameters (e.g., a default sending rate and/or sending window) may be used to transmit the first data stream.
Specifically, the maximum available bandwidth is compared with the bandwidth threshold: when the maximum available bandwidth is determined to be smaller than the bandwidth threshold, the first data stream is transmitted at the default sending rate; when the maximum available bandwidth is determined to be greater than or equal to the bandwidth threshold, part of the sending rate of the target data stream is shared to the first data stream. Further, the in-transit data amount is compared with the data amount threshold: when the in-transit data amount is determined to be smaller than the data amount threshold, the first data stream may be transmitted using the default sending window and the shared sending rate; when the in-transit data amount is determined to be greater than or equal to the data amount threshold, part of the sending window of the target data stream may also be shared to the first data stream. At this point, the first data stream is transmitted using the most recently configured sending rate and sending window, and the target data stream is likewise transmitted using its sending rate and sending window after the transmission resource sharing. It should be understood that the foregoing describes, by way of example, judging the maximum available bandwidth first and then judging the in-transit data amount; in practical applications, the embodiment of the present application does not limit the judging order of the maximum available bandwidth and the in-transit data amount. For example, it may first be determined whether the in-transit data amount is greater than or equal to the data amount threshold, and then whether the maximum available bandwidth is greater than or equal to the bandwidth threshold.
The sharing process of the transmission resources differs according to the network parameter concerned. With continued reference to fig. 9, when the network parameter "maximum available bandwidth" is greater than or equal to the bandwidth threshold and the network parameter "in-transit data amount" is greater than or equal to the data amount threshold, exemplary sharing of the transmission resources occupied by the target data stream, and the transmission flows for the target data stream and the first data stream, may include:
(1) If the network parameter is the maximum available bandwidth and the transmission resource to be shared is the sending rate, the specific process of sharing part of the sending rate of the target data stream (such as the largest stream among the at least one second data stream) to the newly created small stream (the first data stream) includes:
first, reducing the current sending rate Pacing_max of the target data stream by the target variable resource amount (here, the target variable rate delta_bw) to obtain the new sending rate Pacing_cur, as in formula (2):
Pacing_cur = Pacing_max - delta_bw    (2)
then, taking the target variable rate delta_bw as the initial sending rate Pacing_new of the first data stream; that is, the initial sending rate Pacing_new of the new first data stream is set to the partial rate delta_bw by which the sending rate of the target data stream was reduced, as shown in formula (3):
Pacing_new = delta_bw    (3)
where the target variable rate delta_bw is calculated from the parameter ratio Pacing_ratio (0 < Pacing_ratio ≤ 1), as shown in formula (4):
delta_bw = Pacing_max × Pacing_ratio    (4)
The specific value of the parameter ratio Pacing_ratio is preconfigured by an administrator and recorded in a configuration file; for example, Pacing_ratio is set to 20% by default.
Finally, the first data stream is sent to the object terminal according to the transmission resources shared by the target data stream to the first data stream; specifically, the first data stream is transmitted at the sending rate newly configured for it. In addition, the target data stream is sent to the object group according to the new transmission resources it occupies; specifically, the target data stream is transmitted at the new sending rate obtained by reducing its sending rate by the target variable rate.
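Formulas (2)-(4) can be combined into a single helper, sketched below; the 20% default is taken from the example configuration above, and the function name is an assumption of this sketch.

```python
def share_pacing_rate(pacing_max: float, pacing_ratio: float = 0.2):
    """Split the target stream's sending rate per formulas (2)-(4):
    the carved-out delta_bw becomes the first stream's initial rate."""
    delta_bw = pacing_max * pacing_ratio      # formula (4)
    pacing_cur = pacing_max - delta_bw        # formula (2): target stream's new rate
    pacing_new = delta_bw                     # formula (3): first stream's initial rate
    return pacing_cur, pacing_new

# e.g. a 1 Mbps target stream lends 20% of its rate to the new first stream
cur, new = share_pacing_rate(1_000_000)
```

Note that the total rate is conserved (`pacing_cur + pacing_new == pacing_max`), so the sharing does not increase the load on the congested channel.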
(2) If the network parameter is the in-transit data amount and the transmission resource to be shared is the sending window, the specific process of sharing part of the sending window of the target data stream (such as the largest stream among the at least one second data stream) to the newly created small stream (the first data stream) includes:
first, reducing the current sending window cwnd_max of the target data stream by the target variable resource amount (here, the target variable window amount delta_cwnd) to obtain the new sending window cwnd_cur, as in formula (5):
cwnd_cur = cwnd_max - delta_cwnd    (5)
then, taking the target variable window amount delta_cwnd as the initial sending window cwnd_new of the first data stream; that is, the initial sending window cwnd_new of the new first data stream is set to the partial window delta_cwnd by which the sending window of the target data stream was reduced, as shown in formula (6):
cwnd_new = delta_cwnd    (6)
where the target variable window amount delta_cwnd is calculated from the parameter ratio cwnd_rate (0 < cwnd_rate ≤ 1), as shown in formula (7):
delta_cwnd = cwnd_max × cwnd_rate    (7)
The specific value of the parameter ratio cwnd_rate is preconfigured by an administrator and recorded in a configuration file; for example, cwnd_rate is set to 20% by default.
Finally, the first data stream is sent to the object terminal according to the transmission resources shared by the target data stream to the first data stream; specifically, the first data stream is transmitted using the sending window newly configured for it. In addition, the target data stream is sent to the object group according to the new transmission resources it occupies; specifically, the target data stream is transmitted using the new sending window obtained by reducing its sending window by the target variable window amount.
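The window-sharing counterpart of formulas (5)-(7) has the same shape; again a sketch, with the 20% default taken from the example configuration above.

```python
def share_send_window(cwnd_max: float, cwnd_rate: float = 0.2):
    """Split the target stream's sending window per formulas (5)-(7)."""
    delta_cwnd = cwnd_max * cwnd_rate         # formula (7)
    cwnd_cur = cwnd_max - delta_cwnd          # formula (5): target stream's new window
    cwnd_new = delta_cwnd                     # formula (6): first stream's initial window
    return cwnd_cur, cwnd_new

# e.g. a 64 KiB-window target stream lends a quarter of its window
cur_w, new_w = share_send_window(65536, 0.25)
```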
In summary, for network congestion at the data transmitting end, the embodiment of the present application allocates corresponding transmission parameters (sending rate and sending window) to the first data stream to be transmitted by reducing the initial window or sending rate of the target data stream, i.e., transmission parameters are allocated differentially between large and small streams; this avoids the situation where a large number of packets of the first data stream to be transmitted are lost in the initial contention for network resources, which would affect the service experience of the object.
The embodiment shown in fig. 7 mainly provides a specific implementation of transmission resource sharing by the data transmitting end when the data transmission method is applied to the data transmitting end. When the data transmission method is applied to an intermediate routing node, the intermediate routing node detects the network state of each forwarding queue at any time, and, according to whether a forwarding queue is in (or about to be in) network congestion, adopts a differential packet-loss strategy based on resource sharing for each data stream on the path, ensuring the transmission performance of the overall traffic. In detail, the general flow of the intermediate routing node performing the data transmission method may include: (1) after receiving a traffic message (i.e., of the first data stream) from the cloud server, the intermediate routing node checks whether its forwarding queue is in (or about to be in) a network congestion state; (2) if the forwarding queue is (or is about to be) in network congestion, the intermediate routing node checks the number of messages of each data stream in the forwarding queue (the data stream with the largest number of messages in the forwarding queue is referred to as the forwarding-queue maximum stream); (3) the intermediate routing node shares part of the transmission resources of the forwarding-queue maximum stream to the newly arrived traffic message, that is, it discards a small number of messages of the forwarding-queue maximum stream and adds the newly arrived traffic message to the forwarding queue so that it can be forwarded normally.
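The three-step flow above can be sketched as a single admission routine. Representing a message as a `(flow_id, payload)` pair and using a plain list as the queue are assumptions of this sketch.

```python
def admit_packet(queue, counts, flow_id, payload, capacity):
    """Steps (1)-(3): if the forwarding queue is full, drop one message of
    the forwarding-queue maximum stream to make room, then enqueue the
    newly arrived traffic message and update the statistical list."""
    if len(queue) >= capacity:                       # (1) queue congested
        victim = max(counts, key=counts.get)         # (2) forwarding-queue maximum stream
        for i, (fid, _) in enumerate(queue):         # (3) drop one of its messages
            if fid == victim:
                del queue[i]
                counts[victim] -= 1
                break
    queue.append((flow_id, payload))
    counts[flow_id] = counts.get(flow_id, 0) + 1
```

Only one message of the largest stream is discarded per admission, matching the "small number of messages" discarded in step (3) above.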
The foregoing only briefly describes the general flow of the intermediate routing node performing the data transmission method; the detailed implementation procedure will be described with reference to fig. 10. As shown in fig. 10, the data transmission method may include, but is not limited to, steps S1001 to S1006:
S1001: when a first data stream to be transmitted is generated in the transmission channel, acquire the forwarding queue of the object group corresponding to the first data stream.
As can be seen from the foregoing description, when a data stream flows from a source address to a destination address through the transmission channel, it passes in turn through the data transmitting end, one or more intermediate routing nodes, and the object terminal. An intermediate routing node may receive the first data stream sent by the previous node and add it to its forwarding queue (specifically, a storage space for caching the data streams of the object group where the object terminal corresponding to the first data stream is located), to wait for the intermediate routing node to forward the first data stream. The previous node here refers to the node adjacent to and upstream of the intermediate routing node on the path from the source address to the destination address of the data stream, and may be the data transmitting end or another routing node. The intermediate routing node stores a forwarding queue for each of the multiple object groups, where a forwarding queue is used to store the traffic messages (or simply messages) of one or more data streams flowing to the corresponding object group; a traffic message is a data unit exchanged and transmitted in a computer network, that is, the data block to be sent by a station at one time. The larger the number of messages of a data stream, the larger the data amount of the data stream, and the higher the usage rate of the forwarding queue (i.e., the ratio of the occupied storage memory to the total storage memory of the forwarding queue).
S1002: and judging whether the forwarding queue is congested.
When the usage rate of the intermediate routing node's forwarding queue is high, the forwarding queue has insufficient buffer space to store the first data stream, and the first data stream faces the possibility of being discarded. Based on this, after obtaining the first data stream to be forwarded, the intermediate routing node may first acquire the forwarding queue of the object group corresponding to the first data stream and determine whether the forwarding queue can store the data stream, thereby judging whether the resource sharing operation needs to be performed to ensure that the first data stream is not lost.
The intermediate routing node provided in the embodiment of the present application mainly determines whether it is in (or about to be in) a congestion state according to the usage rate Utilize_rate of the forwarding queue: when the usage rate Utilize_rate is greater than or equal to (or exceeds) the usage threshold Utilize_thres, the current state of the forwarding queue is determined to be congested; when the usage rate Utilize_rate is less than (or does not exceed) the usage threshold Utilize_thres, the forwarding queue is determined not to be currently congested. The usage threshold Utilize_thres is preconfigured by an administrator and recorded in a configuration file; for example, the usage threshold is set to 90% by default.
In a specific implementation, after receiving the first data stream to be transmitted sent by the previous node, the intermediate routing node may determine, based on the destination address of the first data stream, the object terminal to which the destination address points; further, based on the terminal identifier of the object terminal (i.e., information uniquely identifying the object terminal), it determines the object group to which the object terminal belongs, that is, the object group corresponding to the first data stream, where the object group includes one or more object terminals that transmit data streams through all or part of the links of the transmission channel, and the destination address of the first data stream points to one object terminal in the object group. The intermediate routing node then acquires the forwarding queue corresponding to the object group, which stores, in order of message reception, the messages of the second data streams corresponding to the object terminals in the object group; the more messages a second data stream has, the more transmission resources it occupies, and the more memory it occupies in the forwarding queue. Then, the usage rate of the forwarding queue is calculated according to the number of messages of the second data stream corresponding to each object terminal. For example, if the storage space of the forwarding queue can store 100 messages in total, and it currently stores 20 messages of data stream 1, 60 messages of data stream 2, and 10 messages of data stream 3, then the usage rate of the forwarding queue is (20+60+10)/100 × 100% = 90%.
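The usage-rate computation can be sketched directly from the worked example above; the dictionary of per-stream message counts is an assumption of this sketch.

```python
def queue_usage_rate(message_counts: dict, capacity: int) -> float:
    """Usage rate = stored messages / total storage capacity of the queue."""
    return sum(message_counts.values()) / capacity

# The worked example: 20 + 60 + 10 messages in a 100-message queue.
rate = queue_usage_rate({"stream1": 20, "stream2": 60, "stream3": 10}, 100)
congested = rate >= 0.9   # Utilize_thres, 90% by default
```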
Finally, the usage rate of the forwarding queue is compared with the usage threshold. If the usage rate is greater than or equal to the usage threshold, it is determined that the network state of the transmission channel indicates that the transmission channel has network congestion; at this time, the intermediate routing node may share part of the transmission resources (on the intermediate routing node side, also called forwarding resources) of a large data stream with a low service level that occupies many transmission resources in the forwarding queue, so as to reduce the packet loss rate of most streams (such as small streams); see step S1005 below. If the usage rate is less than the usage threshold, it is determined that the network state of the transmission channel indicates that the transmission channel has no network congestion; at this time, the intermediate routing node may add the message of the new first data stream to the forwarding queue according to the original traffic transmission mode, and forward it to the next routing node (or directly to the object terminal).
In more detail, the embodiment of the present application supports the intermediate routing node in maintaining a statistical list for the traffic data in each forwarding queue. The statistical list stores, in an associated manner, the flow identifier and the message number of each second data stream in the forwarding queue, and is updated dynamically along with the forwarding queue: when a new message of a data stream enters the forwarding queue of the intermediate routing node, the intermediate routing node adds 1 to the message number of the corresponding data stream in the statistical list corresponding to that forwarding queue, and when a message of a data stream leaves the forwarding queue, the intermediate routing node subtracts 1 from the message number of the corresponding data stream in the statistical list. In this way, the intermediate routing node can quickly obtain the usage rate of the forwarding queue corresponding to the object group from the statistical list corresponding to the object group, thereby improving the speed and efficiency of traffic transmission.
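The enqueue/dequeue bookkeeping of such a statistical list might be sketched as follows; the class, method names, and flow identifiers are assumptions for illustration, not a definitive implementation of the embodiments:

```python
from collections import deque

# Minimal sketch of a forwarding queue with an attached "statistical list"
# (a flow_id -> message-count map) that is kept in sync with the queue.

class ForwardingQueue:
    def __init__(self):
        self.queue = deque()   # messages in receive order
        self.stats = {}        # the statistical list: flow_id -> message count

    def enqueue(self, flow_id, message):
        self.queue.append((flow_id, message))
        self.stats[flow_id] = self.stats.get(flow_id, 0) + 1  # add 1 on entry

    def dequeue(self):
        flow_id, message = self.queue.popleft()
        self.stats[flow_id] -= 1                              # subtract 1 on exit
        if self.stats[flow_id] == 0:
            del self.stats[flow_id]
        return flow_id, message

q = ForwardingQueue()
q.enqueue("flow_1", "pkt_a")
q.enqueue("flow_2", "pkt_b")
q.enqueue("flow_2", "pkt_c")
print(q.stats)  # {'flow_1': 1, 'flow_2': 2}
```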
A schematic diagram of an exemplary statistical list of forwarding queues may be seen in fig. 11; as shown in fig. 11, the forwarding queue of the intermediate routing node R2 includes a data stream 1 and a data stream 2, where the number of messages of the data stream 1 in the forwarding queue is 2, and the number of messages of the data stream 2 is 3, which indicates that the transmission resource occupied by the data stream 2 is greater than the transmission resource occupied by the data stream 1. In addition, the intermediate routing node R2 maintains a statistical list of forwarding queues, where the number of packets of the data flow 1 and the data flow 2 is recorded as q_pkt_1 (e.g., q_pkt_1=2) and q_pkt_2 (e.g., q_pkt_2=3), respectively.
S1003: if the network status indicates that the transmission channel has network congestion, acquiring at least one existing second data stream associated with the transmission channel.
S1004: the target data stream is screened from the at least one second data stream.
In steps S1003-S1004, as can be seen from the foregoing description, the target data stream may be a second data stream that occupies more transmission resources, or a second data stream with a lower service level, among the at least one second data stream. For convenience of explanation, this embodiment takes as an example the case where the target data stream is a second data stream occupying more transmission resources, specifically the second data stream whose occupied transmission resources are greater than the resource threshold, or the second data stream occupying the most transmission resources among the at least one second data stream, and describes the specific implementation process of selecting the target data stream from the at least one second data stream.
In a specific implementation, when the intermediate routing node checks that the forwarding queue of the intermediate routing node is in (or is about to be in) a congestion state, at least one existing second data stream associated with a transmission channel is acquired, and then a target data stream is screened out from the at least one second data stream; the method comprises the steps of obtaining a statistical list corresponding to a forwarding queue, and screening out target data streams based on the number of messages of each second data stream recorded in the statistical list corresponding to the forwarding queue. The specific implementation manner of filtering the target data flow based on the statistical list corresponding to the forwarding queue provided in the embodiment of the present application may include, but is not limited to:
In one implementation, the intermediate routing node is supported to select, according to the statistical list it maintains for the forwarding queue, the data stream with the largest message number in the current forwarding queue as the target data stream; that is, the second data stream with the largest number of messages is screened from the statistical list and directly used as the target data stream. As shown in fig. 11, the intermediate routing node may directly look up the statistical list maintained for the forwarding queue; since the number of messages of data stream 2 is 3 and the number of messages of data stream 1 is 2, it may be determined that data stream 2, which has the larger number of messages, is taken as the target data stream. By screening the target data stream directly from the statistical list corresponding to the forwarding queue, the speed of identifying the large flow among multiple data streams is greatly improved, thereby improving the traffic transmission efficiency.
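This first screening rule reduces to a single lookup over the statistical list; a minimal sketch, with the counts from the Fig. 11 example and hypothetical flow labels:

```python
# Sketch of the first screening rule: pick the flow with the most queued
# messages directly from the statistical list.

def pick_largest_flow(stats):
    """Return the flow identifier with the largest message count."""
    return max(stats, key=stats.get)

stats = {"stream_1": 2, "stream_2": 3}  # counts from the Fig. 11 example
print(pick_largest_flow(stats))         # stream_2
```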
In another implementation, the intermediate routing node is supported to retrieve, in the arrangement order of the messages in the forwarding queue, the message number of the data stream corresponding to each message, and to take as the target data stream a data stream whose message number is greater than the number threshold. In the specific implementation, starting from the end message stored in the forwarding queue, the message number recorded in the statistical list for the second data stream corresponding to each message is looked up in turn, until the message number recorded in the statistical list for the second data stream corresponding to a retrieved target message is greater than or equal to the number threshold; the second data stream corresponding to that target message is then taken as the target data stream.
For example, as shown in fig. 12, the forwarding queue of the intermediate routing node R2 stores, in the order the messages were received, three messages of data stream 2 (message 1, message 2 and message 3) followed by two messages of data stream 1 (message 4 and message 5). The retrieval may then start from the end message (or tail message), message 5, stored in the forwarding queue. Specifically, the flow identifier ID_i corresponding to message 5 is computed so as to determine the data stream to which the message belongs; then the message number Q_pkt_i of that data stream is looked up in the statistical list corresponding to the forwarding queue via the flow identifier ID_i. For instance, if the flow identifier of data stream 1, to which message 5 belongs, is ID_1, the message number of data stream 1 queried from the statistical list via ID_1 is 2. Finally, it is judged whether the message number Q_pkt_i is greater than or equal to the number threshold Q_pkt_thres; if the message number Q_pkt_i of data stream 1 corresponding to message 5 is greater than or equal to Q_pkt_thres, data stream 1 is taken as the target data stream, and the search can stop.
Otherwise, if the message number Q_pkt_i of data stream 1 corresponding to message 5 is smaller than the number threshold Q_pkt_thres, the search continues with message 4; the specific search process for message 4 is similar to that for message 5 described above and is not detailed here. As can be seen from the foregoing description, message 4 also belongs to data stream 1, so the message number Q_pkt_i of data stream 1 corresponding to message 4 is likewise smaller than the number threshold Q_pkt_thres; at this time, message 3 may be searched next, and so on, until the message number of the data stream corresponding to some message is greater than or equal to the number threshold.
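The tail-first retrieval above can be sketched as a short loop; the queue layout and threshold value mirror the Fig. 12 example, and all names are assumptions for this sketch:

```python
# Sketch of the tail-first search: walk the forwarding queue from the newest
# message backwards, and stop at the first message whose flow's queued count
# (looked up in the statistical list) reaches the threshold Q_pkt_thres.

def find_target_flow(queue, stats, threshold):
    for flow_id, _msg in reversed(queue):       # start from the end message
        if stats.get(flow_id, 0) >= threshold:  # Q_pkt_i >= Q_pkt_thres?
            return flow_id                      # target flow found, stop searching
    return None

# Fig. 12 example: messages 1-3 belong to stream 2, messages 4-5 to stream 1.
queue = [("stream_2", 1), ("stream_2", 2), ("stream_2", 3),
         ("stream_1", 4), ("stream_1", 5)]
stats = {"stream_2": 3, "stream_1": 2}
print(find_target_flow(queue, stats, threshold=3))  # stream_2
```

Messages 5 and 4 (data stream 1, count 2) fail the threshold, so the search stops at message 3, whose flow (data stream 2, count 3) is returned.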
It should be noted that the specific implementation of screening the target data stream based on the statistical list corresponding to the forwarding queue is not limited to the above two manners. In addition, the embodiment of the present application also supports the intermediate routing node in directly counting the messages of each data stream in the forwarding queue, that is, screening the target data stream without relying on the statistical list corresponding to the forwarding queue. As shown in fig. 13, after receiving a message of data stream 3 to be transmitted, if the intermediate routing node R2 detects that its forwarding queue is in a congestion state, R2 starts counting the flows in the forwarding queue; for example, if data stream 2, containing 3 messages, is counted as a large flow and data stream 1, containing 2 messages, as a small flow, the large flow occupies 60% of the storage space of the forwarding queue and can be considered to occupy 60% of the transmission resources.
S1005: and sharing the transmission resources occupied by the target data stream to the first data stream, and transmitting the first data stream by adopting the shared transmission resources.
Consider that when the intermediate routing node receives a first data stream to be transmitted, the messages of the first data stream must be added to the forwarding queue for normal transmission to take place. Therefore, when the forwarding queue already stores many messages of other data streams, the messages of the first data stream cannot be added to the forwarding queue, causing network congestion; that is, when the transmission resources in the forwarding queue (here, the transmission resource is the memory space occupied by the messages) are largely occupied, it is determined that no further messages of the first data stream can be added to the forwarding queue, thereby causing network congestion. Based on this, the intermediate routing node shares the transmission resources of the target data stream with the first data stream mainly by removing K messages of the target data stream from the forwarding queue and adding K messages of the first data stream to the forwarding queue.
In the specific implementation, K messages of the target data stream are first removed from the forwarding queue, where the number of messages contained in the target data stream is greater than or equal to K, and K is a positive integer; then, K messages of the first data stream are added to the forwarding queue to obtain an updated forwarding queue. The first data stream can thus be sent to the target terminal according to the updated forwarding queue. For example, a schematic diagram of resource sharing by removing large-flow messages and adding small-flow messages in a forwarding queue can be seen in fig. 14. As shown in fig. 14, the forwarding queue stores, in the order the messages were received, message 1, message 2, message 3, message 4 and message 5; message 1, message 2 and message 3 belong to data stream 2, while message 4 and message 5 belong to data stream 1. As can be seen from fig. 14, the number of messages contained in data stream 2 is greater than the number contained in data stream 1, so data stream 2 is taken as the target data stream; K messages may then be removed from the 3 messages of data stream 2, and K messages of the newly received data stream 3 are added to the tail (or end) of the forwarding queue, obtaining the updated forwarding queue. The value of K may be preconfigured by an administrator and stored in a configuration file, for example K = 1. With continued reference to fig. 14, when K = 1 and the first data stream includes message 6, the messages sequentially contained in the updated forwarding queue are message 1, message 2, message 4, message 5 and message 6. By removing large-flow messages from the forwarding queue and newly adding small-flow messages, the high packet loss rate caused by small-flow messages being unable to obtain transmission resources can be reduced; since the data streams corresponding to the services of existing object terminals are usually small flows, the normal operation of those services can be guaranteed and the service experience of the object improved.
It should be noted that, fig. 14 is described above by taking, as an example, a packet 3 for removing a target data stream (such as a data stream 2) from the forwarding queue; in practical applications, however, embodiments of the present application do not limit which message or messages of the target data stream are specifically removed. For example, K packets in the target data stream may be randomly removed from the forwarding queue. For another example, K messages may be selectively removed according to the data type and the data amount included in each message, for example, a message corresponding to a data type with low service correlation is removed, or a message with a small data amount is removed.
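The remove-K / add-K step can be sketched as follows; which large-flow messages to drop is a policy choice, and here the most recently queued ones are removed purely as an illustrative assumption (the function and flow names are likewise hypothetical):

```python
# Sketch of the resource-sharing step: remove K messages of the target
# (large) flow from the forwarding queue and append K messages of the new
# first data stream at the tail.

def share_queue_slots(queue, target_flow, new_messages, k):
    removed = []
    for i in range(len(queue) - 1, -1, -1):  # scan from the tail
        if len(removed) == k:
            break
        if queue[i][0] == target_flow:
            removed.append(queue[i])
            del queue[i]                     # free one queue slot
    queue.extend(new_messages[:k])           # add K new-flow messages at the tail
    return removed                           # kept for later resource processing

# Fig. 14 example with K = 1: message 3 of stream 2 is removed,
# message 6 of the new stream 3 is appended.
queue = [("stream_2", 1), ("stream_2", 2), ("stream_2", 3),
         ("stream_1", 4), ("stream_1", 5)]
removed = share_queue_slots(queue, "stream_2", [("stream_3", 6)], k=1)
print([m for _f, m in queue])  # [1, 2, 4, 5, 6]
```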
S1006: and carrying out resource processing on the transmission resources shared from the target data stream.
Based on the above steps, after removing K messages of the target data stream from the forwarding queue, the embodiments of the present application further support resource processing for the K messages. The specific process of the resource processing is not limited in this embodiment, and two specific implementation manners of the resource processing are given below, where:
optionally, the method supports directly deleting the K messages removed from the target data stream and sending a packet loss feedback message to the data sending terminal; that is, the intermediate routing node may delete the K packets removed in the target data stream, and send a packet loss feedback packet to the data sending end, where the packet loss feedback packet is used to indicate that the K packets in the target data stream are lost, so that the data sending end may generate the K packets again in idle time, or feedback the situation that the K packets are lost to the target terminal, etc. Through the mode of feeding back the packet loss condition to the data sending end, normal transmission of the first data stream can be ensured, the data sending end can know the traffic transmission condition in time, and effective control of traffic transmission is realized.
Optionally, the removed K messages are forwarded to the previous routing node for buffering, and are retrieved and forwarded when the intermediate routing node is idle. Specifically, the intermediate routing node forwards the K messages pkt_large removed (or extracted) from the target data stream to its previous routing node for storage; that is, the K messages removed from the target data stream are forwarded back to the previous routing node of the intermediate routing node. Considering that the intermediate routing node may have multiple previous routing nodes, the previous routing node here specifically refers to the previous routing node of the removed K messages, that is, the node from which the K messages were forwarded to the current intermediate routing node. Then, when the forwarding queue of the intermediate routing node is no longer congested, the K messages are obtained from the previous routing node and forwarded to the next routing node, so as to complete their transmission; that is, when the usage rate of the forwarding queue is detected to be smaller than the usage threshold, the K removed messages of the target data stream are received from the previous routing node and forwarded. By forwarding the removed K messages to the previous routing node for buffering, loss of the K messages of the target data stream can be avoided; deferred forwarding of the K large-flow messages is exchanged for fast forwarding of the small-flow messages, thereby ensuring the timeliness of the services corresponding to the small flows.
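A minimal sketch of this deferred-forwarding option follows; the buffer class, method names, and threshold value are assumptions for illustration, not part of any real routing protocol:

```python
# Sketch of the second resource-processing option: removed large-flow
# messages are parked at the previous (upstream) routing node and released
# back into the forwarding queue once its usage drops below the threshold.

class UpstreamBuffer:
    def __init__(self):
        self.held = []

    def hold(self, messages):   # previous node stores the removed messages
        self.held.extend(messages)

    def release(self):          # hand them back when congestion clears
        out, self.held = self.held, []
        return out

def drain_when_idle(queue, capacity, threshold, upstream):
    usage = len(queue) / capacity
    if usage < threshold:       # forwarding queue no longer congested
        queue.extend(upstream.release())
        return True
    return False

up = UpstreamBuffer()
up.hold([("stream_2", 3)])      # removed message 3 parked upstream
queue = [("stream_1", 4)]
drained = drain_when_idle(queue, capacity=5, threshold=0.8, upstream=up)
print(drained, queue)  # True [('stream_1', 4), ('stream_2', 3)]
```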
It should be understood that the resource processing of the removed K messages may also be implemented in other manners; the above are merely exemplary implementation procedures of resource processing given in the embodiments of the present application, and do not limit the embodiments of the present application.
In summary, the embodiment of the application supports, for network congestion of the intermediate routing node, improving forwarding efficiency of the small-flow data by preferentially removing the large-flow message, and avoiding phenomena of low small-flow sending rate and poor user experience caused by long-term occupation of transmission resources by the large-flow data.
The foregoing details of the method of embodiments of the present application are set forth in order to provide a better understanding of the foregoing aspects of embodiments of the present application, and accordingly, the following provides a device of embodiments of the present application.
Fig. 15 is a schematic structural view of a data transmission device according to an exemplary embodiment of the present application; the data transmission means may be a computer program (comprising program code) running on a computer device, for example the data transmission means may be an application program of a computer device; the data transmission device may be used to perform some or all of the steps of the method embodiments shown in fig. 4, 7 and 10. Referring to fig. 15, the data transmission apparatus includes the following units:
An obtaining unit 1501, configured to obtain a network state of a transmission channel when a first data stream to be transmitted is generated in the transmission channel; a transmission channel refers to a data path from a source address of a first data stream to a destination address of the first data stream;
a processing unit 1502, configured to acquire at least one second data stream existing in the transmission channel if the network status indicates that the transmission channel has network congestion;
a processing unit 1502 further configured to screen a target data stream from the at least one second data stream;
the processing unit 1502 is further configured to share transmission resources occupied by the target data stream to the first data stream, and transmit the first data stream using the shared transmission resources.
In one implementation, at least one second data stream already present, associated with a transmission channel, comprises:
transmitting an existing data stream in the channel; wherein, the source address of the existing data stream in the transmission channel is the same as the source address of the first data stream, and the destination address is the same as the destination address of the first data stream;
or, an existing data stream in another transmission channel that shares a link with the transmission channel; the source address of the existing data stream in the other transmission channel is the same as the source address of the first data stream, and the object terminal pointed to by its destination address and the object terminal pointed to by the destination address of the first data stream belong to the same object group.
In one implementation, one data stream corresponds to one service; the processing unit 1502 is configured to, when selecting a target data stream from at least one second data stream, specifically:
screening a target data stream from at least one second data stream existing in association with the transmission channel according to the data stream screening rule;
wherein, the data flow screening rule includes: taking a second data stream occupying transmission resources larger than a resource threshold value in the at least one second data stream as a target data stream; or, taking the second data stream occupying the most transmission resources in at least one second data stream as a target data stream; or, taking the second data stream with the longest duration in the at least one second data stream as a target data stream; or, the second data stream with the service level of the corresponding service lower than the level threshold value in the at least one second data stream is taken as the target data stream.
In one implementation, the data transmission method is applied to the data transmitting end; the first data stream to be transmitted in the transmission channel is generated based on a flow request message sent by the object terminal; the processing unit 1502 is configured to, when acquiring the network state of the transmission channel, specifically:
Acquiring network parameters of a transmission channel, wherein the network parameters are obtained by periodically counting according to a counting period by a data transmitting end;
and if the network parameter is greater than or equal to the parameter threshold, determining that the network state of the transmission channel indicates that the transmission channel has network congestion.
In one implementation, the network parameters include at least a maximum available bandwidth and an amount of in-transit data; the maximum available bandwidth refers to the transmission rate required by the first data stream during transmission, and the data volume in transit refers to the used transmission window in the transmission channel;
when the network parameter is the maximum available bandwidth, the network parameter being greater than or equal to the parameter threshold value means that the maximum available bandwidth is greater than or equal to the bandwidth threshold value;
when the network parameter is the data volume in transit, the network parameter being equal to or greater than the parameter threshold means that the data volume in transit is equal to or greater than the data volume threshold.
In one implementation, the processing unit 1502 is configured to, when sharing transmission resources occupied by the target data stream to the first data stream, specifically:
acquiring transmission resources occupied by a target data stream;
reducing the transmission resources occupied by the target data stream by the target variable resource amount to obtain new transmission resources occupied by the target data stream;
And taking the target variable resource quantity of which the target data stream is reduced as the transmission resource of the first data stream.
In one implementation, when the network parameter is the maximum available bandwidth, the transmission resource is the sending rate, and the target variable resource amount is the target variable rate amount; when the network parameter is the data volume in transit, the transmission resource is a sending window, and the target variable resource volume is a target variable window volume;
the target variable resource amount is determined based on a preset resource parameter proportion.
In one implementation, the processing unit 1502 is configured to, when transmitting the first data stream using the shared transmission resource, specifically:
according to the transmission resource shared by the target data stream to the first data stream, the first data stream is sent to the target terminal;
the processing unit 1502 is further configured to: and sending the target data stream to the object group to which the object terminal belongs according to the new transmission resource occupied by the target data stream.
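The sender-side sharing performed by these units can be sketched as follows; the 20% proportion, rate values, and function name are illustrative assumptions, since the embodiments only state that the target variable resource amount is determined from a preset resource parameter proportion:

```python
# Sketch of sender-side resource sharing: the target (large) flow gives up
# a preset proportion of its sending rate, and the first data stream is
# transmitted with the freed-up amount.

def share_send_rate(target_rate, proportion):
    delta = target_rate * proportion       # target variable resource amount
    new_target_rate = target_rate - delta  # new resource of the target stream
    first_stream_rate = delta              # resource granted to the first stream
    return new_target_rate, first_stream_rate

# Illustrative values: a 10 Mbit/s target flow yielding 20% of its rate.
new_rate, first_rate = share_send_rate(target_rate=10_000_000, proportion=0.2)
print(new_rate, first_rate)  # 8000000.0 2000000.0
```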
In one implementation, the data transmission method is applied to an intermediate routing node; the first data stream is forwarded by the data sending end; the processing unit 1502 is configured to, when acquiring the network state of the transmission channel, specifically:
determining an object group corresponding to the first data stream based on the destination address of the first data stream; the object group comprises one or more object terminals for transmitting data streams through all or part of links of the transmission channel, and the destination address of the first data stream points to one object terminal in the object group;
Acquiring a forwarding queue corresponding to the object group, wherein the forwarding queue sequentially stores the messages of the second data stream corresponding to each object terminal in the object group according to the message receiving sequence;
according to the number of the messages of the second data stream corresponding to each object terminal, calculating the utilization rate of the forwarding queue;
if the usage rate is greater than or equal to the usage threshold, determining that the network state of the transmission channel indicates that the transmission channel has network congestion.
In one implementation, the target data stream is a second data stream occupying transmission resources greater than a resource threshold, or the target data stream is a second data stream occupying the most transmission resources among at least one second data stream; the more messages the second data stream occupies, the more transmission resources the second data stream occupies; the processing unit 1502 is configured to, when selecting a target data stream from at least one second data stream, specifically:
acquiring a statistical list corresponding to the forwarding queue, wherein the statistical list is stored with flow identifiers and message numbers of second data flows in the forwarding queue in an associated manner; the statistical list is dynamically updated along with the forwarding queue;
and screening out the target data stream based on the message number of each second data stream recorded in the statistical list.
In one implementation manner, the processing unit 1502 is configured to, when screening out the target data stream based on the number of packets of each second data stream recorded in the statistics list, specifically:
screening a second data stream with the largest message number from the statistical list;
and taking the screened second data stream as a target data stream.
In one implementation manner, the processing unit 1502 is configured to, when screening out the target data stream based on the number of packets of each second data stream recorded in the statistics list, specifically:
sequentially searching the number of messages recorded in the statistical list by the second data stream corresponding to each message from the end message stored in the forwarding queue until the number of the messages recorded in the statistical list by the second data stream corresponding to the target message is larger than or equal to a number threshold;
and taking the second data stream corresponding to the target message as a target data stream.
In one implementation, the transmission resource is a memory space occupied by the message; the processing unit 1502 is configured to, when sharing transmission resources occupied by the target data stream to the first data stream, specifically:
removing K messages of the target data flow from the forwarding queue; the number of messages contained in the target data stream is greater than or equal to K, wherein K is a positive integer;
K messages of the first data flow are added into a forwarding queue, and an updated forwarding queue is obtained;
the processing unit 1502 is configured to, when transmitting the first data stream using the shared transmission resource, specifically:
and sending the first data stream to the target terminal according to the updated forwarding queue.
In one implementation, the processing unit 1502 is further configured to:
deleting the removed K messages in the target data stream, and sending a packet loss feedback message to the data sending end, wherein the packet loss feedback message is used for indicating that the K messages in the target data stream are lost;
or forwarding the K messages removed from the target data stream to the last routing node of the intermediate routing node;
and when the use rate of the forwarding queue is detected to be smaller than the use threshold value, receiving the removed K messages in the target data stream from the last routing node, and forwarding the removed K messages in the target data stream.
According to an embodiment of the present application, each unit in the data transmission apparatus shown in fig. 15 may be separately or completely combined into one or several additional units, or some unit(s) thereof may be further split into a plurality of units with smaller functions, which may achieve the same operation without affecting the implementation of the technical effects of the embodiments of the present application. The above units are divided based on logic functions, and in practical applications, the functions of one unit may be implemented by a plurality of units, or the functions of a plurality of units may be implemented by one unit. In other embodiments of the present application, the data transmission device may also include other units, and in practical applications, these functions may also be implemented with assistance from other units, and may be implemented by cooperation of multiple units. According to another embodiment of the present application, a data transmission apparatus as shown in fig. 15 may be constructed by running a computer program (including program code) capable of executing the steps involved in the respective methods shown in fig. 4, 7 and 10 on a general-purpose computing device such as a computer including a processing element such as a Central Processing Unit (CPU), a random access storage medium (RAM), a read only storage medium (ROM), and the like, and a storage element, and the data transmission method of the embodiment of the present application is implemented. The computer program may be recorded on, for example, a computer-readable recording medium, and loaded into and run in the above-described computing device through the computer-readable recording medium.
In this embodiment of the present application, when a new first data stream to be transmitted (i.e. new traffic) is generated in a transmission channel and the network state of the transmission channel is detected to indicate network congestion in the transmission channel, at least one existing second data stream associated with the transmission channel may be acquired. Then, a target data stream is screened from the at least one second data stream; for example, the target data stream may be the second data stream with the largest data amount or the longest duration among the at least one second data stream, or the second data stream with the lowest correlation with the target service (such as the service corresponding to the first data stream, or a service related to it), and so on. Considering that the target data stream has a large data amount, or has lasted a long time, or has a low correlation with the target service, losing a small number of packets or slowing down its transmission rate has little influence on the service corresponding to the target data stream; therefore, the embodiment of the present application supports sharing part of the transmission resources occupied by the target data stream with the first data stream to be transmitted. In this way, the first data stream is guaranteed not to be lost (i.e. it can still be transmitted normally) under network congestion, the influence on the transmission performance of the target data stream is small, network congestion during network communication is effectively alleviated, and the overall traffic transmission efficiency is improved.
Fig. 16 shows a schematic structural diagram of a computer device according to an exemplary embodiment of the present application. Referring to fig. 16, the computer device includes a processor 1601, a communication interface 1602, and a computer readable storage medium 1603. Wherein the processor 1601, the communication interface 1602, and the computer-readable storage medium 1603 may be connected by a bus or other means. Wherein the communication interface 1602 is for receiving and transmitting data. The computer readable storage medium 1603 may be stored in a memory of a computer device, the computer readable storage medium 1603 for storing a computer program comprising program instructions, and the processor 1601 for executing the program instructions stored by the computer readable storage medium 1603. The processor 1601 (or CPU (Central Processing Unit, central processing unit)) is a computing core as well as a control core of the computer device, adapted to implement one or more instructions, in particular to load and execute one or more instructions to implement a corresponding method flow or a corresponding function.
The embodiments of the present application also provide a computer-readable storage medium (memory), which is a storage device in a computer device for storing programs and data. It is understood that the computer-readable storage medium herein may include both a built-in storage medium of the computer device and an extended storage medium supported by the computer device. The computer-readable storage medium provides storage space in which the processing system of the computer device is stored. One or more instructions, which may be one or more computer programs (including program code), are also stored in this storage space and are adapted to be loaded and executed by the processor 1601. Note that the computer-readable storage medium can be a high-speed RAM memory or a non-volatile memory, such as at least one magnetic disk memory; optionally, it can also be at least one computer-readable storage medium located remotely from the aforementioned processor.
In one embodiment, the computer device may be the data sender or the intermediate routing node mentioned in the previous embodiments, and the computer-readable storage medium has one or more instructions stored therein; the processor 1601 loads and executes the one or more instructions stored in the computer-readable storage medium to implement the corresponding steps in the above data transmission method embodiments. In a specific implementation, the one or more instructions in the computer-readable storage medium are loaded by the processor 1601 to perform the following steps:
when a first data stream to be transmitted is generated in a transmission channel, acquiring the network state of the transmission channel; a transmission channel refers to a data path from a source address of a first data stream to a destination address of the first data stream;
if the network state indicates that the transmission channel has network congestion, acquiring at least one existing second data stream associated with the transmission channel;
screening the target data stream from the at least one second data stream;
and sharing the transmission resources occupied by the target data stream to the first data stream, and transmitting the first data stream by adopting the shared transmission resources.
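The four steps above can be sketched in Python as follows; the class, the helper names, and the share ratio are illustrative assumptions for this sketch, not part of the claimed embodiment.

```python
class Stream:
    """Minimal stand-in for a data stream and the transmission
    resources (e.g. a sending rate) it currently occupies."""
    def __init__(self, name, resources):
        self.name = name
        self.resources = resources

def handle_new_stream(existing_streams, congested, share_ratio=0.2):
    """Steps 2-4: when the channel is congested, pick the existing
    second stream occupying the most resources and shed a portion of
    its resources to the new first stream. Returns the amount shared."""
    if not congested or not existing_streams:
        return 0.0
    target = max(existing_streams, key=lambda s: s.resources)  # screening step
    shared = target.resources * share_ratio  # target variable resource amount
    target.resources -= shared               # target keeps the remainder
    return shared
```

For example, with two existing streams occupying 10 and 40 units, a 25% share ratio takes 10 units from the larger stream and grants them to the first data stream; when the channel is not congested, nothing is shared.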
In one implementation, at least one second data stream already present, associated with a transmission channel, comprises:
Transmitting an existing data stream in the channel; wherein, the source address of the existing data stream in the transmission channel is the same as the source address of the first data stream, and the destination address is the same as the destination address of the first data stream;
or, the existing data streams in other transmission channels that share the same link with the transmission channel; wherein the source address of the existing data streams in the other transmission channels is the same as the source address of the first data stream, and their destination addresses and the destination address of the first data stream belong to the same object group.
In one implementation, one data stream corresponds to one service; one or more instructions in the computer-readable storage medium are loaded by the processor 1601 and when executed to screen a target data stream from at least one second data stream, perform the steps of:
screening a target data stream from at least one second data stream existing in association with the transmission channel according to the data stream screening rule;
wherein, the data flow screening rule includes: taking a second data stream occupying transmission resources larger than a resource threshold value in the at least one second data stream as a target data stream; or, taking the second data stream occupying the most transmission resources in at least one second data stream as a target data stream; or, taking the second data stream with the longest duration in the at least one second data stream as a target data stream; or, the second data stream with the service level of the corresponding service lower than the level threshold value in the at least one second data stream is taken as the target data stream.
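The four alternative screening rules listed above can be sketched as a single dispatch function; the dict shape of each stream (`resources`, `duration`, `service_level` keys) and the rule names are assumptions made for illustration.

```python
def screen_target(streams, rule, resource_threshold=None, level_threshold=None):
    """Select a target data stream from the second data streams under
    one of the four screening rules described in the embodiment."""
    if rule == "above_resource_threshold":
        # any stream whose occupied transmission resources exceed the threshold
        return next((s for s in streams
                     if s["resources"] > resource_threshold), None)
    if rule == "most_resources":
        return max(streams, key=lambda s: s["resources"])
    if rule == "longest_duration":
        return max(streams, key=lambda s: s["duration"])
    if rule == "below_service_level":
        # a stream whose corresponding service level is under the threshold
        return next((s for s in streams
                     if s["service_level"] < level_threshold), None)
    raise ValueError(f"unknown screening rule: {rule}")
```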
In one implementation, the data transmission method is applied to the data transmitting end; the first data stream to be transmitted in the transmission channel is generated based on a flow request message sent by the object terminal; one or more instructions in the computer-readable storage medium are loaded by the processor 1601 and when executed to obtain a network state of a transmission channel, perform the steps of:
acquiring network parameters of a transmission channel, wherein the network parameters are obtained by periodically counting according to a counting period by a data transmitting end;
and if the network parameter is greater than or equal to the parameter threshold, determining that the network state of the transmission channel indicates that the transmission channel has network congestion.
In one implementation, the network parameters include at least a maximum available bandwidth and an in-transit data volume; the maximum available bandwidth refers to the sending rate required by the first data stream during transmission, and the in-transit data volume refers to the used sending window in the transmission channel;
when the network parameter is the maximum available bandwidth, the network parameter being greater than or equal to the parameter threshold value means that the maximum available bandwidth is greater than or equal to the bandwidth threshold value;
when the network parameter is the data volume in transit, the network parameter being equal to or greater than the parameter threshold means that the data volume in transit is equal to or greater than the data volume threshold.
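The threshold comparison described above can be sketched as follows; the default threshold values are placeholders, not values from the embodiment, and either periodically sampled parameter alone suffices to declare congestion.

```python
def is_congested(max_available_bandwidth=None, in_transit_volume=None,
                 bandwidth_threshold=100.0, data_volume_threshold=64000):
    """Network congestion is declared when a sampled network parameter
    is greater than or equal to its parameter threshold."""
    if (max_available_bandwidth is not None
            and max_available_bandwidth >= bandwidth_threshold):
        return True
    if (in_transit_volume is not None
            and in_transit_volume >= data_volume_threshold):
        return True
    return False
```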
In one implementation, one or more instructions in a computer-readable storage medium are loaded by the processor 1601 and when executed to share transmission resources occupied by a target data stream to a first data stream, specifically perform the steps of:
acquiring transmission resources occupied by a target data stream;
reducing the transmission resources occupied by the target data stream by the target variable resource amount to obtain new transmission resources occupied by the target data stream;
and taking the target variable resource quantity of which the target data stream is reduced as the transmission resource of the first data stream.
In one implementation, when the network parameter is the maximum available bandwidth, the transmission resource is the sending rate, and the target variable resource amount is the target variable rate amount; when the network parameter is the data volume in transit, the transmission resource is a sending window, and the target variable resource volume is a target variable window volume;
the target variable resource amount is determined based on a preset resource parameter proportion.
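The reduce-and-grant arithmetic above reduces the target stream's resource (a sending rate or a sending window) by a target variable resource amount derived from a preset proportion; a minimal sketch, with the proportion value as an assumption:

```python
def share_resource(target_amount, ratio=0.25):
    """Compute the target variable resource amount from a preset
    resource parameter proportion, subtract it from the target
    stream's resource, and grant it to the first data stream."""
    delta = target_amount * ratio        # target variable resource amount
    return target_amount - delta, delta  # (new target share, first-stream share)
```

For instance, with a sending rate of 100 units and a 25% proportion, the target stream keeps 75 units and the first data stream is granted 25.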
In one implementation, one or more instructions in a computer-readable storage medium are loaded by the processor 1601 and when executed perform the steps of:
according to the transmission resource shared by the target data stream to the first data stream, the first data stream is sent to the object terminal;
One or more instructions in the computer-readable storage medium are loaded by the processor 1601 and further perform the steps of: and sending the target data stream to the object group to which the object terminal belongs according to the new transmission resource occupied by the target data stream.
In one implementation, the data transmission method is applied to an intermediate routing node; the first data stream is forwarded by the data sending end; one or more instructions in the computer-readable storage medium are loaded by the processor 1601 and when executed to obtain a network state of a transmission channel, perform the steps of:
determining an object group corresponding to the first data stream based on the destination address of the first data stream; the object group comprises one or more object terminals for transmitting data streams through all or part of links of the transmission channel, and the destination address of the first data stream points to one object terminal in the object group;
acquiring a forwarding queue corresponding to the object group, wherein the forwarding queue sequentially stores the messages of the second data stream corresponding to each object terminal in the object group according to the message receiving sequence;
according to the number of the messages of the second data stream corresponding to each object terminal, calculating the utilization rate of the forwarding queue;
If the usage rate is greater than or equal to the usage threshold, determining that the network state of the transmission channel indicates that the transmission channel has network congestion.
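The usage-rate computation for the intermediate routing node's forwarding queue can be sketched as follows; the capacity unit (a message count) and the default usage threshold are illustrative assumptions.

```python
def queue_usage_rate(messages_per_terminal, queue_capacity):
    """Usage rate = total queued second-data-stream messages for the
    object group divided by the forwarding queue capacity."""
    return sum(messages_per_terminal.values()) / queue_capacity

def queue_congested(messages_per_terminal, queue_capacity, usage_threshold=0.8):
    """Congestion is declared when the usage rate meets the threshold."""
    return queue_usage_rate(messages_per_terminal, queue_capacity) >= usage_threshold
```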
In one implementation, the target data stream is a second data stream occupying transmission resources greater than a resource threshold, or the target data stream is a second data stream occupying the most transmission resources among at least one second data stream; the more messages the second data stream occupies, the more transmission resources the second data stream occupies; one or more instructions in the computer-readable storage medium are loaded by the processor 1601 and when executed to screen a target data stream from at least one second data stream, perform the steps of:
acquiring a statistical list corresponding to the forwarding queue, wherein the statistical list is stored with flow identifiers and message numbers of second data flows in the forwarding queue in an associated manner; the statistical list is dynamically updated along with the forwarding queue;
and screening out the target data stream based on the message number of each second data stream recorded in the statistical list.
In one implementation, one or more instructions in the computer-readable storage medium are loaded by the processor 1601 and when executing screening out the target data stream based on the number of messages of each second data stream recorded in the statistics list, specifically perform the following steps:
Screening a second data stream with the largest message number from the statistical list;
and taking the screened second data stream as a target data stream.
In one implementation, one or more instructions in the computer-readable storage medium are loaded by the processor 1601 and when executing screening out the target data stream based on the number of messages of each second data stream recorded in the statistics list, specifically perform the following steps:
starting from the tail message stored in the forwarding queue, sequentially looking up, for the second data stream corresponding to each message, the number of messages recorded in the statistical list, until the number of messages recorded in the statistical list for the second data stream corresponding to a target message is greater than or equal to a number threshold;
and taking the second data stream corresponding to the target message as a target data stream.
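The tail-first search above can be sketched as follows; the forwarding queue is modeled as an ordered list of flow identifiers and the statistical list as a dict from flow identifier to message count, which is an assumed representation.

```python
def screen_from_tail(forwarding_queue, stats, count_threshold):
    """Walk the forwarding queue from its tail message; the first
    message whose flow's recorded count in the statistical list
    reaches the number threshold identifies the target data stream."""
    for flow_id in reversed(forwarding_queue):
        if stats.get(flow_id, 0) >= count_threshold:
            return flow_id
    return None
```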
In one implementation, the transmission resource is a memory space occupied by the message; one or more instructions in the computer-readable storage medium are loaded by the processor 1601 and when executed to share transmission resources occupied by a target data stream to a first data stream, perform the steps of:
removing K messages of the target data flow from the forwarding queue; the number of messages contained in the target data stream is greater than or equal to K, wherein K is a positive integer;
K messages of the first data flow are added into a forwarding queue, and an updated forwarding queue is obtained;
one or more instructions in the computer-readable storage medium are loaded by the processor 1601 and when executed perform the steps of:
and sending the first data stream to the object terminal according to the updated forwarding queue.
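The remove-K/add-K queue update above can be sketched as follows; modeling each queued message as a dict with a `flow` field is an assumption made for illustration.

```python
def swap_messages(forwarding_queue, target_flow, first_flow_messages, k):
    """Remove up to K messages of the target flow from the forwarding
    queue and append K messages of the first data stream; returns the
    updated queue and the removed messages."""
    removed, kept = [], []
    for msg in forwarding_queue:
        if msg["flow"] == target_flow and len(removed) < k:
            removed.append(msg)  # frees the memory space this message occupied
        else:
            kept.append(msg)
    kept.extend(first_flow_messages[:k])  # first stream takes the freed slots
    return kept, removed
```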
In one implementation, one or more instructions in the computer-readable storage medium are loaded by the processor 1601 and further perform the steps of:
deleting the removed K messages in the target data stream, and sending a packet loss feedback message to the data sending end, wherein the packet loss feedback message is used for indicating that the K messages in the target data stream are lost;
or forwarding the K messages removed from the target data stream to the previous routing node of the intermediate routing node;
and when the usage rate of the forwarding queue is detected to be smaller than the usage threshold, receiving the K removed messages of the target data stream from the previous routing node, and forwarding the K removed messages of the target data stream.
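The two handling options above for the K removed messages can be sketched together; the return shapes and mode names are illustrative assumptions, not a protocol defined by the embodiment.

```python
def handle_removed_messages(removed, mode, upstream_buffer=None):
    """'drop' deletes the removed messages and builds a packet-loss
    feedback notice for the data sender; 'push_back' parks them at the
    previous routing node, to be re-forwarded once the forwarding
    queue's usage rate falls back below the usage threshold."""
    if mode == "drop":
        return {"lost_flow": removed[0]["flow"], "lost_count": len(removed)}
    if mode == "push_back":
        upstream_buffer.extend(removed)
        return {"parked": len(upstream_buffer)}
    raise ValueError(f"unknown mode: {mode}")
```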
Based on the same inventive concept, the principle and beneficial effects of the computer device for solving the problems provided in the embodiments of the present application are similar to those of the data transmission method in the embodiments of the method of the present application, and may refer to the principle and beneficial effects of implementation of the method, which are not described herein for brevity.
Embodiments of the present application also provide a computer program product or computer program comprising computer instructions stored in a computer-readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the data transmission method described above.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the above embodiments, the implementation may be realized in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium, or transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state disk (SSD)), or the like.
The foregoing is merely specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art will readily recognize that changes and substitutions are within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (17)

1. A data transmission method, comprising:
when a first data stream to be transmitted is generated in a transmission channel, acquiring the network state of the transmission channel; the transmission channel refers to a data path from a source address of the first data stream to a destination address of the first data stream;
if the network state indicates that the transmission channel has network congestion, acquiring at least one existing second data flow associated with the transmission channel;
screening a target data stream from the at least one second data stream;
and sharing transmission resources occupied by the target data stream to the first data stream, and transmitting the first data stream by adopting the shared transmission resources.
2. The method of claim 1, wherein the existing at least one second data stream associated with the transmission channel comprises:
The existing data stream in the transmission channel; wherein, the source address of the existing data stream in the transmission channel is the same as the source address of the first data stream, and the destination address is the same as the destination address of the first data stream;
or, the existing data streams in other transmission channels that share the same link with the transmission channel; wherein the source address of the existing data streams in the other transmission channels is the same as the source address of the first data stream, and their destination addresses and the destination address of the first data stream belong to the same object group.
3. The method of claim 1, wherein one data stream corresponds to one service; the screening the target data stream from the at least one second data stream includes:
screening a target data stream from at least one second data stream existing in association with the transmission channel according to the data stream screening rule;
wherein the data flow screening rule comprises: taking a second data stream occupying transmission resources larger than a resource threshold value in the at least one second data stream as a target data stream; or, taking the second data stream occupying the most transmission resources in at least one second data stream as a target data stream; or, taking the second data stream with the longest duration in the at least one second data stream as a target data stream; or, the second data stream with the service level of the corresponding service lower than the level threshold value in the at least one second data stream is taken as the target data stream.
4. A method according to any one of claims 1-3, wherein the method is applied to a data transmitting end; the first data stream to be transmitted in the transmission channel is generated based on a flow request message sent by the object terminal; the acquiring the network state of the transmission channel includes:
acquiring network parameters of the transmission channel, wherein the network parameters are obtained by periodically counting the data transmitting end according to a counting period;
and if the network parameter is greater than or equal to a parameter threshold, determining that the network state of the transmission channel indicates that the transmission channel has network congestion.
5. The method of claim 4, wherein the network parameters include at least a maximum available bandwidth and an amount of in-transit data; the maximum available bandwidth refers to a sending rate required by the first data stream in transmission, and the in-transit data quantity refers to a used sending window in the transmission channel;
when the network parameter is the maximum available bandwidth, the network parameter is greater than or equal to a parameter threshold, which means that the maximum available bandwidth is greater than or equal to a bandwidth threshold;
and when the network parameter is the in-transit data quantity, the network parameter being greater than or equal to a parameter threshold means that the in-transit data quantity is greater than or equal to a data quantity threshold.
6. The method of claim 4, wherein the sharing transmission resources occupied by the target data stream to the first data stream comprises:
acquiring transmission resources occupied by the target data stream;
reducing the transmission resources occupied by the target data stream by a target variable resource amount to obtain new transmission resources occupied by the target data stream;
and taking the target variable resource quantity reduced by the target data stream as the transmission resource of the first data stream.
7. The method of claim 6, wherein when the network parameter is a maximum available bandwidth, the transmission resource is a transmission rate, and the target change resource amount is a target change rate amount; when the network parameter is the data volume in transit, the transmission resource is a sending window, and the target variable resource volume is a target variable window volume;
wherein the target variable resource amount is determined based on a preset resource parameter ratio.
8. The method of claim 6, wherein the transmitting the first data stream using the shared transmission resources comprises:
transmitting the first data stream to the object terminal according to the transmission resource shared by the target data stream to the first data stream;
The method further comprises the steps of: and sending the target data stream to an object group to which the object terminal belongs according to the new transmission resource occupied by the target data stream.
9. A method according to any of claims 1-3, wherein the method is applied to an intermediate routing node; the first data flow is forwarded by a data sending end; the acquiring the network state of the transmission channel includes:
determining an object group corresponding to the first data stream based on the destination address of the first data stream; the object group comprises one or more object terminals for transmitting data streams through all or part of links of the transmission channel, and the destination address of the first data stream points to one object terminal in the object group;
acquiring a forwarding queue corresponding to the object group, wherein the forwarding queue sequentially stores messages of a second data stream corresponding to each object terminal in the object group according to a message receiving sequence;
calculating the utilization rate of the forwarding queue according to the number of messages of the second data stream corresponding to each object terminal;
and if the usage rate is greater than or equal to a usage threshold, determining that the network state of the transmission channel indicates that network congestion exists in the transmission channel.
10. The method of claim 9, wherein the target data stream is a second data stream occupying transmission resources greater than a resource threshold, or wherein the target data stream is a second data stream occupying the most transmission resources of the at least one second data stream; the more the number of messages of the second data stream, the more transmission resources are occupied by the second data stream; the screening the target data stream from the at least one second data stream includes:
acquiring a statistical list corresponding to the forwarding queue, wherein the statistical list is stored with flow identifiers and message numbers of second data flows in the forwarding queue in an associated manner; the statistical list is dynamically updated following the forwarding queue;
and screening out target data streams based on the message quantity of each second data stream recorded in the statistical list.
11. The method of claim 10, wherein screening out the target data stream based on the number of messages of each second data stream recorded in the statistics list comprises:
screening the second data stream with the largest message number from the statistical list;
and taking the screened second data stream as a target data stream.
12. The method of claim 10, wherein screening out the target data stream based on the number of messages of each second data stream recorded in the statistics list comprises:
starting from the tail message stored in the forwarding queue, sequentially looking up, for the second data stream corresponding to each message, the number of messages recorded in the statistical list, until the number of messages recorded in the statistical list for the second data stream corresponding to a target message is greater than or equal to a number threshold;
and taking the second data stream corresponding to the target message as a target data stream.
13. The method of claim 9, wherein the transmission resource is a memory space occupied by a message; the sharing the transmission resource occupied by the target data stream to the first data stream includes:
removing K messages of the target data flow from the forwarding queue; the number of messages contained in the target data stream is greater than or equal to K, wherein K is a positive integer;
adding the K messages of the first data flow into the forwarding queue to obtain an updated forwarding queue;
the transmitting the first data stream using the shared transmission resource includes:
and sending the first data stream to the object terminal according to the updated forwarding queue.
14. The method of claim 13, wherein the method further comprises:
deleting the removed K messages in the target data stream, and sending a packet loss feedback message to the data sending end, wherein the packet loss feedback message is used for indicating that the K messages in the target data stream are lost;
or forwarding the K messages removed from the target data stream to a last routing node of the intermediate routing node;
and when the use rate of the forwarding queue is detected to be smaller than a use threshold value, receiving K messages removed from the target data stream from the last routing node, and forwarding the K messages removed from the target data stream.
15. A data transmission apparatus, comprising:
an obtaining unit, configured to obtain a network state of a transmission channel when a first data stream to be transmitted is generated in the transmission channel; the transmission channel refers to a data path from a source address of the first data stream to a destination address of the first data stream;
a processing unit, configured to acquire at least one existing second data stream associated with the transmission channel if the network state indicates that the transmission channel has network congestion;
The processing unit is further configured to screen a target data stream from the at least one second data stream;
the processing unit is further configured to share transmission resources occupied by the target data stream to the first data stream, and transmit the first data stream using the shared transmission resources.
16. A computer device, comprising:
a processor adapted to execute a computer program;
a computer readable storage medium having stored therein a computer program which, when executed by the processor, implements the data transmission method according to any one of claims 1-14.
17. A computer readable storage medium, wherein the computer readable storage medium stores a computer application program, which when executed, implements the data transmission method according to any one of claims 1-14.
HK42023073412.1A 2023-05-24 Data transmission method, apparatus, device, and medium HK40084958B (en)

Publications (2)

Publication Number Publication Date
HK40084958A true HK40084958A (en) 2023-07-28
HK40084958B HK40084958B (en) 2023-09-08
