CN118175110B - Data resource delivery method based on dynamic flow pool - Google Patents
- Publication number
- CN118175110B (application CN202410585798.4A)
- Authority
- CN
- China
- Prior art keywords
- flow
- space
- communication
- pipeline
- pool
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/24—Traffic characterised by specific attributes, e.g. priority or QoS
- H04L47/2441—relying on flow classification, e.g. using integrated services [IntServ]
- H04L47/2425—for supporting services specification, e.g. SLA
- H04L47/2433—Allocation of priorities to traffic types
- H04L47/70—Admission control; Resource allocation
- H04L47/76—using dynamic resource allocation, e.g. in-call renegotiation requested by the user or requested by the network in response to changing network conditions
- H04L47/80—Actions related to the user profile or the type of traffic
- H04L47/805—QOS or priority aware
Abstract
The invention is applicable to the technical field of the Internet and provides a data resource delivery method based on a dynamic flow pool, comprising the following steps. S1: pre-distribute the initial flow demands of a plurality of users into a plurality of communication pipelines. S2: newly create a shared pipeline, and import the recovered spare flow space, after collation, into the shared pipeline as standby space. S3: a flow prediction model predicts the flow demands at a plurality of future time points. S4: calculate the difference between the predicted flow demand and the initial flow at the plurality of time points, and find out abnormal flow. S5: preferentially introduce the standby space in the shared pipeline into the communication pipeline required by the abnormal flow with the higher priority. The invention solves the problems in the prior art that communication pipelines for delivering data resources adopt a static design and the flow space in a communication pipeline cannot be dynamically adjusted, so that the flow space stays underutilized for long periods, resources are wasted, and serious network paralysis can result.
Description
Technical Field
The invention belongs to the technical field of Internet, and particularly relates to a data resource delivery method based on a dynamic flow pool.
Background
The communication pipeline technology based on a dynamic flow pool is a new method that uses communication pipelines established on a multidimensional calculation model: the system pre-guides users' data resources or traffic into specific communication pipelines.
When a user is active on a network, application delivery generates traffic. In general, communication pipelines for traffic transmission adopt a static design: a fixed value is agreed for each communication pipeline, and the traffic generated by application delivery is led into it. If no traffic is generated during a certain period, or the traffic suddenly drops, the flow space in the pipeline sits idle; the communication pipeline is underutilized, yet its flow space cannot be allocated to other users, so the flow space is wasted. Conversely, if traffic surges during a certain period, the flow space of the communication pipeline cannot accommodate the large volume, so the affected users obtain little or no traffic, degrading the experience of the user group.
Disclosure of Invention
The invention aims to provide a data resource delivery method based on a dynamic flow pool, to solve the problems in the prior art that communication pipelines for data resource delivery adopt a static design and their flow space cannot be dynamically adjusted, so that the flow space stays underutilized for long periods, resources are wasted, network abnormalities arise, and serious network paralysis can result.
In order to solve the above problems, the present invention provides a data resource delivery method based on a dynamic traffic pool, the method comprising the following steps:
S1: pre-distributing the initial flow demands of a plurality of users into a plurality of communication pipelines, monitoring the idle flow space in each communication pipeline in real time by a dynamic flow pool, and recycling the idle flow space into the dynamic flow pool;
further, the step S1 specifically includes the following steps:
S11: creating a plurality of communication pipelines in the dynamic flow pool, and configuring preset flow for each communication pipeline;
S12: receiving initial flow demands of a plurality of users, respectively distributing the users to different communication pipelines, enabling the initial flow demands to be transmitted to a dynamic flow pool through the communication pipelines, and distributing initial flow to the users through the communication pipelines by the dynamic flow pool;
S13: monitoring the residual flow space in each communication pipeline by the dynamic flow pool, taking the portion of the residual flow space in excess of 10% of the initial flow as spare flow space, and recycling the spare flow space into the dynamic flow pool.
Specifically, the dynamic flow pool monitors the idle flow space in each communication pipeline in real time, where idle flow space = preset flow of the communication pipeline − (initial flow in the communication pipeline × 110%), the 10% margin being retained in the pipeline as a buffer.
S2: newly creating a shared pipeline, and importing the recovered spare flow space, after collation, into the shared pipeline as standby space;
Specifically, the dynamic flow pool detects the residual flow space in all current communication pipelines in real time, collects the available spare flow space, and guides it into the newly created shared pipeline. The shared pipeline carries a sharing attribute, and every communication pipeline can share the standby space in it.
S3: constructing a flow prediction model based on user behavior analysis, and predicting the flow demands of the users at a plurality of future time points by the flow prediction model;
further, the step S3 specifically includes the following steps:
S31: collecting behavior data of a user within a preset time period, and collecting the user's flow data in the next time period; specifically, the user's behavior data are normalized so that the data fall in the [0,1] interval, unifying the statistical distribution of the samples, as follows:
The preset time period is T = (t1, t2, t3, ..., tn) and the next time period is (t(1+n), t(2+n), t(3+n), ..., t(2n)), where t1, t2, t3, ..., tn are consecutive time points. The user behavior data obtained in the preset time period are X = (x1, x2, x3, ..., xn), where x1, x2, x3, ..., xn are the user behavior data at the respective time points of the preset period;
a linear transformation is applied to the original data set through a conversion function to obtain the normalization onto the [0,1] interval, giving the converted data X' = (x'1, x'2, x'3, ..., x'n). The transformation function is:
x'i = (xi − min(X)) / (max(X) − min(X)).
S32: constructing a flow prediction model based on user behavior analysis, and training the flow prediction model by taking the behavior data as an input object and the flow data in the next period as an output object to obtain a trained flow prediction model;
Further, the step S32 specifically includes the following steps:
S321: take the converted data X' = (x'1, ..., x'n) as the independent variables and the flow data Y = (y(1+n), y(2+n), ..., y(2n)) as the dependent variables, and construct a first prediction model:
Y = L(X') + E(X'),
where y(1+n), y(2+n), ..., y(2n) are the user's flow data at the respective time points of the next time period, L(·) is a linear model and E(·) is an error compensation model;
Specifically, the first prediction model comprises the linear model and the error compensation model. The user behaviors are analyzed hierarchically according to their characteristics, and the behavior levels affecting data flow are aggregated into a flow level, from which the linear model is established; the error compensation model is then established from the abnormal fluctuations of the user behavior, and the linear model and the error compensation model are summed to obtain the first prediction model;
S322: take the behavior data X' as the dependent variable and the other variables of the first prediction model as independent variables to establish a second prediction model. Specifically, according to the characteristics of the user behavior, the second prediction model can be established as a time-series model, a linear model, or a qualitative analysis model;
S323: decompose the independent variables in the second prediction model until every independent and dependent variable in the first and second prediction models can be interpreted, then substitute the second prediction model into the first prediction model to construct the complete flow prediction model;
Specifically, the purpose of decomposing the independent variables in the second prediction model is to analyze and evaluate the user behavior until the dependent and independent variables of the first and second prediction models can be interpreted, i.e., obtained directly in a natural way. The second prediction model is then substituted into the first prediction model to construct the complete flow prediction model, so that the flow demand of the next time period can be predicted from the user behavior.
S33: and acquiring the behavior data of the plurality of users in real time, and predicting the flow demand of the users in a future time period by utilizing the flow prediction model.
S4: calculating the differences between the predicted flow demand and the initial flow at a plurality of time points, and marking the flow demand at a time point as abnormal flow if the difference at that time point is larger than a preset value;
Further, in step S4, the differences between the predicted flow demand and the initial flow are obtained at a plurality of time points; if the difference at a certain time point is greater than 10% of the initial flow, the flow demand at that time point is marked as abnormal flow.
S5: and classifying the priority of the traffic demand, and preferentially guiding the standby space in the shared pipeline to a communication pipeline required by the abnormal traffic with higher priority.
Further, the step S5 specifically includes the following steps:
S51: classify the flow demands by priority to obtain class A, class B, and class C flow demands;
S52: first judge whether the class A flow demand has an abnormal flow A at a future time point;
S53: if so, introduce standby space from the shared pipeline into the communication pipeline A carrying the abnormal flow A, so that the total flow space in communication pipeline A after the introduction is 110% of the abnormal flow A;
S54: next judge whether the class B flow demand has an abnormal flow B at a future time point;
S55: if so, then introduce standby space from the shared pipeline into the communication pipeline B carrying the abnormal flow B, so that the total flow space in communication pipeline B after the introduction is 110% of the abnormal flow B;
S56: finally judge whether the class C flow demand has an abnormal flow C at a future time point;
S57: if so, introduce the standby space in the shared pipeline into the communication pipeline C carrying the abnormal flow C, so that the total flow space in communication pipeline C after the introduction is 110% of the abnormal flow C.
Further, the step S53 specifically includes:
if the standby space in the shared pipeline is insufficient to enable the total flow space in the communication pipeline A to be 110% of the abnormal flow A, preferentially introducing the flow space in the communication pipeline C into the communication pipeline A;
If, after the flow space in communication pipeline C has been introduced into communication pipeline A, the standby space in the shared pipeline is still insufficient to make the total flow space in communication pipeline A 110% of the abnormal flow A, the flow space in communication pipeline B is then introduced into communication pipeline A.
Further, the step S55 specifically includes:
if the spare space in the shared pipe is insufficient to make the total flow space in the communication pipe B be 110% of the abnormal flow B, the flow space in the communication pipe C is led into the communication pipe B.
Further, the step S5 further includes:
S58: monitoring the spare flow space in each communication pipeline in real time by a dynamic flow pool;
S59: recycle the idle flow spaces in communication pipeline C, communication pipeline B, and communication pipeline A into the dynamic flow pool in that order; the dynamic flow pool then collates the recycled idle flow space and guides it into the shared pipeline.
In summary, the invention has the following beneficial effects:
The invention provides a complete resource delivery method: distributing initial flow, recovering spare flow space, predicting future flow demand, classifying the predicted flow demands by priority, and analyzing abnormal flow demands; finally, with priority taken into account, enough flow space is allocated to the communication pipelines through which the predicted flows pass, guaranteeing transmission quality.
The beneficial effects achieved by the whole technical scheme include: 1. compared with similar static communication pipelines, the invention can fully utilize the resources of every pipeline and transfer flow data across multiple dimensions, improving the utilization rate of the communication pipelines by more than 145%; 2. the invention distributes flow space reasonably according to flow priority, which not only relieves network congestion but also guarantees transmission for important flow demands.
Drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; other drawings can be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a flowchart of a data resource delivery method provided by an embodiment of the present invention;
FIG. 2 is a flowchart of the step of S5 provided by an embodiment of the present invention;
FIG. 3 is a schematic diagram of a communication pipeline provided by an embodiment of the present invention;
Fig. 4 is a schematic diagram of a shared pipeline according to an embodiment of the present invention.
Detailed Description
Embodiments of the present invention are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative and intended to explain the present invention and should not be construed as limiting the invention.
In the description of the present invention, it will be understood that when an element is referred to as being "fixed" or "disposed" on another element, it can be directly on the other element or be indirectly on the other element. When an element is referred to as being "connected to" another element, it can be directly connected to the other element or be indirectly connected to the other element.
It is to be understood that the terms "length," "width," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," and the like are merely for convenience in describing and simplifying the description based on the orientation or positional relationship shown in the drawings, and do not indicate or imply that the devices or elements referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus are not to be construed as limiting the invention.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. In the description of the present invention, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
The specific meaning of the above terms in the present invention can be understood by those of ordinary skill in the art according to the specific circumstances.
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent.
Referring to fig. 1, the present invention provides a data resource delivery method based on a dynamic flow pool, which includes the following steps:
S1: pre-distributing the initial flow demands of a plurality of users into a plurality of communication pipelines, monitoring the idle flow space in each communication pipeline in real time by a dynamic flow pool, and recycling the idle flow space into the dynamic flow pool;
further, the step S1 specifically includes the following steps:
S11: creating a plurality of communication pipelines in the dynamic flow pool, and configuring preset flow for each communication pipeline;
S12: receiving initial flow demands of a plurality of users, respectively distributing the users to different communication pipelines, enabling the initial flow demands to be transmitted to a dynamic flow pool through the communication pipelines, and distributing initial flow to the users through the communication pipelines by the dynamic flow pool;
S13: monitoring the residual flow space in each communication pipeline by the dynamic flow pool, taking the portion of the residual flow space in excess of 10% of the initial flow as spare flow space, and recycling the spare flow space into the dynamic flow pool.
Specifically, the dynamic flow pool monitors the idle flow space in each communication pipeline in real time, where idle flow space = preset flow of the communication pipeline − (initial flow in the communication pipeline × 110%);
referring to fig. 3, four communication pipelines are denoted Pipe1, Pipe2, Pipe3, and Pipe4, each preset with a flow capacity; user A, user B, user C, and user D are allocated to Pipe1, Pipe2, Pipe3, and Pipe4, respectively.
Let the preset flow of Pipe1, Pipe2, Pipe3, and Pipe4 each be 200M, and let the initial flow requests of user A, user B, user C, and user D be 150M, 100M, 50M, and 80M respectively; the shaded portion in the figure is the used flow space.
It follows that the residual flow spaces of Pipe1, Pipe2, Pipe3, and Pipe4 are 50M, 100M, 150M, and 120M respectively,
and the idle flow spaces of Pipe1, Pipe2, Pipe3, and Pipe4 are 35M, 90M, 145M, and 112M respectively.
Referring to fig. 4, S2: newly create a shared pipeline S, and import the recovered spare flow space, after collation, into the shared pipeline S as standby space;
The spare flow spaces of Pipe1, Pipe2, Pipe3, and Pipe4 are recovered into the shared pipeline S, so the standby space in the shared pipeline S is 382M. After the spare flow spaces are recovered, the pipe diameter of each pipeline shrinks, but every communication pipeline retains a 10% margin to absorb slight fluctuations of the traffic flowing through it.
Specifically, the dynamic flow pool detects the residual flow space in all current communication pipelines in real time; when the flow in a communication pipeline decreases, the dynamic flow pool collects the available spare flow space in real time and guides it into the newly built shared pipeline S. The shared pipeline S carries a sharing attribute, and each communication pipeline can share the standby space in it.
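The recovery rule of S1 and S2 can be sketched in a few lines. This is an illustrative model with hypothetical names, using the embodiment's figures (200M preset per pipe; 150M/100M/50M/80M initial requests); each pipe keeps a 10% buffer of its initial flow, and the rest is pooled into shared pipeline S.

```python
# Idle flow space = preset flow - initial flow * 110% (integer MB arithmetic);
# the 10% margin stays inside the pipe as a fluctuation buffer.
def idle_space(preset_mb, initial_mb):
    return preset_mb - initial_mb * 110 // 100

pipes = {  # name: (preset flow, initial flow), values from the embodiment
    "Pipe1": (200, 150),
    "Pipe2": (200, 100),
    "Pipe3": (200, 50),
    "Pipe4": (200, 80),
}

# Recover each pipe's idle space and collate it into shared pipeline S.
recovered = {name: idle_space(p, i) for name, (p, i) in pipes.items()}
shared_pipe_s = sum(recovered.values())

print(recovered)      # {'Pipe1': 35, 'Pipe2': 90, 'Pipe3': 145, 'Pipe4': 112}
print(shared_pipe_s)  # 382
```

The printed values reproduce the 35M/90M/145M/112M idle spaces and the 382M standby space of the worked example above.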
S3: constructing a flow prediction model based on user behavior analysis, and predicting the flow demands of the users at a plurality of future time points by the flow prediction model;
further, the step S3 specifically includes the following steps:
S31: collecting behavior data of a user within a preset time period, and collecting the user's flow data in the next time period; specifically, the user's behavior data are normalized so that the data fall in the [0,1] interval, unifying the statistical distribution of the samples, as follows:
The preset time period is T = (t1, t2, t3, ..., tn) and the next time period is (t(1+n), t(2+n), t(3+n), ..., t(2n)), where t1, t2, t3, ..., tn are consecutive time points. The user behavior data obtained in the preset time period are X = (x1, x2, x3, ..., xn), where x1, x2, x3, ..., xn are the user behavior data at the respective time points of the preset period. The time period is set according to actual requirements, for example: set the period to 10 min, i.e., every 10 min is one time period; divide each period into N time points, and collect the user behavior data of the Nth period and the flow data of the (N+1)th period;
a linear transformation is applied to the original data set through a conversion function to obtain the normalization onto the [0,1] interval, giving the converted data X' = (x'1, x'2, x'3, ..., x'n). The transformation function is:
x'i = (xi − min(X)) / (max(X) − min(X)); (1)
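A minimal sketch of the S31 normalization (min-max scaling onto [0,1]); the sample counts are hypothetical per-time-point behavior values, not data from the patent:

```python
# Min-max normalization: x'_i = (x_i - min X) / (max X - min X).
def min_max_normalize(samples):
    lo, hi = min(samples), max(samples)
    if hi == lo:                      # degenerate window: all samples equal
        return [0.0 for _ in samples]
    return [(x - lo) / (hi - lo) for x in samples]

behaviour = [12, 30, 48, 21, 30]      # hypothetical behavior counts per time point
print(min_max_normalize(behaviour))   # [0.0, 0.5, 1.0, 0.25, 0.5]
```

Normalizing each user's window separately keeps the statistical distribution of the samples unified, as the step requires, regardless of each user's absolute activity level.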
S32: constructing a flow prediction model based on user behavior analysis, and training the flow prediction model by taking the behavior data as an input object and the flow data in the next period as an output object to obtain a trained flow prediction model;
Further, the step S32 specifically includes the following steps:
S321: take the converted data X' = (x'1, ..., x'n) as the independent variables and the flow data Y = (y(1+n), y(2+n), ..., y(2n)) as the dependent variables, and construct a first prediction model:
Y = L(X') + E(X'),
where L(·) is a linear model and E(·) is an error compensation model;
Specifically, the first prediction model comprises the linear model and the error compensation model. The user behaviors are analyzed hierarchically according to their characteristics, and the behavior levels affecting data flow are aggregated into a flow level, from which the linear model is established; the error compensation model is then established from the abnormal fluctuations of the user behavior, and the linear model and the error compensation model are summed to obtain the first prediction model;
S322: take the behavior data X' as the dependent variable and the other variables of the first prediction model as independent variables to establish a second prediction model. Specifically, according to the characteristics of the user behavior, the second prediction model can be established as a time-series model, a linear model, or a qualitative analysis model;
Specifically:
judge whether the user behavior is periodic; if so, establish a time-series model for the behavior data X' with time as the independent variable;
if there is no periodicity, judge whether a linear relation exists between the user behavior and the other independent variables in the first prediction model; if so, establish a linear model;
if no linear relation exists, perform qualitative analysis on the user behavior, and establish a qualitative analysis model when a qualitative relation exists.
S323: decompose the independent variables in the second prediction model until every independent and dependent variable in the first and second prediction models can be interpreted, then substitute the second prediction model into the first prediction model to construct the complete flow prediction model;
Specifically, the purpose of decomposing the independent variables in the second prediction model is to analyze and evaluate the user behavior until the dependent and independent variables of the first and second prediction models can be interpreted, i.e., obtained directly in a natural way. The second prediction model is then substituted into the first prediction model to construct the complete flow prediction model, so that the flow demand of the next time period can be predicted from the user behavior.
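The patent fixes no concrete model forms, so the following is only a toy sketch of S321 under stated assumptions: the linear part L is a one-variable least-squares fit of next-period traffic on normalized behavior, and the error compensation E regresses the residuals on the behavior's absolute deviation from its mean, a simple stand-in for "abnormal fluctuation of the user behavior". All data values are invented for illustration.

```python
# Fit y = alpha + beta * x by ordinary least squares (single regressor).
def fit_linear(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    beta = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
    alpha = my - beta * mx
    return lambda x: alpha + beta * x

# First prediction model: Y = L(X') + E(X'), linear model plus
# an error-compensation layer fitted to the residuals.
def fit_model(xs, ys):
    linear = fit_linear(xs, ys)
    residuals = [y - linear(x) for x, y in zip(xs, ys)]
    mx = sum(xs) / len(xs)
    deviations = [abs(x - mx) for x in xs]      # "abnormal fluctuation" proxy
    compensate = fit_linear(deviations, residuals)
    return lambda x: linear(x) + compensate(abs(x - mx))

xs = [0.0, 0.25, 0.5, 0.75, 1.0]   # normalized behavior data (hypothetical)
ys = [40, 55, 80, 95, 130]         # observed next-period traffic in MB (hypothetical)
model = fit_model(xs, ys)
```

A real deployment would replace both layers with whichever time-series, linear, or qualitative model S322 selects; the point here is only the additive structure L + E.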
S33: and acquiring the behavior data of the plurality of users in real time, and predicting the flow demand of the users in a future time period by utilizing the flow prediction model.
Specifically, the behavior data of a plurality of users are obtained in real time as they occur, and each is normalized to obtain normalized data;
after the normalized data are input into the trained flow prediction model, it outputs the flow demands of the users at a plurality of time points in the next time period, i.e., the flow demands at the N time points of the (N+1)th time period are judged from the users' behavior data in the Nth time period.
S4: calculating the differences between the predicted flow demand and the initial flow at a plurality of time points, and marking the flow demand at a time point as abnormal flow if the difference at that time point is larger than a preset value;
Further, in step S4, the differences between the predicted flow demand and the initial flow are obtained at a plurality of time points; if the difference at a certain time point is greater than 10% of the initial flow, the flow demand at that time point is marked as abnormal flow.
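The S4 test reduces to a one-line threshold check. A sketch with hypothetical names and invented forecast values, using the 10% criterion from the step above:

```python
# Mark a predicted demand as abnormal when it exceeds the user's initial
# flow by strictly more than 10% of that initial flow.
def abnormal_points(initial_mb, predicted_mb, threshold=0.10):
    return [i for i, p in enumerate(predicted_mb)
            if p - initial_mb > initial_mb * threshold]

predicted = [150, 160, 170, 149, 300]   # forecast per future time point (MB)
print(abnormal_points(150, predicted))  # [2, 4]: 170 and 300 exceed 165
```

With an initial flow of 150M the cut-off is 165M, so 160M (a slight fluctuation, absorbed by the pipe's own 10% buffer) is not flagged, while 170M and 300M are.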
S5: classify the traffic demands by priority, and preferentially introduce the standby space in the shared pipeline S into the communication pipeline required by the abnormal traffic with the higher priority; traffic priority can be classified according to source IP address, destination IP address, source port number, destination port number, protocol ID, and the like.
Referring to fig. 2, further, the step S5 specifically includes the following steps:
S51: classify the flow demands by priority to obtain class A, class B, and class C flow demands; set the flow demand of user 1 as class A, the flow demand of user 2 as class B, and the flow demands of user 3 and user 4 as class C;
S52: preferentially judging whether the class A flow demand has abnormal flow A at a future time point;
S53: if so, introducing a standby space in the shared pipeline S into the communication pipeline A of the abnormal flow A, and enabling the total flow space in the communication pipeline A after the standby space is introduced to be 110% of the abnormal flow A;
Since the abnormal flow of user 1 at the future time point is 300M and the flow space remaining in Pipe1 is 165M, the shared pipeline S needs to introduce another 165M of flow space into Pipe1, so that the flow space in Pipe1 becomes 330M and the standby space in the shared pipeline S falls to 217M.
S54: judging whether the B-level flow demand has abnormal flow B at a future time point or not;
S55: if so, then introduce standby space from the shared pipeline S into the communication pipeline B carrying the abnormal flow B, so that the total flow space in communication pipeline B after the introduction is 110% of the abnormal flow B.
S56: finally judging whether the C-level flow demand has abnormal flow C at a future time point or not;
S57: finally, introduce the standby space in the shared pipeline S into the communication pipeline C carrying the abnormal flow C, so that the total flow space in communication pipeline C after the introduction is 110% of the abnormal flow C.
Further, the step S53 specifically includes:
If the spare space in the shared pipeline S is insufficient to enable the total flow space in the communication pipeline A to be 110% of the abnormal flow A, preferentially introducing the flow space in the communication pipeline C into the communication pipeline A;
Assuming that the abnormal flow of user 1 at the future time point is 600M and the final remaining flow space in Pipe1 is 165M, the shared pipeline S needs to introduce a further 495M of flow space into Pipe1 so that the flow space in Pipe1 becomes 660M (110% of 600M); if the spare space in the shared pipeline S is less than 495M, flow space is borrowed from the communication pipeline C.
If, after the flow space in the communication pipeline C has been introduced into the communication pipeline A, the spare space in the shared pipeline S is still insufficient to make the total flow space in the communication pipeline A reach 110% of the abnormal flow A, the flow space in the communication pipeline B is then introduced into the communication pipeline A.
In this embodiment, normal passage of the class A flow demand is ensured first, after which the class B and class C flow demands are considered in turn.
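The fallback chain in S53 drains three sources in a fixed order: the shared pipeline S first, then pipeline C, then pipeline B. A minimal sketch, with illustrative names and the available amounts passed in as assumed parameters:

```python
def cover_class_a(shortfall: int, shared_spare: int, pipe_c_free: int, pipe_b_free: int):
    """Cover a class-A shortfall by draining sources in order S -> C -> B.
    Returns (MB lent by each source actually used, uncovered remainder)."""
    lent = {}
    for name, avail in (("S", shared_spare), ("C", pipe_c_free), ("B", pipe_b_free)):
        take = min(shortfall, avail)   # take no more than this source holds
        lent[name] = take
        shortfall -= take
        if shortfall == 0:
            break                      # fully covered, stop borrowing
    return lent, shortfall
```

For the 600M example, a 495M shortfall against an assumed 382M of spare in S would pull the remaining 113M from pipeline C, leaving pipeline B untouched.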
Further, the step S55 specifically includes:
If the spare space in the shared pipeline S is insufficient to make the total flow space in the communication pipeline B reach 110% of the abnormal flow B, the flow space in the communication pipeline C is introduced into the communication pipeline B.
Further, the step S5 further includes:
S58: monitoring the spare flow space in each communication pipeline in real time by a dynamic flow pool;
S59: and (3) sequentially recycling the idle flow spaces in the communication pipeline C, the communication pipeline B and the communication pipeline A into a dynamic flow pool according to the sequence, and then guiding the recycled idle flow spaces into the shared pipeline S after finishing by the dynamic flow pool.
In this embodiment, after the traffic demands are classified, the idle flow space is recovered in rank order, so that the communication pipeline A retains its idle flow space for a certain time to guard against emergencies, while any idle flow space appearing in the communication pipeline C is recovered as quickly as possible for higher-priority traffic demands.
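The recycle order in S59 can be made explicit with a short sketch (illustrative names; the pool is modeled as a plain dict of idle MB per pipe). The point of the ordering is visible in the returned list: C is drained first, A last.

```python
def recycle_order(idle_by_pipe: dict):
    """Return (pipe, idle MB) pairs in the order the dynamic flow pool
    reclaims them: C first, then B, then A, so the high-priority
    pipeline A keeps its cushion longest. Pipes with no idle space
    are skipped."""
    return [(p, idle_by_pipe[p])
            for p in ("C", "B", "A")           # lowest priority released first
            if idle_by_pipe.get(p, 0) > 0]
```

For example, if only pipelines A and C currently hold idle space, C's space is handed back to the shared pipeline S ahead of A's.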
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, and alternatives falling within the spirit and principles of the invention.
Claims (7)
1. A method for delivering data resources based on a dynamic traffic pool, the method comprising the steps of:
s1: pre-distributing the initial flow demands of a plurality of users into a plurality of communication pipelines, monitoring the idle flow space in each communication pipeline in real time by a dynamic flow pool, and recycling the idle flow space into the dynamic flow pool;
S2: newly creating a shared pipeline, and importing the recovered spare flow space into the shared pipeline after finishing to serve as a standby space;
S3: constructing a flow prediction model based on user behavior analysis, predicting flow demands of the users at a plurality of future time points by the flow prediction model, wherein the flow demands comprise,
S31: collecting behavior data of a user in a preset time period, and collecting flow data about the user in the next time period;
S321: taking the user data as an independent variable, and taking the flow data as the independent variable to construct a first prediction model;
s322: taking the user data as a dependent variable, taking other variables of the first prediction model as independent variables, and establishing a second prediction model;
S323: decomposing the independent variable in the second prediction model until the independent variable and the dependent variable in the first prediction model and the second prediction model can be interpreted, and bringing the second prediction model into the first prediction model to construct a complete network prediction model;
S33: acquiring behavior data of a plurality of users in real time, and predicting the flow demand of the users in a future time period by utilizing the flow prediction model;
S4: calculating the difference between the predicted flow demand and the initial flow at a plurality of time points, and marking the flow demand at a certain time point as abnormal flow if the difference between the time point and the initial flow is larger than a preset value;
S5: and classifying the priority of the traffic demand, and preferentially guiding the standby space in the shared pipeline to a communication pipeline required by the abnormal traffic with higher priority.
2. The method for delivering data resources based on dynamic traffic pool as recited in claim 1, wherein said S1 specifically comprises the steps of:
S11: creating a plurality of communication pipelines in the dynamic flow pool, and configuring preset flow for each communication pipeline;
S12: receiving initial flow demands of a plurality of users, respectively distributing the users to different communication pipelines, enabling the initial flow demands to be transmitted to a dynamic flow pool through the communication pipelines, and distributing initial flow to the users through the communication pipelines by the dynamic flow pool;
S13: and monitoring the residual flow space in each communication pipeline by the dynamic flow pool, taking the residual flow space which is more than 10% of the initial flow as a spare flow space, and recycling the spare flow space into the dynamic flow pool.
3. The method for delivering data resources based on dynamic traffic pool as claimed in claim 1, wherein in step S4, the difference between the predicted flow demand and the initial flow is obtained at a plurality of time points, and if the difference at a certain time point is greater than 10% of the initial flow, the flow demand at that time point is marked as abnormal flow.
4. The method for delivering data resources based on dynamic traffic pool as recited in claim 1, wherein said step S5 specifically comprises the steps of:
s51: carrying out priority classification according to the flow requirements to obtain A, B, C-level flow requirements;
s52: preferentially judging whether the class A flow demand has abnormal flow A at a future time point;
S53: if so, introducing a standby space in the shared pipeline into the communication pipeline A of the abnormal flow A, and enabling the total flow space in the communication pipeline A after the standby space is introduced to be 110% of the abnormal flow A;
s54: judging whether the B-level flow demand has abnormal flow B at a future time point or not;
s55: if yes, introducing the spare space in the shared pipeline into the communication pipeline B of the abnormal flow B, and enabling the total flow space in the communication pipeline B after the introduction to be 110% of the abnormal flow B;
s56: finally judging whether the C-level flow demand has abnormal flow C at a future time point or not;
S57: and finally, introducing the standby space in the shared pipeline into the communication pipeline C of the abnormal flow C, wherein the total flow space in the communication pipeline C after the introduction is 110% of the abnormal flow C.
5. The data resource delivery method based on the dynamic traffic pool as claimed in claim 4, wherein said S53 specifically comprises:
if the standby space in the shared pipeline is insufficient to enable the total flow space in the communication pipeline A to be 110% of the abnormal flow A, preferentially introducing the flow space in the communication pipeline C into the communication pipeline A;
If, after the flow space in the communication pipeline C has been introduced into the communication pipeline A, the spare space in the shared pipeline is still insufficient to make the total flow space in the communication pipeline A reach 110% of the abnormal flow A, the flow space in the communication pipeline B is then introduced into the communication pipeline A.
6. The method for delivering data resources based on dynamic traffic pool as recited in claim 4, wherein said S55 specifically comprises:
If the spare space in the shared pipeline is insufficient to make the total flow space in the communication pipeline B reach 110% of the abnormal flow B, the flow space in the communication pipeline C is introduced into the communication pipeline B.
7. The method for delivering data resources based on dynamic traffic pool as recited in claim 4, wherein said S5 further comprises:
S58: monitoring the spare flow space in each communication pipeline in real time by a dynamic flow pool;
s59: and (3) sequentially recycling the idle flow spaces in the communication pipeline C, the communication pipeline B and the communication pipeline A into a dynamic flow pool according to the sequence, and then guiding the recycled idle flow spaces into the shared pipeline after finishing by the dynamic flow pool.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410585798.4A CN118175110B (en) | 2024-05-13 | 2024-05-13 | Data resource delivery method based on dynamic flow pool |
Publications (2)
Publication Number | Publication Date |
---|---|
CN118175110A CN118175110A (en) | 2024-06-11 |
CN118175110B true CN118175110B (en) | 2024-07-09 |
Family
ID=91356948
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410585798.4A Active CN118175110B (en) | 2024-05-13 | 2024-05-13 | Data resource delivery method based on dynamic flow pool |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118175110B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101448321A (en) * | 2008-10-28 | 2009-06-03 | 北京邮电大学 | Method for sharing frequency spectrum resource of isomerism wireless network and device thereof |
CN105897484A (en) * | 2016-06-01 | 2016-08-24 | Nubia Technology Co., Ltd. | Traffic management device, server and method |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2001090957A1 (en) * | 2000-05-19 | 2001-11-29 | Channelogics, Inc. | Allocating access across shared communications medium |
US9451473B2 (en) * | 2014-04-08 | 2016-09-20 | Cellco Partnership | Analyzing and forecasting network traffic |
CN111565323B (en) * | 2020-03-23 | 2022-11-08 | 视联动力信息技术股份有限公司 | Flow control method and device, electronic equipment and storage medium |
CN117527717A (en) * | 2023-11-01 | 2024-02-06 | 中国建设银行股份有限公司 | Bandwidth resource allocation method and device |
Also Published As
Publication number | Publication date |
---|---|
CN118175110A (en) | 2024-06-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109710405B (en) | Block chain intelligent contract management method and device, electronic equipment and storage medium | |
CN118590430A (en) | A network integrated dynamic resource routing management system and method | |
CN113886454B (en) | LSTM-RBF-based cloud resource prediction method | |
CN119402440B (en) | A data analysis method, system, device and medium based on distributed communication | |
CN117539619A (en) | Computing power scheduling method, system, equipment and storage medium based on cloud edge fusion | |
CN114172820A (en) | Cross-domain SFC dynamic deployment method, device, computer equipment and storage medium | |
CN116385207B (en) | Internet of things trust analysis method and related device facing offshore wind power monitoring | |
Kosenko et al. | Methods of managing traffic distribution in information and communication networks of critical infrastructure systems | |
Deng | A hybrid network congestion prediction method integrating association rules and LSTM for enhanced spatiotemporal forecasting | |
CN117707797A (en) | Task scheduling method, device and related equipment based on distributed cloud platform | |
CN118735222A (en) | Complex task scheduling decision method and system for power scenarios based on semantic big model | |
CN117493020A (en) | A method to implement computing resource scheduling for data grid | |
CN118175110B (en) | Data resource delivery method based on dynamic flow pool | |
Chen et al. | An efficient collaborative task offloading approach based on multi-objective algorithm in MEC-assisted vehicular networks | |
Al-Rubaie et al. | Simulating fog computing in OMNeT++ | |
CN116109058A (en) | Substation inspection management method and device based on deep reinforcement learning | |
Chen et al. | Geo-distributed IoT data analytics with deadline constraints across network edge | |
CN116566696B (en) | Security assessment system and method based on cloud computing | |
Yun et al. | Intelligent Traffic Scheduling for Mobile Edge Computing in IoT via Deep Learning. | |
CN112347371B (en) | Resource return increase ratio method, device and electronic device based on social text information | |
CN117076106A (en) | Elastic telescoping method and system for cloud server resource management | |
CN109688068A (en) | Network load balancing method and device based on big data analysis | |
CN107734000A (en) | Storage and calculating Integrated optimization system towards the value-orientation of typing resource | |
CN114401195A (en) | Server capacity adjustment method and device, storage medium and electronic device | |
CN112637359A (en) | Terminal resource scheduling method based on edge calculation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||