
CN117914773A - A secure and low-latency two-layer network resource allocation method based on SDN/NFV network slicing technology - Google Patents


Info

Publication number
CN117914773A
CN117914773A
Authority
CN
China
Prior art keywords
delay
nis
sdn
resource allocation
nfv
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311872057.6A
Other languages
Chinese (zh)
Inventor
陈家璘
周正
陈琪美
金波
周德坤
查志勇
王蔚然
余铮
高飞
孟浩华
郑蕾
龙霏
徐焕
夏凡
赵靑尧
魏晓燕
梅子薇
王红卫
曾铮
王逸兮
李磊
王晟玮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Information and Telecommunication Branch of State Grid Hubei Electric Power Co Ltd
Beijing Zhongdian Feihua Communication Co Ltd
Original Assignee
Wuhan University WHU
Information and Telecommunication Branch of State Grid Hubei Electric Power Co Ltd
Beijing Zhongdian Feihua Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU, Information and Telecommunication Branch of State Grid Hubei Electric Power Co Ltd, Beijing Zhongdian Feihua Communication Co Ltd filed Critical Wuhan University WHU
Priority to CN202311872057.6A priority Critical patent/CN117914773A/en
Publication of CN117914773A publication Critical patent/CN117914773A/en
Pending legal-status Critical Current


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00Routing or path finding of packets in data switching networks
    • H04L45/12Shortest path evaluation
    • H04L45/121Shortest path evaluation by minimising delays
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/12Discovery or management of network topologies
    • H04L41/122Discovery or management of network topologies of virtualised topologies, e.g. software-defined networks [SDN] or network function virtualisation [NFV]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/14Network analysis or design
    • H04L41/147Network analysis or design for predicting network behaviour
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/16Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention provides a secure and low-latency dual-layer network resource allocation method based on SDN/NFV network slicing technology. First, a dual-layer network resource allocation architecture based on SDN/NFV network slicing technology is built so as to make full use of network resources. Second, an SDN/NFV-based power grid infrastructure environment is deployed; end-to-end flow delay, service processing delay, and packet delivery time estimates are defined; and a constrained short path is imposed to establish an objective function that minimizes delay. Finally, delay over the network topology is minimized in two ways: NIS middlebox placement based on optimized k-means graph cluster analysis, and dynamic resource allocation that predicts NIS usage with regression tree analysis. Compared with existing smart grid technology, the secure and low-latency dual-layer network resource allocation method based on SDN/NFV network slicing technology improves the network security level of the power network and reduces delay, providing a more intelligent network resource allocation scheme for the future power interconnection network.

Description

Secure and low-latency dual-layer network resource allocation method based on SDN/NFV network slicing technology
Technical Field
The invention belongs to the technical field of the electric power Internet of Things, and particularly relates to a secure and low-latency dual-layer network resource allocation method based on SDN/NFV network slicing technology.
Background
Future 5G networks will pose considerable challenges for a power Internet of Things that requires advanced services such as real-time high-quality services and intelligent metering. Specifically, this involves not only a substantial increase in overall data rates, but also lower end-to-end delay, higher energy efficiency, and massive connectivity. Mobile broadband communication is the driving force behind 5G network traffic growth. The grid will therefore need to invest in new solutions or enhance its infrastructure through radio access technologies, fronthaul, backhaul, and so on. Typically, as traffic increases, the grid deploys new dedicated network devices for the control plane and the data plane. These deployments are often over-provisioned relative to the loads anticipated during peak hours three to five years in the future, so some network resources are wasted, resulting in inefficiency in both capital and operational expenditure. To make better use of the available resources, a more flexible, agile, and economical solution has emerged, relying on the features provided by Network Function Virtualization (NFV) and Software-Defined Networking (SDN).
NFV deploys network functions as Virtual Network Function (VNF) software instances, which provides important benefits to the grid in terms of cost, flexibility, openness, configuration homogeneity, migration, and so on. NFV is applicable to any data-plane packet processing and control-plane function in fixed and mobile network infrastructure. NFV is currently addressing the mobile core virtualization use case and has recently received great attention.
On the other hand, SDN can handle logically centralized control, implementing network programmability by decoupling the data and control planes. SDN provides a logical plane abstraction that hides the hardware specific to each department of the power grid, thereby facilitating interoperability among grid departments. This abstraction enables network virtualization, i.e., partitioning (slicing) the physical infrastructure to create multiple coexisting and independent network tenants on top of it.
In recent years, smart grids have supported and endorsed research into embedded artificial intelligence, SDN, and Network Function Virtualization (NFV). In the network security context, with the service policy as input, a set of artificial-intelligence-driven Network Intelligent Services (NIS) can be applied virtually in NFV-based middleboxes overlaid on an SDN architecture, forming an SDN/NFV-based network infrastructure. However, using this approach to protect application data flows may significantly increase the end-to-end flow delay. Given defined reliability requirements, an application data flow that is delivered successfully but with a delay higher than the defined requirement can still be considered a failure.
A balance between the required network security and low-latency communication is considered essential for many applications in the power grid. The inventors have found that in practice the end-to-end flow delay always depends on the position of the middleboxes in the network. However, studies have shown that this placement problem is a non-deterministic polynomial-time (NP) problem, a complex and time-consuming decision problem. Thus, a trade-off optimization method is needed to implement a heuristic solution that provides delay-optimized NIS in an SDN/NFV-based grid infrastructure.
Disclosure of Invention
In order to solve the above technical problems, the invention provides a secure and low-latency dual-layer network resource allocation method based on SDN/NFV network slicing technology, so as to improve the network security level of the power network and reduce delay.
The invention provides a secure and low-latency dual-layer network resource allocation method based on SDN/NFV network slicing technology. First, a dual-layer network resource allocation architecture based on SDN/NFV network slicing technology is built so as to make full use of network resources. Second, an SDN/NFV-based power grid infrastructure environment is deployed; end-to-end flow delay, service processing delay, and packet delivery time estimates are defined; and a constrained short path is imposed to establish an objective function that minimizes delay. Finally, delay over the network topology is minimized in two ways: NIS middlebox placement based on optimized k-means graph cluster analysis, and dynamic resource allocation that predicts NIS usage with regression tree analysis. Compared with existing smart grid technology, the secure and low-latency dual-layer network resource allocation method based on SDN/NFV network slicing technology improves the network security level of the power network and reduces delay, providing a more intelligent network resource allocation architecture for the future power interconnection network.
The technical scheme adopted by the invention is as follows: a secure and low-latency dual-layer network resource allocation method based on SDN/NFV network slicing technology, comprising the following steps:
Step 1: taking a service strategy as an input, constructing a double-layer network resource allocation framework based on SDN/NFV network slicing technology;
Step 2: according to the double-layer network resource allocation architecture, deploying an SDN/NFV-based power grid infrastructure environment, defining end-to-end flow delay, service processing delay and data packet delivery time estimation, restricting a short path, and establishing an objective function for minimizing delay;
Step 3: solving the objective function with a K-means clustering algorithm to obtain the optimal middlebox placement scheme, and proposing a dynamic resource allocation method that predicts NIS (Network Intelligent Services) usage based on regression tree analysis.
In the dual-layer network resource allocation architecture, the first layer comprises the power center cloud nodes and the SDN/NFV plane, and the second layer comprises the power network terminals.
The step 2 specifically comprises the following steps:
step 2.1: deploying an SDN/NFV-based power grid infrastructure environment:
The SDN/NFV-based power grid infrastructure is represented as a directed graph $G=(\mathcal{V},\varepsilon)$, where $\mathcal{V}=\{v_1,v_2,\ldots,v_{N_v}\}$ is the set of nodes and $\varepsilon=\{(v_i,v_j)\}$ is the set of links, with $i,j\in\{1,2,\ldots,N_v\}$ indexing node pairs and $N_v$ the total number of nodes. An SDN switch $s\in\mathcal{V}$ can store at most $P_s$ rules, so the number of rules currently stored in the switch flow table is denoted $p_s\le P_s$. The set of all NIS is denoted $\mathcal{N}$, and the NFV-based middleboxes providing these NIS are denoted $\mathcal{B}$. There may be $N_q$ middleboxes available in the network, where $N_q$ is the number of NIS middleboxes and $N_q\in\mathbb{Q}$, the set of rational numbers. Each middlebox has a maximum processing capacity $O_b$ for executing a set of NIS.
An application data flow $f$ can be described as a tuple $f=(source_n, dest_n, NIS_n, o_n, t_n)$, where $source_n$ and $dest_n$ are the source and destination nodes, respectively; $NIS_n$ is the set of NIS that the flow's network traffic must traverse from source to destination; $o_n$ is the middlebox processing capacity occupied by the NIS; and $t_n$ is the daily clock period of the flow's NIS request. From the application data flows, the set $F$ of flows that require NIS from the middleboxes is generated.
Step 2.2: defining end-to-end flow delay, traffic processing delay and packet delivery time estimates, and constraining short paths:

The end-to-end flow delay is defined as:

$$D_f = d_{if,bf} + d_{bf,ef}$$

where $d_{if,bf}$ is the aggregate delay from the source ingress switch to the corresponding NIS middlebox, and $d_{bf,ef}$ is the aggregate delay from that NIS middlebox to the destination egress switch. The aggregate delay depends on the traffic processing delay of $o_n$ for each NIS $n$ and on the packet delivery time of each link between two nodes.
The traffic processing delay and packet delivery time estimates are expressed as:

$$d_{proc} = \frac{M \cdot o_n}{O_b}, \qquad d_{trans} = \frac{Z_{max}}{B_r} + \frac{L_d}{L_s}$$

where $M$ is the number of application data flows requesting the NIS; $Z_{max}$ is the maximum packet size in bits; $B_r$ is the transmission bit rate in bit/s; $L_d$ is the distance or length of the transmission medium in meters; and $L_s$ is the propagation speed in the medium in m/s.
The constrained short path is expressed as:

$$\min_{r \in R_{st}} f_C(r) \quad \text{s.t.} \quad D(r) \le D_{max}$$

i.e., a route $r$ is found from the set $R_{st}$ of all routes that minimizes the delay objective $f_C(r)$ such that the delay $D(r)$ is less than or equal to the threshold $D_{max}$. Further constraints are possible, including the traffic chain ratio, bandwidth consumption, deployment cost, energy consumption, and so on. However, whatever flow routing scheme is used, the impact of middlebox deployment on network latency is the most pronounced. Therefore, an appropriate deployment strategy is needed to minimize the total delay of each flow $f \in F$.
Step 2.3: this step will establish an objective function that minimizes the delay for subsequent optimization and solution.
The objective function minimizing the delay is expressed as:

$$\min_{\mathcal{B}} \sum_{f \in F} D_f$$

subject to three constraints: the first ensures that each middlebox can be successfully connected to any SDN switch in the network; the second is the middlebox processing capacity constraint, ensuring that the corresponding middlebox has the capacity to handle the NIS requested by the application data flows; the third is the SDN switch memory capacity constraint, confirming that the switch has resources available to store new rule entries.
The step 3 specifically comprises the following steps:
Step 3.1: solving the objective function by adopting a K-means clustering algorithm to obtain an optimal intermediate box placement scheme, which comprises the following specific steps:
First, shortest path (SP) computations between the ingress switches of each flow pair are collected. Second, the clusters are initialized using a careful seeding process, with SDN switches serving as cluster centers. The center of each cluster is then updated and verified so as to minimize the sum of the SP delay times to all switches within the optimal number of clusters; the number of policies stored in each SDN switch is checked and counted at every iteration. These steps are repeated until the network is divided into the optimal K sub-networks. Finally, an NIS middlebox is placed at each cluster center.
Step 3.2: providing a dynamic resource allocation method for predicting NIS utilization rate based on regression tree analysis;
After all NIS middleboxes are placed at the optimal positions, resources can be dynamically allocated to each NIS. In this case, the resource allocation of each service at a particular time depends on the proportion of services repeatedly requested by applications/users in the corresponding cluster. Using the historical data of the application data flows as input, a regression tree algorithm predicts the usage of the next time window; an NIS with a higher predicted usage obtains a higher resource allocation in the next stage, and vice versa. Furthermore, to protect NIS from failures due to excessive and unpredictable requests, the NFV-thread procedure is adopted.
Compared with the prior art, the invention has the following beneficial effects: a dual-layer SDN/NFV-based network architecture is constructed that guarantees a high level of network security and low-latency communication; the middlebox deployment optimization problem is solved; and NIS resources are allocated based on the predicted service usage in each corresponding cluster, minimizing delay while maintaining communication security and low latency.
Drawings
FIG. 1 is a flow chart of an embodiment of the present invention.
Detailed Description
In order to facilitate understanding and implementation of the invention by those of ordinary skill in the art, the invention is described in further detail below with reference to the drawings and embodiments. It should be understood that the embodiments described herein are intended only to illustrate and explain the invention, and are not intended to limit it.
The present invention proposes an artificial intelligence (AI) driven solution consisting of two phases (NIS middlebox placement based on optimized k-means graph cluster analysis, and dynamic resource allocation based on regression tree analysis to predict NIS usage) that helps provide a secure and low-latency SDN/NFV-based network infrastructure for the grid. The secure and low-latency dual-layer network resource allocation method based on SDN/NFV network slicing technology provided by the invention, referring to FIG. 1, comprises the following steps:
Step 1: constructing a double-layer network resource allocation architecture based on SDN/NFV network slicing technology;
Step 2: according to the double-layer network resource allocation architecture, deploying an SDN/NFV-based power grid infrastructure environment, defining end-to-end flow delay, service processing delay and data packet delivery time estimation, restricting a short path, and establishing an objective function for minimizing delay;
Step 3: using the objective function of step 2, proposing an NIS middlebox placement method based on optimized k-means graph cluster analysis and a dynamic resource allocation method that predicts NIS usage based on regression tree analysis;
Step 1 comprises the following: in the network security context, taking the service policy as input, a set of artificial-intelligence-driven Network Intelligent Services (NIS) is applied virtually in Network Function Virtualization (NFV) based middleboxes overlaid on a Software-Defined Networking (SDN) architecture, and an SDN/NFV-based dual-layer network resource allocation architecture is constructed, in which the first layer comprises the power center cloud nodes and the SDN/NFV plane, and the second layer comprises the power network terminals.
The step 2 comprises the following steps:
Step 2.1: the step is to deploy an SDN/NFV-based power grid infrastructure environment;
The SDN/NFV-based grid infrastructure is represented as a simple directed graph $G=(\mathcal{V},\varepsilon)$, where $\mathcal{V}=\{v_1,v_2,\ldots,v_{N_v}\}$ is the set of nodes and $\varepsilon=\{(v_i,v_j)\}$ is the set of links, with $i,j\in\{1,2,\ldots,N_v\}$ indexing node pairs and $N_v$ the total number of nodes. An SDN switch $s\in\mathcal{V}$ can store at most $P_s$ rules, so the number of rules currently stored in the switch flow table is denoted $p_s\le P_s$. If the set of all NIS is denoted $\mathcal{N}$, then the NFV-based middleboxes providing these services are $\mathcal{B}$. There may be $N_q$ middleboxes available in the network, so $N_q$ is expressed as the number of NIS middleboxes, where $N_q\in\mathbb{Q}$, the set of rational numbers. Each middlebox has a maximum processing capacity $O_b$ for executing a set of NIS. This processing capacity is expressed in Mbps and depends on the central processing unit (CPU) available in each middlebox.
An application data flow $f$ can be described as a tuple $f=(source_n, dest_n, NIS_n, o_n, t_n)$, where $source_n$ and $dest_n$ are the source and destination nodes, respectively; $NIS_n$ is the set of NIS that the flow's network traffic must traverse from source to destination; $o_n$ is the middlebox processing capacity occupied by the NIS; and $t_n$ is the daily clock period of the flow's NIS request. After the application data flows are known, the set $F$ of flows that require NIS from the middleboxes can be generated.
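As a purely illustrative aid (not part of the patent disclosure), the network model above can be sketched as plain data structures; all class names, field names, and topology values below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Middlebox:
    """NFV-based middlebox b hosting NIS; capacity corresponds to O_b (Mbps)."""
    node: str          # SDN switch the box attaches to
    capacity: float    # maximum processing capacity O_b
    load: float = 0.0  # capacity currently occupied by NIS

@dataclass
class Flow:
    """Application data flow f = (source_n, dest_n, NIS_n, o_n, t_n)."""
    source: str
    dest: str
    nis: frozenset     # NIS the flow must traverse from source to destination
    o_n: float         # middlebox processing capacity the NIS occupies (Mbps)
    t_n: int           # daily clock period of the flow's NIS request

# Directed graph G = (V, eps) as an adjacency map: node -> {neighbor: link length in metres}
G = {
    "s1": {"s2": 120.0},
    "s2": {"s1": 120.0, "s3": 300.0},
    "s3": {"s2": 300.0},
}
flows = [Flow("s1", "s3", frozenset({"ids"}), o_n=50.0, t_n=24)]
```

A middlebox placement then amounts to choosing, for each cluster of switches, the node at which a `Middlebox` instance is instantiated.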
Step 2.2: this step will define end-to-end flow delays, traffic processing delays and packet delivery time estimates, constraining short paths.
First, the end-to-end flow delay is defined as

$$D_f = d_{if,bf} + d_{bf,ef}$$

where $d_{if,bf}$ is the aggregate delay from the source ingress switch to the corresponding NIS middlebox, and $d_{bf,ef}$ is the aggregate delay from that NIS middlebox to the destination egress switch. The aggregate delay depends on the traffic processing delay of $o_n$ for each NIS $n$ (i.e., the $n$-th NIS) and on the packet delivery time of each link between two nodes.
The traffic processing delay and packet delivery time estimates are expressed as

$$d_{proc} = \frac{M \cdot o_n}{O_b}, \qquad d_{trans} = \frac{Z_{max}}{B_r} + \frac{L_d}{L_s}$$

where $M$ is the number of application data flows requesting the NIS. For delay-sensitive applications, satisfaction follows an s-shaped utility function; an accurate resource allocation policy is therefore an unavoidable task for improving QoS. $Z_{max}$ is the maximum packet size in bits; $B_r$ is the transmission bit rate in bit/s; $L_d$ is the distance or length of the transmission medium in meters; and $L_s$ is the propagation speed in the medium in m/s. The propagation speed depends on the physical medium of the link, e.g. $2\times10^8$ m/s for copper wire and $3\times10^8$ m/s for wireless communication.
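The packet delivery time estimate can be checked numerically. The sketch below assumes the standard decomposition into transmission time $Z_{max}/B_r$ plus propagation time over the link distance, using the medium speeds quoted above; the link parameters are invented for illustration:

```python
def packet_delivery_time(z_max_bits: float, bit_rate_bps: float,
                         distance_m: float, prop_speed_mps: float) -> float:
    """d_trans = Z_max / B_r + L_d / L_s, in seconds."""
    transmission = z_max_bits / bit_rate_bps    # time to serialize the packet onto the link
    propagation = distance_m / prop_speed_mps   # travel time through the medium
    return transmission + propagation

# 1500-byte packet on a 1 Gbit/s copper link of 200 m (copper: 2e8 m/s):
d = packet_delivery_time(1500 * 8, 1e9, 200.0, 2e8)
# 12 us transmission + 1 us propagation = 13 us total
```

Summing this quantity over every link of a route, plus the processing delay at each traversed NIS, yields the aggregate delays $d_{if,bf}$ and $d_{bf,ef}$.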
The constrained short path is expressed as

$$\min_{r \in R_{st}} f_C(r) \quad \text{s.t.} \quad D(r) \le D_{max}$$

i.e., a route $r$ is found from the set $R_{st}$ of all routes such that the objective function $f_C(r)$ is minimized while the delay $D(r)$ is less than or equal to the threshold $D_{max}$. Further constraints are possible, including the traffic chain ratio, bandwidth consumption, deployment cost, energy consumption, and so on. However, whatever flow routing scheme is used, the impact of middlebox deployment on network latency is the most pronounced. Therefore, an appropriate deployment strategy is needed to minimize the total delay of each flow $f \in F$.
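One simple way to realize the constrained short path described above — pick the minimum-delay route and accept it only when its delay stays within $D_{max}$ — is a Dijkstra search over per-link delays. The toy graph and threshold are invented for illustration; the patent does not prescribe this particular algorithm:

```python
import heapq

def min_delay_route(G, src, dst, d_max):
    """Dijkstra over per-link delays; return (path, delay) if delay <= d_max, else None."""
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break                        # dst popped with its final minimum delay
        if d > dist.get(u, float("inf")):
            continue                     # stale queue entry
        for v, w in G[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    if dst not in dist or dist[dst] > d_max:
        return None                      # no route satisfies D(r) <= D_max
    path, node = [dst], dst
    while node != src:                   # walk predecessors back to the source
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[dst]

G = {"a": {"b": 1.0, "c": 4.0}, "b": {"c": 1.0}, "c": {}}
print(min_delay_route(G, "a", "c", d_max=3.0))  # (['a', 'b', 'c'], 2.0)
```

With a tighter threshold (e.g. `d_max=1.5`) the same call returns `None`, i.e. the flow would be rejected rather than delivered late.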
Step 2.3: this step will establish an objective function that minimizes the delay for subsequent optimization and solution.
The delay minimization problem is expressed as

$$\min_{\mathcal{B}} \sum_{f \in F} D_f$$

There are three constraints on this problem: the first ensures that each middlebox can successfully connect to any SDN switch in the network; the second is the middlebox processing capacity constraint, ensuring that the corresponding middlebox has the capacity to handle the NIS requested by the application data flows; the third is the SDN switch memory capacity constraint, confirming that the switch has resources available to store new rule entries.
Step 3 provides a graph cluster analysis method for middlebox placement.
To avoid using a threshold method for graph cluster analysis, a K-means clustering algorithm is adopted, taking the following additional conditions into account:
1) Since the goal is to find the NIS middlebox placement with minimal latency, the cluster center initialization method plays a key role. Thus, an initialization and careful seed selection process is essential.
2) The recalculated cluster center should be selected from the SDN ingress switches. Then, a cluster refinement process is performed, reassigning all SDN switches to the appropriate clusters.
3) The distance calculation method also needs to take into account nodes that are indirectly connected or that are not physically connected to the cluster center.
The step 3 specifically comprises the following steps:
Step 3.1: solving the objective function by adopting a K-means clustering algorithm to obtain an optimal intermediate box placement scheme, which comprises the following specific steps:
First, shortest path (SP) computations between the ingress switches of each flow pair are collected. Second, the clusters are initialized using a careful seeding process. Much like the K-means clustering approach, a graph-clustering-based middlebox placement method using selected nodes is proposed, with SDN switches serving as cluster centers rather than the nearest mean. The center of each cluster is then updated and verified so as to minimize the sum of SP delay times to all switches within the optimal number of clusters. Taking the additional conditions into account, the number of policies stored in each SDN switch must be checked and counted at every iteration. These steps are repeated until the network is divided into the optimal K sub-networks. Finally, an NIS middlebox is placed at each cluster center.
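The placement procedure above can be sketched as a K-medoid-style graph clustering in which cluster centers are restricted to switches and distances are shortest-path delays. This is a simplified illustration only: the seeding here is naive (the patent calls for careful seeding), the per-switch policy-count check is omitted, and the topology is invented:

```python
import heapq

def sp_delays(G, src):
    """Single-source shortest-path delays over the graph (Dijkstra)."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in G[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def place_middleboxes(G, k, iters=20):
    """Cluster switches by SP delay; centers (= NIS middlebox sites) must be switches."""
    nodes = sorted(G)
    D = {u: sp_delays(G, u) for u in nodes}    # all-pairs SP delays
    centers = nodes[:k]                        # naive seeding for the sketch
    for _ in range(iters):
        clusters = {c: [] for c in centers}
        for u in nodes:                        # assign each switch to its nearest center
            clusters[min(centers, key=lambda c: D[c].get(u, float("inf")))].append(u)
        new_centers = [min(ms, key=lambda m: sum(D[m].get(u, float("inf")) for u in ms))
                       for ms in clusters.values()]  # re-center on the best switch
        if sorted(new_centers) == sorted(centers):
            break                              # converged: K sub-networks found
        centers = new_centers
    return sorted(centers)                     # one NIS middlebox per returned center

topo = {"s1": {"s2": 1}, "s2": {"s1": 1, "s3": 1}, "s3": {"s2": 1, "s4": 5},
        "s4": {"s3": 5, "s5": 1}, "s5": {"s4": 1}}
print(place_middleboxes(topo, 2))  # ['s2', 's4']
```

On this toy topology the expensive link `s3-s4` naturally splits the network into two clusters, and the chosen centers minimize the sum of SP delays to their cluster members.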
Step 3.2: this step will propose a corresponding dynamic resource allocation method.
After all NIS middleboxes are placed at the optimal positions, resources can be dynamically allocated to each NIS. In this case, the resource allocation of each service at a particular time depends on the proportion of services repeatedly requested by applications/users in the corresponding cluster. Using the historical data of the application data flows as input, a regression tree algorithm predicts the usage of the next time window; an NIS with a higher predicted usage obtains a higher resource allocation in the next stage, and vice versa. Furthermore, to protect NIS from failures due to excessive and unpredictable requests, the NFV-thread procedure is adopted.
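The prediction-and-allocation step can be illustrated with a minimal regression tree written from scratch (to keep the sketch self-contained). The hour-of-day feature, the history values, and the proportional split of a hypothetical 100 Mbps middlebox capacity are all assumptions, not the patent's disclosed design:

```python
def fit_tree(X, y, depth=3, min_size=2):
    """Tiny CART-style regression tree: split on the threshold minimizing squared error."""
    if depth == 0 or len(y) < min_size or len(set(y)) == 1:
        return sum(y) / len(y)                          # leaf: mean of targets
    def sse(idx):                                       # sum of squared errors about the mean
        m = sum(y[i] for i in idx) / len(idx)
        return sum((y[i] - m) ** 2 for i in idx)
    best = None
    for j in range(len(X[0])):                          # try every feature ...
        for t in sorted({row[j] for row in X}):         # ... and every observed threshold
            left = [i for i, row in enumerate(X) if row[j] <= t]
            right = [i for i in range(len(X)) if X[i][j] > t]
            if not right:
                continue
            score = sse(left) + sse(right)
            if best is None or score < best[0]:
                best = (score, j, t, left, right)
    if best is None:
        return sum(y) / len(y)
    _, j, t, left, right = best
    return (j, t, fit_tree([X[i] for i in left], [y[i] for i in left], depth - 1, min_size),
                  fit_tree([X[i] for i in right], [y[i] for i in right], depth - 1, min_size))

def predict(tree, x):
    """Walk the tree down to a leaf value."""
    while isinstance(tree, tuple):
        j, t, lo, hi = tree
        tree = lo if x[j] <= t else hi
    return tree

# Hypothetical per-NIS history: feature = [hour of day], target = requests in that window.
hist = {"ids":      ([[0], [6], [12], [18]], [10, 30, 80, 60]),
        "firewall": ([[0], [6], [12], [18]], [40, 35, 20, 25])}
trees = {nis: fit_tree(X, y) for nis, (X, y) in hist.items()}
pred = {nis: predict(t, [12]) for nis, t in trees.items()}       # forecast for hour 12
total = sum(pred.values())
shares = {nis: 100.0 * p / total for nis, p in pred.items()}     # split 100 Mbps of O_b
print(shares)  # higher predicted usage -> larger share
```

The proportional split mirrors the rule in the text: the NIS with the higher predicted usage for the next window receives the larger resource allocation in the next stage.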
It should be understood that parts of the specification not specifically set forth herein are all prior art.
It should be understood that the foregoing description of the preferred embodiments is not intended to limit the scope of the invention; those of ordinary skill in the art may make substitutions or modifications without departing from the scope of the invention as defined by the appended claims, and such substitutions and modifications fall within the protection scope of the invention.

Claims (4)

1.一种基于SDN/NFV网络切片技术的安全和低延迟双层网络资源分配方法,其特征在于,包括以下步骤:1. A secure and low-latency dual-layer network resource allocation method based on SDN/NFV network slicing technology, characterized in that it includes the following steps: 步骤1:搭建基于SDN/NFV网络切片技术的双层网络资源分配架构;Step 1: Build a two-layer network resource allocation architecture based on SDN/NFV network slicing technology; 步骤2:根据上述双层网络资源分配架构,部署基于SDN/NFV的电网基础设施环境,定义端到端流延迟、业务处理延迟和数据包交付时间估计、约束短路径,建立最小化延迟的目标函数;Step 2: Based on the above two-layer network resource allocation architecture, deploy the SDN/NFV-based power grid infrastructure environment, define the end-to-end flow delay, service processing delay and packet delivery time estimation, constrain the short path, and establish the objective function of minimizing delay; 步骤3,采用K-means聚类算法对上述目标函数求解,得到最优的中间盒放置方案,并提出基于回归树分析的预测NIS使用率的动态资源分配方法。Step 3: Use K-means clustering algorithm to solve the above objective function, obtain the optimal middlebox placement solution, and propose a dynamic resource allocation method based on regression tree analysis to predict NIS usage. 2.根据权利要求1所述的基于SDN/NFV网络切片技术的安全和低延迟双层网络资源分配方法,其特征在于,所述双层网络资源分配架构中第一层为电力中心云节点和SDN/NFV平面,第二层为电力网络终端。2. According to claim 1, the secure and low-latency two-layer network resource allocation method based on SDN/NFV network slicing technology is characterized in that the first layer in the two-layer network resource allocation architecture is the power center cloud node and SDN/NFV plane, and the second layer is the power network terminal. 3.根据权利要求1所述的基于SDN/NFV网络切片技术的安全和低延迟双层网络资源分配方法,其特征在于,所述步骤2具体如下:3. 
According to the secure and low-latency dual-layer network resource allocation method based on SDN/NFV network slicing technology of claim 1, it is characterized in that the step 2 is specifically as follows: 步骤2.1:部署基于SDN/NFV的电网基础设施环境:Step 2.1: Deploy SDN/NFV-based power grid infrastructure environment: 基于SDN/NFV的电网基础设施表示为一个有向图其中/>是节点集合,ε={vi,vj}是链路集合,其中i={1,2,3,...,Nv}和j={1,2,3,...,Nv}为节点对的下标,Nv为节点总数;一台SDN交换机/>可以存储的最大规则数是Ps,因此当前存储在开关流表中的规则数表示为ps∈Ps;所有NIS的集合记为那么提供这些NIS的基于NFV的中间盒就是/>网络中有Nq个可用的中间盒,/>为NIS中间盒的数量,其中/>每个中间盒具有执行一组NIS的最大处理能力ObThe SDN/NFV-based power grid infrastructure is represented as a directed graph Where/> is a node set, ε={ vi , vj } is a link set, where i={1,2,3,..., Nv } and j={1,2,3,..., Nv } are the subscripts of node pairs, and Nv is the total number of nodes; an SDN switch/> The maximum number of rules that can be stored is P s , so the number of rules currently stored in the switch flow table is denoted as p s ∈ P s ; the set of all NIS is denoted as Then the NFV-based middlebox that provides these NIS is/> There are Nq middleboxes available in the network,/> is the number of NIS middleboxes, where/> Each middlebox has a maximum processing capacity O b to execute a set of NIS; 应用程序数据流f可以描述为sourcen和destn分别是源节点和目的节点;/>是一个流的网络流量从源到目的必须访问的NIS;on为NIS占用的中间盒处理能力,tn为流请求NIS的每日时钟周期;得到应用程序数据流后生成/> 为一组需要来自中间盒的NIS的流;The application data flow f can be described as source n and dest n are the source node and destination node respectively;/> is the NIS that a flow of network traffic must access from source to destination; o n is the middlebox processing capacity occupied by NIS, t n is the daily clock cycle of the flow requesting NIS; after obtaining the application data flow, it is generated/> is a set of flows that require NIS from the middlebox; 步骤2.2:定义端到端流延迟、业务处理延迟和数据包交付时间估计、约束短路径:Step 2.2: Define end-to-end flow delay, service processing delay and packet delivery time estimation, constrain short paths: 将端到端流延迟定义为:The end-to-end flow delay is defined 
as: 式中,dif,bf为源入口交换机到对应NIS中间盒的聚合延迟时间,dbf,ef为源入口交换机到目的出口交换机的聚合延迟时间;聚合延迟取决于业务处理延迟每个NISn的on和两个节点之间每条链路的数据包传递时间/>业务处理延迟和数据包交付时间估计表示为:Where d if,bf is the aggregation delay from the source ingress switch to the corresponding NIS middlebox, and d bf,ef is the aggregation delay from the source ingress switch to the destination egress switch. The aggregation delay depends on the service processing delay. The packet delivery time of each link between two nodes on each NISn/> The traffic processing delay and packet delivery time are estimated as: 其中M是请求NIS的应用程序数据流的数量;Zmax为最大数据包大小,单位为bit;Br为传输比特率,单位为bit/s;为传输介质的距离或长度,单位为米;Ls为介质中的传播速度,单位为m/s;Where M is the number of application data flows requesting NIS; Z max is the maximum packet size in bits; Br is the transmission bit rate in bits/s; is the distance or length of the transmission medium, in meters; Ls is the propagation speed in the medium, in m/s; 约束短路径表述为:The constrained short path is expressed as: 即从所有路由Rst的集合中找到一条路由r,使最小化延迟的目标函数fC(r)最小,使得延迟D(r)小于或等于阈值DmaxThat is, find a route r from the set of all routes R st to minimize the objective function f C (r) of minimizing delay, so that the delay D (r) is less than or equal to the threshold D max ; 步骤2.3:本步骤将建立最小化延迟的目标函数;Step 2.3: This step will establish the objective function of minimizing delay; 最小化延迟的目标函数表述为:The objective function for minimizing latency is expressed as: 4.根据权利要求1所述的基于SDN/NFV网络切片技术的安全和低延迟双层网络资源分配方法,其特征在于,所述步骤3具体如下:4. According to the secure and low-latency dual-layer network resource allocation method based on SDN/NFV network slicing technology according to claim 1, it is characterized in that the step 3 is specifically as follows: 步骤3.1:采用K-means聚类算法对上述目标函数求解,得到最优的中间盒放置方案,具体步骤如下:Step 3.1: Use the K-means clustering algorithm to solve the above objective function and obtain the optimal middlebox placement solution. 
The specific steps are as follows: first, collect the shortest-path (SP) computation results between the ingress switches of each pair of flows; second, initialize the clusters with a careful seeding procedure, using SDN switches as cluster centers; then update and verify the center of each cluster so as to minimize the sum of SP delay times to all switches in the cluster, checking and counting at each iteration the number of policies stored in each SDN switch; repeat these steps until the network is partitioned into the optimal K sub-networks; finally, place an NIS middlebox at each cluster center.

Step 3.2: Propose a dynamic resource allocation method that predicts NIS utilization based on regression tree analysis.

Once all NIS middleboxes are placed at their optimal locations, resources can be allocated to each NIS dynamically. The resource allocation of each service at a given time depends on the proportion of that service repeatedly requested by the applications/users in the corresponding cluster: historical application data flow records are used as input, a regression tree algorithm predicts the utilization of each NIS in the next time window, and an NIS with a higher predicted utilization receives a higher resource allocation in the next stage.
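The step 3.1 placement loop can be sketched with a plain K-means. This is a minimal illustration, not the patent's full procedure: it omits the SP-delay metric, the flow-table rule-count check, and the careful seeding, and it assumes the switches have already been embedded as 2-D points (all coordinates below are hypothetical).

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain K-means: returns (centers, assignment). Each point stands in
    for a switch embedded by its shortest-path delay distances."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assign each switch to its nearest center (squared distance).
        assign = [min(range(k),
                      key=lambda c: (p[0] - centers[c][0]) ** 2
                                  + (p[1] - centers[c][1]) ** 2)
                  for p in points]
        # Move each center to the mean of its cluster members.
        new_centers = []
        for c in range(k):
            members = [p for p, a in zip(points, assign) if a == c]
            if members:
                new_centers.append((sum(p[0] for p in members) / len(members),
                                    sum(p[1] for p in members) / len(members)))
            else:
                new_centers.append(centers[c])  # keep an empty cluster's center
        if new_centers == centers:
            break  # converged
        centers = new_centers
    return centers, assign

# Two well-separated groups of switches -> two middlebox sites, one per
# cluster center.
switches = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (10.0, 10.0), (10.0, 11.0)]
centers, assign = kmeans(switches, k=2)
print(sorted(round(c[0]) for c in centers))  # [0, 10]
```

In the patent's setting the distance would be the SP delay between switches rather than Euclidean distance, and a candidate center would be rejected if its flow-table rule count p_s exceeded P_s.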
CN202311872057.6A 2023-12-30 2023-12-30 A secure and low-latency two-layer network resource allocation method based on SDN/NFV network slicing technology Pending CN117914773A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311872057.6A CN117914773A (en) 2023-12-30 2023-12-30 A secure and low-latency two-layer network resource allocation method based on SDN/NFV network slicing technology

Publications (1)

Publication Number Publication Date
CN117914773A true CN117914773A (en) 2024-04-19

Family

ID=90691668

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311872057.6A Pending CN117914773A (en) 2023-12-30 2023-12-30 A secure and low-latency two-layer network resource allocation method based on SDN/NFV network slicing technology

Country Status (1)

Country Link
CN (1) CN117914773A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN120358190A (en) * 2025-06-25 2025-07-22 武汉大学 Optimization method and device for multi-mode network global route, electronic equipment and medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020181403A1 (en) * 2001-05-31 2002-12-05 Nec Corporation Communication path designing method, communication path designing device, and program to have computer execute same method
CN108206790A (en) * 2018-01-11 2018-06-26 重庆邮电大学 A kind of selection of SDN joint routes and resource allocation methods based on network slice
US20200099625A1 (en) * 2018-09-24 2020-03-26 Netsia, Inc. Path determination method and system for delay-optimized service function chaining
CN110971451A (en) * 2019-11-13 2020-04-07 国网河北省电力有限公司雄安新区供电公司 NFV resource allocation method
CN111865681A (en) * 2020-07-14 2020-10-30 中国电力科学研究院有限公司 End-to-end delay optimization method, system and storage medium for core network slicing
CN111865668A (en) * 2020-06-30 2020-10-30 南京邮电大学 A network slicing method based on SDN and NFV
CN112491619A (en) * 2020-11-25 2021-03-12 东北大学 Self-adaptive distribution technology for service customized network resources based on SDN
CN114205316A (en) * 2021-12-31 2022-03-18 全球能源互联网研究院有限公司 Network slice resource allocation method and device based on power service

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ARDIANSYAH ET AL: "Latency-Optimal Network Intelligence Services in SDN/NFV-Based Energy Internet Cyberinfrastructure", SPECIAL SECTION ON SOFTWARE DEFINED NETWORKS FOR ENERGY INTERNET AND SMART GRID COMMUNICATIONS, 24 October 2019 (2019-10-24) *
XU RAN; WANG WENDONG; GONG XIANGYANG; QUE XIRONG: "Delay-Aware Resource Scheduling Optimization Method in Network Function Virtualization", Journal of Computer Research and Development, no. 04, 15 April 2018 (2018-04-15) *

Similar Documents

Publication Publication Date Title
WO2023039965A1 (en) Cloud-edge computing network computational resource balancing and scheduling method for traffic grooming, and system
US11516146B2 (en) Method and system to allocate bandwidth based on task deadline in cloud computing networks
CN105960783B (en) Inter-domain SDN traffic engineering
CN101707788B (en) Differential pricing strategy based dynamic programming method of multilayer network services
CN108566659A (en) A kind of online mapping method of 5G networks slice based on reliability
Buyakar et al. Resource allocation with admission control for GBR and delay QoS in 5G network slices
EP2225851B1 (en) Improved resource allocation plan in a network
Mahapatra et al. Utilization-aware VB migration strategy for inter-BBU load balancing in 5G cloud radio access networks
Nouruzi et al. Online service provisioning in NFV-enabled networks using deep reinforcement learning
CN115002799A (en) A task offloading and resource allocation method for industrial hybrid networks
CN114710196A (en) Software-defined satellite network virtual network function migration method
CN108092895A (en) A kind of software defined network joint route selection and network function dispositions method
CN117914773A (en) A secure and low-latency two-layer network resource allocation method based on SDN/NFV network slicing technology
Tzanakaki et al. A converged network architecture for energy efficient mobile cloud computing
Zhao et al. On the parallel reconfiguration of virtual networks in hybrid optical/electrical datacenter networks
Wu et al. Resource allocation optimization in the NFV-enabled MEC network based on game theory
Zhu et al. Efficient hybrid multicast approach in wireless data center network
CN108880895B (en) SDN joint routing selection and network function deployment method based on user stream transmission cost optimization
Luo et al. Traffic-aware VDC embedding in data center: A case study of fattree
Lv et al. Mobile edge computing oriented multi-agent cooperative routing algorithm: A DRL-based approach
CN117858095A (en) Wireless access network slicing method based on group learning of DTA in Internet of things
CN115277531B (en) A Two-Phase Routing Method for Multipath Bottleneck Fairness Constraints in Cloud Wide Area Networks
CN114221948B (en) Cloud network system and task processing method
CN106850726B (en) Load-aware request routing method for cloud data center based on SDN
Guanqiang et al. A method for IOT device management and traffic scheduling in Distribution Station area based on Distributed Sdn Architecture

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Xu Huan

Inventor after: Zhou Dekun

Inventor after: Meng Haohua

Inventor after: Zheng Lei

Inventor after: Long Fei

Inventor after: Xia Fan

Inventor after: Zhao Jingyao

Inventor after: Wei Xiaoyan

Inventor after: Mei Ziwei

Inventor after: Wang Hongwei

Inventor after: Zeng Zheng

Inventor after: Zhou Zheng

Inventor after: Wang Yixi

Inventor after: Li Lei

Inventor after: Wang Chengwei

Inventor after: Chen Qimei

Inventor after: Jin Bo

Inventor after: Chen Jialin

Inventor after: Gao Fei

Inventor after: Wang Weiran

Inventor after: Zha Zhiyong

Inventor after: Yu Zheng

Inventor before: Chen Jialin

Inventor before: Meng Haohua

Inventor before: Zheng Lei

Inventor before: Long Fei

Inventor before: Xu Huan

Inventor before: Xia Fan

Inventor before: Zhao Jingyao

Inventor before: Wei Xiaoyan

Inventor before: Mei Ziwei

Inventor before: Wang Hongwei

Inventor before: Zeng Zheng

Inventor before: Zhou Zheng

Inventor before: Wang Yixi

Inventor before: Li Lei

Inventor before: Wang Chengwei

Inventor before: Chen Qimei

Inventor before: Jin Bo

Inventor before: Zhou Dekun

Inventor before: Zha Zhiyong

Inventor before: Wang Weiran

Inventor before: Yu Zheng

Inventor before: Gao Fei

CB03 Change of inventor or designer information