
WO2019238101A1 - System and method of dynamically managed transport for mobile networks - Google Patents

System and method of dynamically managed transport for mobile networks

Info

Publication number
WO2019238101A1
Authority
WO
WIPO (PCT)
Prior art keywords
network
transport
customer network
traffic
customer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2019/091180
Other languages
English (en)
Inventor
Kaippallimalil Mathew John
Young Lee
James N. Guichard
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Publication of WO2019238101A1

Classifications

    • H ELECTRICITY
        • H04 ELECTRIC COMMUNICATION TECHNIQUE
            • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
                • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
                    • H04L41/12 Discovery or management of network topologies
                • H04L43/00 Arrangements for monitoring or testing data switching networks
                    • H04L43/08 Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
                        • H04L43/0852 Delays
                        • H04L43/0876 Network utilisation, e.g. volume of load or congestion level
                            • H04L43/0894 Packet rate
                    • H04L43/20 Arrangements for monitoring or testing data switching networks the monitoring system or the monitored elements being virtualised, abstracted or software-defined entities, e.g. SDN or NFV
                • H04L45/00 Routing or path finding of packets in data switching networks
                    • H04L45/302 Route determination based on requested QoS
            • H04W WIRELESS COMMUNICATION NETWORKS
                • H04W28/00 Network traffic management; Network resource management
                    • H04W28/02 Traffic management, e.g. flow control or congestion control
                        • H04W28/0268 Traffic management, e.g. flow control or congestion control using specific QoS parameters for wireless networks, e.g. QoS class identifier [QCI] or guaranteed bit rate [GBR]

Definitions

  • the present disclosure relates generally to wireless communications and, in particular embodiments, to a system and method of dynamically managed transport for mobile networks.
  • Traffic engineered (TE) mobile network backhauls use provisioning based on, more or less, static engineering estimates. These estimates may be changed, and traffic engineering may be configured periodically based on demand and other performance criteria. However, such a traffic engineering process may take a long time (e.g., on the order of weeks or months), and thus may not be suitable for networks having dynamically changing contexts, such as fifth generation (5G) mobile networks. It is desirable to provide dynamically traffic engineered paths in backhaul networks to meet the needs of changing traffic demands.
  • 5G fifth generation
  • a computer-implemented method that includes negotiating, by a first controller function of a first customer network with a second controller function of a second customer network, a traffic matrix for transmitting traffic flows of a quality of service (QoS) class to the second customer network, the QoS class being associated with virtual network connection (VNC) resources provisioned for traffic transmitted between the first customer network and the second customer network, determining, by the first controller function of the first customer network, provider edge (PE) routers of a transport network for transmitting the traffic flows from the first customer network to the second customer network through the transport network, and sending, by the first controller function of the first customer network to a third controller function of the transport network, a request for configuring a transport path in the transport network based on the request, the transport path in the transport network being configured for transmission of the traffic flows of the QoS class, and the request comprising information of the PE routers that are determined, the QoS class, the traffic matrix and a set of transport path configuration constraints.
  • This allows the first controller function to dynamically request the transport network to configure the transport path based on QoS classes, traffic matrices and transport path configuration constraints that may vary dynamically.
  • This also allows the first customer network to regularly or intermittently negotiate traffic matrices with the second customer network according to varying traffic demand of the first customer network.
  • This further enables the transport network to configure transport paths to meet QoS requirements of the traffic flows transmitted from the first customer network to the second customer network through the transport network.
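The request described above bundles PE router identities, a QoS class, a negotiated traffic matrix, and a set of path constraints. As a rough illustration only (the disclosure does not define a wire format, and every field name below is hypothetical), the payload might be modeled as:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TrafficMatrix:
    """Negotiated per-QoS-class demand between two customer networks."""
    src_network: str
    dst_network: str
    peak_rate_mbps: float
    sustained_rate_mbps: float

@dataclass
class PathConstraints:
    """Transport path configuration constraints named in the disclosure."""
    deterministic_latency_ms: Optional[float] = None  # hard latency bound
    avg_latency_ms: Optional[float] = None            # non-deterministic latency
    jitter_ms: Optional[float] = None
    tunnel_bandwidth_mbps: Optional[float] = None
    isolation_required: bool = False
    protection_level: int = 0

@dataclass
class TransportPathRequest:
    """Request from a customer-network controller function to the transport SDN-C."""
    ingress_pe_id: str   # PE receiving inbound traffic of the transport network
    egress_pe_id: str    # PE transmitting outbound traffic of the transport network
    qos_class: str
    matrix: TrafficMatrix
    constraints: PathConstraints

req = TransportPathRequest(
    ingress_pe_id="PE-1",
    egress_pe_id="PE-7",
    qos_class="low-latency",
    matrix=TrafficMatrix("RAN", "mobile-core",
                         peak_rate_mbps=400.0, sustained_rate_mbps=250.0),
    constraints=PathConstraints(deterministic_latency_ms=5.0,
                                tunnel_bandwidth_mbps=500.0),
)
```

The grouping mirrors the claim: two PE endpoints identify where the path enters and leaves the transport network, while the matrix and constraints tell the transport SDN-C what the path must carry and guarantee.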
  • the first customer network or the second customer network is a mobile network.
  • the first customer network or the second customer network is an edge computing network.
  • one of the first customer network and the second customer network is a radio access network, and the other is a mobile core.
  • the method further includes: mapping, by the first controller function in the first customer network, a customer edge (CE) router in the first customer network to a PE router in the PE routers that are determined.
  • CE customer edge
  • the method further includes: receiving, by the first controller function in the first customer network, topology information of the transport network for determining the PE routers.
  • determining the PE routers of the transport network comprises: determining the PE routers based on distances between the PE routers and CE routers of the first customer network.
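A minimal sketch of that selection step, assuming abstract per-pair costs (the disclosure does not fix a particular distance metric): for each CE router, pick the PE router with the smallest cost.

```python
def select_pe_per_ce(distances):
    """Map each CE router to its nearest PE router.

    distances: {ce_id: {pe_id: cost}}; costs are abstract (e.g., hop
    counts or link metrics). Returns {ce_id: nearest pe_id}.
    """
    return {ce: min(pe_costs, key=pe_costs.get)
            for ce, pe_costs in distances.items()}

mapping = select_pe_per_ce({
    "CE-a": {"PE-1": 2, "PE-2": 5},
    "CE-b": {"PE-1": 4, "PE-2": 1},
})
# mapping == {"CE-a": "PE-1", "CE-b": "PE-2"}
```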
  • the method further includes: receiving, by the first controller function of the first customer network, a trigger triggering configuration of the transport path.
  • sending the request for configuring the transport path in the transport network comprises: sending the request for reconfiguring the transport path that has been previously configured.
  • sending the request for configuring the transport path in the transport network comprises: sending the request for creating the transport path based on the request.
  • sending the request for configuring the transport path in the transport network comprises: sending the request for deleting the transport path.
  • the PE routers comprise a PE router for receiving inbound traffic of the transport network, and a PE router for transmitting outbound traffic of the transport network.
  • the set of transport path configuration constraints comprises a deterministic latency, a non-deterministic latency, a bandwidth required for tunneling in the transport network, a path isolation requirement, a path protection level, or any combination thereof.
  • negotiation of the traffic matrix is based on a network policy of the first customer network, a demand estimation of the first customer network, a QoS requirement for transmitting the traffic flows, a capacity estimation of the first customer network, or any combination thereof.
  • the third controller function of the transport network is a software defined network-controller (SDN-C) of the transport network.
  • SDN-C software defined network-controller
  • the method further comprises: collecting, by the first controller function of the first customer network, data regarding traffic communicated between the first customer network and the second customer network through the transport network and regarding performance metrics of the transport path; and determining, by the first controller function in the first customer network, whether to configure the transport path based on the data collected.
  • the data regarding performance metrics of the transport path is identified by a PE router of the transport network on the transport path for receiving traffic from the first customer network, and a PE router of the transport network on the transport path for forwarding traffic to the second customer network.
  • the performance metrics comprise metrics of end-to-end (E2E) delay in the transport network, delay jitter, packet loss, or bandwidth utilization.
  • E2E end-to-end
  • the method further comprises: receiving, by the first controller function in the first customer network, the performance metrics from the third controller function.
  • the method further comprises: subscribing, by the first controller function in the first customer network, to the performance metrics of the transport path with the third controller function in the transport network.
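One way to picture the subscription: the first controller function registers a callback with the transport network's controller, which then pushes per-path metrics as they are collected. This callback interface is a hypothetical illustration, not one defined by the disclosure.

```python
class TransportController:
    """Stand-in for the third controller function (the transport SDN-C)."""

    def __init__(self):
        self._subscribers = {}   # path_id -> list of callbacks

    def subscribe(self, path_id, callback):
        """Register interest in performance metrics of one transport path."""
        self._subscribers.setdefault(path_id, []).append(callback)

    def publish_metrics(self, path_id, metrics):
        """Push freshly collected metrics to every subscriber of the path."""
        for cb in self._subscribers.get(path_id, []):
            cb(metrics)

received = []
sdn_c = TransportController()
sdn_c.subscribe("path-42", received.append)   # the TPM subscribes
sdn_c.publish_metrics("path-42", {"e2e_delay_ms": 7.5, "packet_loss": 0.001})
# received now holds the pushed metrics dict
```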
  • the request is transmitted in an information element (IE).
  • IE information element
  • the request comprises an identifier (ID) of a first PE in the transport network that is to receive traffic from the first customer network, and an ID of a second PE in the transport network that is to transmit traffic from the transport network to the second customer network.
  • ID identifier
  • a system that includes: a first customer network comprising a first controller function, a second customer network comprising a second controller function, and a transport network, configured to forward traffic between the first customer network and the second customer network
  • the first controller function is configured to: collect first traffic demand data of the first customer network, collect first performance data of transport paths in the transport network that are configured for forwarding traffic from the first customer network to the second customer network, estimate traffic demand and classes of services (CoSs) of the first customer network based on the first traffic demand data, and determine, based on the estimated traffic demand and CoSs of the first customer network and the first performance data, whether to configure a transport path in the transport network for forwarding traffic of a first CoS from the first customer network to the second customer network
  • the second controller function is configured to: collect second traffic demand data of the second customer network, collect second performance data of transport paths in the transport network that are configured for forwarding traffic from the second customer network to the first customer network, estimate traffic demand and CoSs of the second customer network based on the second traffic demand data, and determine, based on the estimated traffic demand and CoSs of the second customer network and the second performance data, whether to configure a transport path in the transport network for forwarding traffic of a second CoS from the second customer network to the first customer network
  • the system allows each of the first customer network and the second customer network to dynamically estimate traffic demand and CoSs of a respective customer network, and dynamically determine whether to configure a transport path in the transport network for forwarding traffic of a CoS from the respective customer network to another customer network.
  • Each customer network may thus dynamically request the transport network to configure transport paths in the transport network.
  • the system enables the transport network to configure transport paths to meet QoS requirements of traffic flows transmitted from one of the first customer network and the second customer network to the other one through the transport network.
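The collect-estimate-decide behavior of each controller function can be sketched as follows; the 95th-percentile estimator and the 20% headroom are illustrative assumptions, not values from the disclosure.

```python
def estimate_demand_mbps(rate_samples):
    """Crude demand estimate: 95th percentile of observed traffic rates (Mbps)."""
    ordered = sorted(rate_samples)
    return ordered[min(len(ordered) - 1, int(0.95 * len(ordered)))]

def should_reconfigure(rate_samples, configured_capacity_mbps):
    """Request reconfiguration when estimated demand plus 20% headroom
    exceeds the capacity currently configured for this CoS."""
    return estimate_demand_mbps(rate_samples) * 1.2 > configured_capacity_mbps

# A path provisioned at 100 Mbps whose observed demand is climbing toward it:
samples = [40, 55, 60, 70, 85, 90, 92, 95, 96, 98]
decision = should_reconfigure(samples, configured_capacity_mbps=100)
# demand estimate is 98 Mbps, and 98 * 1.2 > 100, so reconfiguration is requested
```

In a fuller sketch the same decision would also weigh the collected performance data of the existing path, not only demand, as the system description states.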
  • the first customer network or the second customer network is a mobile network.
  • the first customer network or the second customer network is an edge computing network.
  • one of the first customer network and the second customer network is a radio access network, and the other is a mobile core.
  • the first performance data or the second performance data comprises metrics of end-to-end (E2E) delay in the transport network, delay jitter, packet loss, or bandwidth utilization.
  • E2E end-to-end
  • performance data of a transport path is identified by two PE routers of the transport network on the transport path.
  • the first performance data or the second performance data is received from a third controller function in the transport network.
  • the third controller function is a software defined network-controller (SDN-C) of the transport network.
  • SDN-C software defined network-controller
  • the first controller function is configured to subscribe to the first performance data with the third controller function in the transport network
  • the second controller function is configured to subscribe to the second performance data with the third controller function in the transport network
  • a CoS of the first customer network or the second customer network is associated with a QoS class comprising a set of requirements and a network slice.
  • the first controller function in the first customer network is configured to: send, upon determining to configure the transport path in the transport network for forwarding traffic of the first CoS from the first customer network to the second customer network, a request to the transport network for configuring the transport path in the transport network based on the request, the request comprising information of PE routers of the transport path, the first CoS, a traffic matrix and a set of transport path configuration constraints.
  • the second controller function in the second customer network is configured to: send, upon determining to configure the transport path in the transport network for forwarding traffic of the second CoS from the second customer network to the first customer network, a request to the transport network for configuring the transport path in the transport network based on the request, the request comprising information of PE routers of the transport path, the second CoS, a traffic matrix and a set of transport path configuration constraints.
  • the set of transport path configuration constraints comprises a deterministic latency, a non-deterministic latency, a bandwidth required for tunneling in the transport network, a path isolation requirement, a path protection level, or any combination thereof
  • the request for configuring the transport path is sent for creating the transport path based on the request.
  • the request for configuring the transport path is sent for deleting the transport path based on the request.
  • the request for configuring the transport path is sent for reconfiguring the transport path that has been previously configured.
  • determining whether to configure the transport path in the transport network comprises: determining whether the transport path satisfies a performance criterion.
  • determining whether the transport path satisfies the performance criterion comprises: determining whether a latency, jitter, or a bandwidth utilization of the transport path satisfies the performance criterion.
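That check reduces to comparing each monitored metric against a bound; the metric names and thresholds below are illustrative, since the disclosure leaves the concrete criterion open.

```python
def satisfies_criterion(metrics, bounds):
    """True if every bounded metric is within its bound.

    metrics/bounds: dicts keyed by metric name; a metric missing from
    `metrics` is treated as 0.0 (i.e., trivially within bound).
    """
    return all(metrics.get(name, 0.0) <= limit for name, limit in bounds.items())

bounds = {"latency_ms": 10.0, "jitter_ms": 2.0, "bw_utilization": 0.8}
ok = satisfies_criterion(
    {"latency_ms": 8.0, "jitter_ms": 1.2, "bw_utilization": 0.65}, bounds)
bad = satisfies_criterion(
    {"latency_ms": 12.0, "jitter_ms": 1.0, "bw_utilization": 0.5}, bounds)
# ok is True; bad is False (latency bound breached)
```

A path failing this check is a natural trigger for the reconfiguration request described in the surrounding aspects.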
  • each of the first controller function and the second controller function is configured to determine provider edge (PE) routers of the transport network for routing traffic between the first customer network and the second customer network through the transport network.
  • PE provider edge
  • the first controller function and the second controller function are configured to negotiate a traffic matrix for transmitting traffic of a CoS from one of the first customer network and the second customer network to the other one of the first customer network and the second customer network.
  • negotiation of the traffic matrix is based on a network policy of the first customer network or the second customer network, a demand estimation of the first customer network or the second customer network, a QoS requirement for transmitting the traffic flow, a capacity estimation of the first customer network or the second customer network, or any combination thereof.
  • each of the first controller function and the second controller function is configured to receive topology information of the transport network for determining PE routers of the transport network that receive traffic sent to the transport network.
  • the first controller function is configured to map a customer edge (CE) router in the first customer network to a first PE router in the transport network for traffic routing
  • the second controller function is configured to map a customer edge (CE) router in the second customer network to a second PE router in the transport network for traffic routing.
  • CE customer edge
  • each of the first controller function and the second controller function is configured to collect traffic data regularly.
  • an apparatus includes: a non-transitory storage medium storing instructions, and one or more processors in communication with the storage medium, wherein the one or more processors execute the instructions to perform any one of the preceding aspects.
  • a non-transitory computer-readable media storing computer instructions, that when executed by one or more processors, cause the one or more processors to perform any one of the preceding aspects.
  • FIG. 1 illustrates a diagram of an embodiment wireless communications network
  • FIG. 2 illustrates a diagram showing Third Generation Partnership Project (3GPP) control plane functions
  • FIG. 3 illustrates a diagram of an embodiment communications system highlighting per-service routing with integrated visibility of transport underlay
  • FIG. 4 illustrates a diagram of an embodiment communications system for configuring transport paths in a transport network
  • FIG. 5 illustrates a diagram of an embodiment communications system for dynamically provisioning and managing a transport network
  • FIG. 6 illustrates a diagram of an embodiment communications system highlighting interfacing between transport path managers (TPMs) and software defined network-controllers (SDN-Cs) of different domains;
  • TPMs transport path managers
  • SDN-Cs software defined network-controllers
  • FIG. 7 illustrates a diagram of embodiment communications for dynamically configuring transport paths of a transport network
  • FIG. 8 illustrates a diagram of an embodiment access points information element (IE);
  • FIG. 9 illustrates a diagram of an embodiment virtual network connection (VNC) traffic matrix IE
  • FIG. 10 illustrates a diagram of an embodiment performance monitoring (PM) data IE
  • FIG. 11 illustrates a diagram of embodiment communications highlighting monitoring and feedback of transport paths
  • FIG. 12 illustrates a diagram of an embodiment communications system highlighting user plane packet classification and forwarding
  • FIG. 13 illustrates a diagram of an embodiment communications system for distributed DC interconnect
  • FIG. 14 illustrates a diagram of an embodiment method for wireless communications
  • FIG. 15 illustrates a block diagram of an embodiment processing system
  • FIG. 16 illustrates a block diagram of a transceiver.
  • traffic engineered (TE) mobile network backhauls perform traffic path provisioning based generally on static traffic demand estimates, e.g., weekly or monthly.
  • static traffic path provisioning is unable to meet the needs of dynamically varying traffic demand in 5G networks. It is desirable to provide a mechanism whereby backhaul networks are able to dynamically provide traffic engineered paths to accommodate varying traffic demand and other QoS requirements.
  • Embodiments of the present disclosure provide systems and methods for dynamically traffic engineering paths in transport networks.
  • a new control plane function, a transport path manager (TPM)
  • TPM transport path manager
  • customer network such as a mobile network
  • a QoS class may be associated with a set of QoS requirements or characteristics.
  • the TPM may substantially continually collect data about traffic demand, as well as feedback data about transport paths configured in the transport network, and make a determination based on the collected data and feedback data.
  • a first TPM of a first customer network may negotiate, with a second TPM of a second customer network, a traffic matrix for transmitting traffic flows of a QoS class to the second customer network from the first customer network.
  • the QoS class may be associated with virtual network connection (VNC) resources provisioned for the traffic flows transmitted between the first customer network and the second customer network.
  • VNC virtual network connection
  • the first TPM may request a transport network to configure a transport path in the transport network for transmission of the traffic flows from the first customer network to the second customer network through the transport network, based on information of provider edge (PE) routers of the transport network that are determined by the first TPM, the QoS class, the traffic matrix and a set of transport path configuration constraints.
  • PE provider edge
  • the embodiments allow a TPM to dynamically request a transport network to configure transport paths based on QoS classes, traffic matrices and transport path configuration constraints that may vary dynamically. This allows a customer network to regularly or intermittently negotiate traffic matrices with another customer network according to varying traffic demand of the customer network. This further enables the transport network to configure transport paths to meet QoS requirements of traffic flows transmitted from the customer network to the other customer network through the transport network. Details of the embodiments will be provided in the following.
  • FIG. 1 illustrates a network 100 for communicating data.
  • the network 100 comprises a base station 110 having a coverage area 101, a plurality of mobile devices 120, and a backhaul network 130.
  • the base station 110 establishes uplink (dashed line) and/or downlink (dotted line) connections with the mobile devices 120, which serve to carry data from the mobile devices 120 to the base station 110 and vice-versa.
  • Data carried over the uplink/downlink connections may include data communicated between the mobile devices 120, as well as data communicated to/from a remote-end (not shown) by way of the backhaul network 130.
  • base station refers to any component (or collection of components) configured to provide wireless access to a network, such as an enhanced base station (eNB), a next generation gigabit NodeB (gNB), a transmit/receive point (TRP), a macro-cell, a femtocell, a Wi-Fi access point (AP), or other wirelessly enabled devices.
  • Base stations may provide wireless access in accordance with one or more wireless communication protocols, e.g., long term evolution (LTE), LTE advanced (LTE-A), High Speed Packet Access (HSPA), Wi-Fi 802.11a/b/g/n/ac, etc.
  • LTE long term evolution
  • LTE-A LTE advanced
  • HSPA High Speed Packet Access
  • Wi-Fi 802.11a/b/g/n/ac etc.
  • the term “mobile device” refers to any component (or collection of components) capable of establishing a wireless connection with a base station, such as a user equipment (UE), a mobile station (STA), and other wirelessly enabled devices.
  • the network 100 may comprise various other wireless devices, such as relays, low power nodes, etc.
  • FIG. 2 illustrates a diagram 200 showing Third Generation Partnership Project (3GPP) control plane functions for setting up user plane connections.
  • 3GPP control plane functions, e.g., an access and mobility management function (AMF), a session management function (SMF), etc.
  • AMF access and mobility management function
  • SMF session management function
  • FIG. 2 illustrates 3GPP control plane functions (e.g., an access and mobility management function (AMF), a session management function (SMF), etc.) that provide access and session handling capabilities for setting up user plane connections across an interface N3 (for a communication segment between a radio access network (RAN) and a user plane function (UPF)), across an interface N9 (for a segment between UPFs), and across an interface N6 (for a segment between a UPF and an edge network and/or other external destinations).
  • AMF access and mobility management function
  • SMF session management function
  • the control plane functions shown in FIG. 2 include a policy control function (PCF) 212, a network data analysis function (NWDAF) 214, an access and mobility management function (AMF) 216, and a session management function (SMF) 218.
  • PCF policy control function
  • NWDAF network data analysis function
  • AMF access and mobility management function
  • SMF session management function
  • The 3GPP specifications, including Technical Specification (TS) 23.501 and TS 23.502, describe these control plane and user plane functions in detail.
  • the SMF 218 is responsible for handling individual user sessions, in particular, IP addresses, routing and mobility.
  • the SMF 218 provisions user sessions that are subject to network and subscription policy as defined in the PCF 212.
  • the NWDAF 214 is responsible for network data analysis, i.e., analysis on data from various 3GPP network functions (NFs).
  • the AMF 216 is responsible for handling connection and mobility management.
  • control plane functions communicate with other functions through their specific interfaces.
  • the PCF 212 communicates via an interface Npcf
  • the NWDAF 214 communicates via an interface Nnwdaf
  • the AMF 216 communicates via an interface Namf
  • the SMF 218 communicates via an interface Nsmf.
  • UEs may access a (R)AN 232 for wireless communication, and traffic may be routed between the (R)AN 232 and a UPF 234 via N3, between the UPF 234 and a UPF 236 via N9, and between the UPF 236 and an application server (AS) 238 via N6.
  • the interface between the UPF 236 and the AS 238 may be N6 or a 3GPP external network interface.
  • the end-to-end connections for those interfaces may traverse a backhaul network or a data center (DC) network.
  • For example, the connection over N3 traverses a backhaul/DC network 240, the connection over N9 traverses a backhaul/DC network 242, and the connection over N6 traverses a backhaul/DC network 244.
  • Each backhaul or DC network may be referred to as a transport network, and traffic is routed or transported through the transport network corresponding to an interface.
  • the corresponding transport underlay for these interfaces N3, N6 and N9 may need to be traffic engineered to support various 5G use cases.
  • SDN software defined network
  • SDN-Cs software defined network-controllers
  • traffic engineered (TE) mobile network backhauls perform traffic path provisioning based generally on static engineering estimates, where traffic engineering is configured periodically (e.g., weekly or monthly) based on demand and other performance criteria.
  • the backhauls generally provide statically traffic engineered paths for forwarding traffic.
  • the demand estimate varies much more dynamically than in previous-generation systems. For example, the demand may vary on the order of several minutes in the worst cases.
  • backhaul networks that provide capabilities to reprogram routers and switches to meet the dynamically changing traffic demand profiles are desirable.
  • the base capability found in Internet Engineering Task Force (IETF) Abstraction and Control of Traffic Engineered Networks (ACTN) may be desirably applied in a 3GPP mobile network.
  • the IETF ACTN capabilities in the transport underlay need to interact with a mobile network controller to know how to program the routes based on traffic demand information, slices, quality of service (QoS) , and network policies.
  • a network policy may specify packet data network (PDN) session establishment and detach information for each transport path, traffic routing and re-routing information derived from performance data, etc.
  • PDN packet data network
  • the 3GPP mobile network also requires dynamic feedback from the underlay transport network to recalculate projected demand for TE paths on a continuous basis.
  • the backhaul and DC networks may support or provide dynamically traffic engineered paths to accommodate the dynamically varying traffic demand, as well as other requirements, such as latencies (including both deterministic latency and non-deterministic latency), jitter (in case of non-deterministic latency), bandwidths, and protection levels. These paths may be set up for data plane as well as control plane traffic. It is also desirable that the backhaul and DC networks may support or provide provisioning of the dynamic paths on a per slice and per QoS class basis, provide feedback (or monitoring) information of these paths from the transport network underlay, and provide capabilities for configuring these paths across more than one administrative domain.
  • Embodiments of the present disclosure provide systems and methods for dynamically traffic engineering paths in transport networks.
  • traffic is transmitted over an end-to-end connection path from a source network to a destination network.
  • the source network and the destination network are referred to as customer networks.
  • Communication of traffic between the customer networks may be through a backhaul network or data center (DC) .
  • DC data center
  • traffic is routed from the source network to the destination network through the backhaul network or DC.
  • the backhaul network or DC for routing traffic between two customer networks is referred to as a transport network.
  • a customer network may be a mobile network, or an edge computing network.
  • one of the source network and the destination network is a radio access network, and the other one is a mobile core.
  • the present disclosure uses 3GPP mobile networks as examples for merely illustrative purposes.
  • Other networks may also be applicable, such as a content delivery network (CDN) or a DC network, without departing from the principle and spirit of the present disclosure.
  • CDN content delivery network
  • traffic communicated between the customer networks through the transport network may also be referred to as communicated across different domains.
  • the source network, destination network and the transport network may be viewed as being associated with different domains.
  • the 3GPP mobile networks are associated with 3GPP domains
  • the transport network is associated with a transport domain.
  • traffic is communicated across 3GPP domains and the transport domain.
  • the embodiments of the present disclosure introduce a new function, i.e., a transport path manager (TPM) function, on the control plane in a customer network.
  • the terms of “TPM function” and “TPM” will be used interchangeably throughout the disclosure.
  • the TPM is configured to dynamically determine whether to configure a transport path in a transport network for forwarding traffic of a class of service (CoS), between the customer network and another customer network, and request the transport network to configure the transport path corresponding to the CoS and according to a set of requirements (or constraints).
  • the TPM may request to delete or reconfigure a transport path that has previously been configured, or create a new transport path.
  • a CoS is associated with a quality of service (QoS) class.
  • a QoS class may be associated with a set of QoS requirements or characteristics, and also associated with virtual network connection (VNC) resources provisioned for traffic transmitted between the two customer networks.
  • the TPM may collect traffic demand data and performance data of configured transport paths, based on which the TPM may make determination of whether to configure a transport path.
  • the requirements or constraints may include a deterministic latency (i.e., a maximum latency), a non-deterministic latency (including an average latency and jitter), a guaranteed bandwidth, an isolation requirement, and a protection level.
  • Each customer network may include a TPM.
  • a TPM in a first customer network (such as a 3GPP mobile network) may interact with a TPM function in a second network to negotiate traffic matrices for transmitting traffic to the second network.
  • Each TPM may interact with a SDN-C in an associated transport network for configuring and managing traffic engineered paths.
  • a TPM may use collected data from control functions, such as traffic related data analytics from the NWDAF, as well as policy information from the PCF, for determining transport paths needed.
  • the TPM functions may be used to set up and tune the transport paths (i.e., the traffic engineered paths) on a continuous basis.
  • the embodiments also provide a communications system that includes TPMs and SDN-Cs, and utilizes communications between the TMPs and SDN-Cs, and between TPMs in multiple 3GPP provider domains to set up TE paths.
  • the embodiments allow operators to specify transport path related policies, initial engineering estimates, and other configurations.
  • the transport path related policies may specify requirements (or rules) for selecting a path from multiple viable paths for a destination, allowing for optimizing capacity, guarantees of services, degree of resilience, and measurement of performance (e.g., frequency) , etc. Details will be provided in the following description.
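A transport path related policy of this kind can be pictured as a set of minimum requirements plus an optimization goal. The following is a minimal sketch; the field names (`capacity_mbps`, `resilience`, `latency_ms`) and the weighting of goals are assumptions for illustration, not a format defined by the disclosure.

```python
# Illustrative sketch (assumed field names): select one path from
# multiple viable paths for a destination according to a policy.

def select_path(candidates, policy):
    """Pick the candidate path that best satisfies the policy.

    candidates: list of dicts with 'capacity_mbps', 'resilience'
        (e.g., number of disjoint backup paths), and 'latency_ms'.
    policy: minimum requirements plus an optimization goal.
    """
    # Filter out paths that violate the policy's hard requirements.
    viable = [p for p in candidates
              if p["capacity_mbps"] >= policy["min_capacity_mbps"]
              and p["resilience"] >= policy["min_resilience"]]
    if not viable:
        return None
    # Optimize for the goal named by the policy.
    if policy["optimize"] == "latency":
        return min(viable, key=lambda p: p["latency_ms"])
    return max(viable, key=lambda p: p["capacity_mbps"])

paths = [
    {"name": "a", "capacity_mbps": 100, "resilience": 1, "latency_ms": 20},
    {"name": "b", "capacity_mbps": 400, "resilience": 0, "latency_ms": 5},
    {"name": "c", "capacity_mbps": 200, "resilience": 1, "latency_ms": 12},
]
policy = {"min_capacity_mbps": 150, "min_resilience": 1, "optimize": "latency"}
best = select_path(paths, policy)
```

Here path "b" is fastest but fails the resilience requirement, so the policy selects path "c".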
  • FIG. 3 illustrates a diagram of an embodiment communications system 300 highlighting per-service routing with integrated visibility of transport underlay.
  • each service is classified based on a set of QoS requirements that is requested and the slice it belongs to. Traffic of a service is then routed based on the class of the service indicating the QoS requirements and the slice.
  • FIG. 3 shows an end-to-end view of this routing.
  • per-service routing refers to routing per service within a virtual network slice, or per slice itself.
  • the GSM Association (GSMA) defines a network slice as follows:
  • “[a] network slice is a logical network that provides specific network capabilities and characteristics in order to serve a defined business purpose of a customer. Network slicing allows multiple virtual networks to be created on top of a common shared physical infrastructure.
  • a network slice consists of different subnets, example: Radio Access Network (RAN) subnet, Core Network (CN) subnet, Transport network subnet.” (GSMA, “Network Slicing Use Case Requirements”, April 2018, page 13. See https://www.gsma.com/futurenetworks/wp-content/uploads/2018/04/NS-Final.pdf.)
  • a “subnet” in this definition may correspond to a slice segment together with isolation for compute and storage in the embodiments.
  • a TE network slice is a collection of resources that is used to establish a logically dedicated virtual network over one or more TE networks.
  • TE network slicing allows a network operator to provide dedicated virtual networks for applications/customers over a common network infrastructure.
  • the logically dedicated resources are a part of the larger common network infrastructures that are shared among various TE network slice instances, which are the end-to-end realization of TE network slicing, consisting of the combination of physically or logically dedicated resources.
  • a slice of a network may be associated with a set of resources of the network.
  • a transport slice may be associated with a set of transport network resources.
  • a 3GPP slice may be associated with a set of 3GPP network resources. There may be multiple slice instances corresponding to a slice, and they may be dedicated or shared.
  • Class of service indicates classification of services into categories, so that traffic of the services is treated according to the classification.
  • CoS may be associated with a set of QoS characteristics or requirements for the slice or service.
  • 3GPP TS 23.501, Release 15, section 5.7.4 shows the mapping from a 5G QoS identifier (5QI), which identifies a CoS, to QoS characteristics.
  • 5QI value “1” corresponds to a set of QoS characteristics of 100ms packet delay budget, and a 2000ms averaging window.
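The lookup described above can be sketched as a table keyed by 5QI. Only the 5QI 1 row cited above is included; the tuple layout (resource type, packet delay budget, averaging window) is an assumption chosen for the example, not the full TS 23.501 table.

```python
# Sketch of a 5QI-to-QoS-characteristics lookup. Only the 5QI 1 row
# mentioned in the text is included; the structure is the point.

QOS_CHARACTERISTICS = {
    # 5QI: (resource type, packet delay budget in ms, averaging window in ms)
    1: ("GBR", 100, 2000),
}

def characteristics_for(fiveqi):
    """Return the QoS characteristics mapped to a given 5QI value."""
    return QOS_CHARACTERISTICS[fiveqi]

res_type, pdb_ms, avg_window_ms = characteristics_for(1)
```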
  • the communications system 300 is a 3GPP system and includes an access provider network 310, a transport provider network 330 and a mobile core provider network 350.
  • the access provider network 310 includes an application server (AS) 312 and a UPF 314 attached to the application server 312.
  • the application server 312 may be a mobile edge computing (MEC) server or a similar application server.
  • the access provider network 310 also includes a plurality of next generation node Bs (gNBs) , e.g., gNB1-gNB5, and each may be associated with one or more attached UPFs.
  • UPF1 316 is attached to gNB1-gNB3, and UPF2 318 is attached to gNB3-gNB5.
  • the transport provider network 330 includes a plurality of routers, including provider edge (PE) routers R1-R4. Traffic may be routed in the transport provider network 330 through the PE routers R1-R4 via routing paths a, b, c, d and e.
  • the mobile core provider network 350 may also include a plurality of UPFs, such as one or more UPF-x 352.
  • the UPFs 314, 316 and 318 attached to the gNBs perform functions that handle Packet Data Network (PDN) flows, uplink classification (UL CL) , and branching point (BP) as described in TS 23.501.
  • the UPF-x 352 in FIG. 3 is an anchor point to communicate with external networks, such as the Internet.
  • Traffic may be routed from a gNB to a UPF in the access provider network 310, and then routed to another UPF through the transport provider network 330.
  • Virtual network connections (VNCs) between destinations across two 3GPP nodes may be established for forwarding traffic, and these VNCs make it possible to forward packets within the VNCs that meet criteria for QoS classification and isolation, or other specific requirements of a slice.
  • a VNC corresponds to a transport abstraction for a 3GPP slice instance.
  • FIG. 3 shows per-service and/or per-slice VNCs 362, 364 and 366, and per flow tunnel connections 370.
  • the UPFs may use these VNCs on a per QoS, slice and/or destination basis.
  • the PE routers may classify traffic based on the QoS and use the destination for setting up the forwarding paths.
  • the classification at a source (or input) PE router may be based on QoS class and destination.
  • a path used herein is one that is set up using path management and traffic engineering, and continually modified based on continuous monitoring feedback and dynamic estimates.
  • the number of VNCs in the backhaul may be the product (number of QoS classes * number of transport slices * number of destinations).
  • in contrast, the number of per-flow virtual connections or GTP connections is (number of QoS classes * number of flows * number of destinations). That is, the VNCs set up for forwarding traffic flows are scaled not by the number of traffic flows, but by the number of transport slices that the associated services belong to.
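With assumed numbers, the two scaling formulas above compare as follows (the counts of classes, slices, destinations and flows are illustrative, not from the disclosure):

```python
# Worked comparison of per-slice VNC state vs. per-flow tunnel state,
# using the two formulas stated above with assumed numbers.

qos_classes = 4
transport_slices = 3
destinations = 10
flows = 10_000  # concurrent traffic flows

vncs = qos_classes * transport_slices * destinations       # per-slice VNCs
per_flow_tunnels = qos_classes * flows * destinations      # per-flow state
```

The per-slice approach keeps the state count independent of the number of flows, which is why it scales far better in the backhaul.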
  • a transport path manager (TPM) function in the 3GPP control plane interacts with a SDN-C in the data plane and performs the necessary provisioning. It may be noted that the TPM may be part of the NWDAF or a separate functional entity.
  • Traffic Engineered paths may be supported for a number of scenarios. For example, traffic engineering may be used to provision a data plane path from a UE to a destination (e.g., a MEC server) within an operator, or a path across two operators for roaming. In each of these cases, there may be backhaul, data center and core networks (and radio networks) involved.
  • FIG. 4 illustrates a diagram of an embodiment communications system 400 highlighting interactions between a mobile network 410, a mobile network 430 and a transport network 450 for configuring transport paths in the transport network 450.
  • FIG. 4 shows processes for estimating traffic demand within a 3GPP domain (the mobile network 410 or 430) , negotiating a traffic matrix for traffic engineered paths across two 3GPP domains (the mobile networks 410 and 430) and managing transport paths.
  • the mobile network 410 includes a transport path manager (TPM) 412, a NWDAF 414, a SMF 416 and a SDN-C 418.
  • the mobile network 430 includes a transport path manager (TPM) 432, a NWDAF 434, a SMF 436 and a SDN-C 438.
  • a UE 402 communicates with a gNB 404 in the mobile network 410. Traffic may be transmitted from the gNB 404 to a UPF 406 in the mobile network 410, routed by the transport network 450 on a path including PE routers 452 and 454, forwarded to a UPF 442 and then to an application server 440 in the mobile network 430.
  • Communication between the UE 402 and the gNB 404 traverses a radio network.
  • Communication between the gNB 404 and the UPF 406 is through the interface N3 traversing a data center (or a transport network) .
  • Communication between the UPF 406 and the UPF 442 is through the interface N9 traversing a backhaul network (or a transport network) .
  • Communication between the UPF 442 and the application server 440 is through the interface N6 traversing a data center (or a transport network) .
  • the path for communicating traffic between the UE 402 and the application server 440 is referred to as an end-to-end connection path 460.
  • a TPM of a network may be configured to: collect data (e.g., user session information, traffic volume) regarding traffic demand of the network; collect topology information of a transport network that is used to forward traffic between the network and other networks; negotiate with a TPM of another network for a traffic matrix; and collect performance data regarding transport paths of a transport network for routing traffic between the network and other networks through the transport network.
  • the TPM may be configured to dynamically determine whether to request to configure a transport path in a transport network for routing traffic from the network to another network through the transport network, based on the collected data, estimates and/or requirements, such as QoSs determined, demands estimated, traffic matrix negotiated, PE routers of the transport network that are determined, and one or more transport path configuration constraints.
  • a TPM may subscribe to session and other traffic related data from a NWDAF (e.g., the NWDAF 414 or 434) , based on which demand is estimated (steps 472 and 474) .
  • the two 3GPP domains, through the TPMs 412 and 432, may negotiate paths for communication between the two 3GPP domains based on transport path related policies of the mobile networks 410 and 430, configuration information, and other requirements or information, such as estimates of communication bandwidths, latencies, protection and isolation levels, and demand estimates obtained at steps 472 and 474.
  • a transport path related policy specifies rules for selecting a path corresponding to a destination, allowing for optimizing capacity, guarantees of services, degree of resilience, and measurement of performance (e.g., frequency) .
  • the configuration information may include information about device and virtual functions, locations, addressing and discovery of nodes, or other information addressed by a 3GPP NRF (network repository function) , as specified in 3GPP TS 23.501.
  • Path messages e.g., for creating, modifying or deleting paths, may be sent to SDN-controllers (e.g., the SDN-C 418, the SDN-C 438 and a SDN-C 456 of the transport network 450) to set up and manage TE paths (steps 478, 480 and 482) .
  • the TPMs 412 and 432 may also receive feedback information, from the SDN-C 418, the SDN-C 438 and the SDN-C 456, regarding the path performance.
  • the communications system 400 may use an estimate-control-feedback loop (estimate: steps 472, 474 and 476; control: steps 478, 480 and 482) to configure traffic engineered paths for communications between the mobile networks 410 and 430 through the transport network 450.
  • This estimate-control-feedback loop may be repeated continuously to adjust (or reconfigure) paths, based on traffic demand from 3GPP users and control plane traffic, and path availability in the transport network. More details regarding configuring the TE paths will be provided in the following.
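The estimate-control-feedback loop can be sketched as a repeated sequence of three operations. The callables below are toy stand-ins (all names assumed) for the NWDAF-based demand estimation, the path configuration requests sent to the SDN-Cs, and the monitoring feedback returned by the transport network.

```python
# Minimal sketch of the estimate-control-feedback loop described above.
# 'estimate' stands in for steps 472/474/476, 'control' for steps
# 478/480/482, and 'feedback' for path monitoring data.

def run_loop(estimate, control, feedback, iterations):
    state = {"paths": {}, "demand": None}
    for _ in range(iterations):
        state["demand"] = estimate()       # estimate traffic demand
        control(state)                     # (re)configure TE paths
        state["metrics"] = feedback()      # collect path performance
    return state

# Toy stand-ins so the loop is runnable: demand changes each iteration,
# and control provisions bandwidth on a single A->B path to match it.
demands = iter([80, 120, 90])
state = run_loop(
    estimate=lambda: next(demands),
    control=lambda s: s["paths"].update({"A->B": s["demand"]}),
    feedback=lambda: {"e2e_delay_ms": 12},
    iterations=3,
)
```

After three iterations the provisioned path tracks the most recent demand estimate, which is the intended "adjust continuously" behavior.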
  • FIG. 5 illustrates a diagram of an embodiment communications system 500 highlighting interactions between a customer network 510 (Provider-A) , a customer network 530 (Provider-B) , and a transport network 550 (Provider-TP) for dynamically provisioning and managing the transport network 550.
  • Traffic may be routed by the transport network 550 between the customer network 510 and the customer network 530.
  • FIG. 5 shows a general model for dynamically provisioning and managing the transport network 550.
  • the model has three providers, i.e., Provider-A (i.e., the customer network 510) , Provider-B (i.e., the customer network 530) , and Provider-TP (i.e., the transport network 550) .
  • Provider-A and Provider-B may be 3GPP network providers, and may be access providers or core network providers.
  • the Provider-A and Provider-B provide 3GPP access or core services that are consumed by UEs
  • the Provider-TP provides transport connectivity services that are consumed by the 3GPP Provider-A and B.
  • Table 1 below shows examples of networks that the Provider A, Provider B and the Provider TP may provide.
  • Table 1 shows examples of 3GPP providers and transport providers.
  • the customer network 510 includes function entities, such as a NWDAF 512, a SMF 514, a UPF 516 and a TPM 518.
  • the customer network 530 includes function entities, such as a NWDAF 532, a SMF 534, a UPF 536 and a TPM 538.
  • the networks 510 and 530 may also include SDN-Cs for provisioning and managing routers for routing traffic within the respective networks.
  • a SMF of a network may program each UE session’s forwarding behavior at a UPF of the network.
  • a NWDAF may collect data from each network function (NF) entity, such as a SMF or a gNB.
  • each TPM may subscribe to traffic related information from a NWDAF, such as new, modified and released session information, for calculating and requesting traffic demands for a transport path.
  • the TPM 518 may subscribe to traffic related information from the NWDAF 512
  • the TPM 538 may subscribe to traffic related information from the NWDAF 532.
  • a TPM e.g., the TPM 518 or the TPM 538, may determine control plane transport requirements based on the number of sites (e.g., data centers) and distribution of 3GPP NFs across these sites.
  • 3GPP provider networks may be radio access or core networks.
  • FIG. 5 shows two PE routers 552 and 554 in the transport network 550 for routing traffic between the networks 510 and 530.
  • a SDN-C 556 is responsible for provisioning and managing routers of the transport network 550 for traffic routing.
  • a TPM may obtain a traffic (or connection) demand profile for transport as described above, e.g., from a NWDAF.
  • each TPM may interact with the SDN-C 556.
  • the SDN-C 556 may provide information for the 3GPP providers (i.e., the Provider-A and Provider-B) to discover the transport topology and the relationship to the UPFs.
  • the TPMs in different provider networks e.g., the TPMs 518 and 538) may negotiate traffic matrices, e.g., continuously, corresponding to the demand estimate.
  • each TPM (the TPMs 518 and 538) and the SDN-C 556 may set up, modify and otherwise manage traffic engineered paths.
  • the TPMs may also receive (in addition to the connection demand profile) feedback information from the SDN-C 556.
  • the feedback information may include information regarding transport paths of the transport network 550, such as performance metrics of transport paths that are monitored.
  • the feedback information may include an end-to-end (E2E) delay in the transport network, delay jitter, packet loss, or bandwidth utilization, etc., of one or more transport paths of the transport network. This feedback information may be used to continuously adapt the routes and pick the best transport paths. Details of these procedures will be provided below.
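One hedged way the feedback metrics above could be combined to "pick the best transport paths" is a weighted score per monitored path; the weights and metric names below are purely illustrative assumptions.

```python
# Illustrative scoring of monitored transport paths from the feedback
# metrics listed above (E2E delay, jitter, packet loss, utilization).
# Lower score is better; the weights are assumptions, with packet loss
# penalized most heavily.

def path_score(m):
    return (m["e2e_delay_ms"]
            + 2.0 * m["jitter_ms"]
            + 1000.0 * m["packet_loss"]
            + 10.0 * m["utilization"])

feedback = {
    "path1": {"e2e_delay_ms": 10, "jitter_ms": 2,
              "packet_loss": 0.001, "utilization": 0.60},
    "path2": {"e2e_delay_ms": 8, "jitter_ms": 1,
              "packet_loss": 0.020, "utilization": 0.40},
}
best = min(feedback, key=lambda name: path_score(feedback[name]))
```

Here path2 has lower delay and jitter, but its 2% packet loss dominates the score, so path1 is preferred.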
  • two interfaces may be provided, i.e., a TPM-TPM interface (I_TPM) and a SDN-C-TPM interface (I_TP).
  • the I_TPM is an interface by which TPMs in two domains interact to negotiate a traffic matrix for paths across access points (e.g., UPFs) of each domain, and across multiple classes of service (corresponding to slices and per-flow QoS).
  • the I_TP is an interface by which a TPM may receive transport path topology and feedback information from the transport network, and may request TE paths to be provisioned in the transport network.
  • the interface between the SDN-C 556 and a TPM may also be referred to as the TPM-SDN-C interface.
  • the I_TP is an interface across the two domains of the customer network 510 (or 530) and the transport network 550.
  • a SDN-C in a network where 3GPP functionality is deployed may be the entity interfacing a TPM of the network. That is, a TPM of a customer network may not directly interface with a SDN-C of a transport network; instead, the TPM may interface with the SDN-C of the transport network through a SDN-C of the customer network.
  • FIG. 6 illustrates such an embodiment.
  • FIG. 6 illustrates a diagram of an embodiment communications system 600 highlighting interfacing between TPMs and SDN-Cs of different domains.
  • the communications system 600 is similar to the communications system 500.
  • the communications system 600 includes a customer network 610 communicating with a customer network 630 through a transport network 650.
  • the customer network 610, the customer network 630 and the transport network 650 may be associated with different providers.
  • the customer network 610 includes a NWDAF 612, a SMF 614, a UPF 616 and a TPM 618
  • the customer network 630 includes a NWDAF 632, a SMF 634, a UPF 636 and a TPM 638. Each of these functions operates similarly to that shown in FIG. 5.
  • Traffic communicated between the customer network 610 and the customer network 630 may be forwarded from the UPF 616 to a customer edge (CE) router 620, routed by the transport network 650 on a path including two PE routers 652, 654, and forwarded by a CE router 640 of the customer network 630 to the UPF 636.
  • the TPMs 618 and 638 may negotiate, through the interface I_TPM, with each other for a traffic matrix associated with transmitting traffic from one customer network to the other. Different from the communications system 500 in FIG. 5, in the communications system 600, each TPM communicates with a SDN-C 656 of the transport network 650 through a local SDN-C.
  • the TPM 618 communicates with a SDN-C 622 through an interface I_TP, and the SDN-C 622 communicates with the SDN-C 656 through an interface I_TP'.
  • the TPM 638 communicates with a SDN-C 642 through an interface I_TP, and the SDN-C 642 communicates with the SDN-C 656 through an interface I_TP'.
  • the SDN-Cs 622 and 642 manage CE routers and data center routers of the customer networks 610 and 630, respectively.
  • the SDN-C 656 manages transport and backhaul resources and paths, and the TPMs manage the overlay path between the two 3GPP domains (i.e., the customer networks 610 and 630) .
  • Control flows and high-level APIs may be provided for implementing the functionalities of the TPMs and the SDN-Cs.
  • FIG. 7 illustrates a diagram of embodiment communications 700 for dynamically configuring transport paths of a transport network.
  • FIG. 7 shows control flows across a TPM-A 702 of a first customer network (i.e., a network A) , a TPM-B 704 of a second customer network (i.e., a network B) , and a SDN-C 706 of the transport network.
  • the first and the second customer networks communicate traffic through routing of the transport network.
  • the TPMs and the SDN-C exchange messages to discover transport topology, to negotiate traffic matrix between two 3GPP domains, and to set up and modify transport paths.
  • In FIG. 7, it is assumed that business relationships have been established across the first customer network, the second customer network and the transport network prior to control message exchanges. This implies that trust and security relationships have also been established amongst the entities of the TPM-A 702, the TPM-B 704 and the SDN-C 706.
  • communications are performed between the TPM-A 702, the TPM-B 704 and the SDN-C 706 for topology mapping, traffic matrix negotiation, and path setup and modification.
  • the topology mapping involves interactions between the SDN-C 706 and each of the TPM-A 702 and the TPM-B 704.
  • the SDN-C 706 maps CE routers (consequently, UPFs that the CE routers are attached to) in the 3GPP domain (e.g., the first or the second customer network) to PE routers in the transport domain (i.e., the transport network) . This helps negotiate and build end-to-end paths between CEs (UPFs) in one domain and CEs (UPFs) in the other domain.
  • the traffic matrix negotiation includes negotiation of a traffic matrix by the TPM-A 702 and the TPM-B 704 based on estimation of capacity and other class of service needs, such as latencies, packet loss, or resilience.
  • the estimation is unidirectional. That is, the estimation is performed for traffic transmitted from one customer network to the other, and the traffic matrix negotiated is also for the traffic transmitted from one customer network to the other.
  • the TPM-A 702 and the TPM-B 704 may negotiate one traffic matrix for transmissions from the first customer network to the second customer network, and one traffic matrix for transmissions from the second customer network to the first customer network.
  • the estimation may be revised based on a network policy and demand estimates via feedback (e.g., transport path information feedback, call setup rates, etc. ) .
  • a traffic matrix is a two-dimensional matrix with its ij-th element denoting the amount of traffic sourcing from a node i and exiting at a node j.
  • Each node may be an individual router, or a site that contains multiple routers.
  • the traffic matrix may represent peak traffic, 95th percentile traffic, or traffic at a specific time.
  • the traffic matrix depicts how much and where traffic enters a network, its distribution inside the network, and at what places the traffic exits the network. It may be referred to as a traffic map of the network.
  • a traffic matrix is useful for network planning.
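A small example of such a traffic matrix, with three assumed nodes and illustrative 95th-percentile values in Mbps: element [i][j] holds the traffic entering the network at node i and exiting at node j, exactly as defined above.

```python
# Example traffic matrix for three nodes (values assumed, in Mbps at
# the 95th percentile). Row index = source node i, column = exit node j.

traffic_matrix = [
    #  to n0  to n1  to n2
    [     0,   120,    30],  # from n0
    [    90,     0,    60],  # from n1
    [    10,    40,     0],  # from n2
]

# How much traffic enters the network at node n0 (row sum):
ingress_n0 = sum(traffic_matrix[0])
# How much traffic exits the network at node n1 (column sum):
egress_n1 = sum(row[1] for row in traffic_matrix)
```

Row and column sums give exactly the "how much and where traffic enters and exits" view that makes the matrix useful as a traffic map for planning.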
  • Each TPM may make a virtual network connection (VNC) request to the SDN-C 706 for configuring (or setting up) a set of transport paths per (PEin, PEout, CoS) with path configuration constraints (also referred to as path constraints) .
  • PEin represents a PE router of the transport network that receives traffic from outside the transport network
  • PEout represents a PE router of the transport network that transmits traffic from the transport network to another network.
  • the PEin may also be referred to as a source PE router, and the PEout may also be referred to as a destination PE router.
  • a path constraint may include a deterministic latency (i.e., maximum latency), a non-deterministic latency (including an average latency and jitter), a guaranteed bandwidth, an isolation requirement (e.g., hard isolation, soft isolation, or no isolation), or a protection level of a transport path. Isolation may be used for providing better QoS and security for flows. A protection level may refer to a level of security and an amount of sharing of resources. Hard isolation means that all transport resources (including resources in all layers, packet resources and/or optical resources) allocated for a VNC are dedicated to the VNC without sharing with another VNC. Soft isolation is generally the same as hard isolation except that optical resources may be shared with other VNCs. No isolation means that the VNC is permitted to share all transport resources with other VNCs. Multiple transport paths may be set up for one set of (PEin, PEout, CoS), e.g., for supporting resilience or backup in case of failure.
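A VNC request per (PEin, PEout, CoS) carrying these path constraints might be represented as below. The field names and structure are assumptions for illustration; the disclosure does not define a wire format.

```python
# Sketch of a VNC request keyed by (PEin, PEout, CoS) with the path
# configuration constraints described above. All field names assumed.

ISOLATION_LEVELS = ("none", "soft", "hard")  # per the definitions above

def make_vnc_request(pe_in, pe_out, cos, *, max_latency_ms=None,
                     avg_latency_ms=None, jitter_ms=None,
                     guaranteed_bw_mbps=None, isolation="none",
                     protection_level=0):
    if isolation not in ISOLATION_LEVELS:
        raise ValueError("unknown isolation level: %s" % isolation)
    return {
        "key": (pe_in, pe_out, cos),
        "constraints": {
            "max_latency_ms": max_latency_ms,      # deterministic latency
            "avg_latency_ms": avg_latency_ms,      # non-deterministic
            "jitter_ms": jitter_ms,
            "guaranteed_bw_mbps": guaranteed_bw_mbps,
            "isolation": isolation,
            "protection_level": protection_level,
        },
    }

req = make_vnc_request("PE1", "PE2", cos=1, max_latency_ms=100,
                       guaranteed_bw_mbps=200, isolation="soft")
```

Keying on the (PEin, PEout, CoS) tuple matches the per-class, per-endpoint-pair path setup described above, and several requests with the same key can model multiple paths for resilience.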
  • Each TPM may also indicate a performance monitoring (PM) policy (i.e., delegate or not, which will be discussed later) to the SDN-C 706 for monitoring performance of a created path.
  • the PM policy may specify requirements or rules for monitoring and measuring performance.
  • the SDN-C 706 may monitor the path, and feed back performance data of the monitored path to a corresponding TPM.
  • the performance data may be used to determine whether the path is to be reconfigured or deleted.
  • Each TPM may further indicate PM data sets of interest, such as performance metrics, and a mode of operation, such as a time-scale (e.g., a periodicity of PM data feedback) , event set, etc.
  • the performance metrics may include an end-to-end (E2E) delay in the transport network, delay jitter, packet loss, or bandwidth utilization.
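The PM policy elements described above (delegation, the PM data sets of interest, and the mode of operation with its time-scale and event set) can be collected into a single structure. All names below are assumptions for illustration.

```python
# Illustrative PM policy combining the elements described above:
# delegation, the metric sets of interest, and the mode of operation
# (reporting periodicity and event set). Names are assumptions.

pm_policy = {
    "delegate": True,  # SDN-C monitors the path on the TPM's behalf
    "metrics": ["e2e_delay", "jitter", "packet_loss",
                "bandwidth_utilization"],
    "mode": {
        "period_s": 60,  # time-scale (periodicity) of PM data feedback
        "events": ["threshold_crossed", "path_down"],
    },
}

def wants_metric(policy, metric):
    """Check whether a given metric is in the policy's data sets."""
    return metric in policy["metrics"]
```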
  • the TPM-A 702 sends a “sets up access points” message to the SDN-C 706 (step 712) .
  • the message requests to set up access points with the SDN-C 706, i.e., mapping CE routers of the first customer network to PE routers of the transport network. This message triggers active access point configuration and mapping information exchange between CE routers associated with UPFs of the first customer network and PEs of the transport network. Mapping CE routers to PE routers enables the TPM-A 702 to determine PE routers of the transport network, based on which transport (or routing) paths may be configured.
  • the SDN-C 706 may respond to the TPM-A 702 with a confirmation message for setting up the access points (step 714) .
  • the TPM-B 704 may also send a “sets up access points” message to the SDN-C 706 (step 716) .
  • the message requests to set up access points with the SDN-C 706, i.e., mapping CE routers of the second customer network and PE routers of the transport network.
  • the SDN-C 706 may respond to the TPM-B 704 with a confirmation message for setting up the access points (step 718) . Consequently, each of the TPM-A 702 and TPM-B 704 determines PE routers on transport paths of the transport network for routing traffic from its corresponding network through the transport network.
  • the SDN-C 706 is ready to set up VNCs for the first and the second customer networks, based on the determined PE routers.
  • the SDN-C 706 may wait for a trigger provided by the TPM-A 702 or the TPM-B 704 for setting up a VNC.
  • the TPM-A 702 and TPM-B 704 may regularly or intermittently perform topology mapping with the SDN-C 706, which helps dynamically update the mapping of CE routers and PE routers, and consequently, update transport paths that have been configured.
  • the TPM-A 702 and TPM-B 704 may negotiate with each other for traffic matrices.
  • the TPM-A 702 may negotiate a traffic matrix with the TPM-B 704 for traffic transmitted from the first customer network to the second customer network (step 720) .
  • the TPM-B 704 may negotiate a traffic matrix with the TPM-A 702 for traffic transmitted from the second customer network to the first customer network (step 722) .
  • the TPM-A 702 and TPM-B 704 each may regularly or intermittently negotiate a traffic matrix with the other one based on dynamically changing network capacity and other traffic related conditions, such as quality of service (QoS) requirements, and slices.
  • Each of the TPM-A 702 and TPM-B 704 may send a message, such as a “Negotiate VNC’s Traffic Matrix” message, to the other TPM requesting negotiation of a traffic matrix.
  • This message triggers negotiation of a VNC’s traffic matrix that is going to be instantiated in the transport network.
  • the network A provides a unidirectional traffic matrix that expresses VNC’s connectivity from the network A’s Ingress to the network B’s Egress arrangement.
  • the network B provides a unidirectional traffic matrix from network B’s Ingress to network A’s Egress.
  • the TPM-A 702 may instantiate a VNC for traffic transmitted from the first customer network (the network A) to the second customer network (the network B) with a path monitoring (PM) policy (step 724) .
  • the PM policy specifies a set of performance parameters that are to be monitored for the VNC that is instantiated.
  • the TPM-A 702 may send a request to the SDN-C 706, requesting the SDN-C 706 to set up (or create) the requested VNC, and monitor the VNC according to the PM policy.
  • the monitored metrics may be fed back to the TPM-A 702, based on which, the TPM-A 702 may determine whether to later reconfigure, modify, or delete the VNC.
  • the request may include a CoS associated with the VNC, and/or a set of path configuration constraints as discussed above.
  • the request may also include PE routers of the transport network determined at the step 712, and the traffic matrix negotiated at the step 720.
  • the SDN-C 706 may set up the VNC in the transport network in a transmission direction from the network A to the network B, and respond to the TPM-A 702 that the PM policy update is complete (step 726) .
  • the TPM-B 704 may instantiate a VNC for traffic transmitted from the second customer network (the network B) to the first customer network (the network A) with a path monitoring (PM) policy (step 728) .
  • the TPM-B 704 may send a request to the SDN-C 706, requesting the SDN-C 706 to set up (or create) the requested VNC, and monitor the VNC according to the PM policy.
  • the monitored metrics may be fed back to the TPM-B 704, based on which, the TPM-B 704 may determine whether to reconfigure, modify, or delete the VNC later.
  • the request may include a CoS associated with the VNC, and/or a set of path configuration constraints as discussed above.
  • the request may also include PE routers of the transport network determined at the step 716, and the traffic matrix negotiated at the step 722.
  • the SDN-C 706 may set up the VNC in the transport network in a transmission direction from the network B to the network A, and respond to the TPM-B 704 that the PM policy update is complete (step 730) .
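Taken together, the exchanges at steps 724-730 amount to the TPM assembling a VNC request and handing it to the SDN-C. The following sketch illustrates one possible shape of such a request; all field names, defaults, and constraint labels are illustrative assumptions, not the encodings defined by the embodiment IEs of FIGs. 8-10.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PMPolicy:
    monitoring_interval_s: int   # how often the SDN-C reports metrics
    metrics: list                # e.g. ["bandwidth_utilization", "latency", "jitter"]
    delegated: bool = False      # delegation vs. no-delegation mode

@dataclass
class VNCRequest:
    operation: str               # "create" | "modify" | "delete"
    cos: str                     # class of service associated with the VNC
    ingress_pe: str              # PE router on the sending network's side
    egress_pe: str               # PE router on the receiving network's side
    traffic_matrix: dict         # negotiated unidirectional demand
    path_constraints: dict       # e.g. protection level, isolation requirement
    pm_policy: Optional[PMPolicy] = None

def build_create_request(cos, ingress_pe, egress_pe, matrix):
    """Assemble a 'create VNC' request with a default non-delegated PM policy."""
    policy = PMPolicy(monitoring_interval_s=60,
                      metrics=["bandwidth_utilization", "latency", "jitter"])
    return VNCRequest("create", cos, ingress_pe, egress_pe, matrix,
                      {"protection": "1+1", "isolation": True}, policy)
```

A TPM-side caller would then forward the built request to the SDN-C and later decide, from the fed-back metrics, whether to modify or delete the VNC.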
  • the TPM-A 702 may send a message to the SDN-C 706 to modify the VNC created previously for communications from the network A to the network B with a corresponding PM policy (step 732) .
  • the PM policy is used for monitoring the modified VNC.
  • the SDN-C 706 may modify the VNC and update the PM policy (step 734) .
  • the TPM-B 704 may send a message to the SDN-C 706 to modify the VNC created previously for communications from the network B to the network A with a corresponding PM policy (step 736) .
  • the PM policy is used for monitoring the modified VNC.
  • the SDN-C 706 may modify the VNC and update the PM policy (step 738) .
  • the TPM-A 702 or the TPM-B 704 may send a message to the SDN-C 706 to create, modify, or delete a VNC in the transport network, e.g., at steps 724, 728, 732 and 736.
  • the transport network may update performance (monitoring) data to the networks A and B, respectively.
  • the SDN-C 706 may send messages, such as an “update performance monitoring” message including the performance data.
  • FIG. 8 illustrates a diagram of an embodiment access points IE 800 for setting up access points between a customer network (such as the network A in FIG. 7) and a transport network (such as the transport network in FIG. 7) .
  • the access points IE includes fields such as a provider Id (i.e., an identity of a customer network), an instance Id identifying a virtual network service instance, and an access point list including a reference to a CE port (i.e., identification information of a CE router) and a reference to a PE port (i.e., identification information of a PE router).
  • the access points IE may be used at step 712 or step 716.
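A minimal sketch of how the access points IE of FIG. 8 could be assembled, assuming a simple dictionary encoding (the key names are illustrative assumptions):

```python
def build_access_points_ie(provider_id, instance_id, port_pairs):
    """port_pairs: iterable of (ce_port_ref, pe_port_ref) tuples, one per
    access point between the customer network and the transport network."""
    return {
        "provider_id": provider_id,     # identity of the customer network
        "instance_id": instance_id,     # virtual network service instance
        "access_point_list": [
            {"ce_port_ref": ce, "pe_port_ref": pe} for ce, pe in port_pairs
        ],
    }
```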
  • FIG. 9 illustrates a diagram of an embodiment VNC traffic matrix IE 900 for configuring a VNC in a transport network.
  • the VNC traffic matrix IE includes an instance ID, and a mode of operation which indicates whether this IE is for creating, modifying, or deleting a VNC.
  • the VNC traffic matrix IE also indicates, for each CoS, a set of QoS parameters and path constraints for configuring each path (identified by a source PE router and a destination PE router) , a path monitoring interval and monitoring metrics for monitoring the corresponding paths configured.
  • the VNC traffic matrix IE may be used at the steps 724, 728, 732 and 736.
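The VNC traffic matrix IE of FIG. 9 can similarly be sketched as a builder that validates the mode of operation; the per-entry layout below is an illustrative assumption:

```python
def build_vnc_traffic_matrix_ie(instance_id, mode, entries):
    """
    entries: one dict per (CoS, path), e.g.
      {"cos": "gold", "src_pe": "PE1", "dst_pe": "PE3",
       "qos": {"bandwidth_mbps": 100, "latency_ms": 10},
       "constraints": {"protection": "1+1"},
       "pm_interval_s": 60, "pm_metrics": ["latency", "jitter"]}
    """
    if mode not in ("create", "modify", "delete"):
        raise ValueError("mode must be create, modify, or delete")
    return {"instance_id": instance_id, "mode": mode, "entries": list(entries)}
```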
  • FIG. 10 illustrates a diagram of an embodiment PM data IE 1000 for updating performance data of a VNC.
  • the PM data IE 1000 includes fields such as an instance identifier (Id) , a source PE Id (for identifying a source PE router) , a destination PE Id (for identifying a destination PE router) , and a CoS level.
  • the CoS level includes performance metrics that are monitored according to a PM policy. In this example, the performance metrics include bandwidth utilization, an average latency and jitter.
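A corresponding sketch of the PM data IE of FIG. 10, again assuming an illustrative dictionary encoding for the fields and metrics named above:

```python
def build_pm_data_ie(instance_id, src_pe, dst_pe, cos,
                     bw_utilization, avg_latency_ms, jitter_ms):
    """Performance metrics reported for one (source PE, destination PE, CoS)."""
    return {
        "instance_id": instance_id,
        "src_pe_id": src_pe,
        "dst_pe_id": dst_pe,
        "cos_level": {
            "cos": cos,
            "bandwidth_utilization": bw_utilization,  # fraction of provisioned
            "avg_latency_ms": avg_latency_ms,
            "jitter_ms": jitter_ms,
        },
    }
```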
  • a SDN-C of a transport network may provision the requested VNC in the transport network, and provide the TPM with updates of PM data based on a PM policy set by the TPM. Based on the PM data, the TPM may determine whether or not to reconfigure (e.g., modify or delete) the VNC, and may send a request to the SDN-C to reconfigure the VNC.
  • Two modes of VNC reconfiguration may be utilized, i.e., no delegation and delegation.
  • the PM policy may specify which mode is to be used.
  • in the no-delegation mode, all PM data may need to be updated to the TPM, and VNC reconfiguration is determined and requested by the TPM based on the PM data and other factors, such as a traffic matrix change.
  • the SDN-C needs to receive a request from the TPM for reconfiguring a VNC, and then implement the reconfiguration.
  • in the delegation mode, the TPM may delegate its authority to the SDN-C for implementing a policy, i.e., determining whether to reconfigure a path.
  • the TPM may send the policy and the SDN-C may implement the policy without resorting to repeated requests to the TPM.
  • although the VNC reconfiguration is the responsibility of the TPM, it may also be delegated to the SDN-C so that the SDN-C may execute a delegated VNC reconfiguration decision applicable to its domain network.
  • an autonomic scale-in and scale-out policy may be preset by the TPM regarding TE performance metrics, such as bandwidth utilization, latency, and other data sets. Any combination of operators (e.g., AND, OR, Max, Min, etc.) may be applied to these metrics to produce a performance result.
  • the performance result may be used to determine reconfiguration of a VNC.
  • the performance result may be compared with a threshold.
  • the preset policy or rule may automatically trigger the SDN-C to reconfigure (e.g., change) the VNC to meet a criterion.
  • the SDN-C may reconfigure the VNC so that its latency is less than the predefined threshold, without the need to wait for the TPM to determine and request to reconfigure the VNC.
  • the SDN-C may inform the TPM of this execution.
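The delegated evaluation described above, combining monitored metrics with operators and thresholds and acting without a round-trip to the TPM, can be sketched as a simple rule engine; the rule format and action names are assumptions:

```python
import operator

_OPS = {">": operator.gt, ">=": operator.ge, "<": operator.lt, "<=": operator.le}

def evaluate_delegated_policy(metrics, rules):
    """
    metrics: monitored values, e.g. {"latency_ms": 12.0, "bw_utilization": 0.85}
    rules:   list of (metric, op, threshold, action); the first matching rule
             returns its action, e.g. "reroute" or "add_capacity".
    """
    for metric, op, threshold, action in rules:
        if metric in metrics and _OPS[op](metrics[metric], threshold):
            return action
    return None   # nothing triggered; leave the VNC unchanged
```

After executing a returned action, the SDN-C would inform the TPM, matching the behavior described above.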
  • the preset policy and operations (i.e., delegated operation or non-delegated operation for VNC reconfigurations) set by the TPM may also be dynamically changed and programmed to the SDN-C. Delegation of VNC modification may be performed during the PM policy exchange, e.g., at steps 724, 728, 732 or 736.
  • the communications systems described above are configured for setup and management of a dynamic transport plane. That is, transport paths may be dynamically created, configured, reconfigured, or deleted. These communications systems may utilize monitored and fed-back information to make estimates of capacity and classes of services across different paths. To meet low latency constraints, UPFs may be more distributed in a network, and as a result, variations of paths for forwarding traffic across different access points may be greater than in networks with more centralized UPFs.
  • FIG. 11 illustrates a diagram of embodiment communications 1100 highlighting monitoring and feedback of transport paths.
  • FIG. 11 shows a TPM 1102 using monitoring feedback information in addition to other data collected from 3GPP virtual network functions (VNFs) .
  • the TPM 1102 receives aggregate connection data from 3GPP VNFs (used to estimate demand) as well as transport path feedback, all of which may be analyzed and used along with network policies and other requirements to estimate and reconfigure VNCs.
  • a 3GPP control plane (CP) function 1106, such as a SMF or a NWDAF, may subscribe from the TPM 1102 for events on paths that it uses for routing traffic or signaling across two domains (step 1112) .
  • An event may include metrics about load and number of flows, and failure information.
  • the TPM 1102 may determine PE routers of the transport network for the VNC (step 1114).
  • the PE routers may include a source PE router and a destination PE router on the VNC.
  • the TPM 1102 may transmit a request to a SDN-C 1104 of the transport network, requesting configuration of the VNC.
  • the TPM 1102 may send a transport path IE to the SDN-C 1104 (step 1116) indicating a set of requirements for configuring the VNC.
  • the transport path IE may include an operation field indicating an operation to be performed for configuring the VNC, such as creating, deleting or modifying the VNC.
  • the transport path IE may indicate a PE IN and a PE OUT of the transport network along the VNC, a set of QoS requirements (such as bandwidth and latency), and a set of path constraints (such as a protection level and an isolation requirement), as discussed above.
  • the SDN-C 1104 may create (or delete or modify) the VNC based on the operation indicated in the transport path IE, the PE IN and the PE OUT, and the set of QoS requirements and path constraints indicated in the transport path IE.
  • the SDN-C 1104 may send a response to the TPM to confirm the configuration of the VNC or to report a failure of the configuration (step 1118).
  • the TPM 1102 may process the response (step 1120) , and take other actions based on the processed response. For example, because the 3GPP CP 1106 has subscribed for path events, the TPM 1102 may notify the 3GPP CP 1106 about the VNC status (e.g., configured, or not configured in the case of failure) . The TPM 1102 may notify the 3GPP control plane (CP) entity 1106 regarding UPF IN /CE IN to UPF OUT /CE OUT status (step 1122) . The UPF IN /CE IN to UPF OUT /CE OUT status indicates the status of the VNC, and shows the status in terms of load, available capacity and failure. The TPM 1102 may also collect data about the UPF IN /CE IN to UPF OUT /CE OUT status.
  • the TPM 1102 may subscribe, from the SDN-C 1104, monitoring events for transport paths (with each identified by a PE IN and a PE OUT on the path) of the transport network (step 1124) .
  • the SDN-C 1104 monitors the transport paths that are subscribed, and may feed monitoring data back to the TPM 1102.
  • the SDN-C 1104 may send monitoring data including monitored values for the transport paths, each of which is identified by a PE IN and a PE OUT on a corresponding path (step 1126) .
  • the TPM 1102 may request performance data of one or more transport paths, and the SDN-C 1104 may provide the requested performance data in response.
  • the TPM 1102 may collect or receive data obtained from various resources (e.g., CP functions, and SDN-Cs) and perform analysis on the collected data (step 1130) , and determine, based on the collected data, whether to reconfigure a VNC that has been configured, e.g., created or reconfigured/modified (step 1132) , or determine whether to create a new VNC.
  • a Big Data Analytic Engine in the TPM 1102 may analyze all traffic data and determine if VNC reconfiguration is needed or not.
  • the TPM 1102 may proceed to step 1114 and send a “Modify” Message to the SDN-C 1104 instructing the SDN-C 1104 to modify the VNC based on a set of requirements, e.g., as shown at step 1116. If there are a plurality of VNCs, the TPM 1102 may check, one-by-one, whether each VNC needs reconfiguration. If a VNC needs reconfiguration, the TPM 1102 proceeds to step 1114. If a VNC does not need reconfiguration, the TPM checks for the next VNC until all VNCs are checked.
  • the TPM 1102 may collect data that is used to estimate demand from various 3GPP VNFs, such as a NWDAF, a SMF, a PCF, an AMF, etc. Based on the collected demand data, the TPM 1102 may calculate and estimate traffic on each path. The TPM 1102 may also use historical data for these estimates. The resulting estimates may be used by the TPM 1102 to negotiate the traffic matrix, bandwidth, latency and other QoS provisioning with other TPMs. The TPM 1102 may collect data about the UPF IN /CE IN to UPF OUT /CE OUT status associated with a transport path configured in the transport network. The TPM 1102 may also collect PM data, such as path performance metrics, from the SDN-C 1104 of the transport network based on a configured PM policy.
  • the TPM 1102 may check whether the VNC’s performance satisfies a criterion. For example, the TPM 1102 may check the latency, bandwidth utilization, jitter, etc. of the VNC against the criterion, e.g., a threshold. As an illustrative example, if the latency of the VNC exceeds a latency threshold, the TPM 1102 may determine that the VNC needs to be reconfigured to improve the latency. As another example, if the jitter of the VNC exceeds a jitter threshold, the TPM 1102 may determine that the VNC needs to be deleted because of the poor performance.
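The criterion checks in the examples above (latency exceeding a threshold triggering reconfiguration, jitter exceeding a threshold triggering deletion) can be sketched as follows; the thresholds and action labels are illustrative assumptions:

```python
def decide_vnc_action(pm, latency_threshold_ms, jitter_threshold_ms):
    """Decide what to do with one VNC from its monitored metrics."""
    if pm["jitter_ms"] > jitter_threshold_ms:
        return "delete"    # performance too poor to keep the connection
    if pm["latency_ms"] > latency_threshold_ms:
        return "modify"    # reconfigure the VNC to improve latency
    return "keep"

def check_all_vncs(pm_by_vnc, latency_threshold_ms=10.0, jitter_threshold_ms=5.0):
    """Check each configured VNC one by one, as at step 1132."""
    return {vnc: decide_vnc_action(pm, latency_threshold_ms, jitter_threshold_ms)
            for vnc, pm in pm_by_vnc.items()}
```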
  • the TPM 1102 may determine, based on the collected data, that a new transport path may be needed to accommodate traffic of a new CoS, a new slice and/or a new path constraint. In this case, the TPM 1102 may also proceed to the step 1114 and follow the subsequent steps. For example, the TPM 1102 may determine PE routers for the new transport path, and send a request to the SDN-C 1104 for creating the new transport path, with a set of QoS requirements and/or path constraints.
  • the interactions between the TPM 1102, the SDN-C 1104, and the 3GPP CP 1106, such as subscribing for data and confirming path configuration, may be changed (e.g., combined, omitted, or enhanced) in different embodiments.
  • FIG. 11 shows the communications between the TPM 1102 and the SDN-C 1104 without using the delegation mode.
  • alternatively, the TPM 1102 may delegate policy rules by which the SDN-C 1104 may take actions.
  • the TPM 1102 may configure the SDN-C 1104 with a policy, which delegates the SDN-C 1104 to take certain actions upon a criterion being satisfied.
  • the policy may include (or specify) one or more actions to be taken by the SDN-C 1104 based on metrics such as bandwidth usage, latency or other metrics. For example, if a VNC’s bandwidth usage exceeds 80% of what is provisioned, the SDN-C 1104 may then take an action specified in the policy, such as re-routing or adding more capacity for the VNC.
  • a SDN-C of a transport network provisions a VNC in the transport network in response to a request from a TPM of a customer network.
  • the provisioned VNC may be used for forwarding traffic between two UPFs in the data plane and between 3GPP VNFs in the control plane.
  • FIG. 12 illustrates a diagram of an embodiment communications system 1200 highlighting user plane packet classification and forwarding.
  • the communications system includes a transport network 1210 (i.e., a backhaul transport network) with label switched paths (LSPs) for forwarding user plane packets from a UPF1 1202 to a UPF-x 1204.
  • Forwarding rules for end-user (UE) flows are provisioned in the UPF 1202, and managed by the SMF associated with the UPF 1202 for handling user mobility, QoS, slice information, idle state, and policy and charging information.
  • a forwarding state provisioned in the UPF 1202 may be used for forwarding in the transport network 1210.
  • the UPF 1202 may select a destination address that a packet is to be routed via the backhaul transport network, encapsulate the packet with a VNC, such as a GPRS tunneling protocol (GTP) tunnel established for the GTP-user plane (GTP-U) , virtual extensible LAN (VXLAN) , etc., and set a differentiated services code point (DSCP) value or network service header (NSH) based on a QoS class identifier (QCI) or a 5G QoS indicator (5QI) value and a slice that are associated with the packet.
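The UPF classification step above can be sketched as a lookup from (5QI, slice) to a DSCP value followed by tunnel encapsulation; the mapping table and field names below are hypothetical, as the real mapping is operator policy:

```python
# Hypothetical (5QI, slice) -> DSCP mapping; real mappings are operator-configured.
DSCP_MAP = {(1, "embb"): 46, (9, "embb"): 0, (82, "urllc"): 34}

def classify_and_encapsulate(inner_packet, five_qi, slice_id, tunnel_dst):
    """Pick the outer DSCP from the packet's 5QI/slice and add a tunnel header."""
    dscp = DSCP_MAP.get((five_qi, slice_id), 0)   # default: best effort
    return {"outer_dst": tunnel_dst, "dscp": dscp, "inner": inner_packet}
```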
  • the UPF1 1202 then forwards the packet to the PE router R1 in the backhaul transport network.
  • a PE router of the transport network 1210 may use the DSCP or NSH header values of the packet along with VNC E2E provisioning to select a transport route (and pushes label stacks accordingly) .
  • the PE router R1 may parse the CoS of the packet, and push a SR, or push a NSH service path indicator (SPI) for routing.
  • the PE router R3 at the egress forwards the packet to the UPF-x 1204 (or a destination address) in VNC.
  • the destination UPF-x 1204 may decapsulate the packet for further processing.
  • FIG. 12 shows a packet 1220 that includes a payload, an IP source address (SA) field, a destination address (DA) field, and a DSCP field.
  • the DSCP field includes information indicating the CoS of the packet.
  • the PE router R1 may determine a label stack (or SR (segment routing) ) based on the SA, DA and the DSCP field (i.e., the CoS of the packet) , and push the SR into the packet 1220 to generate a packet 1230, which is then routed on a LSP corresponding to the SR.
  • forwarding of the packet in the transport network is based on the SR inserted at the source (PE) router.
  • the UPF1 1202 inserts NSH metadata in a NSH field of the packet 1220 to indicate information about the CoS of the packet.
  • the PE router R1 may determine an NSH SPI based on the SA, DA and the NSH metadata, and populate the NSH SPI in a SPI field to generate a packet 1250, which is routed on a LSP corresponding to the NSH SPI.
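The ingress PE behavior in both variants — selecting a transport route from the addresses and the CoS marking, then pushing the corresponding labels — can be sketched as below. An exact-match lookup is used for brevity; a real PE would perform a longest-prefix match, and the label values are illustrative:

```python
def select_segments(da, dscp, route_table):
    """Look up the segment list for (destination, DSCP)."""
    return route_table.get((da, dscp))

def push_sr(packet, segments):
    """Prepend the segment-routing label stack chosen at the ingress PE."""
    return dict(packet, sr_labels=list(segments))
```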
  • for control plane packets, signaling messages between control plane network functions in the mobile core, and between the core and the access network (or radio access network (RAN)), may be processed similarly to user plane packets from the point of view of the transport network.
  • each control plane network function generates signaling packets destined for another network function.
  • a VNF generating a signaling packet may encapsulate the signaling packet with a VXLAN header, or the native IP packet from the VNF may be routable across the packet network.
  • the foregoing embodiments may be applied to various communications systems different from those discussed above with respect to FIGs. 4-6, such as a communications system for distributed DC interconnect, or CDN networks where CDN contents may be dynamically allocated and cached depending on the proximity to the content users.
  • FIG. 13 illustrates a diagram of an embodiment communications system 1300 for distributed DC interconnect.
  • FIG. 13 highlights interactions between a DC provider network 1310, a DC provider network 1330 and a transport provider network 1350.
  • Each of the DC provider network 1310 and the DC provider network 1330 includes a plurality of virtual machines (VMs).
  • data center SDN controllers (DC-SDN-Cs) of the DC provider network 1310 and the DC provider network 1330 may be configured to interface with each other via an interface IDCI for signaling their resource information.
  • Each of the DC-SDN-Cs may also be configured to interface with a SDN controller of the transport provider network 1350 via an interface IDC-TN to exchange control signaling information.
  • Embodiments of the present disclosure provide a communications system, where a TPM in the 3GPP domain analyzes and negotiates dynamically with a software-defined networking (SDN) controller in the transport domain to set up traffic engineered paths based on class of service and slice.
  • These embodiments may also be applied to a system in which a data center SDN controller in distributed DC networks negotiates dynamically with an SDN controller in the transport domain to set up traffic engineered paths for VMs based on class of service and slice.
  • CDN contents may be allocated dynamically in the transport domain.
  • two TPMs in two 3GPP domains may be configured to derive a transport path matrix dynamically, based on a network policy, engineered constraints, and monitored feedback from the transport network.
  • a transport path matrix may include a set of source UPFs, gNBs or other user plane network functions in one domain and a set of destination network functions in another domain.
  • the data plane may make use of the DSCP field or network service header (NSH) metadata to indicate CoSs (in a case where the DSCP field is not used) of data packets, and forward the data packets to a label switched path (LSP) using a SR or a NSH service path identifier (SPI) field, to indicate the next hop on a hop-by-hop basis.
  • 3GPP networks use transport underlays that are traffic engineered statically, that is, static traffic engineering.
  • 3GPP networks are typically (over) engineered based on capacity estimates and service usage projections.
  • as the 5G network caters to a wider range of services across more distributed access points (for satisfying lower latency), the assumptions used to engineer these transports statically become difficult to maintain.
  • the addition of cloud and virtualized functions and sharing of transports across multiple service provider domains makes engineering these transports even more challenging.
  • the embodiments of the present disclosure provide dynamic traffic engineering.
  • the embodiments allow operators to specify policies, initial engineering estimates, and other configurations for a communications system.
  • the communications system may then set up and tune transport paths on a continual basis.
  • the disclosed embodiment systems use novel mechanisms between a TPM and an SDN-C, as well as between TPMs in multiple 3GPP provider domains. The open networking automation platform (ONAP) standards, the 3GPP service and system aspects working group 5 (SA5) standards, and the IETF standards focus on various aspects in individual domains (i.e., management, 3GPP, and internet protocol (IP) networks), but none of these standards have specified the solutions disclosed herein.
  • FIG. 14 illustrates a diagram of an embodiment method 1400 for wireless communications.
  • the method 1400 may be a computer implemented method.
  • the method 1400 may be performed at a network device of a first customer network, such as a 3GPP mobile network.
  • a first controller function of the first customer network negotiates, with a second controller function of a second customer network, a traffic matrix for transmitting traffic flows of a quality of service (QoS) class to the second customer network.
  • the QoS class is associated with virtual network connection (VNC) resources provisioned for traffic transmitted between the first customer network and the second customer network.
  • the first controller function of the first customer network determines provider edge (PE) routers of a transport network for transmitting the traffic flows from the first customer network to the second customer network through the transport network.
  • the first controller function of the first customer network sends, to a third controller function of the transport network, a request for configuring a transport path in the transport network, where the request includes information of the PE routers that are determined, the QoS class, the traffic matrix, and a set of transport path configuration constraints.
  • the transport path in the transport network is configured for transmission of the traffic flows of the QoS class.
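The three steps of the method 1400 compose naturally; the sketch below passes the controller-function interactions in as callables, since the actual interfaces are defined elsewhere in this disclosure and the names here are illustrative:

```python
def run_transport_setup(negotiate, determine_pes, send_request,
                        qos_class, constraints):
    """Compose the three steps of the method: negotiate the traffic matrix,
    determine the PE routers, then request path configuration."""
    matrix = negotiate(qos_class)                  # with the peer network's TPM
    pes = determine_pes(matrix)                    # ingress/egress PE routers
    return send_request(pes, qos_class, matrix, constraints)  # to the SDN-C
```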
  • a TPM in a 3GPP domain is configured to analyze and negotiate dynamically with an SDN controller in a transport domain to set up traffic engineered paths based on class of service and slice.
  • a method for managing transport for mobile networks comprises deriving, by two TPMs in two 3GPP domains, a transport path matrix dynamically based on network policy, engineered constraints, and monitored feedback from a transport network.
  • a method for managing transport for mobile networks comprises using, by a data plane, NSH metadata to indicate COS and forward data packets to a LSP using an NSH SPI field to indicate a next hop on a hop-by-hop basis.
  • FIG. 15 illustrates a block diagram of an embodiment processing system 1500 for performing methods described herein, which may be installed in a host device.
  • the processing system 1500 includes a processor 1504, a memory 1506, and interfaces 1510-1514, which may (or may not) be arranged as shown in FIG. 15.
  • the processor 1504 may be any component or collection of components adapted to perform computations and/or other processing related tasks.
  • the memory 1506 may be any component or collection of components adapted to store programming and/or instructions for execution by the processor 1504.
  • the memory 1506 includes a non-transitory computer readable medium.
  • the interfaces 1510, 1512, 1514 may be any component or collection of components that allow the processing system 1500 to communicate with other devices/components and/or users.
  • one or more of the interfaces 1510, 1512, 1514 may be adapted to communicate data, control, or management messages from the processor 1504 to applications installed on the host device and/or a remote device.
  • one or more of the interfaces 1510, 1512, 1514 may be adapted to allow a user or user device (e.g., personal computer (PC) , etc. ) to interact/communicate with the processing system 1500.
  • the processing system 1500 may include additional components not depicted in FIG. 15, such as long term storage (e.g., non-volatile memory, etc. ) .
  • the processing system 1500 is included in a network device that is accessing, or part otherwise of, a telecommunications network.
  • the processing system 1500 is in a network-side device in a wireless or wireline telecommunications network, such as a base station, a relay station, a scheduler, a controller, a gateway, a router, an applications server, or any other device in the telecommunications network.
  • the processing system 1500 is in a user-side device accessing a wireless or wireline telecommunications network, such as a mobile station, a user equipment (UE) , a personal computer (PC) , a tablet, a wearable communications device (e.g., a smartwatch, etc. ) , or any other device adapted to access a telecommunications network.
  • one or more of the interfaces 1510, 1512, 1514 connects the processing system 1500 to a transceiver adapted to transmit and receive signaling over the telecommunications network.
  • the processing system 1500 includes a negotiation module negotiating, by a first controller function of a first customer network with a second controller function of a second customer network, a traffic matrix for transmitting traffic flows of a quality of service (QoS) class to the second customer network, the QoS class being associated with virtual network connection (VNC) resources provisioned for traffic transmitted between the first customer network and the second customer network; a determination module determining, by the first controller function of the first customer network, provider edge (PE) routers of a transport network for transmitting the traffic flows from the first customer network to the second customer network through the transport network; and a send module sending, by the first controller function of the first customer network to a third controller function of the transport network, a request for configuring a transport path in the transport network, the transport path being configured for transmission of the traffic flows of the QoS class, and the request comprising information of the PE routers that are determined, the QoS class, the traffic matrix, and a set of transport path configuration constraints.
  • the processing system 1500 may include other or additional modules for performing any one of or combination of steps described in the embodiments. Further, any of the additional or alternative embodiments or aspects of the method, as shown in any of the figures or recited in any of the claims, are also contemplated to include similar modules.
  • FIG. 16 illustrates a block diagram of a transceiver 1600 adapted to transmit and receive signaling over a telecommunications network.
  • the transceiver 1600 may be installed in a host device. As shown, the transceiver 1600 comprises a network-side interface 1602, a coupler 1604, a transmitter 1606, a receiver 1608, a signal processor 1610, and a device-side interface 1612.
  • the network-side interface 1602 may include any component or collection of components adapted to transmit or receive signaling over a wireless or wireline telecommunications network.
  • the coupler 1604 may include any component or collection of components adapted to facilitate bi-directional communication over the network-side interface 1602.
  • the transmitter 1606 may include any component or collection of components (e.g., up-converter, power amplifier, etc. ) adapted to convert a baseband signal into a modulated carrier signal suitable for transmission over the network-side interface 1602.
  • the receiver 1608 may include any component or collection of components (e.g., down-converter, low noise amplifier, etc. ) adapted to convert a carrier signal received over the network-side interface 1602 into a baseband signal.
  • the signal processor 1610 may include any component or collection of components adapted to convert a baseband signal into a data signal suitable for communication over the device-side interface (s) 1612, or vice-versa.
  • the device-side interface (s) 1612 may include any component or collection of components adapted to communicate data-signals between the signal processor 1610 and components within the host device (e.g., the processing system 1500, local area network (LAN) ports, etc. ) .
  • the transceiver 1600 may transmit and receive signaling over any type of communications medium.
  • the transceiver 1600 transmits and receives signaling over a wireless medium.
  • the transceiver 1600 may be a wireless transceiver adapted to communicate in accordance with a wireless telecommunications protocol, such as a cellular protocol (e.g., long-term evolution (LTE) , etc. ) , a wireless local area network (WLAN) protocol (e.g., Wi-Fi, etc. ) , or any other type of wireless protocol (e.g., Bluetooth, near field communication (NFC) , etc. ) .
  • the network-side interface 1602 comprises one or more antenna/radiating elements.
  • the network-side interface 1602 may include a single antenna, multiple separate antennas, or a multi-antenna array configured for multi-layer communication, e.g., single input multiple output (SIMO) , multiple input single output (MISO) , multiple input multiple output (MIMO) , etc.
  • the transceiver 1600 transmits and receives signaling over a wireline medium, e.g., twisted-pair cable, coaxial cable, optical fiber, etc.
  • Specific processing systems and/or transceivers may utilize all of the components shown, or only a subset of the components, and levels of integration may vary from device to device.
  • a signal may be transmitted by a transmitting unit or a transmitting module.
  • a signal may be received by a receiving unit or a receiving module.
  • a signal may be processed by a processing unit or a processing module.
  • Other steps may be performed by a negotiating unit/module, a determining unit/module, a mapping unit/module, a routing unit/module, an instructing unit/module, a collecting unit/module, a monitoring unit/module, a subscribing unit/module, an estimating unit/module, and/or a requesting unit/module.
  • the respective units/modules may be hardware, software, or a combination thereof.
  • one or more of the units/modules may be an integrated circuit, such as field programmable gate arrays (FPGAs) or application-specific integrated circuits (ASICs) .

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

According to the invention, a first controller function, e.g., a transport path manager (TPM), of a first customer network, e.g., a mobile network, can dynamically determine whether to configure a transport path in a transport network for transmitting traffic of a QoS class from the first customer network to a second customer network through the transport network, based on traffic demand estimates and transport path performance data fed back from the transport network, and can request the transport network to configure the transport path based on a traffic matrix of the first customer network, provider edge routers of the transport network determined by the TPM, the QoS class, and a set of transport path configuration constraints.
PCT/CN2019/091180 2018-06-14 2019-06-14 Dynamically managed transport system and method for mobile networks Ceased WO2019238101A1 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201862685184P 2018-06-14 2018-06-14
US62/685,184 2018-06-14
US201962822644P 2019-03-22 2019-03-22
US62/822,644 2019-03-22

Publications (1)

Publication Number Publication Date
WO2019238101A1 true WO2019238101A1 (fr) 2019-12-19

Family

ID=68842716

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/091180 Ceased WO2019238101A1 (fr) Dynamically managed transport system and method for mobile networks

Country Status (1)

Country Link
WO (1) WO2019238101A1 (fr)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140219096A1 (en) * 2004-01-20 2014-08-07 Rockstar Consortium Us Lp Ethernet lan service enhancements
US20140160924A1 (en) * 2012-12-06 2014-06-12 At&T Intellectual Property I, L.P. Advertising network layer reachability information specifying a quality of service for an identified network flow
CN105306333A (zh) * 2014-06-30 2016-02-03 Juniper Networks, Inc. Service chaining across multiple networks

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ERICSSON ET AL.: "QOS for LTE when connected to 5G-CN", 3GPP TSG-RAN WG2 #99 TDOC R2-1707792, 25 August 2017 (2017-08-25), XP051317752 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11188785B2 (en) 2019-11-08 2021-11-30 Electronics And Telecommunications Research Institute Optimization of network data analysis device
CN113055241A (zh) * 2019-12-27 2021-06-29 ZTE Corporation Bandwidth adjustment and correction method, apparatus, device, and storage medium
US12224908B2 (en) 2019-12-27 2025-02-11 Zte Corporation Bandwidth adjustment and correction method, apparatus and device, and storage medium
EP4096163A1 (fr) * 2021-05-27 2022-11-30 Siemens Sanayi ve Ticaret A. S. Computer-implemented method for improving the latency of a communication in a 5G network

Similar Documents

Publication Publication Date Title
US12231945B2 (en) Integrated backhaul transport for 5Gs
US11206551B2 (en) System and method for using dedicated PAL band for control plane and GAA band as well as parts of PAL band for data plan on a CBRS network
EP4054125B1 (fr) Gestion de l'état de réseau global
US11368862B2 (en) Point-to-multipoint or multipoint-to-multipoint mesh self-organized network over WIGIG standards with new MAC layer
CN114080789B Network-defined edge routing for application workloads
US9913151B2 (en) System and method for modifying a service-specific data plane configuration
US12432119B2 (en) Integration of communication network in time sensitive networking system
US10334446B2 (en) Private multefire network with SDR-based massive MIMO, multefire and network slicing
EP3869855A1 Information transmission method and related apparatus
US11057796B2 (en) Employing self organizing network (SON) techniques to manage data over cable service interface specification (DOCSIS) backhaul for small cells
CN113228592B Method and apparatus for providing transport context and on-path metadata to support 5G-enabled networks
EP3412007B1 (fr) Procédé et appareil pour tampons programmables dans des réseaux mobiles
KR20170088425A (ko) 서비스 지향 네트워크 자동 생성에 기초한 맞춤형 가상 무선 네트워크를 제공하기 위한 시스템 및 방법
CN114710975A Transmitting multiple transport network context identifiers across multiple domains
WO2022222666A1 Communication method and apparatus
WO2019238101A1 (fr) Dynamically managed transport system and method for mobile networks
EP4319225A1 Service-based grouping determination for 5G deployment in factories
Harkous et al. Programmability for Flexible 6G Architecture

Legal Events

Date Code Title Description
121 Ep: the EPO has been informed by WIPO that EP was designated in this application
Ref document number: 19818586; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase
Ref country code: DE
122 Ep: PCT application non-entry in European phase
Ref document number: 19818586; Country of ref document: EP; Kind code of ref document: A1