
CN116032893B - A data channel service system and implementation method for IMS network - Google Patents


Info

Publication number
CN116032893B
Authority
CN
China
Prior art keywords
fpm
data channel
sctp
data
rate
Prior art date
Legal status
Active
Application number
CN202211614952.3A
Other languages
Chinese (zh)
Other versions
CN116032893A (en)
Inventor
廖建新
朱小琳
张乾
Current Assignee
Xinxun Digital Technology Hangzhou Co ltd
Original Assignee
Xinxun Digital Technology Hangzhou Co ltd
Priority date
Filing date
Publication date
Application filed by Xinxun Digital Technology Hangzhou Co ltd filed Critical Xinxun Digital Technology Hangzhou Co ltd
Priority to CN202211614952.3A
Publication of CN116032893A
Application granted
Publication of CN116032893B
Legal status: Active
Anticipated expiration


Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00: Reducing energy consumption in communication networks
    • Y02D30/70: Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract


A data channel service system and implementation method for an IMS network, the method comprising: step one, each mobile terminal UE and a data channel server establish a data channel based on the SCTP protocol; step two, the data channel server receives a network information feedback packet sent by each UE, computes a sending rate with both a delay-based congestion control algorithm and a loss-based congestion control algorithm, and sends the SCTP data stream to the UE at the smaller of the two rates. The present invention relates to the field of communications; it implements an SCTP-based data channel on top of the IMS network and effectively optimizes the data channel's transmission efficiency, utilization, transmission delay and other aspects of performance.

Description

Data channel service system for IMS network and implementation method
Technical Field
The invention relates to a data channel service system for an IMS network and an implementation method thereof, and belongs to the field of communications.
Background
Operators' earlier IMS-based video call services were never widely adopted, mainly because terminals supporting video calls were scarce and expensive, network coverage was insufficient, and cross-network interworking was impossible. In the 5G era these obstacles have been, or are being, removed, and IMS-based video telephony now has the foundation for large-scale commercial use.
WebRTC is a standard defined by the Internet Engineering Task Force (IETF) and the World Wide Web Consortium (W3C) for implementing real-time communications in Web browsers. It includes the ability to transmit audio and video over so-called media channels using the Secure Real-time Transport Protocol (SRTP), and the ability to transmit arbitrary data over data channels based on the Stream Control Transmission Protocol (SCTP).
However, implementing a data channel server in an IMS network using SCTP over DTLS raises a series of performance problems. DTLS encrypts the data carried in the data channel to ensure data security and adds a link certificate-checking mechanism to prevent network attacks. Unlike TLS over TCP, the underlying UDP layer does not handle packet reordering or loss, so the certificate-check negotiation cannot be guaranteed; DTLS therefore has to add a reliability mechanism to its handshake messages when the connection is created. Because UDP targets real-time traffic, no reliability mechanism is provided for application data once the link has been established, and reliability must be supplied by the SCTP protocol or by the application layer. However, SCTP uses a TCP-like congestion control algorithm that treats packet loss as the congestion signal, so its performance degrades when the link loss rate is high.
Existing studies propose a coupled congestion control solution for RTP streams, the Flow State Exchange (FSE), which couples the congestion controllers of flows sharing the same bottleneck and allows them to share information with each other. FSE-NG combines the active FSE with the ROSIEE algorithm to support flow priorities while still coupling and managing flows on the basis of loss and delay. However, current research at home and abroad focuses only on coupled flows running between two peers, whereas in a data channel service (DCS) all traffic sent by a given peer to one or more destinations may share the same network bottleneck.
Therefore, how to implement an SCTP-based data channel on top of the IMS network, and how to effectively optimize the data channel's transmission efficiency, utilization and transmission delay, has become a technical problem of great concern to practitioners.
Disclosure of Invention
In view of the above, the present invention aims to provide a data channel service system and implementation method for an IMS network that implement an SCTP-based data channel on top of the IMS network and effectively optimize the performance of the data channel in terms of transmission efficiency, utilization, transmission delay, and the like.
In order to achieve the above object, the present invention provides a data channel service system for an IMS network, comprising a data channel server and a plurality of mobile terminals UE, wherein:
a data channel server, which establishes an SCTP-based data channel with each UE, receives the network information feedback packet sent by each UE, computes a sending rate with both a delay-based congestion control algorithm and a loss-based congestion control algorithm, and sends the SCTP data stream to the corresponding UE at the smaller of the two rates,
The data channel server further comprises a priority control device and a priority management device FPM, wherein:
A priority control device registers a plurality of SCTP data streams passing through the same network path with the FPM and supplies the congestion window size and network round-trip delay of each SCTP data stream: CC_CWND(f_i) and last_rtt(f_i), where f_i denotes the i-th SCTP data stream, CC_CWND(f_i) is the congestion window size of f_i, and last_rtt(f_i) is the network round-trip delay of f_i,
The FPM further comprises:
A congestion window allocation unit, which groups all SCTP data streams passing through the same network path into a traffic set and stores the network round-trip delay last_rtt(f_i) of each stream; it then computes the output rate of each SCTP data stream in the traffic set, CC_R(f_i) = CC_CWND(f_i) / last_rtt(f_i), and from these the current total output rate of the traffic set, S_CR = Σ CC_R(f_i) for i = 1..N, where N is the total number of SCTP data streams in the traffic set; it sets the priority of each SCTP data stream in the traffic set and computes the benefit Q(f_i) of successfully transmitting each data stream as a function of the current time time_now, the creation time time_create(f_i) of the data block corresponding to f_i, the remaining size block_remainsize(f_i) of that data block, and the priority P(f_i), and from these the total benefit of successful transmission over all data streams in the traffic set, S_Q = Σ Q(f_i); finally, according to the benefit of each data stream, it computes the congestion window size of each SCTP data stream in the traffic set, FPM_CWND(f_i) = FPM_R(f_i) × last_rtt(f_i), where FPM_R(f_i) is the priority-algorithm output rate of f_i, FPM_CWND(f_i) is the congestion window size output by the priority algorithm for f_i, and L_R is the current remaining total rate to be allocated for the traffic set, whose value is set to S_CR; the computed congestion window sizes output by the priority algorithm are then assigned to the corresponding SCTP data streams in the traffic set.
In order to achieve the above object, the present invention further provides a method for implementing a data channel service for an IMS network, including:
Step one, each mobile terminal UE and the data channel server establish a data channel based on the SCTP protocol;
Step two, the data channel server receives the network information feedback packet sent by each UE, computes a sending rate with both a delay-based congestion control algorithm and a loss-based congestion control algorithm, and sends the SCTP data stream to the UE at the smaller of the two rates.
The data channel server comprises a priority control device and a priority management device FPM; when the priority control device finds that multiple SCTP data streams pass through the same network path, the method further comprises:
Step B1, the priority control device registers the plurality of SCTP data streams passing through the same network path with the FPM and supplies the congestion window size and network round-trip delay of each SCTP data stream: CC_CWND(f_i) and last_rtt(f_i), where f_i denotes the i-th SCTP data stream, CC_CWND(f_i) is the congestion window size of f_i, and last_rtt(f_i) is the network round-trip delay of f_i;
Step B2, the FPM groups all SCTP data streams passing through the same network path into a traffic set and stores the network round-trip delay last_rtt(f_i) of each stream; it then computes the output rate of each SCTP data stream in the traffic set, CC_R(f_i) = CC_CWND(f_i) / last_rtt(f_i), and from these the current total output rate of the traffic set, S_CR = Σ CC_R(f_i) for i = 1..N, where N is the total number of SCTP data streams in the traffic set;
Step B3, the FPM sets the priority of each SCTP data stream in the traffic set and computes the benefit Q(f_i) of successfully transmitting each data stream as a function of the current time time_now, the creation time time_create(f_i) of the data block corresponding to f_i, the remaining size block_remainsize(f_i) of that data block, and the priority P(f_i); it then computes the total benefit S_Q = Σ Q(f_i) of successful transmission over all data streams in the traffic set;
Step B4, according to the benefit of each data stream, the FPM computes the congestion window size of each SCTP data stream in the traffic set, FPM_CWND(f_i) = FPM_R(f_i) × last_rtt(f_i), where FPM_R(f_i) is the priority-algorithm output rate of f_i, FPM_CWND(f_i) is the congestion window size output by the priority algorithm for f_i, and L_R is the current remaining total rate to be allocated for the traffic set, whose value is set to S_CR; the computed congestion window sizes output by the priority algorithm are then assigned to the corresponding SCTP data streams in the traffic set.
Compared with the prior art, the invention has the following beneficial effects. The invention builds a usable data channel server on top of the IMS dedicated bearer, creates a new in-call data channel alongside the existing audio and video capabilities, and introduces a congestion control algorithm into the data channel service. On top of the original audio and video channels, richer interaction such as text, pictures, expressions, location, and even sharing of the mobile phone desktop can be exchanged before, during and after a call, upgrading the call from a single medium to multimedia, enabling multi-dimensional interaction and supporting a series of emerging multimedia call applications. The invention further extends the FPM into the data channel service: multiple SCTP flows sharing the same path are given different priorities according to their service types, and the FPM dynamically adjusts the rates of flows of different priorities and allocates more bandwidth to high-priority service types, so that the available bandwidth is shared fairly, overall delay and loss are reduced, and the performance problems introduced by the data channel service are resolved.
Drawings
Fig. 1 is a schematic diagram of the composition and structure of a data channel service system for an IMS network according to the present invention.
Fig. 2 is a flow chart of a data channel service implementation method for an IMS network according to the present invention.
Fig. 3 is a flowchart of specific steps for dynamically allocating congestion window sizes to flows of different priorities by the FPM when the priority control device finds that there are multiple SCTP streams passing through the same network path.
Fig. 4 is a flowchart of specific steps for the FPM to dynamically adjust the congestion window size of all the SCTP data streams in the traffic set when the congestion window size CWND of any one of the data streams in the traffic set changes.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings, in order to make the objects, technical solutions and advantages of the present invention more apparent.
As shown in fig. 1, the data channel service system for an IMS network of the present invention includes a data channel server and a plurality of mobile terminals UE, wherein:
The data channel server establishes an SCTP-based data channel with each UE, then receives the network information feedback packet sent by each UE, computes a sending rate with both a delay-based congestion control algorithm and a loss-based congestion control algorithm, and sends the SCTP data stream to the corresponding UE at the smaller of the two rates.
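To make the rate-selection rule concrete, here is a minimal sketch in the spirit of the description above; the Feedback fields, estimator classes, thresholds and gain factors are illustrative assumptions, and only the final min() reflects the rule of sending at the smaller of the delay-based and loss-based rates.

```python
# Minimal sketch of sender-side rate selection: run a delay-based and a
# loss-based controller on the same feedback and keep the smaller rate.
# All classes, thresholds and gains below are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Feedback:
    rtt_ms: float         # measured round-trip time reported by the UE
    loss_rate: float      # fraction of packets lost in the last interval
    recv_rate_bps: float  # rate observed at the receiver

class DelayBasedEstimator:
    """Back off when queuing delay (RTT above the observed minimum) grows."""
    def __init__(self, rate_bps: float):
        self.rate = rate_bps
        self.base_rtt = None

    def update(self, fb: Feedback) -> float:
        self.base_rtt = fb.rtt_ms if self.base_rtt is None else min(self.base_rtt, fb.rtt_ms)
        queuing_delay = fb.rtt_ms - self.base_rtt
        self.rate *= 0.9 if queuing_delay > 25.0 else 1.05  # illustrative thresholds
        return self.rate

class LossBasedEstimator:
    """Increase gently under low loss, back off multiplicatively under high loss."""
    def __init__(self, rate_bps: float):
        self.rate = rate_bps

    def update(self, fb: Feedback) -> float:
        if fb.loss_rate > 0.10:
            self.rate *= (1.0 - 0.5 * fb.loss_rate)
        elif fb.loss_rate < 0.02:
            self.rate *= 1.05
        return self.rate

def select_sending_rate(delay_est: DelayBasedEstimator,
                        loss_est: LossBasedEstimator,
                        fb: Feedback) -> float:
    """Compute both candidate rates and send at the smaller one."""
    return min(delay_est.update(fb), loss_est.update(fb))

# Example: feed one feedback packet and pick the sending rate.
rate = select_sending_rate(DelayBasedEstimator(1e6), LossBasedEstimator(1e6),
                           Feedback(rtt_ms=80.0, loss_rate=0.01, recv_rate_bps=9e5))
```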
The data channel server further comprises:
The data channel construction device first completes the SDP (media description information) negotiation exchange with the UE, then exchanges candidate addresses (Candidate) with the UE to establish a PeerConnection, and finally establishes a DataChannel connection with the UE.
The data channel construction device further comprises:
An SDP negotiation unit, which, on receiving the Offer SDP of the UE, applies it locally through the remote-description setter SetRemoteDescription, then creates the Answer SDP through the answer-creation method CreateAnswer, applies the answer locally through the local-description setter setLocalDescription, and sends the Answer SDP to the UE;
A Candidate exchange unit, which listens on OnICECandidate (the callback triggered after the local SDP description has been set), sends the candidate to the UE when an ICE candidate becomes available, and, when a candidate sent by the UE is received, executes the peer-network-information method AddICECandidate to add the candidate locally.
The UE further comprises:
A UE-SDP negotiation device, which creates an Offer SDP through the offer-creation method CreateOffer, sets the LocalDescription, sends the Offer SDP to the data channel server, then receives the Answer SDP sent by the data channel server and applies it locally through SetRemoteDescription;
A UE-Candidate exchange device, which listens on OnICECandidate, sends the candidate to the data channel server when an ICE candidate becomes available, and executes AddICECandidate to add the candidate locally when a candidate is received from the data channel server.
When creating the DataChannel connection, the data channel construction device of the data channel server can configure reliability through the parameters ordered (whether in-order delivery is guaranteed), maxRetransmitTimeMs (maximum time allowed for retransmission) and maxRetransmits (maximum number of retransmissions allowed).
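The three reliability parameters can be modelled as a small configuration object; the ServiceType categories, the DataChannelInit name and all concrete values below are assumptions chosen for illustration, not part of the invention.

```python
# Sketch of the DataChannel reliability parameters described above.
# Service types and concrete values are illustrative assumptions.

from dataclasses import dataclass
from enum import Enum
from typing import Optional

class ServiceType(Enum):
    MESSAGING = "messaging"          # in-call text: must not be lost
    SCREEN_SHARE = "screen_share"    # tolerates loss, prefers low latency
    FILE_TRANSFER = "file_transfer"  # bulk data: bounded retransmissions

@dataclass
class DataChannelInit:
    ordered: bool                          # whether in-order delivery is guaranteed
    max_retransmit_time_ms: Optional[int]  # maximum time allowed for retransmission
    max_retransmits: Optional[int]         # maximum number of retransmissions allowed

def reliability_for(service: ServiceType) -> DataChannelInit:
    """Pick partial-reliability settings per service type (illustrative policy)."""
    if service is ServiceType.MESSAGING:
        return DataChannelInit(ordered=True, max_retransmit_time_ms=None, max_retransmits=None)
    if service is ServiceType.SCREEN_SHARE:
        return DataChannelInit(ordered=False, max_retransmit_time_ms=200, max_retransmits=None)
    return DataChannelInit(ordered=True, max_retransmit_time_ms=None, max_retransmits=10)
```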
The invention can also use the reinforcement learning optimization algorithm GCC to adjust the parameters of the delay-based congestion control algorithm or the loss-based congestion control algorithm in the data channel server; the data channel server then further comprises:
A reinforcement learning optimization device, which builds the reinforcement learning model GCC. Its input is the state space s_t = (T_t, D_t, R_t, L_t), where T_t is the throughput vector at time t, D_t is the inter-packet delay jitter vector at time t, R_t is the RTT vector at time t, and L_t is the packet loss rate vector at time t; its output is the predicted sending rate. The reward function uses throughput as positive feedback and delay and packet loss rate as negative feedback: reward = α × throughput + β × delay + γ × packet loss rate, where α, β and γ are the weight coefficients of throughput, delay and packet loss rate respectively, representing their influence on the reward, with α > 0, β < 0 and γ < 0. A fully connected network is used for feature extraction, with 2 fully connected layers and 64 neurons per layer.
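A small numpy sketch of the reward function and the two-layer, 64-neuron feature extractor described above; the state vector layout, the weight initialization and the final linear rate head are assumptions added only to make the example runnable.

```python
# Sketch of the GCC reward and the 2-layer fully connected feature extractor.
# Weight initialization, state layout and the output head are assumptions.

import numpy as np

ALPHA, BETA, GAMMA = 1.0, -0.5, -0.8  # alpha > 0, beta < 0, gamma < 0 (illustrative values)

def reward(throughput: float, delay: float, loss_rate: float) -> float:
    """reward = alpha * throughput + beta * delay + gamma * packet loss rate."""
    return ALPHA * throughput + BETA * delay + GAMMA * loss_rate

class GCCPolicy:
    """Two fully connected layers of 64 neurons each, then a linear rate head."""
    def __init__(self, state_dim: int, hidden: int = 64, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0.0, 0.1, (state_dim, hidden))
        self.w2 = rng.normal(0.0, 0.1, (hidden, hidden))
        self.w_out = rng.normal(0.0, 0.1, (hidden, 1))

    def predict_rate(self, state: np.ndarray) -> float:
        h = np.maximum(0.0, state @ self.w1)  # ReLU
        h = np.maximum(0.0, h @ self.w2)      # ReLU
        return (h @ self.w_out).item()        # predicted sending rate (unnormalized)

# State s_t concatenates the T_t, D_t, R_t, L_t vectors, e.g. 4 metrics over 5 intervals.
state = np.random.rand(20)
policy = GCCPolicy(state_dim=state.size)
print(policy.predict_rate(state), reward(2.5e6, 40.0, 0.01))
```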
When multiple data streams compete for shared bandwidth on the same network path, the per-flow rate is usually the outcome of competition at the bottleneck. The priority management device FPM can dynamically adjust the congestion window size of flows of different priorities, so that the available bandwidth is allocated fairly and overall delay and loss are reduced. The data channel server therefore further comprises a priority control device and an FPM, wherein:
A priority control device registers a plurality of SCTP data streams passing through the same network path with the FPM and supplies the congestion window size and network round-trip delay of each SCTP data stream: CC_CWND(f_i) and last_rtt(f_i), where f_i denotes the i-th SCTP data stream, CC_CWND(f_i) is the congestion window size of f_i, and last_rtt(f_i) is the network round-trip delay of f_i,
The FPM further comprises:
A congestion window allocation unit, which groups all SCTP data streams passing through the same network path into a traffic set and stores the network round-trip delay last_rtt(f_i) of each stream; it then computes the output rate of each SCTP data stream in the traffic set, CC_R(f_i) = CC_CWND(f_i) / last_rtt(f_i), and from these the current total output rate of the traffic set, S_CR = Σ CC_R(f_i) for i = 1..N, where N is the total number of SCTP data streams in the traffic set; it sets the priority of each SCTP data stream in the traffic set and computes the benefit Q(f_i) of successfully transmitting each data stream as a function of the current time time_now, the creation time time_create(f_i) of the data block corresponding to f_i, the remaining size block_remainsize(f_i) of that data block, and the priority P(f_i), and from these the total benefit of successful transmission over all data streams in the traffic set, S_Q = Σ Q(f_i); finally, according to the benefit of each data stream, it computes the congestion window size of each SCTP data stream in the traffic set, FPM_CWND(f_i) = FPM_R(f_i) × last_rtt(f_i), where FPM_R(f_i) is the priority-algorithm output rate of f_i, FPM_CWND(f_i) is the congestion window size output by the priority algorithm for f_i, and L_R is the current remaining total rate to be allocated for the traffic set, whose value is set to S_CR; the computed congestion window sizes output by the priority algorithm are then assigned to the corresponding SCTP data streams in the traffic set.
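The allocation performed by the congestion window allocation unit can be sketched as follows; because the exact closed forms of Q(f_i) and FPM_R(f_i) are not spelled out in the text above, the benefit() function (growing with waiting time, remaining block size and priority) and the proportional split of L_R according to Q(f_i)/S_Q are assumptions consistent with the description rather than the patented formulas.

```python
# Sketch of the FPM congestion window allocation over one traffic set.
# benefit() and the proportional split of L_R over Q are assumptions.

import time
from dataclasses import dataclass

@dataclass
class Flow:
    name: str
    cc_cwnd: float       # CC_CWND(f_i), bytes
    last_rtt: float      # last_rtt(f_i), seconds
    priority: float      # P(f_i), larger means more important
    block_create: float  # time_create(f_i)
    block_remain: float  # block_remainsize(f_i), bytes
    fpm_cwnd: float = 0.0

def output_rate(f: Flow) -> float:
    """CC_R(f_i) = CC_CWND(f_i) / last_rtt(f_i)."""
    return f.cc_cwnd / f.last_rtt

def benefit(f: Flow, now: float) -> float:
    """Assumed Q(f_i): grows with waiting time, remaining block size and priority."""
    waited = max(now - f.block_create, 1e-3)
    return f.priority * waited * f.block_remain

def allocate_cwnd(traffic_set: list[Flow]) -> None:
    now = time.time()
    s_cr = sum(output_rate(f) for f in traffic_set)  # S_CR: current total output rate
    s_q = sum(benefit(f, now) for f in traffic_set)  # S_Q: total benefit
    l_r = s_cr                                       # L_R is set to S_CR
    for f in traffic_set:
        fpm_r = l_r * benefit(f, now) / s_q          # assumed FPM_R(f_i): proportional to Q
        f.fpm_cwnd = fpm_r * f.last_rtt              # FPM_CWND(f_i) = FPM_R(f_i) * last_rtt(f_i)
```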
When the congestion window size CWND of an SCTP data stream in the traffic set changes, the FPM can also dynamically adjust the congestion window sizes of all data streams in the traffic set. When the priority control device detects that the congestion window size CWND of any SCTP data stream f_a in the traffic set has changed, it sends the current CC_CWND(f_a) and last_rtt(f_a) of f_a to the FPM, and the FPM further comprises:
A congestion window adjustment unit, which updates the locally stored last_rtt(f_a) of f_a and computes the output rate CC_R(f_a) = CC_CWND(f_a) / last_rtt(f_a); it then looks up the traffic set to which f_a belongs and adjusts that traffic set's current total output rate to S_CR' = S_CR + CC_R(f_a) - FPM_R(f_a), where FPM_R(f_a) is the priority-algorithm output rate last allocated to f_a by the FPM; it updates the current remaining total rate to be allocated L_R of that traffic set to S_CR', recomputes the congestion window size output by the priority algorithm for each SCTP data stream in that traffic set, FPM_CWND(f_j) = FPM_R(f_j) × last_rtt(f_j), where f_j is the j-th SCTP data stream of the traffic set to which f_a belongs, and then assigns the computed congestion window sizes output by the priority algorithm to the corresponding SCTP data streams in the traffic set to which f_a belongs.
As shown in fig. 2, the method for implementing a data channel server for an IMS network of the present invention includes:
Step one, each mobile terminal UE and the data channel server establish a data channel based on the SCTP protocol;
Step two, the data channel server receives the network information feedback packet sent by each UE, computes a sending rate with both a delay-based congestion control algorithm and a loss-based congestion control algorithm, and sends the SCTP data stream to the UE at the smaller of the two rates. In step two, the Google congestion control (GCC) algorithm may specifically be adopted to calculate the sending rate.
Step one of fig. 2 may further include:
Step A1, the UE and the data channel server complete the SDP (media description information) negotiation exchange;
Step A2, the UE and the data channel server exchange candidate addresses (Candidate) and establish a PeerConnection;
Step A3, the UE establishes a DataChannel connection with the data channel server.
Step A1 may further include:
Step A11, the UE creates an Offer SDP through the offer-creation method CreateOffer, sets the LocalDescription and sends the Offer SDP to the data channel server;
Step A12, the data channel server receives the Offer SDP of the UE and applies it locally through the remote-description setter SetRemoteDescription;
Step A13, the data channel server creates the Answer SDP through the answer-creation method CreateAnswer, applies the answer locally through the local-description setter setLocalDescription, and sends the Answer SDP to the UE;
Step A14, the UE receives the Answer SDP sent by the data channel server and applies it locally through SetRemoteDescription.
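The A11 to A14 exchange can be summarized with a simplified model; PeerEndpoint and its methods only mirror the call names used above (CreateOffer, SetRemoteDescription, CreateAnswer, setLocalDescription) and are stand-ins rather than a real WebRTC API.

```python
# Simplified model of the Offer/Answer exchange in steps A11 to A14.
# PeerEndpoint is a stand-in, not a real WebRTC implementation.

class PeerEndpoint:
    def __init__(self, name: str):
        self.name = name
        self.local_description = None
        self.remote_description = None

    def create_offer(self) -> str:                       # CreateOffer
        return f"offer-sdp-from-{self.name}"

    def create_answer(self) -> str:                      # CreateAnswer
        return f"answer-sdp-from-{self.name}"

    def set_local_description(self, sdp: str) -> None:   # setLocalDescription
        self.local_description = sdp

    def set_remote_description(self, sdp: str) -> None:  # SetRemoteDescription
        self.remote_description = sdp

ue, server = PeerEndpoint("UE"), PeerEndpoint("DCS")

offer = ue.create_offer()             # A11: UE creates the Offer SDP
ue.set_local_description(offer)       #      and sets it as its LocalDescription
server.set_remote_description(offer)  # A12: server applies the Offer SDP as remote description

answer = server.create_answer()       # A13: server creates the Answer SDP
server.set_local_description(answer)  #      and applies it locally
ue.set_remote_description(answer)     # A14: UE applies the Answer SDP as remote description
```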
Step A2 may further include: the data channel server (or the UE) listens on OnICECandidate and, when an ICE candidate becomes available, sends the candidate to the UE (or the data channel server), which executes the peer-network-information method AddICECandidate to add the candidate locally.
When creating the DataChannel connection in step A3, reliability can be configured through the DataChannel.Init parameters ordered (whether in-order delivery is guaranteed), maxRetransmitTimeMs (maximum time allowed for retransmission) and maxRetransmits (maximum number of retransmissions allowed).
In step two of fig. 2, the present invention may further use the reinforcement learning optimization algorithm GCC to adjust the parameters of the delay-based congestion control algorithm or the loss-based congestion control algorithm; the method then further includes:
constructing the reinforcement learning model GCC, whose input is the state space s_t = (T_t, D_t, R_t, L_t), where T_t is the throughput vector at time t, D_t is the inter-packet delay jitter vector at time t, R_t is the RTT vector at time t, and L_t is the packet loss rate vector at time t, and whose output is the predicted sending rate; the reward function uses throughput as positive feedback and delay and packet loss rate as negative feedback: reward = α × throughput + β × delay + γ × packet loss rate, where α, β and γ are the weight coefficients of throughput, delay and packet loss rate respectively, representing their influence on the reward, with α > 0, β < 0 and γ < 0; a fully connected network is used for feature extraction, with 2 fully connected layers and 64 neurons per layer.
When multiple data streams compete for shared bandwidth on the same network path, the per-flow rate is usually the outcome of competition at the bottleneck. The priority management device FPM can dynamically allocate congestion window sizes to flows of different priorities, so that the available bandwidth is allocated fairly and overall delay and loss are reduced. As shown in fig. 3, the data channel server includes a priority control device and an FPM, and when the priority control device finds that multiple SCTP data streams flow through the same network path, the method further includes:
Step B1, the priority control device registers the plurality of SCTP data streams passing through the same network path with the FPM and supplies the congestion window size and network round-trip delay of each SCTP data stream: CC_CWND(f_i) and last_rtt(f_i), where f_i denotes the i-th SCTP data stream, CC_CWND(f_i) is the congestion window size of f_i, and last_rtt(f_i) is the network round-trip delay of f_i;
Step B2, the FPM groups all SCTP data streams passing through the same network path into a traffic set and stores the network round-trip delay last_rtt(f_i) of each stream; it then computes the output rate of each SCTP data stream in the traffic set, CC_R(f_i) = CC_CWND(f_i) / last_rtt(f_i), and from these the current total output rate of the traffic set, S_CR = Σ CC_R(f_i) for i = 1..N, where N is the total number of SCTP data streams in the traffic set;
Step B3, the FPM sets the priority of each SCTP data stream in the traffic set and computes the benefit Q(f_i) of successfully transmitting each data stream as a function of the current time time_now, the creation time time_create(f_i) of the data block corresponding to f_i, the remaining size block_remainsize(f_i) of that data block, and the priority P(f_i); it then computes the total benefit S_Q = Σ Q(f_i) of successful transmission over all data streams in the traffic set;
Step B4, according to the benefit of each data stream, the FPM computes the congestion window size of each SCTP data stream in the traffic set, FPM_CWND(f_i) = FPM_R(f_i) × last_rtt(f_i), where FPM_R(f_i) is the priority-algorithm output rate of f_i, FPM_CWND(f_i) is the congestion window size output by the priority algorithm for f_i, and L_R is the current remaining total rate to be allocated for the traffic set, whose value is set to S_CR; the computed congestion window sizes output by the priority algorithm are then assigned to the corresponding SCTP data streams in the traffic set.
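A standalone numeric walk-through of steps B2 to B4 for two flows sharing one path; the benefit values Q(f_i) and the proportional split of L_R are the same assumptions as in the earlier sketch, and all numbers are invented for illustration.

```python
# Numeric walk-through of steps B2 to B4 for two flows on one path.
# Benefit values and the proportional split are illustrative assumptions.

cc_cwnd = {"screen_share": 120_000, "file_transfer": 120_000}  # bytes
last_rtt = {"screen_share": 0.040, "file_transfer": 0.040}     # seconds
q = {"screen_share": 3.0, "file_transfer": 1.0}                # assumed benefits Q(f_i)

cc_r = {f: cc_cwnd[f] / last_rtt[f] for f in cc_cwnd}          # B2: CC_R(f_i)
s_cr = sum(cc_r.values())                                      # B2: S_CR = 6,000,000 bytes/s
s_q = sum(q.values())                                          # B3: S_Q = 4.0
l_r = s_cr                                                     # B4: L_R is set to S_CR
fpm_cwnd = {f: (l_r * q[f] / s_q) * last_rtt[f] for f in q}    # B4: FPM_CWND(f_i)
print(fpm_cwnd)  # {'screen_share': 180000.0, 'file_transfer': 60000.0}
```

Under these assumptions the higher-priority flow ends up with three quarters of the path's window budget, which is the fairness-with-priority behaviour the FPM is introduced to provide.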
As shown in fig. 4, when the congestion window size CWND of any SCTP data stream in the traffic set changes (taking data stream f_a as an example), the FPM may dynamically adjust the congestion window sizes of all data streams in the traffic set, and the method further includes:
Step C1, the priority control device sends the current CC_CWND(f_a) and last_rtt(f_a) of f_a to the FPM;
Step C2, the FPM updates the locally stored last_rtt(f_a) of f_a and computes the output rate CC_R(f_a) = CC_CWND(f_a) / last_rtt(f_a); it then looks up the traffic set to which f_a belongs and adjusts that traffic set's current total output rate to S_CR' = S_CR + CC_R(f_a) - FPM_R(f_a), where FPM_R(f_a) is the priority-algorithm output rate last allocated to f_a by the FPM;
Step C3, the FPM updates the current remaining total rate to be allocated L_R of the traffic set to which f_a belongs to S_CR', recomputes the congestion window size output by the priority algorithm for each SCTP data stream in that traffic set, FPM_CWND(f_j) = FPM_R(f_j) × last_rtt(f_j), where f_j is the j-th SCTP data stream of the traffic set to which f_a belongs, and then assigns the computed congestion window sizes output by the priority algorithm to the corresponding SCTP data streams in the traffic set to which f_a belongs.
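Steps C1 to C3 can be sketched as a single readjustment routine; the dictionary layout, the stored previous FPM_R values and the proportional re-split over Q are bookkeeping assumptions consistent with the earlier sketch.

```python
# Sketch of steps C1 to C3: when one flow's CWND changes, recompute the
# windows of every flow in its traffic set. The split over Q is an assumption.

def readjust_on_cwnd_change(traffic_set: dict, changed: str,
                            cc_cwnd: float, last_rtt: float, s_cr: float):
    """traffic_set maps flow name -> {'q': ..., 'last_rtt': ..., 'fpm_r': ...}."""
    flow = traffic_set[changed]
    flow["last_rtt"] = last_rtt             # C2: update the stored last_rtt(f_a)
    cc_r = cc_cwnd / last_rtt               # C2: CC_R(f_a) = CC_CWND(f_a) / last_rtt(f_a)
    s_cr_new = s_cr + cc_r - flow["fpm_r"]  # C2: S_CR' = S_CR + CC_R(f_a) - FPM_R(f_a)

    s_q = sum(f["q"] for f in traffic_set.values())
    fpm_cwnd = {}
    for name, f in traffic_set.items():     # C3: re-split L_R = S_CR' over every flow
        f["fpm_r"] = s_cr_new * f["q"] / s_q
        fpm_cwnd[name] = f["fpm_r"] * f["last_rtt"]  # FPM_CWND(f_j) = FPM_R(f_j) * last_rtt(f_j)
    return fpm_cwnd, s_cr_new
```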
The foregoing description covers only preferred embodiments of the invention and is not intended to limit it; any modification, equivalent replacement or improvement made within the spirit and principles of the invention falls within its scope of protection.

Claims (10)

1. A data channel service system for an IMS network, characterized by comprising a data channel server and a plurality of mobile terminals (UE), wherein: the data channel server establishes an SCTP-based data channel with each UE, receives the network information feedback packet sent by each UE, computes a sending rate with both a delay-based congestion control algorithm and a loss-based congestion control algorithm, and sends the SCTP data stream to the UE at the smaller of the two rates; the data channel server further comprises a priority control device and a priority management device (FPM), wherein: the priority control device registers a plurality of SCTP data streams passing through the same network path with the FPM and supplies the congestion window size and network round-trip delay of each SCTP data stream, CC_CWND(f_i) and last_rtt(f_i), where f_i denotes the i-th SCTP data stream, CC_CWND(f_i) is the congestion window size of f_i and last_rtt(f_i) is the network round-trip delay of f_i; the FPM further comprises: a congestion window allocation unit, which groups all SCTP data streams passing through the same network path into a traffic set, stores the network round-trip delay last_rtt(f_i) of each stream, computes the output rate of each SCTP data stream in the traffic set, CC_R(f_i) = CC_CWND(f_i) / last_rtt(f_i), and from these the current total output rate of the traffic set, S_CR = Σ CC_R(f_i) for i = 1..N, where N is the total number of SCTP data streams in the traffic set; it sets the priority of each SCTP data stream in the traffic set and computes the benefit Q(f_i) of successfully transmitting each data stream as a function of the current time time_now, the creation time time_create(f_i) of the data block corresponding to f_i, the remaining size block_remainsize(f_i) of that data block and the priority P(f_i), and from these the total benefit of successful transmission over all data streams in the traffic set, S_Q = Σ Q(f_i); finally, according to the benefit of each data stream, it computes the congestion window size of each SCTP data stream in the traffic set, FPM_CWND(f_i) = FPM_R(f_i) × last_rtt(f_i), where FPM_R(f_i) is the priority-algorithm output rate of f_i, FPM_CWND(f_i) is the congestion window size output by the priority algorithm for f_i, and L_R is the current remaining total rate to be allocated for the traffic set, whose value is set to S_CR, and assigns the computed congestion window sizes output by the priority algorithm to the corresponding SCTP data streams in the traffic set.

2. The system according to claim 1, characterized in that the data channel server further comprises: a data channel construction device, which first completes the SDP (media description information) negotiation exchange with the UE, then exchanges candidate addresses (Candidate) with the UE to establish a PeerConnection, and finally establishes a DataChannel connection with the UE.

3. The system according to claim 2, characterized in that the data channel construction device further comprises: an SDP negotiation unit, which, on receiving the Offer SDP of the UE, applies it locally through the remote-description setter SetRemoteDescription, creates the Answer SDP through the answer-creation method CreateAnswer, applies the answer locally through the local-description setter setLocalDescription, and sends the Answer SDP to the UE; a Candidate exchange unit, which listens on OnICECandidate, sends the candidate to the UE when an ICE candidate becomes available, and, when a candidate sent by the UE is received, executes the peer-network-information method AddICECandidate to add the candidate locally; and the UE further comprises: a UE-SDP negotiation device, which creates an Offer SDP through the offer-creation method CreateOffer, sets the LocalDescription, sends the Offer SDP to the data channel server, then receives the Answer SDP sent by the data channel server and applies it locally through SetRemoteDescription; a UE-Candidate exchange device, which listens on OnICECandidate, sends the candidate to the data channel server when an ICE candidate becomes available, and executes AddICECandidate to add the candidate locally when a candidate is received from the data channel server.

4. The system according to claim 1, characterized in that the data channel server further comprises: a reinforcement learning optimization device, which builds the reinforcement learning model GCC, whose input is the state space s_t = (T_t, D_t, R_t, L_t), where T_t is the throughput vector at time t, D_t is the inter-packet delay jitter vector at time t, R_t is the RTT vector at time t and L_t is the packet loss rate vector at time t, and whose output is the predicted sending rate; the reward function uses throughput as positive feedback and delay and packet loss rate as negative feedback, reward = α × throughput + β × delay + γ × packet loss rate, where α, β and γ are the weight coefficients of throughput, delay and packet loss rate respectively, representing their influence on the reward, with α > 0, β < 0 and γ < 0; a fully connected network is used for feature extraction, with 2 fully connected layers and 64 neurons per layer.

5. The system according to claim 1, characterized in that, when the priority control device detects that the congestion window size CWND of an SCTP data stream f_a in the traffic set has changed, it sends the current CC_CWND(f_a) and last_rtt(f_a) of f_a to the FPM, and the FPM further comprises: a congestion window adjustment unit, which updates the locally stored last_rtt(f_a) of f_a, computes the output rate CC_R(f_a) = CC_CWND(f_a) / last_rtt(f_a), looks up the traffic set to which f_a belongs and adjusts that traffic set's current total output rate to S_CR' = S_CR + CC_R(f_a) - FPM_R(f_a), where FPM_R(f_a) is the priority-algorithm output rate last allocated to f_a by the FPM; it then updates the current remaining total rate to be allocated L_R of that traffic set to S_CR', recomputes the congestion window size output by the priority algorithm for each SCTP data stream in that traffic set, FPM_CWND(f_j) = FPM_R(f_j) × last_rtt(f_j), where f_j is the j-th SCTP data stream of the traffic set to which f_a belongs, and assigns the computed congestion window sizes output by the priority algorithm to the corresponding SCTP data streams in the traffic set to which f_a belongs.

6. A data channel server implementation method for an IMS network, characterized by comprising: step one, each mobile terminal UE and the data channel server establish a data channel based on the SCTP protocol; step two, the data channel server receives the network information feedback packet sent by each UE, computes a sending rate with both a delay-based congestion control algorithm and a loss-based congestion control algorithm, and sends the SCTP data stream to the UE at the smaller of the two rates; the data channel server comprises a priority control device and a priority management device (FPM), and when the priority control device finds that multiple SCTP data streams pass through the same network path the method further comprises: step B1, the priority control device registers the plurality of SCTP data streams passing through the same network path with the FPM and supplies the congestion window size and network round-trip delay of each stream, CC_CWND(f_i) and last_rtt(f_i), where f_i denotes the i-th SCTP data stream, CC_CWND(f_i) is the congestion window size of f_i and last_rtt(f_i) is the network round-trip delay of f_i; step B2, the FPM groups all SCTP data streams passing through the same network path into a traffic set, stores the network round-trip delay last_rtt(f_i) of each stream, computes the output rate of each stream in the traffic set, CC_R(f_i) = CC_CWND(f_i) / last_rtt(f_i), and from these the current total output rate of the traffic set, S_CR = Σ CC_R(f_i) for i = 1..N, where N is the total number of SCTP data streams in the traffic set; step B3, the FPM sets the priority of each SCTP data stream in the traffic set and computes the benefit Q(f_i) of successfully transmitting each data stream as a function of the current time time_now, the creation time time_create(f_i) of the data block corresponding to f_i, the remaining size block_remainsize(f_i) of that data block and the priority P(f_i), and from these the total benefit S_Q = Σ Q(f_i) of successful transmission over all data streams in the traffic set; step B4, according to the benefit of each data stream, the FPM computes the congestion window size of each SCTP data stream in the traffic set, FPM_CWND(f_i) = FPM_R(f_i) × last_rtt(f_i), where FPM_R(f_i) is the priority-algorithm output rate of f_i, FPM_CWND(f_i) is the congestion window size output by the priority algorithm for f_i, and L_R is the current remaining total rate to be allocated for the traffic set, whose value is set to S_CR, and then assigns the computed congestion window sizes output by the priority algorithm to the corresponding SCTP data streams in the traffic set.

7. The method according to claim 6, characterized in that step one comprises: step A1, the UE and the data channel server complete the SDP (media description information) negotiation exchange; step A2, the UE and the data channel server exchange candidate addresses (Candidate) and establish a PeerConnection; step A3, the UE and the data channel server establish a DataChannel connection.

8. The method according to claim 7, characterized in that step A1 comprises: step A11, the UE creates an Offer SDP through the offer-creation method CreateOffer, sets the LocalDescription and sends it to the data channel server; step A12, the data channel server receives the Offer SDP of the UE and applies it locally through the remote-description setter SetRemoteDescription; step A13, the data channel server creates the Answer SDP through the answer-creation method CreateAnswer, applies the answer locally through the local-description setter setLocalDescription and sends the Answer SDP to the UE; step A14, the UE receives the Answer SDP sent by the data channel server and applies it locally through SetRemoteDescription; and step A2 comprises: the data channel server or the UE listens on OnICECandidate and, when an ICE candidate becomes available, sends the candidate to the UE or the data channel server, which executes the peer-network-information method AddICECandidate to add the candidate locally.

9. The method according to claim 6, characterized in that the reinforcement learning optimization algorithm GCC is used to adjust the parameters of the delay-based congestion control algorithm or the loss-based congestion control algorithm, the method further comprising: constructing the reinforcement learning model GCC, whose input is the state space s_t = (T_t, D_t, R_t, L_t), where T_t is the throughput vector at time t, D_t is the inter-packet delay jitter vector at time t, R_t is the RTT vector at time t and L_t is the packet loss rate vector at time t, and whose output is the predicted sending rate; the reward function uses throughput as positive feedback and delay and packet loss rate as negative feedback, reward = α × throughput + β × delay + γ × packet loss rate, where α, β and γ are the weight coefficients of throughput, delay and packet loss rate respectively, representing their influence on the reward, with α > 0, β < 0 and γ < 0; a fully connected network is used for feature extraction, with 2 fully connected layers and 64 neurons per layer.

10. The method according to claim 6, characterized in that, when the congestion window size CWND of an SCTP data stream in the traffic set changes (taking data stream f_a as an example), the method further comprises: step C1, the priority control device sends the current CC_CWND(f_a) and last_rtt(f_a) of f_a to the FPM; step C2, the FPM updates the locally stored last_rtt(f_a) of f_a, computes the output rate CC_R(f_a) = CC_CWND(f_a) / last_rtt(f_a), looks up the traffic set to which f_a belongs and adjusts that traffic set's current total output rate to S_CR' = S_CR + CC_R(f_a) - FPM_R(f_a), where FPM_R(f_a) is the priority-algorithm output rate last allocated to f_a by the FPM; step C3, the FPM updates the current remaining total rate to be allocated L_R of the traffic set to which f_a belongs to S_CR', recomputes the congestion window size output by the priority algorithm for each SCTP data stream in that traffic set, FPM_CWND(f_j) = FPM_R(f_j) × last_rtt(f_j), where f_j is the j-th SCTP data stream of the traffic set to which f_a belongs, and then assigns the computed congestion window sizes output by the priority algorithm to the corresponding SCTP data streams in the traffic set to which f_a belongs.
CN202211614952.3A 2022-12-14 2022-12-14 A data channel service system and implementation method for IMS network Active CN116032893B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211614952.3A CN116032893B (en) 2022-12-14 2022-12-14 A data channel service system and implementation method for IMS network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211614952.3A CN116032893B (en) 2022-12-14 2022-12-14 A data channel service system and implementation method for IMS network

Publications (2)

Publication Number Publication Date
CN116032893A CN116032893A (en) 2023-04-28
CN116032893B true CN116032893B (en) 2024-11-26

Family

ID=86090409

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211614952.3A Active CN116032893B (en) 2022-12-14 2022-12-14 A data channel service system and implementation method for IMS network

Country Status (1)

Country Link
CN (1) CN116032893B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN120111001B (en) * 2025-05-07 2025-07-22 中国人民解放军国防科技大学 Receiver-driven hybrid traffic transmission method and computing network

Citations (2)

Publication number Priority date Publication date Assignee Title
CN101809961A (en) * 2007-09-28 2010-08-18 爱立信电话股份有限公司 Fault recovery in the IP Multimedia System network
CN112469079A (en) * 2020-11-05 2021-03-09 南京大学 Novel congestion control method combining deep reinforcement learning and traditional congestion control

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
CN102769520B (en) * 2012-07-17 2015-01-28 西安电子科技大学 Wireless network congestion control method based on stream control transmission protocol (SCTP)
US9819701B2 (en) * 2013-06-25 2017-11-14 Avago Technologies General Ip (Singapore) Pte. Ltd. Low latency IMS-based media handoff between a cellular network and a WLAN
CN103634299B (en) * 2013-11-14 2016-09-14 北京邮电大学 Based on multi-link real time streaming terminal and method

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN101809961A (en) * 2007-09-28 2010-08-18 爱立信电话股份有限公司 Fault recovery in the IP Multimedia System network
CN112469079A (en) * 2020-11-05 2021-03-09 南京大学 Novel congestion control method combining deep reinforcement learning and traditional congestion control

Also Published As

Publication number Publication date
CN116032893A (en) 2023-04-28

Similar Documents

Publication Publication Date Title
JP5276589B2 (en) A method for optimizing information transfer in telecommunications networks.
CN107743698B (en) Method and apparatus for multipath media delivery
CN102365857B (en) Method and apparatus for efficient transmission of multimedia streams for conference calls
CN105991856B (en) VOIP routing based on RTP server to server routing
US9100279B2 (en) Method, apparatus, and system for forwarding data in communications system
CN101902392A (en) Communication method and system
KR101065810B1 (en) Bidding network
EP2785007B1 (en) Managing streamed communication
KR101705440B1 (en) Hybrid cloud media architecture for media communications
EP1964334A2 (en) System and/or method for bidding
US12028382B2 (en) Communication method, communication apparatus, and communication system
EP1966953A2 (en) System and/or method for downstream bidding
CN116032893B (en) A data channel service system and implementation method for IMS network
US8428074B2 (en) Back-to back H.323 proxy gatekeeper
US7599399B1 (en) Jitter buffer management
US8068128B2 (en) Visual communication server and communication system
US11546398B1 (en) Real-time transport (RTC) with low latency and high scalability
Vieira et al. VoIP traffic and resource management using Software-Defined Networking
Mohameda et al. RSVP BASED MPLS VERSUS IP PERFORMANCE EVALUATION
US20070153828A1 (en) System and method to negotiate the addition or deletion of a PPP link without data loss
Shan et al. Border Media Gateway: Extending Multimedia Multicast Gateway to Support Inter-AS Conferencing
Singh Data networks and the internet
Saito et al. Evaluation of traffic dispersion methods for synchronous distributed multimedia data transmission on multiple links for group of mobile hosts
Tsai Bandwidth allocation schemes for FTTH networks
Miladinovic et al. Multiparty Conference Signalling using the Session Initiation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant