
CN111935246A - User generated content uploading method and system based on cloud edge collaboration - Google Patents


Info

Publication number
CN111935246A
CN111935246A (application CN202010705365.XA)
Authority
CN
China
Prior art keywords
edge node
preset
uploading
edge
content data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010705365.XA
Other languages
Chinese (zh)
Inventor
张新常
赵彦玲
朱效民
王茂励
魏亮
毕洁东
曹文鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National Supercomputing Center in Jinan
Original Assignee
National Supercomputing Center in Jinan
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National Supercomputing Center in Jinan filed Critical National Supercomputing Center in Jinan
Priority to CN202010705365.XA priority Critical patent/CN111935246A/en
Publication of CN111935246A publication Critical patent/CN111935246A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • H04L67/1097Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00Routing or path finding of packets in data switching networks
    • H04L45/12Shortest path evaluation
    • H04L45/122Shortest path evaluation by minimising distances, e.g. by selecting a route with minimum of number of hops
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00Routing or path finding of packets in data switching networks
    • H04L45/24Multipath
    • H04L45/245Link aggregation, e.g. trunking
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00Routing or path finding of packets in data switching networks
    • H04L45/44Distributed routing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/16Implementation or adaptation of Internet protocol [IP], of transmission control protocol [TCP] or of user datagram protocol [UDP]
    • H04L69/163In-band adaptation of TCP data exchange; In-band control procedures

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Information Transfer Between Computers (AREA)

Abstract


The invention discloses a method and system for uploading user-generated content based on cloud-edge collaboration, and relates to the field of network data transmission. The method includes: a user terminal transmits content data to a preset edge node; the preset edge node caches the content data and uploads it to a target server according to a preset logical link in the main edge relay route corresponding to that node. The main edge relay route of the edge node opens new room for optimizing network bandwidth utilization, relieves the pressure on conventional transmission, and significantly improves user experience while avoiding network congestion.


Description

User generated content uploading method and system based on cloud edge collaboration
Technical Field
The invention relates to the field of network data transmission, in particular to a user generated content uploading method and system based on cloud edge cooperation.
Background
With the rapid iteration of information network technology, the media industry has flourished and expanded: content production has turned from the professional activity of a few authors into an everyday activity in which the whole world can take part. Existing data-uploading technology cannot upload user-generated content effectively and rapidly. Non-real-time content is uploaded in the ordinary transmission mode, so it occupies the channel resources of real-time uploads and crowds the transmission channels; this both disturbs real-time content transmission and prevents the non-real-time content itself from uploading normally. In practical applications, network congestion therefore occurs frequently, non-real-time content cannot be uploaded normally, and user experience suffers greatly.
Disclosure of Invention
The invention aims to solve the technical problem of the prior art and provides a user generated content uploading method and system based on cloud-edge collaboration.
The technical scheme for solving the technical problems is as follows:
a user generated content uploading method based on cloud edge collaboration comprises the following steps:
S1, the user terminal transmits the content data to a preset edge node;
S2, the preset edge node caches the content data and uploads it to a target server according to a preset logical link in the main edge relay route corresponding to the preset edge node.
The invention has the beneficial effects that: the scheme makes full use of edge-computing and cloud-computing capabilities; content data transmitted from the user side to the preset edge node need not wait for a successful upload to the target server, which remarkably improves user experience. The content data is uploaded to the target server through a preset logical link in the main edge relay route corresponding to the preset edge node; the main edge relay route of the edge node provides new room for optimizing network bandwidth utilization, relieves the pressure of conventional transmission, and largely avoids network congestion.
Further, before the S2, the method further includes:
S21, the preset edge node is assigned, through the cloud center, a main edge relay route corresponding to the upload conditions of the content data;
S22, the cloud center constructs the preset logical link in the main edge relay route according to link-avoidance levels.
The beneficial effect of adopting the further scheme is that: a performance-optimized main edge relay routing structure for non-real-time uploading is constructed according to the upload conditions of the user's content data, providing new network bandwidth for uploading user-generated content; determining logical links by avoidance level helps bound the highest physical-link sharing degree of the logical links in a path.
Further, the method also includes: the target server sends the content data to a third-party requesting end. When the content data has not been completely received, the target server sends the part it has received to the third-party requesting end; when the rate at which the target server sends content data to the third-party requesting end exceeds the upload rate of the preset edge node, the preset edge node caching the content data is notified to send the remaining part of the requested data to the target server over TCP.
The beneficial effect of adopting the further scheme is that: when the data requested by the third-party requesting end has not been completely uploaded, the content data cached in the preset edge node is uploaded to the server over TCP. Conventional TCP controls the transmission rate from the user to the nearby edge node, and when the relay route fails to upload the requested data in time, TCP transmission serves as the emergency path, so the third-party requesting end need not wait overlong for an incomplete upload, and the transmission efficiency of non-real-time content uploading is guaranteed.
Further, the S21 specifically includes:
judging, according to the cached content data, whether a main edge relay route corresponding to the preset edge node already exists; if it exists and satisfies the upload conditions of the content data, continuing to use it; otherwise, calculating a main edge relay route corresponding to the preset edge node according to the upload conditions of the content data.
The beneficial effect of adopting the further scheme is that: by either reusing the existing main edge relay route of the preset edge node or calculating one according to the upload conditions, the cached content data is uploaded to the corresponding target server while making full use of idle network bandwidth resources.
Further, the S1 is preceded by:
S11, search for the idle edge node closest to the user terminal to serve as the preset edge node, and send a content-upload request to it; when the preset edge node accepts the upload request, proceed with step S1.
The beneficial effect of adopting the further scheme is that: searching for the idle node closest to the user side to serve as the preset edge node and target edge node realizes selection of the optimal edge node and provides a fast content-upload service.
Another technical solution of the present invention for solving the above technical problems is as follows:
a user-generated content upload system based on cloud-edge collaboration, comprising: the system comprises a user side, a preset edge node and a target server;
the user side is used for transmitting content data to the preset edge node;
the preset edge node is used for caching the content data and uploading the content data to a target server according to a preset logic link in a main edge relay route corresponding to the preset edge node.
The invention has the beneficial effects that: the scheme makes full use of edge-computing and cloud-computing capabilities; content data transmitted from the user side to the preset edge node need not wait for a successful upload to the target server, which remarkably improves user experience. The content data is uploaded to the target server through a preset logical link in the main edge relay route corresponding to the preset edge node; the main edge relay route of the edge node provides new room for optimizing network bandwidth utilization, relieves the pressure of conventional transmission, and largely avoids network congestion.
Further, the preset edge node is configured to be assigned, through the cloud center, a main edge relay route corresponding to the upload conditions of the content data; the cloud center is configured to construct the preset logical link in the main edge relay route according to link-avoidance levels.
The beneficial effect of adopting the further scheme is that: a performance-optimized main edge relay routing structure for non-real-time uploading is constructed according to the upload conditions of the user's content data, providing new network bandwidth for uploading user-generated content; determining logical links by avoidance level helps bound the highest physical-link sharing degree of the logical links in a path.
Further, the target server is configured to: when it sends the content data to a third-party requesting end and the content data has not been completely received, send the part it has received to the third-party requesting end; and when the rate at which it sends content data to the third-party requesting end exceeds the upload rate of the preset edge node, notify the preset edge node caching the content data to send the remaining part of the requested data to the target server over TCP.
The beneficial effect of adopting the further scheme is that: when the data requested by the third-party requesting end has not been completely uploaded, the content data cached in the preset edge node is uploaded to the server over TCP. Conventional TCP controls the transmission rate from the user to the nearby edge node, and when the relay route fails to upload the requested data in time, TCP transmission serves as the emergency path, so the third-party requesting end need not wait overlong for an incomplete upload, and the transmission efficiency of non-real-time content uploading is guaranteed.
Further, the preset edge node is specifically configured to judge, according to the cached content data, whether a main edge relay route corresponding to the preset edge node exists; if it exists and satisfies the upload conditions of the content data, to continue using it; otherwise, to calculate a main edge relay route corresponding to the preset edge node according to the upload conditions of the content data.
The beneficial effect of adopting the further scheme is that: by either reusing the existing main edge relay route of the preset edge node or calculating one according to the upload conditions, the cached content data is uploaded to the corresponding target server while making full use of idle network bandwidth resources.
Further, the system also includes an edge-node acquisition module, configured to search for the idle edge node closest to the user to serve as the preset edge node and to send a content-upload request to it; when the preset edge node accepts the upload request, the user transmits the content data to the preset edge node.
The beneficial effect of adopting the further scheme is that: searching for the idle edge node closest to the user to serve as the preset edge node and target edge node realizes selection of the optimal edge node and provides a fast content-upload service.
Advantages of additional aspects of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
Fig. 1 is a schematic flowchart of a user-generated content uploading method based on cloud-edge collaboration according to an embodiment of the present invention;
fig. 2 is a structural diagram of a user-generated content uploading system based on cloud-edge collaboration according to another embodiment of the present invention;
fig. 3 is a schematic flow chart of cooperative cloud-edge data upload according to an embodiment of the present invention;
fig. 4 is a schematic diagram of a cloud-edge collaboration uploading structure according to an embodiment of the present invention.
Detailed Description
The principles and features of this invention are described below in conjunction with the following drawings, which are set forth to illustrate, but are not to be construed to limit the scope of the invention.
As shown in fig. 1, a method for uploading user-generated content based on cloud-edge collaboration provided in an embodiment of the present invention includes: s1, the user end transmits the content data to the preset edge node;
in some examples, selecting the preset edge node may be: by default, the ue temporarily transmits UGC (User Generated Content) to the nearest edge node, and the UGC Content is Content that does not need to be transmitted in real time. Since the edge nodes are close to the clients and the non-real-time content upload requests are usually distributed, the edge nodes can provide fast temporary content upload services for the nearby clients in most cases. When a user side finds that the cache of the nearest edge node is insufficient, corresponding content uploading requests are sent to other adjacent edge nodes; when all nearby edge nodes refuse service due to insufficient cache, the content is directly uploaded to the target platform server according to the conventional method, which may be a Transmission Control Protocol (TCP) Transmission method.
S2, the preset edge node caches the content data and uploads it to the target server according to the preset logical link in the main edge relay route corresponding to the preset edge node.
In some embodiments, as shown in fig. 3, obtaining the main edge relay route may proceed as follows. After an adjacent edge node has cached the content data sent by the user end, the cloud center judges, according to the service upload requirement, whether an available main edge relay route for that edge node exists, i.e., whether the existing main edge relay route can satisfy the current upload conditions. If so, that route continues to be used for transmission; otherwise, the cloud center calculates a main edge relay route for the adjacent node according to the upload conditions, issues it to each relevant node, and uploads the cached content data to the corresponding target server using idle network bandwidth. The service upload requirement may include bandwidth, delay, packet loss rate, and the like.
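The cloud center's reuse-or-recompute decision can be sketched as below; the dictionary keys (`bandwidth`, `delay`, `loss`) and the `compute_route` callback are illustrative assumptions, not names from the patent:

```python
def route_meets(route, req):
    """Does an existing route satisfy the service upload requirement?"""
    return (route["bandwidth"] >= req["bandwidth"]
            and route["delay"] <= req["delay"]
            and route["loss"] <= req["loss"])

def select_main_route(existing_route, req, compute_route):
    """Reuse the existing main edge relay route if it still meets the
    current upload conditions; otherwise have the cloud center compute
    (and distribute) a new one via the compute_route callback."""
    if existing_route is not None and route_meets(existing_route, req):
        return existing_route
    return compute_route(req)
```

The callback keeps the sketch independent of any particular routing algorithm: the patent only specifies *when* a new route is computed, not how.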
In some embodiments, the preset logical links in the main edge relay route may be constructed according to the avoidance levels of the logical links. Calculating the avoidance levels may proceed as follows: the logical links are graded by the sharing degree of their underlying physical links; the lowest avoidance level is level 1 and is initially set to 0; routes of the same avoidance level carry equal weight. Under the shortest-path-first selection strategy, the following property holds: a logical link with a high avoidance level is used if and only if the path cannot be composed of logical links with lower avoidance levels. A shortest path from the edge node to the target server is thus constructed dynamically, the cost of a path is measured by the avoidance levels of its logical links, and lower-level links are selected preferentially. The calculation may include:
When the bottleneck physical-link sharing degree λ(i) of logical link i equals the maximum sharing degree in the existing logical-link set L, the avoidance level p(i, L) of logical link i with respect to L is:
p(i, L) = λ(i) = max{λ(j) | j ∈ L}
When the bottleneck physical-link sharing degree λ(i) of logical link i is smaller than the maximum sharing degree in the existing logical-link set L, the avoidance level p(i, L) of logical link i with respect to L is:
[formula given only as an image in the source]
wherein,
[auxiliary formula given only as an image in the source]
where L is the existing logical-link set, initially empty; λ(j) is the bottleneck physical-link sharing degree of logical link j with respect to L, expressed as the number of times logical link j appears in other main edge relay routes; and MaxP is the maximum avoidance level among the links in L. If physical link i is used as a logical link of the current node's main edge relay route, the sharing degree of link i with respect to L is increased by 1. This scheme helps bound the highest physical-link sharing degree of the logical links in a path.
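The sharing-degree bookkeeping and the first case of the avoidance-level formula can be sketched as follows. Because the second-case formula appears only as an image in the source, this sketch falls back to λ(i) there as well — an assumption, not the patent's rule — and the function names are illustrative:

```python
def sharing_degrees(routes):
    """lambda(j): number of main edge relay routes in which physical link j
    appears, i.e. the link's sharing degree as defined in the text."""
    counts = {}
    for route in routes:
        for link in route:
            counts[link] = counts.get(link, 0) + 1
    return counts

def avoidance_level(link, routes):
    """First case of the text's formula: when lambda(i) equals the maximum
    sharing degree over L, p(i, L) = lambda(i) = max{lambda(j) | j in L}.
    Second case: formula is only an image in the source; lambda(i) is used
    as a stand-in here (assumption)."""
    lam = sharing_degrees(routes)
    li = lam.get(link, 0)
    max_l = max(lam.values(), default=0)
    if li == max_l:
        return li
    return li
```

With routes [["a", "b"], ["b", "c"]], link "b" has sharing degree 2 (it appears in two routes) and so carries the highest avoidance level, steering the shortest-path construction away from it.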
In some examples, when an edge node has a UGC upload demand and L contains no main edge relay route for it, a shortest main edge relay route is constructed dynamically and the corresponding physical links are stored in L as logical links; when an edge node finishes all of its UGC transmissions, the related logical links are deleted from L.
In some examples, the method may further include: after the preset edge node has uploaded all UGC content to the corresponding target server, its main edge relay route is deleted, thereby releasing the occupied storage resources and improving the sharing headroom of the links.
According to the scheme, edge-computing and cloud-computing capabilities are fully utilized; content data transmitted from the user side to the preset edge node need not wait for a successful upload to the target server, which remarkably improves user experience. The content data is uploaded to the target server through a preset logical link in the main edge relay route corresponding to the preset edge node; the main edge relay route of the edge node provides new room for optimizing network bandwidth utilization, relieves the pressure of conventional transmission, and largely avoids network congestion.
Preferably, in any of the above embodiments, before S2 the method further includes:
S21, the preset edge node is assigned, through the cloud center, a main edge relay route corresponding to the upload conditions of the content data;
S22, the cloud center constructs the preset logical link in the main edge relay route according to link-avoidance levels. The upload conditions may include bandwidth, delay, packet loss rate, and the like.
In one embodiment, the edge nodes and the historical information of non-real-time content uploading can be managed through the cloud computing platform, based on an edge-layer network information base and an upload-activity information base. The edge-layer overlay network is the overlay network composed of the edge nodes and the logical links between them; its information includes edge-node information, inter-node distances, and related topology information, where the inter-node distance may be measured by measured delay (obtained by sending probe packets) and the topology information may include the topology structure of the edge nodes. The upload-activity information base records non-real-time content-upload activity. From the stored historical information, the cloud center analyzes and predicts the demand characteristics of non-real-time content uploading, and constructs a performance-optimized non-real-time upload structure accordingly. The demand characteristics may include bandwidth, delay, packet loss rate, and other conditions required during uploading; the upload structure represents a main edge relay route.
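A minimal sketch of the edge-layer network information base with probe-based delay measurement; the `probe` callable stands in for sending an actual probe packet, and the class and method names are hypothetical:

```python
class EdgeLayerInfoBase:
    """Caches measured inter-node delays, which the text uses as the
    inter-node distance metric.  probe(a, b) -> delay stands in for a
    real probe-packet exchange between edge nodes a and b."""

    def __init__(self, probe):
        self.probe = probe
        self.delays = {}   # (a, b) -> measured delay

    def measure(self, a, b):
        # Send a probe and record the result in the information base.
        self.delays[(a, b)] = self.probe(a, b)
        return self.delays[(a, b)]

    def distance(self, a, b):
        # Reuse a cached measurement; probe only on first request.
        if (a, b) not in self.delays:
            self.measure(a, b)
        return self.delays[(a, b)]
```

Caching the measurement matters here: the cloud center consults distances repeatedly when building upload structures, but probe traffic should stay minimal.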
The main edge relay route is the edge relay route excluding the segment from the user end to its nearest-neighbor edge node, and the scheme centers on the dynamic construction of this route. The main edge relay route is distributed centrally by the cloud center, and the links within it are constructed using the avoidance levels of the logical links. To prevent invalid non-real-time upload structures from occupying storage resources, a constructed upload structure that is not used within a specified time is destroyed to free the occupied storage space.
According to the scheme, a performance-optimized non-real-time uploading main edge relay routing structure is constructed according to the uploading condition of the content data of the user, a new network bandwidth is provided to achieve the uploading of the content generated by the user, and the highest physical link sharing degree of the logical link in the control path is facilitated through the logic link determined by avoiding the grade.
Preferably, in any of the above embodiments, the method further includes: the target server sends the content data to a third-party requesting end. When the content data has not been completely received, the target server sends the part it has received to the third-party requesting end; when the rate at which the target server sends content data to the third-party requesting end exceeds the upload rate of the preset edge node, the preset edge node caching the content data is notified to send the remaining part of the requested data to the target server over TCP.
It should be noted that the third-party requesting end may be the platform to which the user end ultimately uploads the content data, for example, uploading user-edited video data to the Douyin (TikTok) platform.
In some examples, the emergency mechanism may further operate as follows: when a third party requests content that has not been completely uploaded, the target server sends the third party the content it has received; when the sending rate is higher than the upload rate and the amount of already-received data cannot offset the negative effect of the rate difference, the edge node caching the content is notified to send the remaining content to the target server by conventional TCP transmission, so as to keep the non-real-time upload progressing. Here the sending rate is the rate at which the target server sends data to the third party, and the upload rate is the rate at which the caching edge node uploads data to the target server.
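One plausible reading of the "rate difference cannot be offset" test is that the already-received buffer would run dry before the remaining upload completes; the sketch below encodes that assumption (the source does not spell out the exact criterion, and all names are illustrative):

```python
def should_fallback_to_tcp(send_rate, upload_rate, buffered_bytes, remaining_bytes):
    """Emergency trigger: the target server drains data to the third party
    faster than the edge node uploads it, and the already-received buffer
    cannot absorb the rate gap until the upload finishes.  The drain test
    is an assumed interpretation, not the patent's exact rule."""
    if send_rate <= upload_rate:
        return False                      # upload keeps pace; no emergency
    time_to_finish = remaining_bytes / upload_rate
    drain = (send_rate - upload_rate) * time_to_finish
    return drain > buffered_bytes         # buffer exhausted before completion
```

When the predicate is true, the caching edge node is told to push the remaining content over conventional TCP instead of waiting on the relay route.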
According to the scheme, when the data requested by the third-party requesting end has not been completely uploaded, the content data cached in the preset edge node is uploaded to the server over TCP; conventional TCP controls the transmission rate from the user to the nearby edge node, and when the relay route fails to upload the requested data in time, TCP transmission serves as the emergency path, so the third-party requesting end need not wait overlong for an incomplete upload, and the transmission efficiency of non-real-time content uploading is guaranteed. Preferably, in any of the above embodiments, S21 specifically includes:
judging, according to the cached content data, whether a main edge relay route corresponding to the preset edge node exists; if it exists and satisfies the upload conditions of the content data, continuing to use it; otherwise, calculating a main edge relay route corresponding to the preset edge node according to the upload conditions of the content data. The upload conditions may include bandwidth, delay, packet loss rate, and other conditions required for uploading.
In some examples, the cloud center calculates the main edge relay routes that satisfy the bandwidth, delay, and packet-loss-rate conditions. For example, a user end needs to upload certain content data under the conditions: bandwidth 100 ± 10 bits, delay within 10 ms, packet loss rate within 1%. Comparing the data of the existing main edge routes against these conditions yields the routes with bandwidth above 110 bits, delay under 10 ms, and packet loss rate within 1%; if several routes qualify, the one that satisfies all conditions with the best packet loss rate is selected.
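The worked example above can be sketched directly; the default thresholds (bandwidth above 110, delay under 10 ms, loss within 1%) mirror the example, while the function name and dictionary keys are illustrative:

```python
def pick_main_route(candidates, bw_floor=110, max_delay=10, max_loss=0.01):
    """Filter candidate main edge routes by the example's conditions, then
    choose the qualifier with the best (lowest) packet loss rate.
    Returns None when no route satisfies every condition."""
    ok = [r for r in candidates
          if r["bandwidth"] > bw_floor
          and r["delay"] < max_delay
          and r["loss"] <= max_loss]
    return min(ok, key=lambda r: r["loss"]) if ok else None
```

A route with excellent delay but insufficient bandwidth is filtered out before the loss-rate tiebreak, matching the "satisfies all conditions, best packet loss rate" rule.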
According to the scheme, the corresponding main edge relay route of the preset edge node is selected, the existing main edge relay route of the preset edge node is selected, or the main edge relay route is calculated according to the uploading condition, so that the cached content data are uploaded to the corresponding target server by fully utilizing the idle network bandwidth resources.
Preferably, in any of the above embodiments, S1 may be preceded by:
S11, search for the idle edge node closest to the user end to serve as the preset edge node, and send a content-upload request to it; when the preset edge node accepts the upload request, proceed with step S1. Here an idle edge node is one whose cache is sufficient and which satisfies the upload conditions of the data to be uploaded; among such nodes, the one closest to the user end is selected as the preset edge node.
In some examples, several adjacent edge nodes are found according to their distance from the user end; among them, the nodes with spare cache capable of storing the data to be uploaded are identified, and from those satisfying the cache condition the closest node is selected as the preset edge node.
Specifically, the spatial position coordinates of each edge node may be stored in advance; the position of the user end is then obtained by an existing positioning device, the Euclidean distance is calculated from the user-end position and each edge node's coordinates, and the nearest node with spare cache is taken as the preset edge node.
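The Euclidean nearest-node selection can be sketched as follows; the node tuple layout (name, coordinates, free cache) is an illustrative assumption:

```python
import math

def nearest_idle_node(user_pos, nodes, content_size):
    """Among pre-stored edge nodes, return the name of the node with spare
    cache for the content that is nearest (Euclidean distance) to the
    user position, or None if no node can cache the content."""
    best = None
    for name, pos, free_cache in nodes:
        if free_cache < content_size:
            continue                      # node's cache cannot hold the upload
        d = math.dist(user_pos, pos)      # Euclidean distance to the user end
        if best is None or d < best[0]:
            best = (d, name)
    return best[1] if best else None
```

This is the per-request counterpart of the cloud center's route selection: it only decides the first hop from the user end to its nearest-neighbor edge node.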
According to the scheme, searching for the idle node closest to the user end to serve as the preset edge node and target edge node realizes selection of the optimal edge node and provides a fast content-upload service.
In one embodiment, as shown in fig. 2, a user-generated content uploading system based on cloud-edge collaboration includes: a user terminal 11, a preset edge node 12 and a target server 13;
the user terminal 11 is configured to transmit content data to a preset edge node;
the preset edge node 12 is configured to cache content data, and upload the content data to the target server 13 according to a preset logical link in the main edge relay route corresponding to the preset edge node.
In an embodiment, as shown in fig. 4, the uploading system may include a cloud center and edge nodes, which may act as a service platform providing UGC-upload cooperation to the outside, or as dedicated facilities built by a UGC-upload application service provider to improve application performance. The cloud center may include an upload-activity information base and an edge-layer network information base; it centrally assigns the logical associations among edge nodes and guarantees the structural robustness, upload fairness, and low physical-link sharing of the logical-link structure. By building non-real-time content-upload structures that use edge nodes as relays, the cloud center and edge nodes exploit the caching capacity, computing capacity, and network capacity of the edge nodes.
According to this scheme, edge computing and cloud computing capabilities are fully utilized: once the content data has been transmitted from the user side to the preset edge node, the user side does not need to wait for it to be successfully uploaded to the target server, which significantly improves user experience. The content data is uploaded to the target server over a preset logical link in the main edge relay route corresponding to the preset edge node; this relay route opens a new optimization space for network bandwidth utilization, relieves the pressure of conventional transmission, and largely avoids network congestion. Preferably, in any of the above embodiments, the preset edge node 12 is further configured to be allocated, through the cloud center, a main edge relay route matching the upload condition of the content data; and the cloud center constructs the preset logical link in the main edge relay route according to the link avoidance level.
According to this scheme, a performance-optimized main edge relay route structure for non-real-time uploading is constructed according to the upload condition of the user's content data, providing new network bandwidth for uploading user-generated content; determining the logical links by the link avoidance level helps bound the highest degree of physical-link sharing among the logical links in the path.
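The patent does not spell out the link-avoidance construction; one possible reading is that the cloud center picks, among candidate physical paths, a path whose physical links are shared by few other logical links and none beyond the avoidance level. A minimal Python sketch under that assumption (all names and the selection policy itself are illustrative):

```python
def build_logical_link(candidate_paths, link_share_count, avoidance_level):
    """Choose a logical link (a path of physical link ids) whose
    most-shared physical link stays within the avoidance level,
    preferring the path with the lowest total sharing.

    candidate_paths: list of paths, each a list of physical link ids
    link_share_count: dict mapping link id -> number of logical
        links already routed over that physical link
    avoidance_level: maximum sharing tolerated on any single link
    """
    feasible = [
        p for p in candidate_paths
        if max(link_share_count.get(link, 0) for link in p) <= avoidance_level
    ]
    if not feasible:
        return None  # no path satisfies the avoidance constraint
    return min(feasible,
               key=lambda p: sum(link_share_count.get(link, 0) for link in p))
```

Raising the avoidance level trades lower blocking (more feasible paths) against more physical-link contention, which matches the fairness and low-sharing goals stated for the cloud center.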
Preferably, in any of the above embodiments, the system further comprises: the target server 13, configured to, when sending the content data to a third-party requesting end while the content data has not been completely received, forward the part of the content data already received to the third-party requesting end; and, when the rate at which the target server sends the content data to the third-party requesting end is greater than the upload rate of the preset edge node, notify the preset edge node caching the content data to send the remaining part of the requested data to the target server over TCP.
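The rate comparison that triggers the TCP fallback can be sketched as a simple decision function (a hypothetical illustration; the patent does not specify field names, units, or thresholds):

```python
def remaining_transfer_plan(total_bytes, received_bytes,
                            send_rate_bps, upload_rate_bps):
    """Decide how the remaining content should reach the target server.

    If the server drains data toward the third-party requester faster
    than the relay route delivers it, fall back to a direct TCP push
    from the caching edge node; otherwise let the relay route finish.
    """
    remaining = total_bytes - received_bytes
    if remaining <= 0:
        return "complete"       # nothing left to transfer
    if send_rate_bps > upload_rate_bps:
        return "tcp_fallback"   # edge node pushes the remainder over TCP
    return "relay_route"        # relay route keeps up; no fallback needed
```

The point of the policy is that the third-party requester never stalls waiting for the slower relay path: the fallback is taken only when the downstream consumption rate actually exceeds the relay upload rate.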
According to this scheme, when the data requested by the third-party requesting end has not been completely uploaded, the content data cached in the preset edge node is uploaded to the server over TCP. Conventional TCP is used to control the transmission rate from the user to the nearby edge node; when the relay route fails to upload the requested data in time, the TCP transfer serves as a fallback, so the third-party requesting end does not wait excessively for an incomplete upload and the transmission efficiency of non-real-time content uploading is guaranteed. Preferably, in any of the above embodiments, the preset edge node 12 is specifically configured to determine, from the cached content data, whether a main edge relay route corresponding to the preset edge node already exists; if it exists and satisfies the upload condition of the content data, that main edge relay route continues to be used; otherwise, a main edge relay route corresponding to the preset edge node is computed according to the upload condition of the content data.
According to this scheme, the main edge relay route corresponding to the preset edge node is selected by either reusing the preset edge node's existing main edge relay route or computing a new one according to the upload condition, so that idle network bandwidth resources are fully utilized to upload the cached content data to the corresponding target server.
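A minimal sketch of this reuse-or-recompute decision (the `bandwidth` field and the `compute_route` callback are illustrative assumptions; the patent says only that the route must satisfy the upload condition):

```python
def choose_primary_relay_route(existing_route, upload_req, compute_route):
    """Reuse the preset edge node's existing primary edge relay route
    when it satisfies the upload requirement; otherwise ask the cloud
    center (modeled here as a callback) to compute a new one.

    existing_route: dict with hypothetical key 'bandwidth', or None
    upload_req:     dict with hypothetical key 'bandwidth'
    compute_route:  callable building a fresh route for the request
    """
    if existing_route and existing_route["bandwidth"] >= upload_req["bandwidth"]:
        return existing_route   # route still meets the upload condition
    return compute_route(upload_req)
```

Reuse avoids the cost of re-running route computation at the cloud center for every upload, while still falling back to a fresh route whenever the cached one can no longer meet the upload condition.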
Preferably, in any of the above embodiments, the system further comprises: an edge node acquisition module, configured to search for the idle edge node closest to the user side to serve as the preset edge node and to send a content uploading request to the preset edge node; when the preset edge node accepts the uploading request, the user side performs the operation of transmitting the content data to the preset edge node.
According to this scheme, the idle edge node closest to the user side is selected as the preset edge node, so that the optimal edge node is chosen and a fast content uploading service is provided.
It is understood that some or all of the alternative embodiments described above may be included in some embodiments.
It should be noted that the above embodiments are product embodiments corresponding to the previous method embodiments, and for the description of each optional implementation in the product embodiments, reference may be made to corresponding descriptions in the above method embodiments, and details are not described here again.
The reader should understand that in the description of this specification, reference to the description of the terms "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described method embodiments are merely illustrative: the division into steps is only a logical functional division, and in actual implementation there may be other divisions; for example, multiple steps may be combined or integrated into another step, or some features may be omitted or not performed.
The above method, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention essentially or partially contributes to the prior art, or all or part of the technical solution can be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-only memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
While the invention has been described with reference to specific embodiments, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A user generated content uploading method based on cloud edge collaboration is characterized by comprising the following steps:
s1, the user end transmits the content data to the preset edge node;
S2, caching, by the preset edge node, the content data, and uploading the content data to a target server according to a preset logical link in the main edge relay route corresponding to the preset edge node.
2. The method for uploading user-generated content based on cloud-edge collaboration as claimed in claim 1, further comprising, before the step S2:
s21, the preset edge node distributes a main edge relay route corresponding to the uploading condition of the content data through a cloud center;
s22, the cloud center constructs the preset logic link in the main edge relay route according to the link avoidance level.
3. The method for uploading user-generated content based on cloud-edge collaboration as claimed in claim 1 or 2, further comprising: when the target server sends the content data to a third-party requesting end and the content data has not been completely sent, sending, by the target server, the received part of the content data to the third-party requesting end; and, when the rate at which the target server sends the content data to the third-party requesting end is greater than the upload rate of the preset edge node, notifying the preset edge node caching the content data to send the remaining part of the requested data to the target server over TCP.
4. The method for uploading user-generated content based on cloud-edge collaboration as claimed in claim 1 or 2, wherein the S21 specifically includes:
judging, according to the cached content data, whether a main edge relay route corresponding to the preset edge node exists; if the main edge relay route exists and the uploading condition of the content data is met, continuing to use the main edge relay route; otherwise, calculating, according to the uploading condition of the content data, a main edge relay route corresponding to the preset edge node.
5. The method for uploading user-generated content based on cloud-edge collaboration as claimed in claim 1 or 2, wherein the step S1 is preceded by the step of:
S11, searching for the idle edge node closest to the user side to serve as the preset edge node, and sending a content uploading request to the preset edge node; when the preset edge node accepts the uploading request, performing step S1.
6. A user-generated content upload system based on cloud-edge collaboration, comprising: the system comprises a user side, a preset edge node and a target server;
the user side is used for transmitting content data to the preset edge node;
the preset edge node is used for caching the content data and uploading the content data to the target server according to a preset logic link in a main edge relay route corresponding to the preset edge node.
7. The system for uploading user-generated content based on cloud-edge collaboration as claimed in claim 6, wherein the preset edge node is further configured to be allocated, through a cloud center, a main edge relay route corresponding to the uploading condition of the content data; and the cloud center constructs the preset logical link in the main edge relay route according to the link avoidance level.
8. The cloud-edge-collaboration-based user-generated content uploading system according to claim 6 or 7, wherein the target server is configured to: when sending the content data to a third-party requesting end while the content data has not been completely sent, send the received part of the content data to the third-party requesting end; and, when the rate at which the target server sends the content data to the third-party requesting end is greater than the upload rate of the preset edge node, notify the preset edge node caching the content data to send the remaining part of the requested data to the target server over TCP.
9. The cloud-edge-collaboration-based user-generated content uploading system according to claim 6 or 7, wherein the preset edge node is further specifically configured to determine, according to the cached content data, whether a corresponding primary edge relay route of the preset edge node exists, and if so, and the uploading condition of the content data is met, continue to use the primary edge relay route; otherwise, according to the uploading condition of the content data, calculating a main edge relay route corresponding to the preset edge node.
10. The cloud-edge-collaboration-based user-generated content upload system of claim 6 or 7, further comprising: an edge node acquisition module, configured to search for an idle edge node closest to the user side to serve as the preset edge node and to send a content uploading request to the preset edge node, wherein, when the preset edge node accepts the uploading request, the user side performs the operation of transmitting content data to the preset edge node.
CN202010705365.XA 2020-07-21 2020-07-21 User generated content uploading method and system based on cloud edge collaboration Pending CN111935246A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010705365.XA CN111935246A (en) 2020-07-21 2020-07-21 User generated content uploading method and system based on cloud edge collaboration

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010705365.XA CN111935246A (en) 2020-07-21 2020-07-21 User generated content uploading method and system based on cloud edge collaboration

Publications (1)

Publication Number Publication Date
CN111935246A true CN111935246A (en) 2020-11-13

Family

ID=73314186

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010705365.XA Pending CN111935246A (en) 2020-07-21 2020-07-21 User generated content uploading method and system based on cloud edge collaboration

Country Status (1)

Country Link
CN (1) CN111935246A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105357041A (en) * 2015-10-30 2016-02-24 上海帝联信息科技股份有限公司 Edge node server, and log file uploading method and system
CN105681387A (en) * 2015-11-26 2016-06-15 乐视云计算有限公司 Method, device and system for uploading live video
CN105872856A (en) * 2016-03-21 2016-08-17 乐视云计算有限公司 Method and system for distributing stream media files
CN107580021A (en) * 2017-08-01 2018-01-12 北京奇艺世纪科技有限公司 A kind of method and apparatus of file transmission
CN109040298A (en) * 2018-08-31 2018-12-18 中国科学院计算机网络信息中心 Data processing method and device based on edge calculations technology
CN109660495A (en) * 2017-10-12 2019-04-19 网宿科技股份有限公司 A kind of document transmission method and device
CN110765365A (en) * 2019-10-25 2020-02-07 国网河南省电力公司信息通信公司 Implementation method, device, device and medium for distributed edge-cloud collaborative caching strategy
CN111327677A (en) * 2020-01-20 2020-06-23 南京邮电大学 A resource scheduling system and method for industrial Internet of Things based on edge computing

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112910713A (en) * 2021-03-02 2021-06-04 山东省计算中心(国家超级计算济南中心) Cloud-edge data distribution transmission method, edge node, control center and storage medium
CN114040217A (en) * 2021-11-05 2022-02-11 南京小灿灿网络科技有限公司 Double-mixed streaming media live broadcasting method
CN114241002A (en) * 2021-12-14 2022-03-25 中国电信股份有限公司 Target tracking method, system, device and medium based on cloud edge cooperation
CN114241002B (en) * 2021-12-14 2024-02-02 中国电信股份有限公司 Target tracking method, system, equipment and medium based on cloud edge cooperation
CN116708582A (en) * 2022-02-25 2023-09-05 贵州白山云科技股份有限公司 Code request method, device, medium and equipment based on distributed cloud network
CN114401317A (en) * 2022-03-25 2022-04-26 山东省计算中心(国家超级计算济南中心) Ocean buoy-oriented multipoint cooperative active cache networking method and system
CN114401317B (en) * 2022-03-25 2022-07-05 山东省计算中心(国家超级计算济南中心) A method and system for multi-point cooperative active cache networking for marine buoys
WO2024165016A1 (en) * 2023-02-10 2024-08-15 华为云计算技术有限公司 Edge node communication method based on cloud computing technology, and related device
CN116915781A (en) * 2023-09-14 2023-10-20 南京邮电大学 A blockchain-based edge collaborative caching system and method
CN117459901A (en) * 2023-12-26 2024-01-26 深圳市彩生活网络服务有限公司 Cloud platform data intelligent management system and method based on positioning technology
CN117459901B (en) * 2023-12-26 2024-03-26 深圳市彩生活网络服务有限公司 Cloud platform data intelligent management system and method based on positioning technology

Similar Documents

Publication Publication Date Title
CN111935246A (en) User generated content uploading method and system based on cloud edge collaboration
AU2020103384A4 (en) Method for Constructing Energy-efficient Network Content Distribution Mechanism Based on Edge Intelligent Caches
CN114338504B (en) Micro-service deployment and routing method based on network edge system
US10523777B2 (en) System and method for joint dynamic forwarding and caching in content distribution networks
US10567538B2 (en) Distributed hierarchical cache management system and method
US20180176325A1 (en) Data pre-fetching in mobile networks
US8000239B2 (en) Method and system for bandwidth allocation using router feedback
CN112020103A (en) Content cache deployment method in mobile edge cloud
CN112995950A (en) Resource joint allocation method based on deep reinforcement learning in Internet of vehicles
WO2023284447A1 (en) Cloud-edge collaboration data transmission method, server, and storage medium
KR20140067881A (en) Method for transmitting packet of node and content owner in content centric network
Zhang et al. Collaborative hierarchical caching over 5G edge computing mobile wireless networks
Reshadinezhad et al. An efficient adaptive cache management scheme for named data networks
US11985186B1 (en) Method of drone-assisted caching in in-vehicle network based on geographic location
CN117955979A (en) A cloud-network fusion edge information service method based on mobile communication nodes
CN101710904A (en) P2p flow optimization method and system thereof
CN110012071B (en) Caching method and device for Internet of things
CN113993168B (en) Collaborative caching method based on multi-agent reinforcement learning in fog wireless access network
CN109800027B (en) Method and system for task offloading between network nodes based on autonomous participation of service nodes
CN101860938A (en) Network node and method for realizing autonomous routing control by sensing network context information
Hu P2P data dissemination for real-time streaming using load-balanced clustering infrastructure in MANETs with large-scale stable hosts
CN109194767A (en) A kind of flow medium buffer dispatching method suitable for mixing network
Liu et al. Opportunistic routing using q-learning with context information
CN117336696A (en) A resource allocation method for joint storage and computing in Internet of Vehicles
Noh et al. Cooperative and distributive caching system for video streaming services over the information centric networking

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20201113