Disclosure of Invention
The invention aims to solve the above technical problem of the prior art and provides a user-generated content uploading method and system based on cloud-edge collaboration.
The technical scheme for solving the technical problems is as follows:
A user-generated content uploading method based on cloud-edge collaboration comprises the following steps:
S1, the user end transmits content data to a preset edge node;
and S2, the preset edge node caches the content data and uploads the content data to a target server according to a preset logical link in the main edge relay route corresponding to the preset edge node.
The invention has the beneficial effects that: the scheme makes full use of edge computing and cloud computing capabilities, and once the user end has transmitted the content data to the preset edge node, the user does not need to wait for it to be successfully uploaded to the target server, which markedly improves user experience; the content data is uploaded to the target server over a preset logical link in the main edge relay route corresponding to the preset edge node, and this main edge relay route of the edge node opens up new room for optimizing network-bandwidth utilization, relieves the pressure on conventional transmission, and largely avoids network congestion.
Further, before S2, the method further includes:
S21, the preset edge node is allocated, through a cloud center, a main edge relay route matching the uploading conditions of the content data;
S22, the cloud center constructs the preset logical link in the main edge relay route according to the link avoidance levels.
The beneficial effect of adopting this further scheme is that a performance-optimized main edge relay routing structure for non-real-time uploading is built according to the uploading conditions of the user's content data, additional network bandwidth is provided for uploading the user-generated content, and determining the logical links by avoidance level helps keep the maximum physical-link sharing degree of the logical links in the path under control.
Further, the method further includes: the target server sends the content data to a third-party requesting end; when the content data has not been completely uploaded, the target server sends the part of the content data it has already received to the third-party requesting end, and when the rate at which the target server sends the content data to the third-party requesting end is greater than the uploading rate of the preset edge node, the preset edge node caching the content data is notified to send the remaining part of the requested data to the target server over TCP.
The beneficial effect of adopting this further scheme is that when the data requested by the third-party requesting end has not been completely uploaded, the content data cached at the preset edge node is uploaded to the server over TCP; conventional TCP is used to control the transmission rate from the user to the nearby edge node, so that when the relay route fails to upload the requested data in time, TCP transmission serves as an emergency fallback, the third-party requesting end does not have to wait too long because of an incomplete upload, and the transmission efficiency of non-real-time content uploading is guaranteed.
Further, S21 specifically includes:
judging, according to the cached content data, whether a main edge relay route corresponding to the preset edge node exists; if such a route exists and it meets the uploading conditions of the content data, continuing to use it; otherwise, calculating a main edge relay route corresponding to the preset edge node according to the uploading conditions of the content data.
The beneficial effect of adopting this further scheme is that, by either reusing the existing main edge relay route of the preset edge node or calculating one according to the uploading conditions, idle network-bandwidth resources are fully used to upload the cached content data to the corresponding target server.
Further, S1 is preceded by:
S11, searching for the idle edge node closest to the user end as the preset edge node and sending a content uploading request to the preset edge node; when the preset edge node receives the uploading request, step S1 is performed.
The beneficial effect of adopting this further scheme is that the idle edge node closest to the user end is selected as the preset edge node (the target edge node), so that an optimal edge node is chosen and a fast content uploading service is provided.
Another technical solution of the present invention for solving the above technical problems is as follows:
a user-generated content upload system based on cloud-edge collaboration, comprising: the system comprises a user side, a preset edge node and a target server;
the user side is used for transmitting content data to the preset edge node;
the preset edge node is used for caching the content data and uploading the content data to a target server according to a preset logical link in a main edge relay route corresponding to the preset edge node.
The invention has the beneficial effects that: the scheme makes full use of edge computing and cloud computing capabilities, and once the user end has transmitted the content data to the preset edge node, the user does not need to wait for it to be successfully uploaded to the target server, which markedly improves user experience; the content data is uploaded to the target server over a preset logical link in the main edge relay route corresponding to the preset edge node, and this main edge relay route of the edge node opens up new room for optimizing network-bandwidth utilization, relieves the pressure on conventional transmission, and largely avoids network congestion.
Further, the preset edge node is configured to be allocated, through a cloud center, a main edge relay route matching the uploading conditions of the content data; the cloud center is configured to construct the preset logical link in the main edge relay route according to the link avoidance levels.
The beneficial effect of adopting this further scheme is that a performance-optimized main edge relay routing structure for non-real-time uploading is built according to the uploading conditions of the user's content data, additional network bandwidth is provided for uploading the user-generated content, and determining the logical links by avoidance level helps keep the maximum physical-link sharing degree of the logical links in the path under control.
Further, the target server is configured to send the content data to a third-party requesting end; when the content data has not been completely uploaded, to send the part of the content data it has already received to the third-party requesting end; and, when the rate at which it sends the content data to the third-party requesting end is greater than the uploading rate of the preset edge node, to notify the preset edge node caching the content data to send the remaining part of the requested data to the target server over TCP.
The beneficial effect of adopting this further scheme is that when the data requested by the third-party requesting end has not been completely uploaded, the content data cached at the preset edge node is uploaded to the server over TCP; conventional TCP is used to control the transmission rate from the user to the nearby edge node, so that when the relay route fails to upload the requested data in time, TCP transmission serves as an emergency fallback, the third-party requesting end does not have to wait too long because of an incomplete upload, and the transmission efficiency of non-real-time content uploading is guaranteed.
Further, the preset edge node is specifically configured to determine, according to the cached content data, whether a main edge relay route corresponding to the preset edge node exists, and if such a route exists and meets the uploading conditions of the content data, to continue using it; otherwise, to calculate a main edge relay route corresponding to the preset edge node according to the uploading conditions of the content data.
The beneficial effect of adopting this further scheme is that, by either reusing the existing main edge relay route of the preset edge node or calculating one according to the uploading conditions, idle network-bandwidth resources are fully used to upload the cached content data to the corresponding target server.
Further, still include: and the edge node acquisition module is used for searching an idle edge node which is closest to the user and serves as a preset edge node, sending a content uploading request to the preset edge node, and when the preset edge node receives the uploading request, carrying out the operation of transmitting content data to the preset edge node by the user.
The beneficial effect of adopting this further scheme is that the idle edge node closest to the user end is selected as the preset edge node (the target edge node), so that an optimal edge node is chosen and a fast content uploading service is provided.
Advantages of additional aspects of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Detailed Description
The principles and features of the invention are described below in conjunction with the drawings, which are provided to illustrate the invention and are not to be construed as limiting its scope.
As shown in fig. 1, a method for uploading user-generated content based on cloud-edge collaboration provided in an embodiment of the present invention includes: s1, the user end transmits the content data to the preset edge node;
In some examples, the preset edge node may be selected as follows: by default, the user end temporarily transmits the UGC (User Generated Content) to the nearest edge node, the UGC being content that does not need to be transmitted in real time. Since the edge nodes are close to the user ends and non-real-time content upload requests are usually spread out, an edge node can in most cases provide a fast temporary content uploading service for nearby user ends. When the user end finds that the cache of the nearest edge node is insufficient, it sends corresponding content uploading requests to other adjacent edge nodes; when all nearby edge nodes refuse service because their caches are insufficient, the content is uploaded directly to the target platform server in the conventional way, for example over the Transmission Control Protocol (TCP).
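The following Python sketch illustrates this client-side selection and fallback behaviour. It is only an illustration of the logic described above; the `EdgeNode` methods (`has_free_cache`, `accept_upload`), `distance_to` and `upload_direct_tcp` are hypothetical names, not an API defined by the invention.

```python
def choose_upload_target(client, edge_nodes, content):
    """Try nearby edge nodes in order of distance; fall back to direct TCP upload."""
    for node in sorted(edge_nodes, key=lambda n: client.distance_to(n)):
        # Skip edge nodes whose cache cannot hold the UGC payload.
        if not node.has_free_cache(len(content)):
            continue
        # Ask the node to accept a non-real-time content uploading request.
        if node.accept_upload(client, len(content)):
            return node                      # this node becomes the preset edge node
    # All nearby edge nodes refused service: upload straight to the target server.
    client.upload_direct_tcp(content)
    return None
```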
And S2, caching the content data by the preset edge node, and uploading the content data to the target server according to the preset logical link in the main edge relay route corresponding to the preset edge node.
In some embodiments, as shown in fig. 3, the main edge relay route may be obtained as follows: after the adjacent edge node has cached the content data sent by the user end, the cloud center judges, according to the service uploading requirements, whether an available main edge relay route for that edge node already exists, i.e., whether the existing main edge relay route can meet the current content uploading conditions. If so, the content continues to be transmitted over that route; otherwise, the cloud center calculates a main edge relay route for the adjacent node according to the uploading conditions, issues the route to each related node, and the cached content data is uploaded to the corresponding target server using idle network bandwidth. The service uploading requirements may include bandwidth, delay, packet loss rate and the like.
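A minimal sketch of this reuse-or-compute decision at the cloud center is given below. The methods `route_satisfies`, `compute_relay_route`, `install_route` and the attribute names are hypothetical placeholders for the condition check, route calculation and route issuing steps described above.

```python
def get_main_edge_relay_route(cloud, edge_node, upload_cond):
    """Reuse the existing main edge relay route of the edge node if it meets the
    uploading conditions (bandwidth, delay, packet loss); otherwise compute one."""
    route = cloud.existing_routes.get(edge_node.id)
    if route is not None and cloud.route_satisfies(route, upload_cond):
        return route                              # keep using the existing route
    route = cloud.compute_relay_route(edge_node, upload_cond)
    cloud.existing_routes[edge_node.id] = route
    for relay_node in route.nodes:                # issue the route to each related node
        relay_node.install_route(route)
    return route
```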
In some embodiments, the preset logical link in the main edge relay route may be constructed according to the avoidance levels of the logical links. Calculating the avoidance levels may include: grading the logical links for avoidance according to the sharing degree of their physical links, where the lowest avoidance level is level 1 and is initially set to 0, and the routing weights of all avoidance levels are equal. When the shortest-path-first selection strategy is adopted, the following property holds: a logical link with a high avoidance level is used if and only if the path cannot be composed of logical links with lower avoidance levels, so that a shortest path from the edge node to the target server is constructed dynamically, where the path is evaluated by the avoidance levels of its logical links and lower-level links are preferred. The calculation may include the following:
When the sharing degree λ(i) of the bottleneck physical link constraining logical link i equals the maximum sharing degree in the existing logical link set L, the avoidance level p(i, L) of logical link i with respect to L is:
p(i, L) = λ(i) = max{ λ(j) | j ∈ L }
When the sharing degree λ(i) of the bottleneck physical link constraining logical link i is smaller than the maximum sharing degree in L, the avoidance level p(i, L) of logical link i with respect to L is determined from λ(i) and MaxP,
where L is the existing logical link set, initially an empty set; λ(j) is the sharing degree of the bottleneck physical link constraining logical link j with respect to L, expressed as the number of times logical link j appears in other main edge relay routes; and MaxP is the maximum avoidance level of the links in L. If physical link i is used as a logical link of the main edge relay route of the current node, the sharing degree of link i with respect to L is increased by 1. This scheme helps control the maximum physical-link sharing degree of the logical links in the path.
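The bookkeeping described above can be sketched in Python as follows. Only the relation given explicitly in the text (the case where λ(i) equals the current maximum sharing degree) is implemented; the formula for the remaining case is not reproduced in this description, so the sketch simply raises an error there. The `share` dictionary and the helper names are illustrative assumptions.

```python
def avoidance_level(i, L, share):
    """Avoidance level p(i, L) of logical link i with respect to the existing
    logical link set L; share[j] is the bottleneck physical-link sharing degree
    lambda(j), i.e. how many other main edge relay routes already use link j."""
    if not L:
        return 0                                  # L is initially empty; level starts at 0
    lam_i = share.get(i, 0)                       # lambda(i)
    max_share = max(share[j] for j in L)
    if lam_i == max_share:
        # p(i, L) = lambda(i) = max{ lambda(j) | j in L }
        return lam_i
    # lambda(i) < max_share: the description expresses p(i, L) in terms of MaxP
    # (the maximum avoidance level in L), but the exact formula is not given here.
    raise NotImplementedError("case lambda(i) < max not reproduced in this sketch")

def use_link_in_route(i, L, share):
    """Record that physical link i is used as a logical link of the current node's
    main edge relay route: add it to L and increase its sharing degree by 1."""
    L.add(i)
    share[i] = share.get(i, 0) + 1
```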
In some examples, when an edge node has a UGC uploading demand and no main edge relay route for it exists in L, its shortest main edge relay route is constructed dynamically and the corresponding physical links are stored in L as logical links; when an edge node has finished transmitting all the UGC on it, the related logical links are deleted from L.
In some examples, the method may further include: after the preset edge node has uploaded all the UGC content to the corresponding target server, deleting the main edge relay route, thereby releasing the occupied storage resources and improving the link sharing situation.
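A sketch of this route life cycle, consistent with the two preceding paragraphs, might look as follows; `build_shortest_route`, `logical_links` and the other attribute names are assumed for illustration only.

```python
def on_upload_demand(cloud, node, L, routes):
    """When an edge node has a UGC uploading demand and no main edge relay route
    exists for it, build its shortest route and store its links in L."""
    if node.id not in routes:
        route = cloud.build_shortest_route(node)   # shortest main edge relay route
        routes[node.id] = route
        L.update(route.logical_links)              # corresponding physical links stored in L

def on_upload_finished(node, L, routes):
    """When the node has uploaded all its UGC, delete its route and logical links,
    freeing the occupied storage and lowering the physical-link sharing degree."""
    route = routes.pop(node.id, None)
    if route is not None:
        L.difference_update(route.logical_links)
```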
According to this scheme, edge computing and cloud computing capabilities are fully used, and once the user end has transmitted the content data to the preset edge node, the user does not need to wait for it to be successfully uploaded to the target server, which markedly improves user experience; the content data is uploaded to the target server over a preset logical link in the main edge relay route corresponding to the preset edge node, and this main edge relay route of the edge node opens up new room for optimizing network-bandwidth utilization, relieves the pressure on conventional transmission, and largely avoids network congestion.
Preferably, in any of the above embodiments, before S2, the method further includes:
S21, the preset edge node is allocated, through the cloud center, a main edge relay route matching the uploading conditions of the content data;
S22, the cloud center constructs the preset logical link in the main edge relay route according to the link avoidance levels. The uploading conditions may include bandwidth, delay, packet loss rate, and the like.
In one embodiment, the edge nodes and the historical information on non-real-time content uploading can be managed through the cloud computing platform, and this management can be based on an edge-layer network information base and an uploading activity information base. The edge-layer overlay network is the overlay network composed of the edge nodes and the logical links between them, and the edge-layer overlay network information includes edge node information, the distances between edge nodes, related topology information and the like, where the distance between edge nodes may be measured by the measured delay (for example obtained by sending probe packets) and the topology information may include the topology of the edge nodes. The uploading activity information base records the non-real-time content uploading situation. Based on the stored historical information on non-real-time content uploading, the cloud center analyses and predicts the demand characteristics of non-real-time content uploading, so as to build a performance-optimized non-real-time content uploading structure according to these characteristics; the demand characteristics may include bandwidth, delay, packet loss rate and other conditions required during uploading, and the uploading structure is a structure representing the main edge relay route.
The main edge relay route may be the part of the edge relay route excluding the segment from the user end to its nearest edge node, and this scheme mainly concerns the dynamic construction of the main edge relay route. The main edge relay route is allocated centrally by the cloud center, and the links in it are constructed using the avoidance levels of the logical links. To prevent invalid non-real-time content uploading structures from occupying storage resources, a built non-real-time content uploading structure that is not used within a specified time is destroyed to free the occupied storage space.
According to this scheme, a performance-optimized main edge relay routing structure for non-real-time uploading is built according to the uploading conditions of the user's content data, additional network bandwidth is provided for uploading the user-generated content, and determining the logical links by avoidance level helps keep the maximum physical-link sharing degree of the logical links in the path under control.
Preferably, in any of the above embodiments, the method further includes: the target server sends the content data to a third-party requesting end; when the content data has not been completely uploaded, the target server sends the part of the content data it has already received to the third-party requesting end, and when the rate at which the target server sends the content data to the third-party requesting end is greater than the uploading rate of the preset edge node, the preset edge node caching the content data is notified to send the remaining part of the requested data to the target server over TCP.
It should be noted that the third-party requesting end may be the platform to which the user end ultimately uploads the content data, for example a short-video platform such as Douyin to which video data edited by the user is uploaded.
In some examples, the emergency mechanism may further include: when a third party requests content that has not been completely uploaded, the target server sends the content it has already received to the third party; when the sending rate is higher than the content uploading rate and the amount of data already received cannot offset the negative effect of this rate difference, the edge node caching the content is notified to send the remaining content to the target server by conventional TCP transmission, so as to improve non-real-time content uploading. Here the sending rate is the rate at which the target server sends data to the third party, and the uploading rate is the rate at which the edge node caching the content uploads data to the target server.
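One possible reading of this rate check is sketched below in Python. The concrete criterion for "the received data cannot offset the rate difference" is not specified in the description, so the buffer-drain comparison used here, as well as all method and attribute names, are assumptions made purely for illustration.

```python
def forward_partial_content(server, request, edge_node):
    """Forward the already-received part to the third party and, if the relay
    route cannot keep up, ask the caching edge node to fall back to plain TCP."""
    received = server.buffered_bytes(request.content_id)
    server.send_to_requester(request.requester, received)

    send_rate = server.send_rate_to(request.requester)      # target server -> third party
    upload_rate = edge_node.upload_rate_to(server)          # edge node -> target server
    remaining = request.total_size - len(received)

    # Assumed criterion: the buffered data would run out before the relay route
    # delivers the rest, i.e. the rate difference is not offset by the buffer.
    if send_rate > upload_rate and len(received) * upload_rate < remaining * send_rate:
        edge_node.send_remaining_via_tcp(request.content_id, server)
```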
According to this scheme, when the data requested by the third-party requesting end has not been completely uploaded, the content data cached at the preset edge node is uploaded to the server over TCP; conventional TCP is used to control the transmission rate from the user to the nearby edge node, so that when the relay route fails to upload the requested data in time, TCP transmission serves as an emergency fallback, the third-party requesting end does not have to wait too long because of an incomplete upload, and the transmission efficiency of non-real-time content uploading is guaranteed.
Preferably, in any of the above embodiments, S21 specifically includes:
judging, according to the cached content data, whether a main edge relay route corresponding to the preset edge node exists; if such a route exists and it meets the uploading conditions of the content data, continuing to use it; otherwise, calculating a main edge relay route corresponding to the preset edge node according to the uploading conditions of the content data. The uploading conditions may include bandwidth, delay, packet loss rate and other conditions required for uploading.
In some examples, the cloud center calculates a main edge relay route that satisfies the bandwidth, delay and packet loss rate conditions. For example, a user end needs to upload certain content data under the following conditions: a bandwidth of 100 bits ± 10 bits, a delay within 10 ms, and a packet loss rate within 1%. According to these conditions, the data of the existing main edge routes are compared to find routes with a bandwidth above 110 bits, a delay below 10 ms and a packet loss rate within 1%; if several routes satisfy the conditions, the route that satisfies all of them with the best (lowest) packet loss rate is selected.
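The condition matching in this example can be sketched as a simple filter-and-select step. The `Route` record and the function below are illustrative only; the field names and the bandwidth unit follow the example above rather than any API defined by the invention.

```python
from dataclasses import dataclass
from typing import Iterable, Optional

@dataclass
class Route:
    route_id: str
    bandwidth: float     # same unit as the requirement in the example above
    delay_ms: float
    loss_rate: float     # fraction, e.g. 0.01 for 1 %

def pick_main_edge_route(routes: Iterable[Route], min_bandwidth: float,
                         max_delay_ms: float, max_loss: float) -> Optional[Route]:
    candidates = [r for r in routes
                  if r.bandwidth >= min_bandwidth
                  and r.delay_ms <= max_delay_ms
                  and r.loss_rate <= max_loss]
    if not candidates:
        return None                      # no existing route fits: compute a new one
    # Several routes qualify: choose the one with the best (lowest) loss rate.
    return min(candidates, key=lambda r: r.loss_rate)

# e.g. pick_main_edge_route(existing_routes, min_bandwidth=110, max_delay_ms=10, max_loss=0.01)
```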
According to this scheme, by either reusing the existing main edge relay route of the preset edge node or calculating one according to the uploading conditions, idle network-bandwidth resources are fully used to upload the cached content data to the corresponding target server.
Preferably, in any of the above embodiments, S1 may be preceded by:
and S11, searching for the idle edge node closest to the user end as the preset edge node and sending a content uploading request to the preset edge node; when the preset edge node receives the uploading request, step S1 is performed. Selecting the preset edge node can be understood as choosing, from the edge nodes in an idle state, the one closest to the user end as the preset edge node near the user end, where an idle state means that the cache of the edge node is sufficient and the uploading conditions of the data to be uploaded are met.
In some examples, several adjacent edge nodes are found according to their distance from the user end, it is judged which of these edge nodes have spare cache able to store the data uploaded by the user, and among the edge nodes meeting the cache condition the closest one is selected as the preset edge node.
Specifically, the spatial position coordinates of each edge node may be stored in advance; the position of the user end is then obtained by an existing positioning device, the Euclidean distance between the user end and each edge node is calculated from their coordinates, and the nearest edge node with spare cache is used as the preset edge node.
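A minimal Python sketch of this selection follows; the coordinate tuples and the `free_cache` and `pos` attributes are assumptions used only to illustrate the distance-based choice.

```python
import math

def nearest_idle_edge_node(client_pos, edge_nodes, payload_size):
    """Return the edge node closest (by Euclidean distance) to the user end
    among the nodes whose spare cache can hold the payload, or None."""
    idle = [n for n in edge_nodes if n.free_cache >= payload_size]
    if not idle:
        return None                      # no idle node nearby: fall back to direct upload
    return min(idle, key=lambda n: math.dist(client_pos, n.pos))
```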
According to this scheme, the idle edge node closest to the user end is selected as the preset edge node (the target edge node), so that an optimal edge node is chosen and a fast content uploading service is provided.
In one embodiment, as shown in fig. 2, a user-generated content uploading system based on cloud-edge collaboration includes: a user terminal 11, a preset edge node 12 and a target server 13;
the user terminal 11 is configured to transmit content data to a preset edge node;
the preset edge node 12 is configured to cache content data, and upload the content data to the target server 13 according to a preset logical link in the main edge relay route corresponding to the preset edge node.
In an embodiment, as shown in fig. 4, the uploading system may include a cloud center and edge nodes, which can serve as a service platform providing UGC uploading cooperation to the outside, or as dedicated facilities built by a UGC uploading application service provider to improve application performance. The cloud center may include an uploading activity information base and an edge-layer network information base, and is responsible for centrally allocating the logical connections associated with the edge nodes, ensuring the structural robustness, uploading fairness and low physical-link sharing of the logical link structure. By building, through the cloud center and the edge nodes, a non-real-time content uploading structure with the edge nodes as relays, the caching capacity, computing capacity and network capacity of the edge nodes are brought into play.
According to this scheme, edge computing and cloud computing capabilities are fully used, and once the user end has transmitted the content data to the preset edge node, the user does not need to wait for it to be successfully uploaded to the target server, which markedly improves user experience; the content data is uploaded to the target server over a preset logical link in the main edge relay route corresponding to the preset edge node, and this main edge relay route of the edge node opens up new room for optimizing network-bandwidth utilization, relieves the pressure on conventional transmission, and largely avoids network congestion.
Preferably, in any of the above embodiments, the preset edge node 12 is further configured to be allocated, through the cloud center, a main edge relay route matching the uploading conditions of the content data; the cloud center constructs the preset logical link in the main edge relay route according to the link avoidance levels.
According to this scheme, a performance-optimized main edge relay routing structure for non-real-time uploading is built according to the uploading conditions of the user's content data, additional network bandwidth is provided for uploading the user-generated content, and determining the logical links by avoidance level helps keep the maximum physical-link sharing degree of the logical links in the path under control.
Preferably, in any of the above embodiments, the target server 13 is configured to send the content data to a third-party requesting end; when the content data has not been completely uploaded, to send the part of the content data it has already received to the third-party requesting end; and, when the rate at which it sends the content data to the third-party requesting end is greater than the uploading rate of the preset edge node, to notify the preset edge node caching the content data to send the remaining part of the requested data to the target server over TCP.
According to this scheme, when the data requested by the third-party requesting end has not been completely uploaded, the content data cached at the preset edge node is uploaded to the server over TCP; conventional TCP is used to control the transmission rate from the user to the nearby edge node, so that when the relay route fails to upload the requested data in time, TCP transmission serves as an emergency fallback, the third-party requesting end does not have to wait too long because of an incomplete upload, and the transmission efficiency of non-real-time content uploading is guaranteed.
Preferably, in any of the above embodiments, the preset edge node 12 is specifically configured to determine, according to the cached content data, whether a main edge relay route corresponding to the preset edge node exists, and if such a route exists and meets the uploading conditions of the content data, to continue using it; otherwise, to calculate a main edge relay route corresponding to the preset edge node according to the uploading conditions of the content data.
According to this scheme, by either reusing the existing main edge relay route of the preset edge node or calculating one according to the uploading conditions, idle network-bandwidth resources are fully used to upload the cached content data to the corresponding target server.
Preferably, in any of the above embodiments, the system further includes an edge node acquisition module, configured to search for the idle edge node closest to the user end as the preset edge node and send a content uploading request to the preset edge node; when the preset edge node receives the uploading request, the operation of the user end transmitting content data to the preset edge node is carried out.
According to this scheme, the idle edge node closest to the user end is selected as the preset edge node (the target edge node), so that an optimal edge node is chosen and a fast content uploading service is provided.
It is understood that some or all of the alternative embodiments described above may be included in some embodiments.
It should be noted that the above embodiments are product embodiments corresponding to the previous method embodiments, and for the description of each optional implementation in the product embodiments, reference may be made to corresponding descriptions in the above method embodiments, and details are not described here again.
The reader should understand that in the description of this specification, reference to the description of the terms "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the method embodiments described above are merely illustrative: the division into steps is only a logical functional division, and there may be other ways of dividing them in practice; for example, multiple steps may be combined or integrated into another step, or some features may be omitted or not implemented.
The above method, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention essentially or partially contributes to the prior art, or all or part of the technical solution can be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-only memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
While the invention has been described with reference to specific embodiments, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.