WO2018040816A1 - Resource acquisition method, terminal and server - Google Patents
Resource acquisition method, terminal and server
- Publication number: WO2018040816A1 (PCT/CN2017/094944)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- terminal
- network
- cache
- server
- information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/56—Provisioning of proxy services
- H04L67/568—Storing data temporarily at an intermediate stage, e.g. caching
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/40—Support for services or applications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/56—Provisioning of proxy services
- H04L67/568—Storing data temporarily at an intermediate stage, e.g. caching
- H04L67/5681—Pre-fetching or pre-delivering data based on network characteristics
Definitions
- the present invention relates to the field of communications, and in particular, to a method, a terminal, and a server for acquiring resources.
- CDN Content Delivery Network
- Web content can be published to the CDN so that users obtain the content they need directly from the CDN. In this way, congestion on the Internet is relieved and the quality of service with which users obtain content is improved.
- the CDN deploys multiple different cache servers in different network locations. Multiple cache servers store content. After the terminal obtains addresses of multiple cache servers, the terminal can simultaneously download content from multiple cache servers.
- in the prior art, a scheduling request is sent to a DNS (Domain Name System) server, which forwards the scheduling request to a CDN DNS server; the CDN DNS server allocates a cache server according to the address of the DNS server.
- the CDN DNS server sends the address of the allocated cache server to the DNS server, the DNS server forwards the address of the cache server to the terminal, and the terminal downloads content fragments from the allocated cache server.
- the address of the user is different from the address of the DNS server assigned to the user.
- therefore, the cache server allocated to the user according to the address of the DNS server may not be close to the user in network distance, and the user obtains resources from that cache server poorly.
- in addition, the prior art configures the same DNS server for multiple users, so different users are assigned the same cache server because they correspond to the same DNS server.
- the cache server therefore cannot be allocated at user granularity: multiple users obtain resources from the same cache server, which may degrade the effect of resource acquisition.
- the present invention provides a resource acquisition method, a terminal, and a server that combine network topology and network quality information to select an optimal cache server according to the information of the terminal, thereby improving the effect of resource acquisition.
- a method of obtaining resources including:
- the terminal sends scheduling request information to the request scheduling server, where the scheduling request information carries information of the terminal. The terminal then receives cache server information sent by the request scheduling server, which carries the addresses of N cache servers allocated by the request scheduling server according to the information of the terminal, together with the network quality information between each of the N cache servers and the terminal.
- the terminal then formulates a scheduling policy according to the network quality information between each of the N cache servers and the terminal, and finally acquires resources according to the scheduling policy.
- N is an integer greater than or equal to 1.
- the N cache servers are allocated by the request scheduling server by combining the network topology information and the network quality information of the network with the information of the terminal, and the terminal then obtains the content to be downloaded from the multiple allocated cache servers. This avoids the prior-art approach of allocating a cache server to the terminal according to the address of its DNS server, in which the allocated cache server may not be close in network distance and the user obtains resources from the cache server poorly.
- the embodiment of the present invention allocates cache servers according to the user's own address, combining the network topology, the network quality information, and the information of the terminal; it ensures both that the cache servers allocated to the terminal are adjacent to the terminal in the network and that the network quality between the terminal and the allocated cache servers is good, thereby improving the effect of data resource acquisition.
- the network quality information includes at least a round trip time RTT and a packet loss rate.
- these two metrics accurately measure the quality of the network connection between the terminal and the cache server, so an optimal cache server allocation scheme can be implemented according to the network quality.
- the terminal formulating a scheduling policy according to the network quality information between each of the N cache servers and the terminal includes: for each of the N cache servers, calculating the throughput rate of the cache server from the RTT and packet loss rate between that cache server and the terminal; and determining, according to the throughput rates of the N cache servers, the content fragments that need to be downloaded from each of the N cache servers.
- because the content fragments downloaded from each cache server are determined according to its throughput rate, downloads are directed preferentially to the cache servers with better throughput, which improves the effect of resource acquisition.
- the terminal determines the content fragments to be downloaded from each of the N cache servers according to the throughput rates of the N cache servers as follows: determine the ratio of the throughput rates of the N cache servers; determine, according to this ratio, the proportion of content fragments that need to be downloaded from each of the N cache servers; and determine, according to the proportion, the content fragments to be downloaded from each of the N cache servers.
- because the downloaded content fragments are apportioned by this ratio, a cache server with a high throughput rate is assigned a larger share of the content, so most of the content is downloaded from the higher-throughput cache servers, which improves the efficiency of resource acquisition between the terminal and the cache servers.
- the content fragments to be downloaded from each of the N cache servers are determined according to the proportion as follows: according to the playback order of the resource, the terminal downloads the content fragments that are played first from the cache server with high throughput and downloads the content fragments that are played later from the cache servers with lower throughput, and the number of content fragments downloaded from each cache server corresponds to its proportion.
- resources are fragmented and each fragment is the same length.
- if the first few fragments of a video resource are preferentially downloaded from the cache server with the highest throughput rate and the subsequent fragments are downloaded from other cache servers with slightly lower throughput rates, smooth playback of the first few fragments is guaranteed for the terminal.
- the remaining fragments can be downloaded while the first few fragments are playing, which improves the user experience.
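- As an illustration of the terminal-side flow above, the following minimal Python sketch models the exchange with the request scheduling server and the policy step. The message fields mirror the description (terminal IP address, access type, cache server address, RTT, packet loss rate); the class and function names, the in-memory `allocate` call in place of a real network exchange, and the simple RTT/loss-based throughput estimate are illustrative assumptions, not details taken from the patent.

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class SchedulingRequest:      # carries the information of the terminal
    terminal_ip: str
    access_type: str          # e.g. "WIFI" or "DSL"


@dataclass
class CacheServerInfo:        # one entry per allocated cache server
    address: str
    rtt_ms: float             # round-trip time between the cache server and the terminal
    loss_rate: float          # packet loss rate, 0.0 .. 1.0


def terminal_make_policy(request_scheduling_server, num_fragments: int) -> List[Tuple[str, List[int]]]:
    """Send the scheduling request, receive the N cache servers, formulate a
    scheduling policy and return (cache server address, fragment numbers) pairs."""
    req = SchedulingRequest(terminal_ip="117.34.80.8", access_type="WIFI")
    servers: List[CacheServerInfo] = request_scheduling_server.allocate(req)  # hypothetical call

    # Estimate relative throughput from RTT and loss (exact formula assumed),
    # best server first.
    weight = lambda s: 1.0 / (s.rtt_ms * max(s.loss_rate, 1e-6) ** 0.5)
    servers.sort(key=weight, reverse=True)

    # Apportion the fragments in playback order: the earliest fragments go to
    # the cache server with the best estimated throughput.
    total = sum(weight(s) for s in servers)
    policy, next_fragment = [], 1
    for s in servers:
        count = round(num_fragments * weight(s) / total)
        fragments = list(range(next_fragment, min(next_fragment + count, num_fragments + 1)))
        policy.append((s.address, fragments))
        next_fragment += len(fragments)
    if next_fragment <= num_fragments:                 # any rounding remainder goes to the last server
        policy[-1][1].extend(range(next_fragment, num_fragments + 1))
    return policy
```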
- a method of obtaining resources including:
- the request scheduling server receives the scheduling request information sent by the terminal, where the information includes the information of the terminal; then, according to the network topology information of the network, the network quality detection record set, and the information of the terminal, it allocates N cache servers to the terminal, where N is an integer greater than or equal to 1; finally, it sends cache server information to the terminal, including the addresses of the N allocated cache servers and the network quality information between each of the N cache servers and the terminal.
- the scheduling request server acquires network topology information and network quality probe record sets of the network and maintains them locally.
- the request scheduling server allocates multiple cache servers to the terminal according to the information of the terminal, the locally maintained network topology information, and the network quality detection record set, so that the terminal obtains the content to be downloaded from the multiple allocated cache servers.
- this ensures both that the cache servers allocated to the terminal are adjacent to the terminal in the network and that the network quality between the terminal and the allocated cache servers is good, which improves the effect of data resource acquisition.
- the network topology information includes: the addresses of the terminals in the network, the addresses of the cache servers in the network, and the link information between the terminals in the network and the cache servers in the network.
- the network quality detection record set includes M detection records, where M is a positive integer, and each detection record includes a detection time, the address of the terminal, the network to which the terminal belongs, the access type of the terminal, the address of the cache server, and network quality information (including the round-trip time RTT and the packet loss rate).
- the terminal information includes the IP address of the terminal and the access type of the terminal
- the request scheduling server allocates N cache servers to the terminal according to the network topology information of the network, the network quality detection record set, and the information of the terminal as follows.
- first, the network topology information is queried according to the IP address of the terminal to locate the network location of the terminal, that is, the network to which the terminal belongs.
- then, the network quality detection record set is queried according to the network to which the terminal belongs and the access type of the terminal, and the cache servers in the record set that match the network to which the terminal belongs and the access type of the terminal are determined as candidate cache servers. Next, the throughput rate of each candidate cache server is calculated, and K cache servers are determined according to these throughput rates, where K is a positive integer greater than or equal to N. The shortest path from each of the K cache servers to the terminal is calculated, and for each cache server other than the first cache server (that is, the candidate cache server with the largest throughput), the number of hops its shortest path shares with the shortest path of the first cache server is calculated. The first N-1 cache servers, sorted by the number of intersecting hops from small to large, together with the first cache server, are determined as the N cache servers allocated to the terminal.
- in this way, N cache servers can be determined according to the information of the terminal; the N cache servers are adjacent to the terminal in the network, and the network quality between the terminal and the N cache servers is good, which effectively improves the efficiency of resource acquisition between the terminal and the cache servers.
- before the request scheduling server receives the scheduling request information sent by the terminal, the method further includes: receiving detection records reported by the network quality monitor, each including the detection time, the address of the terminal, the network to which the terminal belongs, the access type of the terminal, the address of the cache server, and the network quality information between the terminal and the cache server; generating the network quality detection record set from the detection records reported by the network quality monitor; sending request information for acquiring the network topology to an SDN (Software Defined Network) control device; and receiving network topology response information sent by the SDN control device, where the network topology response information includes the network topology information.
- SDN: Software Defined Network
- the network quality detection record set of the network is generated from the network quality detection records of the terminals in the network, and the network topology information of the network is generated from the requested network topology information.
- both sets of generated information are maintained locally.
- the locally maintained information can then be queried according to the information of the terminal, so that cache servers that are close in the network and have good network quality are allocated to the terminal, improving the effect of data resource acquisition.
- a terminal including:
- a sending unit, configured to send scheduling request information to the request scheduling server, where the request information carries information of the terminal;
- a receiving unit, configured to receive cache server information sent by the request scheduling server, where the information includes the addresses of N cache servers and the network quality information between each of the N cache servers and the terminal, where N is an integer greater than or equal to 1;
- a policy formulation unit, configured to formulate a scheduling policy according to the network quality information between each of the N cache servers and the terminal; and an obtaining unit, configured to acquire resources according to the formulated scheduling policy.
- the N cache servers are allocated by the request scheduling server by combining the network topology information and the network quality information of the network with the information of the terminal, and the terminal obtains the content to be downloaded from the allocated cache servers. This avoids the prior-art approach of allocating a cache server to the terminal according to the address of its DNS server, in which the allocated cache server may not be close in network distance and the user obtains resources from the cache server poorly.
- the embodiment of the present invention allocates cache servers according to the user's own address, combining the network topology, the network quality information, and the information of the terminal; it ensures both that the cache servers allocated to the terminal are adjacent to the terminal in the network and that the network quality between the terminal and the allocated cache servers is good, thereby improving the effect of data resource acquisition.
- the policy formulation unit is specifically configured to: calculate, according to the RTT and packet loss rate between each of the N cache servers and the terminal, the throughput rate of each of the N cache servers; and determine, according to the throughput rates of the N cache servers, the content fragments that need to be downloaded from each of the N cache servers.
- the policy formulation unit is specifically configured to determine the ratio of the throughput rates of the N cache servers, determine, according to the ratio, the proportion of content fragments that needs to be downloaded from each of the N cache servers, and determine, according to the proportion, the content fragments to be downloaded from each of the N cache servers.
- the policy formulation unit is specifically configured to: according to the playback order of the resource, download the content fragments that are played first from the cache server with high throughput rate and download the content fragments that are played later from the cache servers with low throughput rate, where the number of content fragments downloaded from each cache server corresponds to its proportion.
- a request scheduling server including:
- a receiving unit, configured to receive scheduling request information sent by the terminal, where the request information includes the information of the terminal;
- an allocating unit, configured to allocate N cache servers to the terminal according to the network topology information of the network, the network quality detection record set, and the information of the terminal, where N is an integer greater than or equal to 1;
- a sending unit, configured to send cache server information to the terminal, where the information includes the addresses of the N cache servers and the network quality information between each of the N cache servers and the terminal.
- the request scheduling server acquires the network topology information and the network quality detection record set of the network and maintains them locally.
- the request scheduling server allocates multiple cache servers to the terminal according to the information of the terminal, the locally maintained network topology information, and the network quality detection record set, so that the terminal obtains the content to be downloaded from the multiple allocated cache servers.
- this ensures both that the cache servers allocated to the terminal are adjacent to the terminal in the network and that the network quality between the terminal and the allocated cache servers is good, which improves the effect of data resource acquisition.
- the network topology information includes: the addresses of the terminals in the network, the addresses of the cache servers in the network, and the link information between the terminals in the network and the cache servers in the network.
- the network quality detection record set includes M detection records, where M is a positive integer, and each detection record includes a detection time, the address of the terminal, the network to which the terminal belongs, the access type of the terminal, the address of the cache server, and network quality information (including the round-trip time RTT and the packet loss rate).
- the information of the terminal includes an IP address of the terminal and an access type of the terminal.
- the allocating unit is specifically configured to: determine, according to the IP address of the terminal, the network to which the terminal belongs; query the network quality detection record set according to the network to which the terminal belongs and the access type of the terminal, and determine the cache servers in the record set that match the network to which the terminal belongs and the access type of the terminal as candidate cache servers; calculate the throughput rate of each candidate cache server and determine K cache servers according to these throughput rates, where K is a positive integer greater than or equal to N; calculate the shortest path from each of the K cache servers to the terminal, and calculate, for each cache server other than the first cache server (the candidate cache server with the largest throughput), the number of hops its shortest path shares with the shortest path of the first cache server; and determine the first N-1 cache servers, sorted by the number of intersecting hops from small to large, together with the first cache server, as the N cache servers allocated to the terminal.
- the request dispatch server further includes a generating unit.
- the receiving unit is further configured to receive, before the scheduling request information sent by the terminal is received, the detection records reported by the network quality monitor; the generating unit is configured to generate the network quality detection record set from the detection records reported by the network quality monitor; the sending unit is further configured to send request information for acquiring the network topology to the software-defined network (SDN) control device; and the receiving unit is further configured to receive the network topology response information sent by the SDN control device, where the network topology response information includes the network topology information.
- a terminal comprising a processor and a memory, wherein the memory is configured to store a program supporting the terminal in performing the method of the first aspect, and the processor is configured to execute the program stored in the memory.
- the terminal may also include a communication interface for the terminal to communicate with other devices or communication networks.
- a request scheduling server comprising a processor and a memory, wherein the memory is configured to store a program supporting the request scheduling server in performing the method described in the second aspect, and the processor is configured to execute the program stored in the memory.
- the request scheduling server may further include a communication interface, used by the request scheduling server to communicate with other devices or communication networks.
- the embodiment of the present invention further discloses a computer storage medium configured to store computer software instructions used by the terminal or the request scheduling server, including a program designed to perform the foregoing aspects for the terminal or the request scheduling server.
- FIG. 1 is an architectural diagram of an existing CDN network
- FIG. 2 is a schematic flowchart of allocating a cache server to a terminal and the terminal downloading content in the prior art;
- FIG. 3 is a schematic diagram of content fragmentation according to an embodiment of the present invention.
- FIG. 4 is a schematic diagram of a network topology according to an embodiment of the present invention.
- FIG. 5 is a schematic diagram of another network topology according to an embodiment of the present disclosure.
- FIG. 6 is a schematic diagram of path intersection according to an embodiment of the present invention.
- FIG. 7 is a structural diagram of a resource acquisition system according to an embodiment of the present invention.
- FIG. 8 is a schematic flowchart diagram of a method for acquiring a resource according to an embodiment of the present disclosure
- FIG. 9 is a schematic flowchart of generating network topology information and a network quality detection record set according to an embodiment of the present disclosure.
- FIG. 10 is a schematic flowchart of a request scheduling server for allocating a cache server to a terminal according to an embodiment of the present disclosure
- FIG. 11 is a schematic diagram of a hop count according to an embodiment of the present invention.
- FIG. 12 is a schematic flowchart of a terminal formulating a scheduling policy according to an embodiment of the present disclosure
- FIG. 13 is a schematic diagram of a scheduling policy according to an embodiment of the present invention.
- FIG. 14 is a structural block diagram of a terminal according to an embodiment of the present invention.
- FIG. 15 is a block diagram showing another structure of a terminal according to an embodiment of the present invention.
- FIG. 16 is a structural block diagram of a request scheduling server according to an embodiment of the present invention.
- FIG. 17 is another structural block diagram of a request scheduling server according to an embodiment of the present invention.
- the CDN network is an overlay network built on top of the existing Internet and includes multiple cache servers.
- web content is published to the cache server closest to the user so that the user can download the desired content from that cache server. In this way, congestion on the Internet is relieved and the quality of service with which the user obtains network content is improved.
- multi-path transmission technology can be used for data transmission between the terminal and the cache servers, which effectively utilizes the available bandwidth between the terminal and the cache servers and improves transmission throughput.
- the process of allocating a cache server to a terminal and downloading content fragments by the terminal is as shown in FIG. 2, and specifically includes:
- the terminal sends a scheduling request to the DNS server 1, for example a scheduling request for the domain "A.com" triggered by accessing a URL.
- the DNS server 1 sends the above scheduling request to the CDN DNS server according to the authorized DNS address.
- the CDN DNS server allocates cache server S1 to the terminal according to the address of the DNS server 1, and returns the information of the cache server S1 to the DNS server 1.
- the DNS server 1 returns the information of the cache server S1 to the terminal.
- the terminal determines the content fragments to be downloaded from the cache server S1, for example fragments 1-3.
- the terminal sends a download request to the cache server S1 and acquires the data.
- if the DNS server of the terminal is manually changed from DNS server 1 to DNS server 2, the terminal can, through the above steps 1-6, send the scheduling request to DNS server 2.
- DNS server 2 forwards the scheduling request to the CDN DNS server, the CDN DNS server allocates cache server S2 according to the address of DNS server 2 and delivers the address of the allocated cache server S2 to DNS server 2, DNS server 2 forwards the address of cache server S2 to the terminal, and the terminal acquires data from cache server S2.
- in the prior art, the cache server is allocated to the user according to the address of the terminal's DNS server. Because the user's address is different from the address of the DNS server, the cache server allocated to the user may not be close in network distance, and the user obtains resources from the cache server poorly. At the same time, since multiple users share the same DNS server, the prior art may allocate the same cache server to different users; the cache server cannot be allocated at user granularity, multiple users obtain resources from the same cache server, and the effect of resource acquisition may suffer.
- the RRS (Request Router Service, that is, the request scheduling server) acquires the network topology information of the network and the network quality detection records (including the network quality information between the terminals in the network and the cache servers in the network), and maintains them locally.
- the RRS allocates multiple cache servers to the terminal according to the information of the terminal, the locally maintained network topology information, and the network quality information, so that the terminal obtains the content to be downloaded from the multiple allocated cache servers.
- by combining the network topology and the network quality information, an optimal cache server allocation is achieved, thereby improving the effect of data resource acquisition.
- network content (such as a video resource) is stored in fragments, each of the same length (for example 5 s).
- the terminal can preferentially download the first few fragments of the video resource from the cache server with the highest throughput rate and download the subsequent fragments from other cache servers with slightly lower throughput rates. In this way, smooth playback of the first few fragments is ensured for the terminal, and the remaining fragments can be downloaded while the first few fragments are playing, improving the user experience.
- Network topology information includes node information in the network and link information between the nodes.
- the node information may be an IP address of the node.
- a link is a path in which two nodes are directly connected, also called a hop.
- the link information may be an IP address of the left and right nodes corresponding to the link, a port IP, and an IP address segment corresponding to the external network.
- the network topology information of the network includes 10 node information and 11 link information.
- the route between a source node and a destination node includes multiple hops of routing information, and each hop is one direct path.
- in FIG. 6, the shortest path from the terminal to cache server 1 is the dotted-line portion, and the shortest path from the terminal to cache server 2 is the solid-line portion.
- the intersection of the two paths (the dotted-line path from the terminal to cache server 1 and the solid-line path from the terminal to cache server 2) is 1 hop.
- the present invention provides a resource acquisition system.
- the resource acquisition system includes: an NQM (Network Quality Monitor), an SDN control device, an RRS, a terminal, and at least one cache server.
- the SDN control device (network device controller) performs unified control of data forwarding by the forwarding devices in the SDN, and can sense the network topology information of the network.
- the information of the cache server distributed in the network can be obtained from the SDN control device.
- the NQM maintains information about the terminals in the network locally.
- the NQM sends a network quality detection task to the terminal, instructing the terminal to detect the network quality between the terminal and a cache server, such as the RTT (Round-Trip Time) and the packet loss rate, and collects the network quality information from the terminal to each cache server together with the information of the terminal (such as access type, IP address, and detection time).
- the collected information will be reported to the RRS.
- the RRS obtains the detection records of multiple terminals reported by the NQM (detection time, address of the terminal, network to which the terminal belongs, access type of the terminal, address of the cache server, and network quality information between the terminal and the cache server).
- the RRS obtains the network topology information of the network from the SDN control device.
- the network topology information of the network and the detection records of each terminal are maintained locally.
- the cache server is responsible for storing content and, upon receiving a terminal request, provides a content download service to the terminal.
- the cache servers are distributed in the network, and their information is maintained by the SDN control device.
- the terminal sends a content download request to the request scheduling server, obtains from it the addresses of multiple cache servers and the network quality information from the terminal to those cache servers, and performs data download scheduling according to the obtained network quality information.
- different content fragments are determined for different cache servers.
- the embodiment of the invention provides a method for resource acquisition. As shown in FIG. 8, the method includes the following steps:
- the terminal sends scheduling request information to the request scheduling server.
- the scheduling request information is used to indicate that the request scheduling server allocates a cache server to the terminal, and the scheduling request information carries the information of the terminal, where the information of the terminal includes at least an IP address of the terminal and an access type of the terminal.
- the access type of the terminal may be WIFI access or DSL access.
- the terminal sends scheduling request information to the RRS through the IF-3 interface.
- the request scheduling server receives the scheduling request information sent by the terminal, and allocates N cache servers to the terminal according to the network topology information of the network, the network quality detection record set, and the information of the terminal.
- the network topology information of the network includes: an address of a terminal in the network, an address of a cache server in the network, and link information between a terminal in the network and a cache server in the network.
- the network quality detection record set includes M detection records; wherein each detection record includes a detection time, an address of the terminal, a network to which the terminal belongs, an access type of the terminal, an address of the cache server, and network quality information; wherein the network quality information includes RTT and packet loss rate, M is a positive integer.
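- For concreteness, the detection record and the record set described above can be modeled as follows; the field names and the dictionary keyed by (network, access type, cache server address) are illustrative assumptions, not structures defined by the patent.

```python
from dataclasses import dataclass
from collections import defaultdict
from typing import Dict, List, Tuple


@dataclass
class DetectionRecord:
    detection_time: str        # e.g. "2016-08-30 10:05:00"
    terminal_address: str      # IP address of the terminal
    terminal_network: str      # network the terminal belongs to, e.g. "Nanjing Telecom"
    access_type: str           # e.g. "WIFI" or "DSL"
    cache_server_address: str
    rtt_ms: float              # round-trip time
    loss_rate: float           # packet loss rate, 0.0 .. 1.0


# The network quality detection record set: M records, grouped so that the RRS
# can later look them up by the terminal's network, access type and cache server.
RecordKey = Tuple[str, str, str]   # (terminal_network, access_type, cache_server_address)


def build_record_set(records: List[DetectionRecord]) -> Dict[RecordKey, List[DetectionRecord]]:
    record_set: Dict[RecordKey, List[DetectionRecord]] = defaultdict(list)
    for r in records:
        record_set[(r.terminal_network, r.access_type, r.cache_server_address)].append(r)
    return dict(record_set)
```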
- the NQM sends a network quality detection indication to the terminal, instructing the terminal to detect the quality of the network between itself and the cache server.
- the terminal reports the probe record (including the detection result RTT, packet loss rate, etc.) to the NQM.
- the NQM can periodically report the detection records of the terminals to the request scheduling server, which then generates a network quality detection record set from the detection records of each terminal and stores it in the network quality database of the request scheduling server shown in FIG. 7.
- the request scheduling server requests the network topology information of the network from the SDN control device, and maintains the network topology information of the SDN returned by the SDN control device in the network topology database of the request scheduling server shown in FIG. 7.
- the request scheduling server first queries the network topology information of the network according to the IP address of the terminal and locates the network location of the terminal, that is, the network to which the terminal belongs. It then determines the candidate cache servers according to the network to which the terminal belongs, the access type of the terminal, the network topology information, and the network quality detection record set. It calculates the throughput rate of each candidate cache server and determines K cache servers according to these throughput rates.
- it then calculates the shortest path from each of the K cache servers to the terminal and, for each cache server other than the first cache server (that is, the cache server with the highest throughput among the candidates), the number of hops its shortest path shares with the shortest path of the first cache server.
- the first N-1 cache servers, sorted by the number of intersecting hops from small to large, together with the first cache server, are determined as the N cache servers allocated to the terminal.
- the request scheduling server sends the cache server information to the terminal.
- the cache server information includes an address of the N cache servers determined in the foregoing step, and network quality information between each of the N cache servers and the terminal.
- the terminal receives the cache server information sent by the request scheduling server and formulates a scheduling policy according to the network quality information between each of the N cache servers and the terminal.
- the scheduling policy determines which content fragments should be downloaded from which cache server.
- the terminal calculates, according to the RTT and packet loss rate between each of the N cache servers and the terminal, the throughput rate of each of the N cache servers, and then determines, according to the throughput rates of the N cache servers, the content fragments that need to be downloaded from each of the N cache servers.
- the content fragments downloaded from each cache server may be determined as follows: first, the ratio of the throughput rates of the N cache servers is determined; then, according to this ratio, the proportion of content fragments to be downloaded from each of the N cache servers is determined; finally, based on the proportion, the content fragments to be downloaded from each of the N cache servers are determined.
- for example, if the throughput ratio of three cache servers is 6:3:1 and the content to be downloaded (the resource) includes 10 fragments, the proportion of content fragments downloaded from the three cache servers is also 6:3:1.
- determining, according to the proportion, the content fragments to be downloaded from each of the N cache servers specifically means: according to the playback order of the resource, the terminal downloads the content fragments that are played first from the cache server with high throughput rate and downloads the content fragments that are played later from the cache servers with low throughput rate, and the number of content fragments downloaded from each cache server corresponds to its proportion.
- video resources are all sliced, and each slice is the same length.
- the terminal downloads the first few fragments of the video resource from the cache server with the highest throughput rate and downloads the subsequent fragments from other cache servers with slightly lower throughput rates. In this way, smooth playback of the first few fragments is ensured while the remaining fragments are downloaded during playback, improving the user experience. For example, if the throughput ratio of three cache servers is 6:3:1 and the resource to be acquired includes 10 fragments, the first 6 fragments are downloaded from the cache server whose throughput share is 6/10, the next 3 fragments are downloaded from the cache server whose throughput share is 3/10, and the last fragment is downloaded from the cache server whose throughput share is 1/10.
- the final scheduling policy may be of the form (cache server 1, Range1), (cache server 2, Range2), ..., where a Range is a sequence-number range of content fragments; for example, Range1 = (fragment 1, fragment 3) indicates that fragment 1, fragment 2, and fragment 3 are downloaded from cache server 1.
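- A short sketch of how the 6:3:1 example above maps 10 fragments onto (cache server, Range) entries. The tuple encoding of a Range as (first fragment, last fragment) follows the Range1 = (fragment 1, fragment 3) notation used here; the function name is a placeholder.

```python
def split_into_ranges(num_fragments: int, throughput_ratio: list) -> list:
    """Apportion fragments 1..num_fragments over the cache servers in playback
    order, proportionally to the throughput ratio (best server first)."""
    total = sum(throughput_ratio)
    ranges, start = [], 1
    for i, share in enumerate(throughput_ratio):
        count = round(num_fragments * share / total)
        if i == len(throughput_ratio) - 1:       # the last server takes the remainder
            count = num_fragments - start + 1
        ranges.append((start, start + count - 1))
        start += count
    return ranges


# Three cache servers with throughput ratio 6:3:1 and a 10-fragment resource:
print(split_into_ranges(10, [6, 3, 1]))
# -> [(1, 6), (7, 9), (10, 10)]  i.e. Range1 = fragments 1-6, Range2 = 7-9, Range3 = 10
```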
- the terminal acquires resources according to the scheduling policy.
- the terminal downloads, from the cache server, the content fragment determined by the terminal for the cache server.
- for example, if the content fragments the terminal has determined for the cache server whose throughput share is 6/10 are the first six fragments, the terminal downloads those first six fragments from that cache server.
- the terminal sends a data download request to the cache server 1 through the IF-4 interface, where the download request includes the content identifier requested to be downloaded, such as fragment 1 and fragment 2 And the identification of the slice 3.
- the cache server 1 transmits data to the terminal through the IF-4 interface.
- the RRS may generate the network topology information of the network and the network quality detection record set by using the process shown in FIG. 9.
- the NQM sends a probe task to the terminal through the IF6 interface, including the IP address of the cache server to be detected and the detection period.
- the detection period is generally 5 minutes.
- the NQM locally maintains the information of the terminal in the network, and can obtain the information of the cache server deployed in the network from the SDN control device. In this way, the NQM sends a network quality detection task to the terminal, instructing the terminal to detect the network quality between itself and a cache server.
- after receiving the network detection task, the terminal starts the detection and continuously sends Q ping packets to the probe server (the cache server to be detected) at intervals of a preset duration.
- taking the RTT and the packet loss rate as an example of the network quality from the terminal to the cache server, the terminal sends a fixed number of ping packets to the probe server at a preset interval.
- the preset duration can be 100ms, and Q can be 1000.
- the server sends a ping response packet to the terminal.
- the terminal calculates an average RTT and a packet loss rate.
- the value of the RTT is the average value of the RTT obtained multiple times, and the packet loss rate is the ratio of the number of lost packets to the total number of transmitted packets.
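- A minimal sketch of the probe step described above (Q probes at a fixed interval, averaged RTT and loss ratio). It uses a plain TCP connect time in place of a real ICMP ping, since sending ICMP usually requires raw-socket privileges, so treat the measurement method and parameter names as assumptions rather than the patent's exact procedure.

```python
import socket
import time


def probe_network_quality(server_ip: str, port: int = 80, q: int = 20,
                          interval_s: float = 0.1, timeout_s: float = 1.0):
    """Send q probes at a fixed interval; return (average RTT in ms, packet loss rate)."""
    rtts, lost = [], 0
    for _ in range(q):
        start = time.monotonic()
        try:
            with socket.create_connection((server_ip, port), timeout=timeout_s):
                rtts.append((time.monotonic() - start) * 1000.0)
        except OSError:
            lost += 1                      # count unanswered probes as lost packets
        time.sleep(interval_s)
    avg_rtt = sum(rtts) / len(rtts) if rtts else float("inf")
    loss_rate = lost / q                   # ratio of lost probes to probes sent
    return avg_rtt, loss_rate


# Example (the values depend on the actual network; parameters mirror the text,
# e.g. a 100 ms interval and Q = 1000 probes):
# rtt_ms, loss = probe_network_quality("10.136.1.5", q=1000, interval_s=0.1)
```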
- the terminal reports the detection record to the NQM according to the detection period through the IF6 interface.
- the probe record includes the probe time, the address of the terminal, the access type of the terminal, the address of the cache server, and network quality information.
- the network quality information includes RTT and packet loss rate.
- the address of the terminal may be the IP address of the terminal, and the address of the cache server may be the IP address of the cache server.
- the NQM periodically synchronizes the detection records reported by the terminal to the RRS through the IF1 interface.
- the reported network quality detection results include: detection time, terminal IP, terminal access type, cache server IP, RTT, and packet loss rate.
- the network quality detection result of multiple terminals can be reported at one time, and the reporting period can be 5 minutes.
- the RRS can generate a network quality probe record set according to the M probe records reported by the NQM.
- the RRS periodically requests network topology information of the network from the SDN control device through the IF-2 interface.
- the SDN Controller northbound interface can be used, and the network topology information request period is generally set to 24 hours.
- the network topology request information includes an identifier of a set of topology objects; the set of topology objects is composed of a terminal in the network and a cache server in the network.
- the SDN control device returns network topology response information to the RRS.
- the network topology response information carries topology information of the set of topology objects, and the topology information includes node information of the set of topology objects and link information between the nodes.
- the node information of a topology object may be the identifier of the topology object; the link information between topology objects may include the IP addresses of the left and right nodes corresponding to the link, the port IP, and the IP address segment of the external network corresponding to the link.
- the RRS can generate network topology information of the network according to the network topology response information returned by the SDN control device.
- the RRS generates a network quality probe record set of the network according to the probe record reported by the network quality monitor periodically, and generates network topology information of the SDN according to the topology response information.
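- The topology response can be turned into a simple adjacency structure that later steps (shortest paths, intersecting hops) can query. The link tuple format and the toy node names below are illustrative assumptions based on the node/link description above, not the patent's data model.

```python
from collections import defaultdict
from typing import Dict, List, Tuple

# Each link is given as (left node IP, right node IP); the ports and external
# IP address segments mentioned above are omitted for brevity.
Link = Tuple[str, str]


def build_topology(links: List[Link]) -> Dict[str, List[str]]:
    """Build an undirected adjacency map: node -> directly connected nodes."""
    adjacency: Dict[str, List[str]] = defaultdict(list)
    for left, right in links:
        adjacency[left].append(right)
        adjacency[right].append(left)
    return dict(adjacency)


# Toy topology: terminal T reaches cache servers S1 and S2 through routers R1-R3.
topology = build_topology([("T", "R1"), ("R1", "R2"), ("R2", "S1"),
                           ("R1", "R3"), ("R3", "S2")])
```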
- the RRS first classifies each terminal according to its access type and the network to which it belongs, for example: Nanjing Telecom WIFI access terminals, Nanjing Telecom DSL access terminals, and Hefei Telecom DSL access terminals. Then, the network quality information (RTT, packet loss rate) between each terminal and each cache server is recorded one by one.
- the network quality probe record set generated by RRS can be as shown in Table 2:
- the RRS may allocate a cache server to the terminal by using the process shown in FIG. 10, which specifically includes:
- after receiving the scheduling request information sent by the terminal, the RRS queries the network topology information of the network according to the IP address of the terminal and determines the network to which the terminal belongs.
- the scheduling request information received by the RRS from the terminal includes the IP address of the terminal and the access type of the terminal, so the RRS can obtain the access type of the terminal from the scheduling request information.
- prior to this, the RRS has generated the network topology information of the network according to the network topology response information returned by the SDN control device, in which the addresses of the terminals in the network, the addresses of the cache servers in the network, and the link information between the terminals and the cache servers are recorded. Therefore, by querying the network topology information according to the IP address of the terminal, the network location of the terminal, that is, the network to which the terminal belongs, can be located.
- the IP address of the current terminal is 117.34.80.8
- the network to which the terminal belongs can be located according to the network topology.
- the RRS queries the network quality detection record set according to the network to which the terminal belongs and the access type of the terminal, and determines the cache servers in the record set that match the network to which the terminal belongs and the access type of the terminal as the candidate cache servers.
- the RRS receives the detection records of each terminal reported by the NQM, classifies all terminals according to two dimensions, network location (for example, the same metropolitan area network) and access type (WIFI/4G, etc.), and compiles the network quality information from each terminal to each cache server, generating the network quality detection record set shown in Table 2.
- in step 301, based on the IP address 117.34.80.8 of the terminal, the network to which the terminal belongs is located as Nanjing Telecom, and the access type of the terminal is WIFI access. Querying the network quality detection record set shown in Table 2, the candidate cache server matching the terminal is determined to be the cache server 10.136.1.5. Referring to Table 2, the Nanjing Telecom WIFI-access terminals have two sets of network quality information to the cache server 10.136.1.5, namely (48 ms, 0.8%) and (50 ms, 1%).
- therefore, the network quality from the Nanjing Telecom WIFI-access terminal to the cache server 10.136.1.5 is taken as {[(48+50)/2] ms, [(0.8+1)/2]%}, that is, (49 ms, 0.9%).
- if querying Table 2 shows that there is only one set of network quality information from the terminal to a cache server, that network quality information is used directly as the network quality information between the terminal and that cache server.
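- The averaging in the example above ((48 ms, 0.8%) and (50 ms, 1%) giving (49 ms, 0.9%)) can be expressed as a small helper; the tuple representation and the function name are illustrative.

```python
def average_quality(records):
    """records: list of (rtt_ms, loss_rate) tuples for one (network, access type,
    cache server) group; returns the averaged (rtt_ms, loss_rate)."""
    rtts = [rtt for rtt, _ in records]
    losses = [loss for _, loss in records]
    return sum(rtts) / len(rtts), sum(losses) / len(losses)


# Nanjing Telecom WIFI terminals to cache server 10.136.1.5:
print(average_quality([(48, 0.008), (50, 0.01)]))   # -> (49.0, 0.009), i.e. 49 ms, 0.9%
```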
- the candidate servers may be ordered in descending order of throughput, and the top K cache servers are selected.
- K cache servers may also be determined in the candidate cache server based on other rules, which is not limited herein.
- the throughput rate T of the cache server can be calculated from the RTT, the packet loss rate, and the MSS.
- the MSS (maximum segment size) is the largest data segment that a TCP packet can carry at a time. To achieve optimal transmission performance, the MSS value is negotiated by the two parties (for example, the terminal and the cache server) when the TCP connection is established; in practice it is often derived from the MTU by subtracting the IP header (20 bytes) and the TCP header (20 bytes), so the MSS is usually 1460 bytes.
- P is the packet loss rate.
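- The patent text does not reproduce the throughput formula here, but a common approximation built from exactly these quantities is the Mathis TCP model, T ≈ MSS / (RTT × √P). The sketch below assumes that formula (an assumption, not confirmed by the patent) and reproduces the ordering of the three example servers given below.

```python
import math


def throughput_bytes_per_s(mss_bytes: float, rtt_s: float, loss_rate: float) -> float:
    """Mathis-style TCP throughput approximation: T = MSS / (RTT * sqrt(P)).
    Assumed formula; the patent only names MSS, RTT and the packet loss rate P."""
    return mss_bytes / (rtt_s * math.sqrt(loss_rate))


MSS = 1460  # bytes, as noted above
for name, rtt_ms, loss in [("cache server 1", 20, 0.01),
                           ("cache server 2", 40, 0.10),
                           ("cache server 3", 100, 0.10)]:
    t = throughput_bytes_per_s(MSS, rtt_ms / 1000.0, loss)
    print(f"{name}: {t / 1024:.0f} KiB/s")
# Resulting order: cache server 1 > cache server 2 > cache server 3,
# matching the example below.
```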
- the cache server 1 has an RTT of 20 ms and a packet loss rate of 1%; the cache server 2 has an RTT of 40 ms and a packet loss rate of 10%; and the cache server 3 has an RTT of 100 ms and a packet loss rate of 10%.
- the result of sorting the different cache server throughput rates is: cache server 1 > cache server 2 > cache server 3.
- the first cache server is a cache server with the highest throughput rate among the candidate cache servers.
- a direct connection between two nodes is one hop.
- the hop count is the number of directly connected paths on a route.
- the number of intersecting hops is the number of coincident direct paths between two routes. Referring to FIG. 11, the direct path between node A and node B is s, the direct path between node B and node C is v, and the route from node A to node C is therefore two hops.
- the shortest paths from the terminal to the K cache servers are calculated, the N-1 cache servers whose shortest paths have the smallest intersection with that of the optimal cache server (the cache server with the highest throughput rate, that is, the first cache server) are selected, and finally the first cache server together with the above N-1 cache servers, N cache servers in total, are allocated to the terminal.
- the shortest path from the terminal to each of the K cache servers can be calculated by using a typical SPF (Shortest Path First) algorithm.
- the shortest paths from the computing terminal to the K cache servers are P1, P2, P3, ... PK, respectively.
- with the first cache server of step 303 as the reference, the number of intersecting hops between the shortest path of each other cache server and the shortest path of the first cache server is calculated, giving P12, P13, P14, ..., P1K. The numbers of path intersections are sorted from small to large, and the first N-1 cache servers are selected.
- these N-1 cache servers together with the first cache server constitute the N cache servers determined by the RRS for the terminal.
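- Putting the selection steps above together: a shortest path over the adjacency map sketched earlier, the number of hops shared with the best server's path, and the choice of the first cache server plus the N-1 least-overlapping ones. The function names and the unit-cost BFS used in place of a full SPF implementation are simplifying assumptions.

```python
from collections import deque
from typing import Dict, List, Optional


def shortest_path(adjacency: Dict[str, List[str]], src: str, dst: str) -> Optional[List[str]]:
    """Unweighted shortest path (BFS) from src to dst as a list of nodes."""
    queue, previous = deque([src]), {src: None}
    while queue:
        node = queue.popleft()
        if node == dst:
            path = []
            while node is not None:
                path.append(node)
                node = previous[node]
            return path[::-1]
        for neighbor in adjacency.get(node, []):
            if neighbor not in previous:
                previous[neighbor] = node
                queue.append(neighbor)
    return None


def intersecting_hops(path_a: List[str], path_b: List[str]) -> int:
    """Number of direct paths (hops) the two paths have in common."""
    edges = {frozenset(edge) for edge in zip(path_a, path_a[1:])}
    return sum(1 for edge in zip(path_b, path_b[1:]) if frozenset(edge) in edges)


def select_cache_servers(adjacency, terminal: str,
                         servers_by_throughput: List[str], n: int) -> List[str]:
    """servers_by_throughput: the K candidate servers sorted by throughput, best first."""
    first = servers_by_throughput[0]
    first_path = shortest_path(adjacency, terminal, first)
    others = servers_by_throughput[1:]
    others.sort(key=lambda s: intersecting_hops(first_path, shortest_path(adjacency, terminal, s)))
    return [first] + others[: n - 1]


# With the toy topology sketched earlier, picking N = 2 servers for terminal "T":
# select_cache_servers(topology, "T", ["S1", "S2"], 2)  ->  ["S1", "S2"]
```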
- the terminal may determine the scheduling policy by using the process shown in FIG. 12, which specifically includes:
- the terminal calculates the throughput rate of each of the N cache servers and sorts the N cache servers in descending order of throughput rate.
- the terminal receives the cache server information of the RRS, and the information is sent by the RRS in response to the scheduling request information of the terminal.
- the cache server information carries the addresses of the N cache servers allocated by the RRS for the terminal and the RTT and packet loss rate between each of the N cache servers and the terminal.
- the terminal determines, according to the throughput rate of the N cache servers, a content fragment that needs to be downloaded from each of the N cache servers.
- the terminal may determine the content fragments to be downloaded from each cache server as follows: first determine the ratio of the throughput rates of the N cache servers, determine, according to this ratio, the proportion of content fragments to be downloaded from each of the N target cache servers, and then determine, according to that proportion, the content fragments that need to be downloaded from each of the N cache servers.
- the cache server 1 has an RTT of 20 ms and a packet loss rate of 1%; the cache server 2 has an RTT of 40 ms and a packet loss rate of 10%; and the cache server 3 has an RTT of 100 ms and a packet loss rate of 10%.
- the throughput rate T1 of the cache server 1 is calculated, the throughput rate of the cache server 2 is T2, and the throughput rate of the cache server 3 is T3.
- the terminal determines to download the content fragments that are played first from the cache server with high throughput rate, and to download the content fragments that are played later from the cache servers with low throughput rate; the number of content fragments downloaded from each cache server corresponds to its proportion.
- the earlier fragments are preferentially downloaded from the cache server with a high throughput rate, and the later fragments are downloaded from the cache servers with a low throughput rate.
- smooth playback of the first few fragments can thus be ensured while the remaining fragments are downloaded during playback, which improves the user experience.
- content can be downloaded at fragment granularity, and the fragment size can be configured, for example 128 KB, 256 KB, 512 KB, or 1 MB.
- subsequently, the throughput rates and the fragment ranges allocated to each cache server can both be adjusted according to the actually measured throughput rates.
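- A small arithmetic illustration of the configurable fragment granularity mentioned above; the 512 KB granularity and the 10 MB content size are example numbers, not values from the patent.

```python
import math

fragment_size = 512 * 1024            # 512 KB granularity, one of the configurable options
content_size = 10 * 1024 * 1024       # e.g. a 10 MB resource
num_fragments = math.ceil(content_size / fragment_size)
print(num_fragments)                  # -> 20 fragments to be scheduled across the cache servers
```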
- the request scheduling server acquires the network topology information and the network quality detection record set of the network and maintains them locally.
- the request scheduling server allocates multiple cache servers to the terminal according to the information of the terminal, the locally maintained network topology information, and the network quality detection record set, so that the terminal obtains the content to be downloaded from the multiple allocated cache servers.
- this ensures both that the cache servers allocated to the terminal are adjacent to the terminal in the network and that the network quality between the terminal and the allocated cache servers is good, which improves the effect of data resource acquisition.
- the solution provided by the embodiment of the present invention is mainly introduced from the perspective of the working process of the terminal and the request scheduling server.
- the terminal and the request scheduling server include corresponding hardware structures and/or software modules for executing the respective functions in order to implement the above functions.
- the present invention can be implemented in a combination of hardware or hardware and computer software in combination with the elements and algorithm steps of the various examples described in the embodiments disclosed herein. Whether a function is implemented in hardware or computer software to drive hardware depends on the specific application and design constraints of the solution. A person skilled in the art can use different methods for implementing the described functions for each particular application, but such implementation should not be considered to be beyond the scope of the present invention.
- the embodiment of the present invention may divide the function module into the terminal and the request scheduling server according to the foregoing method example.
- each function module may be divided according to each function, or two or more functions may be integrated into one processing module.
- the above integrated modules can be implemented in the form of hardware or in the form of software functional modules. It should be noted that the division of the module in the embodiment of the present invention is schematic, and is only a logical function division, and the actual implementation may have another division manner.
- FIG. 14 is a schematic diagram showing a possible structure of the terminal involved in the foregoing embodiment, where the terminal includes: a sending unit 501, a receiving unit 502, a policy formulation unit 503, and an obtaining unit 504.
- the sending unit 501 is configured to support the terminal 20 to perform the process 101 in FIG. 8 and the process 205 in FIG. 9.
- the receiving unit 502 is configured to support the terminal to execute the process 104 in FIG. 8;
- the policy formulation unit 503 is configured to support the terminal in performing the policy formulation action in process 104 in FIG. 8;
- the obtaining unit 504 is configured to support the terminal in performing process 105 in FIG. 8.
- all the related content of each step involved in the above method embodiment can be referred to the functional description of the corresponding functional module, and details are not described herein again.
- FIG. 15 shows a possible structural diagram of the terminal involved in the above embodiment.
- the terminal may include a processor 601, a communication interface 602, and a memory 603.
- Communication interface 602 is used to support communication of terminals with other network entities.
- the memory 603 is configured to store the program code for executing the solution of the present invention, and execution of the program code is controlled by the processor 601.
- the processor 601 is configured to execute the program code stored in the memory 603, and control and manage the action of the terminal.
- the processor 601 is configured to support the terminal to perform the process 104 in FIG. 8, and/or other processes for the techniques described herein.
- the processor 601 can be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof, and can implement or execute the various illustrative logical blocks, modules, and circuits described in connection with the present disclosure.
- Communication interface 602 can be a communication port or can be a transceiver or transceiver circuit or the like.
- the memory 603 can be a read-only memory (ROM) or other type of static storage device that can store static information and instructions, a random access memory (RAM) or other type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact discs, laser discs, digital versatile discs, Blu-ray discs, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed, but is not limited thereto.
- the memory can exist independently and be connected to the processor via a bus. The memory can also be integrated with the processor.
- FIG. 16 is a schematic structural diagram of the request scheduling server involved in the foregoing embodiments, where the request scheduling server includes: a receiving unit 701, an allocating unit 702, and a sending unit 703.
- the receiving unit 701 is configured to support the request scheduling server to perform the action of receiving the scheduling request information in the process 102 in FIG. 8.
- the allocating unit 702 is configured to support the request scheduling server to perform the action of allocating cache servers in the process 102 in FIG. 8; the sending unit 703 is configured to support the request scheduling server to perform the process 103 in FIG. 8. For all related content of the steps involved in the foregoing method embodiment, reference may be made to the function descriptions of the corresponding function modules; details are not described herein again.
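- Correspondingly, and again only as an assumed illustration rather than the claimed implementation, the request scheduling server's units from FIG. 16 could be sketched as follows; allocate_cache_servers would host logic like the ranking sketch shown earlier.

```python
# Minimal sketch of the request scheduling server's functional units from FIG. 16.
# Method names are illustrative; bodies are placeholders.

class RequestSchedulingServer:
    def receive_scheduling_request(self, request):             # receiving unit 701
        """Receive scheduling request information sent by a terminal."""

    def allocate_cache_servers(self, terminal_info, n=1):       # allocating unit 702
        """Allocate N cache servers for the terminal using the network topology
        information and the network quality probe record set."""

    def send_cache_server_info(self, terminal, info):           # sending unit 703
        """Send the addresses of the N cache servers and the per-server
        network quality information back to the terminal."""
```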
- FIG. 17 shows a possible structural diagram of the request scheduling server involved in the above embodiment.
- the request scheduling server may include a processor 801, a communication interface 802, and a memory 803.
- Communication interface 802 is used to support communication between the request scheduling server and other network entities.
- the memory 803 is configured to store the program code for executing the solution of the present invention, and execution of the program code is controlled by the processor 801.
- the processor 801 is configured to execute the program code stored in the memory 803, and to control and manage the actions of the request scheduling server.
- the processor 801 is configured to support the request scheduling server to perform the process 102 of FIG. 8, and/or other processes for the techniques described herein.
- the processor 801 can be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof, and can implement or execute the various illustrative logical blocks, modules, and circuits described in connection with the present disclosure.
- Communication interface 802 can be a communication port or can be a transceiver or transceiver circuit or the like.
- the memory 803 can be a read-only memory (ROM) or other type of static storage device that can store static information and instructions, a random access memory (RAM) or other type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage, a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed, but is not limited thereto.
- the memory can exist independently and be connected to the processor via a bus.
- the memory can also be integrated with the processor.
- the steps of a method or algorithm described in connection with the present disclosure may be implemented in hardware, or may be implemented by a processor executing software instructions.
- the software instructions may consist of corresponding software modules, which may be stored in a RAM, a flash memory, a ROM, an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), a register, a hard disk, a removable hard disk, a compact disc read-only memory (CD-ROM), or any other form of storage medium known in the art.
- An exemplary storage medium is coupled to the processor to enable the processor to read information from, and write information to, the storage medium.
- the storage medium can also be an integral part of the processor.
- the processor and the storage medium can be located in an ASIC. Additionally, the ASIC can be located in a core network interface device.
- the processor and the storage medium may also exist as discrete components in the core network interface device.
- the modules described as separate components may or may not be physically separated.
- the components displayed as modules may be one physical module or multiple physical modules; that is, they may be located in one place or distributed to multiple different places. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
- each functional module in each embodiment of the present invention may be integrated into one processing module, or each module may exist physically separately, or two or more modules may be integrated into one module.
- the above integrated modules can be implemented in the form of hardware or in the form of software functional modules.
- if the integrated modules are implemented in the form of software functional modules and sold or used as separate products, they may be stored in a readable storage medium. Based on such an understanding, the part of the technical solution of the present invention that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, which includes a number of instructions to cause a device (which may be a microcontroller, a chip, etc.) or a processor to perform all or part of the steps of the methods described in the various embodiments of the present invention.
- the foregoing storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or the like.
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Multimedia (AREA)
- Information Transfer Between Computers (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
Abstract
Disclosed are a resource acquisition method, a terminal, and a server, which relate to the field of communications. In combination with a network topology and network quality information, and according to information about a terminal, an optimal cache server is selected for acquiring a resource. The method comprises the following steps: a request scheduling server receives scheduling request information sent by a terminal, the scheduling request information comprising information about the terminal; the request scheduling server allocates N cache servers to the terminal according to network topology information about a network, a network quality probe record set, and the information about the terminal, N being an integer greater than or equal to 1; and the request scheduling server sends cache server information to the terminal, the cache server information comprising the addresses of the N cache servers and quality information about the network between each of the N cache servers and the terminal.
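For illustration only, the message contents described in the abstract could be modelled with the following data shapes; the field names are assumptions, since the claims do not fix a wire format.

```python
# Illustrative data shapes for the exchange summarised in the abstract.
# Field names are assumed; they are not taken from the claims.

from dataclasses import dataclass
from typing import Dict, List


@dataclass
class SchedulingRequest:
    terminal_info: Dict[str, str]      # information about the terminal


@dataclass
class CacheServerEntry:
    address: str                       # address of one of the N cache servers
    network_quality: Dict[str, float]  # quality of the terminal-to-server network


@dataclass
class CacheServerInfo:
    servers: List[CacheServerEntry]    # N >= 1 allocated cache servers
```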
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201610797105.3A CN107786620B (zh) | 2016-08-31 | 2016-08-31 | 一种获取资源的方法、终端及服务器 |
| CN201610797105.3 | 2016-08-31 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2018040816A1 true WO2018040816A1 (fr) | 2018-03-08 |
Family
ID=61301361
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2017/094944 Ceased WO2018040816A1 (fr) | 2016-08-31 | 2017-07-28 | Procédé d'acquisition d'une ressource, terminal et serveur |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN107786620B (fr) |
| WO (1) | WO2018040816A1 (fr) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN115604263A (zh) * | 2022-09-30 | 2023-01-13 | 北京奇艺世纪科技有限公司(Cn) | 一种资源调度方法、装置、电子设备及存储介质 |
Families Citing this family (14)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108848530B (zh) * | 2018-07-10 | 2020-09-18 | 网宿科技股份有限公司 | 一种获取网络资源的方法、装置及调度服务器 |
| CN109688421B (zh) * | 2018-12-28 | 2020-07-10 | 广州华多网络科技有限公司 | 请求消息处理方法、装置及系统、服务器、存储介质 |
| CN109966736B (zh) * | 2019-03-06 | 2022-08-16 | 绎谛数据科技(上海)有限公司 | 基于用户地理信息的服务器弹性部署方法、设备及计算机可读存储介质 |
| CN109951341B (zh) * | 2019-04-01 | 2022-03-25 | 北京达佳互联信息技术有限公司 | 内容获取方法、装置、终端及存储介质 |
| CN110278254B (zh) * | 2019-06-12 | 2022-02-22 | 深圳梨享计算有限公司 | 用于FogCDN场景的调度方法及调度端 |
| CN113315646B (zh) * | 2020-02-27 | 2024-11-26 | 阿里巴巴集团控股有限公司 | 用于内容分发网络的异常处理方法、装置及内容分发网络 |
| CN111756868B (zh) * | 2020-05-06 | 2024-10-22 | 西安万像电子科技有限公司 | 云服务器连接方法及系统 |
| CN113993158B (zh) * | 2021-10-28 | 2023-05-23 | 成都长虹网络科技有限责任公司 | 一种网络质量监测方法、系统、计算机设备及存储介质 |
| CN113891387B (zh) * | 2021-11-12 | 2024-03-29 | 山东亚华电子股份有限公司 | 一种音视频通信链路的探测方法及设备 |
| CN114760362B (zh) * | 2022-06-13 | 2022-09-02 | 杭州马兰头医学科技有限公司 | 网络接入请求的调度方法、系统、电子装置和存储介质 |
| CN115623236A (zh) * | 2022-10-20 | 2023-01-17 | 上海哔哩哔哩科技有限公司 | 礼物特效资源播放方法及装置 |
| CN116192957A (zh) * | 2022-12-13 | 2023-05-30 | 广州市网星信息技术有限公司 | 一种接入节点调度方法、系统、设备及存储介质 |
| CN118233521A (zh) * | 2022-12-21 | 2024-06-21 | 中兴通讯股份有限公司 | 服务资源的调度方法及装置 |
| CN119276749B (zh) * | 2024-12-02 | 2026-01-09 | 天翼云科技有限公司 | 内容分发网络覆盖质量监控方法、装置和计算机设备 |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN1585357A (zh) * | 2003-08-19 | 2005-02-23 | 华为技术有限公司 | 一种在网络中选择服务器的方法 |
| CN101287011A (zh) * | 2008-05-26 | 2008-10-15 | 蓝汛网络科技(北京)有限公司 | 内容分发网络中响应用户服务请求的方法、系统和设备 |
| US20100036954A1 (en) * | 2008-08-06 | 2010-02-11 | Edgecast Networks, Inc. | Global load balancing on a content delivery network |
| CN103166985A (zh) * | 2011-12-09 | 2013-06-19 | 上海盛霄云计算技术有限公司 | 一种全局负载均衡调度方法、数据传输方法、装置及系统 |
- 2016-08-31: CN CN201610797105.3A patent/CN107786620B/zh active Active
- 2017-07-28: WO PCT/CN2017/094944 patent/WO2018040816A1/fr not_active Ceased
Also Published As
| Publication number | Publication date |
|---|---|
| CN107786620B (zh) | 2019-10-22 |
| CN107786620A (zh) | 2018-03-09 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2018040816A1 (fr) | Procédé d'acquisition d'une ressource, terminal et serveur | |
| JP6820320B2 (ja) | リアルタイムユーザ監視データを用いてリアルタイムトラフィック誘導を行うための方法および装置 | |
| CN102571856B (zh) | 一种中转节点的选择方法、设备和系统 | |
| US10277487B2 (en) | Systems and methods for maintaining network service levels | |
| CN104158755B (zh) | 传输报文的方法、装置和系统 | |
| CN103825975B (zh) | Cdn节点分配服务器及系统 | |
| CN112087382B (zh) | 一种服务路由方法及装置 | |
| US20130227048A1 (en) | Method for Collaborative Caching for Content-Oriented Networks | |
| CN104702522A (zh) | 软件定义网络(sdn)中基于性能的路由 | |
| WO2012106918A1 (fr) | Procédé, dispositif et système de traitement de contenu | |
| KR102160494B1 (ko) | 네트워크 노드, 엔드포인트 노드 및 관심 메시지 수신 방법 | |
| JP2004040793A (ja) | QoSを推定する方法および通信装置 | |
| CN111935031B (zh) | 一种基于ndn架构的流量优化方法及系统 | |
| CN104754640A (zh) | 一种网络资源调度方法及网络资源管理服务器 | |
| US20150146722A1 (en) | Optimized content routing distribution using proximity based on predictive client flow trajectories | |
| US9112664B2 (en) | System for and method of dynamic home agent allocation | |
| CN107332744B (zh) | 一种路由路径选择方法和系统以及用户接入服务器 | |
| CN103179045B (zh) | 支持p2p流量优化的资源节点选择方法 | |
| JP5871908B2 (ja) | ネットワーク内部のデータ通信を制御するための方法およびシステム | |
| WO2015039616A1 (fr) | Procédé et dispositif de traitement de paquets | |
| CN114520784A (zh) | 一种动态内容加速访问方法及装置 | |
| Alkhazaleh et al. | A COMPREHENSIVE SURVEY OF INFORMATION-CENTRIC NETWORK: CONTENT CACHING STRATEGIES PERSPECTIVE | |
| CN116055385B (zh) | 路由方法、管理节点、路由节点及介质 | |
| Chen et al. | Priority service for paying content providers through dedicated cache leasing in information-centric networking | |
| CN109302348B (zh) | 一种基于cnn网络的数据处理方法及一种路由器 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 17845118; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 17845118; Country of ref document: EP; Kind code of ref document: A1 |