
CN112953985B - Request data processing method, device, medium and system - Google Patents

Request data processing method, device, medium and system

Info

Publication number
CN112953985B
Authority
CN
China
Prior art keywords
request
target server
server
url
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911256609.4A
Other languages
Chinese (zh)
Other versions
CN112953985A (en)
Inventor
郑友声 (Zheng Yousheng)
王少阳 (Wang Shaoyang)
Current Assignee
Guizhou Baishancloud Technology Co Ltd
Original Assignee
Guizhou Baishancloud Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guizhou Baishancloud Technology Co Ltd filed Critical Guizhou Baishancloud Technology Co Ltd
Priority to CN201911256609.4A priority Critical patent/CN112953985B/en
Publication of CN112953985A publication Critical patent/CN112953985A/en
Application granted granted Critical
Publication of CN112953985B publication Critical patent/CN112953985B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/02 Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/50 Network services
    • H04L67/56 Provisioning of proxy services
    • H04L67/568 Storing data temporarily at an intermediate stage, e.g. caching
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

A method, apparatus, medium, and system for processing request data are provided. In the request data processing method, a first service server receives a URL request; determines a request accumulation trend from the request accumulation numbers in adjacent preset periods; when the trend is growing, determines a first-class domain name from the URLs of all requests; determines a target server for each URL under the first-class domain name; and, when the target server is not the first service server, forwards the URL request to the target server. A first-class domain name is one for which the ratio of the number of requests accessing the domain name to the number of all requests, or to the request accumulation number, is greater than a preset threshold. When a non-URL-hotspot domain name bursts as a whole, the service keeps running smoothly, the whole group of machines is fully utilized, and the group's service-bearing capacity is enhanced.

Description

Request data processing method, device, medium and system
Technical Field
This document relates to content delivery networks (CDNs) and, more particularly, to a request data processing method, apparatus, medium, and system.
Background
A CDN system is a platform that carries customer traffic: its servers process customer requests and return responses, but different services consume response capacity at different speeds, so requests easily accumulate on some servers. Related-art load-balancing methods have two shortcomings. First, weighting back-end requests directly at the load-balancing server cannot guarantee cache reuse and cannot preserve keep-alive persistent connections for particular URLs. Second, discovering hotspot URLs and dispersing each directly across the cache servers by a modified hash fails when requests for a domain name burst as a whole with no obvious URL hotspot: the hotspot-discovery system spreads load per single URL and cannot recognize, let alone handle, a domain-name-level hotspot. In addition, the prior art computes distribution weights directly from back-end hardware information such as CPU and disk, which makes it hard to keep a special service concentrated on the same service processor; and because a server's bearing capacity changes in real time while it runs, modifying the weights at will prevents continued cache reuse, and the resulting flood of back-to-source requests is unacceptable for a CDN service.
Disclosure of Invention
To overcome the problems in the related art, a request data processing method, apparatus, medium, and system are provided.
According to a first aspect of the present disclosure, there is provided a method for processing request data, which is applied to a service server and includes:
a first service server receives a URL request;
determining a request accumulation trend according to the request accumulation number in the adjacent preset period;
when the request accumulation trend is growing, determining a first-class domain name according to the URLs of all requests;
determining a target server based on each URL in the URLs corresponding to the first-class domain names;
and when the target server is not the first service server, forwarding the URL request to the target server, wherein a first-class domain name is a domain name for which the ratio of the number of requests accessing the domain name to the number of all requests, or to the request accumulation number, is greater than a preset threshold.
The determining a target server based on each URL includes:
determining whether a service server in a node has a cache file of the URL;
if not, determining a target server according to the consumption capacity score of the servers in the nodes;
and if so, determining the service server with the cache file of the URL as a target server.
The determining a target server according to the consumption capacity score of the servers in the node comprises the following steps:
determining the service server with the highest consumption capacity score in the node as the target server;
the consumption capacity score is the sum of a current request accumulation number score, a current request accumulation trend score, and a server configuration score.
Forwarding the URL request to the target server includes: calculating a forwarding ratio according to the consumption capacity score of the target server, and sending the URL request to the target server according to the forwarding ratio.
The forwarding ratio = (request accumulation number / total request number) × (target server consumption capacity score / first service server consumption capacity score).
The method further comprises: when the target server is the first service server, the first service server acquires the source file, responds to the URL request, and caches the response file.
According to another aspect of the present disclosure, there is provided a request data processing apparatus applied to a service server, including:
the receiving module is used for receiving the URL request by the first service server;
the counting module is used for determining a request accumulation trend according to the request accumulation number in the adjacent preset period;
the first-class domain name determining module is used for determining a first-class domain name according to the URLs of all requests when the request accumulation trend is growing;
the target server determining module is used for determining a target server based on each URL in the URLs corresponding to the first-class domain names;
and the forwarding module is used for forwarding the URL request to the target server when the target server is not the first service server, wherein a first-class domain name is a domain name for which the ratio of the number of requests accessing the domain name to the number of all requests, or to the request accumulation number, is greater than a preset threshold.
The target server determination module determining a target server includes:
determining whether a service server in a node has a response file of the URL;
if not, determining a target server according to the consumption capacity score of the servers in the nodes;
and if so, determining the service server with the cache file of the URL as a target server.
The determining a target server according to the consumption capacity score of the servers in the node comprises the following steps:
determining the service server with the highest consumption capacity score in the node as the target server;
the consumption capacity score is the sum of a current request accumulation number score, a current request accumulation trend score, and a server configuration score.
The forwarding module calculates a forwarding ratio according to the consumption capacity score of the target server and sends the URL request to the target server according to the forwarding ratio.
The forwarding ratio = (request accumulation number / total request number) × (target server consumption capacity score / first service server consumption capacity score).
The apparatus further comprises a response caching module 501.
The response caching module 501 is configured to, when the target server is the first service server, have the first service server obtain the source file, respond to the URL request, and cache the response file.
According to another aspect herein, there is provided a computer readable storage medium having stored thereon a computer program which, when executed, carries out the steps of a method of requesting data processing.
According to another aspect herein, there is provided a requesting data processing system comprising the above-described requesting data processing apparatus.
The service server is configured to execute the request data processing method, so that when a non-URL-hotspot domain name bursts as a whole, the service keeps running smoothly, the whole set of machines in the node is fully utilized, and the service-bearing capacity is enhanced. Before and after such a burst, the load-balancer system keeps executing unchanged, so continuous service is barely affected. Layer-4 packet forwarding is used, which consumes the underlying CPU for in-kernel computation and is not limited by application-layer performance bottlenecks. The cache index data is stored in the URL record and needs no external storage system. If the target server in turn exceeds its bearing capacity after requests are forwarded to it, the machine whose consumption capacity score ranks first at the current moment is computed and shares the load, and so on, achieving refined request data processing across the whole group.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, are included to provide a further understanding of the disclosure. In the drawings:
fig. 1 is a schematic diagram of load balancing implemented by a load balancer.
Fig. 2 is a schematic diagram of traffic balancing implemented by a traffic server.
FIG. 3 is a flow diagram illustrating a method of requesting data processing in accordance with an exemplary embodiment.
FIG. 4 is a block diagram illustrating a requesting data processing apparatus according to an example embodiment.
FIG. 5 is a block diagram illustrating a requesting data processing apparatus according to an example embodiment.
FIG. 6 is a block diagram illustrating a computer device according to an example embodiment.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the drawings of the embodiments of the present invention, and it is obvious that the described embodiments are some but not all of the embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments herein without making any creative effort, shall fall within the scope of protection. It should be noted that the embodiments and features of the embodiments in the present disclosure may be arbitrarily combined with each other without conflict.
Fig. 1 is a schematic diagram of load balancing by a load balancer. Referring to fig. 1, in the related art, each service server (cache) receives and responds to requests, and a load balancer (switch) distributes resources evenly through a hash algorithm. However, the carrying capacity of a service server varies with the service: different requests consume different amounts of a service server's CPU and I/O resources, and the server needs different amounts of time to process them, so requests easily accumulate on some servers; that is, the number of requests allocated to a server exceeds its processing capacity and its task is overloaded. As shown in fig. 1, load balancers switchA, switchB, and switchC allocate the same type of requests to cache1 according to a preset load-balancing rule. Normally cache1 can respond to the clients' requests smoothly, but in the special case where a domain-name-level hotspot bursts, that is, when a large number of users instantly access different URLs under the same domain name (for example, a shopping website during the Double Eleven shopping festival), the load-balancing rule cannot be adjusted. For example, the flood of requests in fig. 1 is generated from the domain name domainB, with a large number of clients accessing various pages within domainB's web site in the same period, such as domainB/2.txt, domainB/3.txt, and so on. Such requests are not generated from a single URL, the load-balancing rule cannot be adjusted, and cache1's task is inevitably overloaded.
In order to solve the above problems, the request data processing method provided herein configures the service server with a request-forwarding function. Fig. 2 is a schematic diagram of traffic balancing implemented by a service server. Referring to fig. 2, when requests accumulate on a service server, the server can use the method provided herein to diffuse the accumulated requests to other service servers in the node, speeding up request processing.
Fig. 3 is a flow chart of a method of requesting data processing. Referring to fig. 3, the request data processing method is applied to a service server, and includes:
in step S31, the first service server receives the URL request.
In step S32, a request accumulation trend is determined according to the request accumulation numbers in adjacent preset periods.
In step S33, when the request accumulation trend is growing, the first-class domain name is determined according to the URLs of all requests.
In step S34, a target server is determined based on each URL in the URLs corresponding to the domain names of the first class.
In step S35, when the target server is not the first service server, the URL request is forwarded to the target server.
In an embodiment, in step S31, the URL request received by the first service server may be a request forwarded by the load balancer, or a request forwarded by another service server in the node.
In an embodiment, in step S32, the request accumulation trend is determined according to the request accumulation numbers in adjacent preset periods. Referring to fig. 2 and taking cache1 as an example, domainB/2.txt and domainB/3.txt are requests for different URLs under the domainB domain name; when there are many such requests, the load balancer keeps forwarding them to cache1 according to the preset rule, and cache1's requests accumulate. In this embodiment the preset period is set to 1 second, and the service server records the request accumulation number in each 1-second period, where request accumulation number = total request number - consumed request number. Denote the request accumulation number by J, the accumulation number of the previous period by J1, and that of the next period by J2. If J2 > J1, it can be determined that the accumulated requests are increasing and the request accumulation trend is growing. The length of the preset period is adjusted according to actual conditions and is not limited herein.
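The accumulation check of step S32 can be sketched as follows; the function names and the sample counts are illustrative assumptions added for this example, not part of the disclosure:

```python
def accumulation(total_requests: int, consumed_requests: int) -> int:
    """Request accumulation number J = total request number - consumed request number."""
    return total_requests - consumed_requests

def trend_is_growing(j_prev: int, j_next: int) -> bool:
    """True when J2 > J1, i.e. requests pile up faster than they are consumed."""
    return j_next > j_prev

# Two adjacent 1-second periods: 40 then 90 requests left unconsumed.
j1 = accumulation(total_requests=1000, consumed_requests=960)
j2 = accumulation(total_requests=1200, consumed_requests=1110)
print(trend_is_growing(j1, j2))  # J2 = 90 > J1 = 40, so the trend is growing
```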
In one embodiment, in step S33, when the request accumulation trend is growing, the first-class domain name is determined according to the URLs of all current requests. A first-class domain name is one for which the ratio of the number of requests accessing the domain name to the number of all requests, or to the request accumulation number, is greater than a preset threshold. For example, a domain name may be determined to be a first-class domain name, or hotspot domain name, when the ratio of requests accessing it to all requests exceeds a preset threshold such as 20% or 30%, or when the ratio of requests accessing it to the request accumulation number exceeds a preset threshold such as 50%. The preset threshold is determined by the usage scenario and is not limited herein.
If a large number of the current requests are directed to different URLs under the same domain name, such as domainB/2.txt and domainB/3.txt, which access the same domain name, then when the request numbers of domainB/2.txt and domainB/3.txt exceed the set threshold, the first service server determines that domainB is a hotspot domain name.
In one embodiment, the first-class domain name may be the domain name of a web site. For example, during the Double Eleven shopping festival a large number of users collectively visit shopping sites such as JD.com and Taobao, but the visitors to a given site scatter across different commodity pages within it, producing a large number of requests to different URLs under the same domain name. Suppose the service server has request accumulation and the current requests include one domainA/1.txt request plus large numbers of domainB/2.txt and domainB/3.txt requests. When the request numbers of domainB/2.txt and domainB/3.txt exceed the set threshold, the first service server determines that domainB is a hotspot domain name. cache1 forwards requests accessing the hotspot domain name domainB according to the request data processing method, and does not apply the forwarding to requests for non-hotspot domain names, such as domainA/1.txt in the figure. The accumulated requests are thus diffused to other servers in the node, several or all of the service servers in the node process them cooperatively, and the response speed increases.
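The hotspot-domain check of step S33 can be sketched as below. The threshold values, function names, and the assumption that requests are counted from full URLs are illustrative; the disclosure fixes only the two ratio tests:

```python
from collections import Counter
from urllib.parse import urlparse

def first_class_domains(request_urls, accumulated, threshold_all=0.3, threshold_acc=0.5):
    """Return the domains whose request share exceeds either preset threshold.

    A domain qualifies when (requests to domain / all requests) > threshold_all,
    or (requests to domain / request accumulation number) > threshold_acc.
    """
    per_domain = Counter(urlparse(u).netloc for u in request_urls)
    total = len(request_urls)
    hot = set()
    for domain, n in per_domain.items():
        if n / total > threshold_all or (accumulated and n / accumulated > threshold_acc):
            hot.add(domain)
    return hot

# One domainA request plus many domainB requests for different URLs.
urls = (["http://domainA/1.txt"] * 2
        + ["http://domainB/2.txt"] * 40
        + ["http://domainB/3.txt"] * 38)
print(first_class_domains(urls, accumulated=60))  # {'domainB'}
```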
In one embodiment, step S34, determining the target server based on each URL includes:
inquiring a cache index record based on each URL of the hotspot domain name, and determining whether a cache file of the URL is cached in a service server in the node;
if the service servers in the nodes do not cache the cache file of the URL, determining a target server according to the consumption capacity score of the servers in the nodes;
and if the service server in the node has the cache file for caching the URL, determining the service server for caching the cache file of the URL as a target server.
Taking cache1 in fig. 2 as an example, after the hotspot domain name appears, cache1 determines a target server for each of the URLs accessing the hotspot domain name. For domainB/2.txt it queries the cache index record; since cache3 in the node has cached the response file for domainB/2.txt, cache1 forwards the domainB/2.txt request to cache3, and cache3 responds using the existing cache file. For domainB/3.txt it queries the cache index record, finds that no service server in the node has cached a response file for the domainB/3.txt request, determines the service server with the highest consumption capacity in the node as the target server, and forwards the domainB/3.txt request to it.
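The target-server selection of step S34 can be sketched as follows; the dictionary-based cache index and the score table are illustrative assumptions, not the disclosure's storage format:

```python
def choose_target(url, cache_index, capacity_scores):
    """Prefer the server that already caches the URL's response file;
    otherwise pick the server with the highest consumption capacity score.

    cache_index maps URL -> server name; capacity_scores maps server -> score.
    """
    if url in cache_index:
        return cache_index[url]
    return max(capacity_scores, key=capacity_scores.get)

index = {"domainB/2.txt": "cache3"}
scores = {"cache1": 7, "cache2": 11, "cache3": 4}
print(choose_target("domainB/2.txt", index, scores))  # cache3: cached copy wins
print(choose_target("domainB/3.txt", index, scores))  # cache2: highest score
```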
In one embodiment, determining the target server based on the consumption capacity scores of the servers within the nodes comprises:
determining a service server with the highest consumption capability score in the node as a target server;
the consumption capacity score is the sum of a current request accumulation number score, a current request accumulation trend score, and a server configuration score.
For example, count the current request accumulation number of each server in the node: a server whose accumulation number is 0 can process the requests allocated to it and scores 5, while the server with the highest accumulation number in the node cannot process further requests within the period and scores 0. Likewise, count the request accumulation trend of each server: the server with the smallest trend scores 5 and the one with the largest scores 0. Then, according to server configuration (CPU core count, load value, disk read/write capability, and so on), the best-configured server scores 5 and the worst-configured one scores 0. Adding a server's scores gives that server's consumption capacity score. These scoring items are listed only to illustrate how the consumption capacity score is calculated; finer-grained items may be introduced in practical applications, which is not limited herein.
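A minimal sketch of this scoring follows. The text fixes only the endpoints (best server 5, worst server 0), so the linear interpolation between them, the function names, and the sample figures are assumptions added for this example:

```python
def consumption_capacity_scores(accumulation, trend, config):
    """Sum three 0-5 item scores per server: request accumulation number,
    accumulation trend, and hardware configuration."""
    def scale(values, invert):
        # Best server gets 5, worst gets 0, others scaled linearly (assumed).
        lo, hi = min(values.values()), max(values.values())
        span = (hi - lo) or 1
        return {s: 5 * ((hi - v) if invert else (v - lo)) / span
                for s, v in values.items()}
    acc = scale(accumulation, invert=True)   # fewer accumulated requests -> higher
    tr = scale(trend, invert=True)           # smaller growth trend -> higher
    cfg = scale(config, invert=False)        # better hardware -> higher
    return {s: acc[s] + tr[s] + cfg[s] for s in accumulation}

scores = consumption_capacity_scores(
    accumulation={"cache1": 90, "cache2": 0, "cache3": 30},
    trend={"cache1": 50, "cache2": 5, "cache3": 20},
    config={"cache1": 8, "cache2": 8, "cache3": 16},  # e.g. CPU core counts
)
print(max(scores, key=scores.get))  # cache3: good hardware, moderate load
```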
According to the above rules, the target server cache1 determines may be another server or cache1 itself. For example, for domainB/3.txt, cache1 queries the cache index records and finds that no service server in the node has cached the response file for the domainB/3.txt request; when the consumption capacity score of each server is calculated, cache1's own score may be the highest, indicating that every server is currently under heavy pressure, even heavier than cache1's. If no service server in the node has cached the response file of the URL request and the determined target server is the first service server itself, the first service server pulls the source file, responds to the URL request, caches the response file, and updates the cache index record.
If the determined target server is another server, that server has either cached the response file of the URL request or has a higher consumption capacity, and cache1 forwards the request for the hotspot-domain URL domainB/3.txt to the target server.
In one embodiment, forwarding the URL request to the target server includes calculating a forwarding ratio according to the consumption capacity score of the target server, and sending the URL request to the target server according to that ratio. If, once the target server were determined, cache1 simply sent it all of the overloaded requests, the requests would accumulate on the target server instead, and the accumulation would not be dispersed. In this embodiment, therefore, cache1 calculates a forwarding ratio matched to the actual consumption capacity of the target server:
forwarding ratio = (request accumulation number / total request number) × (target server consumption capacity score / first service server consumption capacity score).
The request accumulation number is generally far smaller than the total request number, normally about 0.01% to 0.1% of it, so forwarding the overloaded requests at this ratio matches the consumption capacity of the target server more closely.
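The forwarding-ratio formula can be worked through directly; the sample numbers and the rounding of the resulting request count are illustrative assumptions:

```python
def forwarding_ratio(accumulated, total, target_score, own_score):
    """forwarding ratio = (request accumulation / total requests)
                        x (target score / first service server score)."""
    return (accumulated / total) * (target_score / own_score)

def requests_to_forward(accumulated, total, target_score, own_score):
    """Number of requests the first service server would hand to the target."""
    return round(total * forwarding_ratio(accumulated, total, target_score, own_score))

# 100 of 100,000 requests accumulated (0.1%); target scores 12 against cache1's 8.
print(requests_to_forward(100, 100_000, 12, 8))  # 0.001 * 1.5 * 100000 = 150
```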
If the accumulated requests target a plurality of URLs, the first service server may determine a plurality of target servers at once when calculating the consumption capacity scores of the servers in the node, and send the requests for the plurality of URLs to those service servers according to their forwarding ratios.
Meanwhile, after the target server receives requests forwarded by other servers, its own requests may in turn accumulate. The target server then, by the same request data processing method, computes the server whose consumption capacity score ranks first at the current moment and sends its overload requests to that server according to the forwarding ratio, and so on, achieving refined request data processing within the whole node.
In one embodiment, when the target server is the first service server, the first service server obtains the source file, responds to the URL request, and caches the response file. When the target server is not the first service server, the target server obtains the source file, responds to the URL request, and caches the response file.
Taking cache3 as an example: when the first service server cache1 first develops request accumulation and determines that the target server is cache3, it forwards the accumulated domainB/2.txt requests to cache3.
When cache3 has locally cached the response file for the domainB/2.txt request, it responds to the URL request; when it has not, it pulls the source file, caches the response file, updates the cache index record, and responds to the URL request. When another server later receives a domainB/2.txt request, it learns by querying the cache index record that the response file for domainB/2.txt is cached on cache3, and forwards the request whose URL is domainB/2.txt to cache3.
The target server's response to the URL request includes: modifying the source address of the response message to the address of the first service server, and sending the response message to the load balancer that forwarded the URL request.
To respond to the client correctly, the response message must return to the client along the path the request message took. Therefore, when cache3 responds to a request forwarded by cache1, it sends the response message to the load balancer (switchB) that forwarded the URL request, with the source address of the response message modified to the address of cache1.
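The return-path rewrite can be modeled as below. This is a deliberately simplified data-structure sketch, not an actual layer-4 packet rewrite; the `Response` fields and server names are assumptions for illustration:

```python
from typing import NamedTuple

class Response(NamedTuple):
    src: str    # source address the load balancer / client will see
    dst: str    # next hop for the response message
    body: bytes

def rewrite_for_return_path(resp: Response, first_server: str, balancer: str) -> Response:
    """Make the forwarded response retrace the request path: the real responder
    masquerades as the first service server and hands the message back to the
    balancer that forwarded the URL request."""
    return resp._replace(src=first_server, dst=balancer)

resp = Response(src="cache3", dst="", body=b"domainB/2.txt payload")
out = rewrite_for_return_path(resp, first_server="cache1", balancer="switchB")
print(out.src, out.dst)  # cache1 switchB
```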
Through the above embodiments, the request data processing method provided herein keeps the service running smoothly when a non-URL-hotspot domain name bursts as a whole, fully utilizes the whole set of machines in the node, and enhances the service carrying capacity; the whole group needs to store only one copy of each cache file, reducing space occupation. Before and after such a burst, the load-balancer system keeps executing unchanged, so continuous service is barely affected. Layer-4 packet forwarding is used, which consumes the underlying CPU for in-kernel computation and is not limited by application-layer performance bottlenecks. The cache index data is stored in the URL record and needs no external storage system. If the target server in turn exceeds its bearing capacity after requests are forwarded to it, the machine whose consumption capacity score ranks first at the current moment is computed and shares the load, and so on, realizing refined request data processing within the whole node.
Fig. 4 is a block diagram of a request data processing apparatus. Referring to fig. 4, the request data processing apparatus is applied to a service server and includes a receiving module 401, a statistical module 402, a first-class domain name determining module 403, a target server determining module 404, and a forwarding module 405.
The receiving module 401 is configured to receive a URL request at the first service server;
the statistics module 402 is configured to determine a request accumulation trend from the numbers of accumulated requests in adjacent preset periods;
the first-class domain name determining module 403 is configured to determine a first-class domain name from the URLs of all requests when the request accumulation trend becomes larger;
the target server determining module 404 is configured to determine a target server based on each URL among the URLs corresponding to the first-class domain name;
the forwarding module 405 is configured to forward the URL request to the target server when the target server is not the first service server.
A first-class domain name is a domain name for which the ratio of the number of requests accessing the domain name to the total number of requests is greater than a preset threshold, or for which the ratio of the number of requests accessing the domain name to the number of accumulated requests is greater than a preset threshold.
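As a sketch of how the statistics module and the first-class domain name determining module might cooperate, the logic above can be written as follows. The function names, the `(domain, url)` pair representation, and the 0.3 threshold are illustrative assumptions, not prescribed by the embodiment:

```python
from collections import Counter

def trend_is_increasing(prev_period_backlog, curr_period_backlog):
    # The accumulation trend "becomes larger" when the number of stacked
    # requests grows between two adjacent preset periods.
    return curr_period_backlog > prev_period_backlog

def first_class_domains(requests, backlog, threshold=0.3):
    # requests: list of (domain, url) pairs seen in the current period.
    # A domain qualifies when its share of all requests, or its share
    # relative to the accumulated-request count, exceeds the threshold.
    total = len(requests)
    per_domain = Counter(domain for domain, _ in requests)
    return {
        d for d, n in per_domain.items()
        if n / total > threshold or (backlog and n / backlog > threshold)
    }
```

The two ratio tests mirror the two alternative conditions in the text; either one alone is enough to mark a domain as first-class.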
The target server determining module 404 determines the target server by:
determining whether any service server in the node has a cache file for the URL;
if not, determining the target server according to the consumption capacity scores of the servers in the node;
and if so, determining the service server holding the cache file for the URL as the target server.
Determining the target server according to the consumption capacity scores of the servers in the node comprises:
determining the service server with the highest consumption capacity score in the node as the target server;
the consumption capacity score is the sum of a current accumulated-request-count score, a current accumulation-trend score, and a server configuration score.
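A minimal sketch of this selection logic follows. The server names, the score tuples, and the `cache_index` mapping are hypothetical, and the embodiment does not prescribe how the three sub-scores are produced:

```python
def consumption_capacity_score(backlog_score, trend_score, config_score):
    # Per the text: the score is simply the sum of the three sub-scores.
    return backlog_score + trend_score + config_score

def pick_target_server(servers, url, cache_index):
    # servers: name -> (backlog_score, trend_score, config_score)
    # cache_index: url -> name of the server already holding its cache file
    cached_on = cache_index.get(url)
    if cached_on is not None:
        return cached_on  # reuse the single cached copy kept in the node
    # Otherwise choose the server with the highest consumption capacity score.
    return max(servers, key=lambda name: consumption_capacity_score(*servers[name]))
```

Checking the cache index first is what lets the whole group keep only one copy of each cache file, as the summary above notes.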
The forwarding module calculates a forwarding proportion according to the consumption capacity score of the target server and sends the URL request to the target server according to that proportion.
Forwarding proportion = (number of accumulated requests / total number of requests) × (target server consumption capacity score / first service server consumption capacity score).
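The formula can be sketched directly. Clamping the proportion at 1.0 is an assumption added here so the sketch never tries to forward more requests than exist:

```python
def forwarding_proportion(stacked, total, target_score, first_score):
    # forwarding proportion = (accumulated requests / total requests)
    #                       x (target server score / first server score)
    return (stacked / total) * (target_score / first_score)

def requests_to_forward(stacked, total, target_score, first_score):
    # Number of the current requests to hand to the target server,
    # capped so we never forward more than the total (assumption).
    p = min(forwarding_proportion(stacked, total, target_score, first_score), 1.0)
    return int(total * p)
```

The proportion grows both with the backlog on the first server and with how much stronger the target server currently is, which matches the intent of shifting load toward the highest-scoring machine.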
If no service server in the node has a cache file for the URL, the target server fetches the file from the origin, responds to the URL request, and caches the response file.
The target server responding to the URL request comprises: modifying the source address of the response message to the address of the first service server, and sending the response message to the load balancer that forwarded the URL request.
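The response path can be sketched as plain data manipulation. The `Response` type and the address strings are illustrative; in the actual embodiment this rewrite happens at layer 4 in the kernel, not in application code:

```python
from dataclasses import dataclass

@dataclass
class Response:
    src_addr: str   # address the client will see the reply coming from
    dst_addr: str   # the client's address
    body: bytes

def send_back_via_balancer(resp, first_server_addr, balancer_addr):
    # Rewrite the source address to the first service server (cache1 in the
    # example), then hand the response to the load balancer (switchB) that
    # forwarded the original URL request.
    resp.src_addr = first_server_addr
    return balancer_addr, resp
```

Because the client originally addressed cache1, the rewrite makes the reply appear to come from the server the client actually contacted, keeping the response on the request's path.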
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
FIG. 6 is a block diagram illustrating a computer device 600 for request data processing according to an illustrative embodiment. For example, the computer device 600 may be provided as a server. Referring to fig. 6, the computer device 600 includes a processor 601; the number of processors may be set to one or more as needed. The computer device 600 further comprises a memory 602 for storing instructions executable by the processor 601, e.g. application programs; the number of memories may likewise be set to one or more as needed, and each may store one or more application programs. The processor 601 is configured to execute the instructions to perform the request data processing method described above.
As will be appreciated by one of skill in the art, the embodiments herein may be provided as a method, apparatus (device), or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable storage media having computer-usable program code embodied in the medium. Computer storage media include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for the storage of information such as computer-readable instructions, data structures, program modules, or other data, including, but not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. In addition, communication media typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and include any information delivery media known to those skilled in the art.
The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (devices) and computer program products according to embodiments herein. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that an article or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such article or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional like elements in the article or apparatus comprising the element.
While the preferred embodiments herein have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following appended claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of this disclosure.
It will be apparent to those skilled in the art that various changes and modifications may be made herein without departing from the spirit and scope thereof. Thus, it is intended that such changes and modifications be included herein, provided they come within the scope of the appended claims and their equivalents.

Claims (10)

1. A request data processing method is applied to a service server and is characterized by comprising the following steps:
a first service server receives a URL request;
determining a request accumulation trend according to the request accumulation number in the adjacent preset period;
when the request accumulation trend is larger, determining a first class domain name according to the URLs of all requests;
determining a target server based on each URL in the URLs corresponding to the first-class domain names;
when the target server is not the first service server, forwarding the URL request to the target server, wherein the first-class domain name is a domain name for which the ratio of the number of requests accessing the domain name to the total number of requests is greater than a preset threshold, or the ratio of the number of requests accessing the domain name to the number of stacked requests is greater than a preset threshold;
the determining a target server based on each URL includes:
determining whether a service server in a node has a cache file of the URL;
if not, determining a target server according to the consumption capacity score of the servers in the nodes;
if yes, determining the service server with the cache file of the URL as a target server;
the determining a target server according to the consumption capacity scores of the servers in the node comprises:
determining the service server with the highest consumption capacity score in the node as the target server;
the consumption capacity score comprises: the sum of a current request stacking number score, a current request stacking trend score, and a server configuration score.
2. The request data processing method of claim 1, wherein the forwarding the URL request to the target server comprises: calculating a forwarding proportion according to the consumption capacity score of the target server, and sending the URL request to the target server according to the forwarding proportion.
3. The request data processing method according to claim 2, wherein the forwarding proportion = (number of stacked requests / total number of requests) × (target server consumption capacity score / first service server consumption capacity score).
4. The method of processing request data according to claim 1, further comprising: and when the target server is the first service server, the first service server acquires the source file, responds to the request of the URL and caches the response file.
5. A request data processing device applied to a service server is characterized by comprising:
the receiving module is used for receiving the URL request by the first service server;
the counting module is used for determining a request accumulation trend according to the request accumulation number in the adjacent preset period;
the first-class domain name determining module is used for determining a first-class domain name according to URLs of all requests when the request accumulation trend is larger;
the target server determining module is used for determining a target server based on each URL in the URLs corresponding to the first-class domain names;
a forwarding module, configured to forward, when the target server is not the first service server, the URL request to the target server, wherein the first-class domain name is a domain name for which the ratio of the number of requests accessing the domain name to the total number of requests is greater than a preset threshold, or the ratio of the number of requests accessing the domain name to the number of stacked requests is greater than a preset threshold;
the target server determination module determining a target server includes:
determining whether a service server in the node has a cache file of the URL;
if not, determining a target server according to the consumption capacity score of the servers in the nodes;
if yes, determining the service server with the cache file of the URL as a target server;
the determining a target server according to the consumption capacity scores of the servers in the node comprises:
determining the service server with the highest consumption capacity score in the node as the target server;
the consumption capacity score comprises: the sum of a current request stacking number score, a current request stacking trend score, and a server configuration score.
6. The request data processing apparatus of claim 5, wherein the forwarding module calculates a forwarding proportion according to the consumption capacity score of the target server, and sends the URL request to the target server according to the forwarding proportion.
7. The request data processing apparatus according to claim 6, wherein the forwarding proportion = (number of stacked requests / total number of requests) × (target server consumption capacity score / first service server consumption capacity score).
8. The request data processing apparatus of claim 5, further comprising a response caching module 501;
the response caching module 501 is configured to, when the target server is the first service server, cause the first service server to obtain the source file, respond to the URL request, and cache the response file.
9. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when executed, implements the steps of the method according to any one of claims 1-4.
10. A requested data processing system, characterized in that said system comprises a requested data processing apparatus according to any of claims 5-8.
CN201911256609.4A 2019-12-10 2019-12-10 Request data processing method, device, medium and system Active CN112953985B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911256609.4A CN112953985B (en) 2019-12-10 2019-12-10 Request data processing method, device, medium and system

Publications (2)

Publication Number Publication Date
CN112953985A CN112953985A (en) 2021-06-11
CN112953985B true CN112953985B (en) 2023-04-07

Family

ID=76225551


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102299959A (en) * 2011-08-22 2011-12-28 北京邮电大学 Load balance realizing method of database cluster system and device
CN106790743A (en) * 2016-11-28 2017-05-31 北京小米移动软件有限公司 Information transferring method, device and mobile terminal
CN108322392A (en) * 2018-02-05 2018-07-24 重庆邮电大学 The link damage perception efficiency method for routing of Differentiated Services in a kind of elastic optical network
CN109246229A (en) * 2018-09-28 2019-01-18 网宿科技股份有限公司 A kind of method and apparatus of distribution resource acquisition request

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8438642B2 (en) * 2009-06-05 2013-05-07 At&T Intellectual Property I, L.P. Method of detecting potential phishing by analyzing universal resource locators
CN104580216B (en) * 2015-01-09 2017-10-03 北京京东尚科信息技术有限公司 A kind of system and method limited access request


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Malicious URL detection using multi-layer filtering model; Rajesh Kumar; 2017 14th International Computer Conference on Wavelet Active Media Technology and Information Processing (ICCWAMTIP); 20180227; full text *
SDN network optimization strategy based on service-flow type classification; Gong Yingxing; Information Science and Technology Series (信息科技辑); 20181015; full text *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant