CN110659132B - Request processing optimization method and computer-readable storage medium - Google Patents
- Publication number
- CN110659132B (application CN201910806371.1A)
- Authority
- CN
- China
- Prior art keywords
- message
- preset
- processing speed
- out queue
- task
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/54—Interprogram communication
- G06F9/546—Message passing systems or structures, e.g. queues
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/54—Indexing scheme relating to G06F9/54
- G06F2209/548—Queue
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
- Computer And Data Communications (AREA)
Abstract
The invention discloses a request processing optimization method and a computer-readable storage medium, comprising the following steps: creating a first-in first-out queue and a first-in last-out queue; when a request timeout occurs at the server side or the upper limit of the task processing quantity is reached, acquiring the CPU occupancy rate or the message processing speed of the device where the server side is located; if the CPU occupancy rate does not reach a preset first threshold value or the message processing speed reaches a preset second threshold value, writing the received task message into the first-in first-out queue; and if the CPU occupancy rate reaches the preset first threshold value or the message processing speed does not reach the preset second threshold value, writing the received task message into the first-in last-out queue. The invention can improve the request success rate.
Description
Technical Field
The present invention relates to the field of internet technologies, and in particular, to a request processing optimization method and a computer-readable storage medium.
Background
In many projects there are two ends, a client side and a server side, and the server side can actually be implemented with many different architectures. One of them is the microservice architecture, which mainly involves operations such as request access and interface aggregation between a gateway and the microservices.
In this architecture, the gateway calls the microservices and the microservices call one another. Each interface provided by each microservice has an upper limit on the number of requests or on resources. When requests to a microservice reach this upper limit, or when other special situations occur, such as abnormal machine resources or network jitter, the requests to that microservice may become congested. Once congestion occurs, further requests keep piling up, and the whole microservice system becomes unavailable. Alternatively, a more sophisticated system stores the task messages in a queue when this situation occurs; however, because the queue is a first-in first-out queue and the earliest messages are processed first, much time has already been consumed by the time a later-arriving message is processed, and that message is abandoned due to timeout, so the request fails.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a request processing optimization method and a computer-readable storage medium that can improve the request success rate.
In order to solve the above technical problem, the technical solution adopted by the invention is a request processing optimization method, comprising:
creating a first-in first-out queue and a first-in last-out queue;
when a request timeout occurs at the server side or the upper limit of the task processing quantity is reached, acquiring the CPU occupancy rate or the message processing speed of the device where the server side is located;
if the CPU occupancy rate does not reach a preset first threshold value or the message processing speed reaches a preset second threshold value, writing the received task message into the first-in first-out queue;
and if the CPU occupancy rate reaches a preset first threshold value or the message processing speed does not reach a preset second threshold value, writing the received task message into the first-in last-out queue.
The invention also proposes a computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps described above.
The invention has the following beneficial effects: by setting up two queues, when a task message needs to be written to a queue and the CPU occupancy rate is low or the message processing speed is fast, the task message is written into the first-in first-out queue; because resources are sufficient, the earliest task messages can be processed successfully and the subsequent task messages can be consumed quickly, which ensures the request success rate. When a task message needs to be written to a queue and the CPU occupancy rate is high or the message processing speed is slow, the task message is written into the first-in last-out queue, so that the most recent task messages can be processed quickly; by ensuring the processing success rate of the latest task messages, the overall request success rate is improved. By processing tasks in this self-adjusting manner, the invention can greatly reduce the probability of task execution failure caused by insufficient resources and improve the execution success rate.
Drawings
FIG. 1 is a flow chart of a method for request processing optimization in accordance with the present invention;
FIG. 2 is a flowchart of the method according to the first embodiment of the invention.
Detailed Description
In order to explain technical contents, objects and effects of the present invention in detail, the following detailed description is given with reference to the accompanying drawings in conjunction with the embodiments.
The key concept of the invention is as follows: when a task message needs to be written into a queue, it is written into either a first-in first-out queue or a first-in last-out queue according to the CPU resource usage or the message processing speed.
Referring to fig. 1, a method for optimizing request processing includes:
creating a first-in first-out queue and a first-in last-out queue;
when a request timeout occurs at the server side or the upper limit of the task processing quantity is reached, acquiring the CPU occupancy rate or the message processing speed of the device where the server side is located;
if the CPU occupancy rate does not reach a preset first threshold value or the message processing speed reaches a preset second threshold value, writing the received task message into the first-in first-out queue;
and if the CPU occupancy rate reaches a preset first threshold value or the message processing speed does not reach a preset second threshold value, writing the received task message into the first-in last-out queue.
From the above description, the beneficial effects of the present invention are: the request success rate can be improved.
Further, before the obtaining of the CPU occupancy rate or the message processing speed of the device where the server is located, the method further includes:
detecting, according to a preset period, the CPU occupancy rate of the device where the server side is located; or acquiring, according to the preset period, the total number of task messages processed in the period, and calculating the message processing speed.
Further, the method also includes:
and storing the CPU occupancy rate or the message processing speed into a local cache.
As can be seen from the above description, by detecting the CPU occupancy rate or calculating the message processing speed in advance and storing the result in a local cache, the relevant data can be acquired quickly when a judgment is needed, which improves efficiency.
Further, after the CPU occupancy does not reach a preset first threshold or the message processing speed reaches a preset second threshold, the method further includes:
and the consumption thread takes out the earliest written task message from the first-in first-out queue for processing.
From the above description, under the condition of sufficient resources, the task messages in the first-in first-out queue are processed in sequence, and the request success rate of the task messages is ensured.
Further, after the CPU occupancy rate reaches a preset first threshold or the message processing speed does not reach a preset second threshold, the method further includes:
and the consumption thread takes out the latest written task message from the first-in last-out queue for processing.
According to the description, under the condition of insufficient resources, the latest received task message is processed preferentially, the processing success rate of the latest task message is ensured, and therefore the overall request success rate is improved.
The invention also proposes a computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps described above.
Example one
Referring to fig. 2, a first embodiment of the present invention is: a request processing optimization method can be applied to a micro-service architecture, and comprises the following steps:
s1: creating a first-in first-out queue and a first-in last-out queue; that is, in the system content of the server, two queues are created, namely a first-in first-out queue and a first-in last-out queue. The length of the queue can be set according to conditions such as the size of a memory, and preferably, the length of the queue is 1024.
S2: according to a preset period, detecting the CPU occupancy rate of equipment where a server side is located, or acquiring the total number of task messages processed in the period, and calculating to obtain the message processing speed. Detecting the occupation condition of CPU resources in each period to obtain the CPU occupancy rate of the current period; or when the current period is finished, counting the total number of the task messages processed by the server in the current period, and dividing the total number by the duration of the current period to obtain the message processing speed of the current period.
For example, the CPU occupancy rate is detected every 10 s; or, every 10 s, the total number of task messages processed by the server side during those 10 s is counted and divided by 10 to obtain the message processing speed for that period.
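As a concrete illustration of this sampling step, the following is a minimal Java sketch (not taken from the patent text): the 10-second period matches the example above, readCpuOccupancy() is a placeholder for whatever system metric the deployment actually exposes, the processedInPeriod counter is assumed to be incremented by the consumption threads, and the two AtomicReference fields play the role of the local cache described in the next paragraph.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.atomic.AtomicReference;

public class MetricsSampler {
    static final long PERIOD_SECONDS = 10;                 // preset period from the example above

    final AtomicLong processedInPeriod = new AtomicLong(); // incremented once per consumed message
    final AtomicReference<Double> latestCpu = new AtomicReference<>(0.0);   // local cache: latest CPU occupancy
    final AtomicReference<Double> latestSpeed = new AtomicReference<>(0.0); // local cache: latest speed in msgs/s

    void start() {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> {
            latestCpu.set(readCpuOccupancy());                // CPU occupancy of the period that just ended
            long total = processedInPeriod.getAndSet(0);      // total task messages processed in the period
            latestSpeed.set(total / (double) PERIOD_SECONDS); // messages per second
        }, PERIOD_SECONDS, PERIOD_SECONDS, TimeUnit.SECONDS);
    }

    double readCpuOccupancy() {
        return 0.0; // placeholder: read from /proc/stat, an OS MXBean, or a monitoring agent
    }
}
```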
Further, the CPU occupancy rate or the message processing speed is stored in a local cache. Preferably, only the latest CPU occupancy or message processing speed may be stored in the local cache, so as to save the storage space of the local cache.
S3: receiving a task message sent by a request end; the request end may be a client or a server.
S4: judging whether the request processing of the server side is executing normally, that is, whether no request timeout has occurred and the upper limit of the task processing quantity has not been reached. If so, the received task message is processed according to the existing processing method. If not, that is, a request timeout has occurred at the server side or the upper limit of the task processing quantity has been reached, the received task message needs to be written into a queue, and step S5 is executed.
S5: acquiring the CPU occupancy rate or the message processing speed of the previous period; that is, the most recently detected CPU occupancy rate or the most recently calculated message processing speed is acquired.
S6: judging whether the CPU occupancy rate does not reach the preset first threshold or whether the message processing speed reaches the preset second threshold. If so, that is, the resource occupancy is low or the message processing speed is high, task messages can continue to be received and processed, and step S7 is executed. If not, that is, the resource occupancy is high or the message processing speed is low, more task messages cannot continue to be processed, and step S8 is executed.
Preferably, the first threshold is 60%; the second threshold depends on the specific business scenario.
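To make the decision in step S6 and the queue writes in the following steps S7 and S8 concrete, here is a minimal Java sketch, not taken from the patent: the queue capacity of 1024 follows step S1 and the 60% first threshold follows the preference above, while the numeric second threshold, the TaskMessage type, and the route() method are illustrative assumptions.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingDeque;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingDeque;

public class QueueRouter {
    static final double CPU_THRESHOLD = 0.60;    // preset first threshold (60%)
    static final double SPEED_THRESHOLD = 500.0; // preset second threshold in msgs/s (business-specific)

    // Step S1: one bounded first-in first-out queue and one bounded first-in last-out queue.
    final BlockingQueue<TaskMessage> fifo = new ArrayBlockingQueue<>(1024);
    final BlockingDeque<TaskMessage> filo = new LinkedBlockingDeque<>(1024);

    /** Write side of steps S6-S8: called only after a request timeout or when the task limit is reached. */
    boolean route(TaskMessage msg, double cpuOccupancy, double msgsPerSecond) {
        if (cpuOccupancy < CPU_THRESHOLD || msgsPerSecond >= SPEED_THRESHOLD) {
            return fifo.offer(msg);     // resources sufficient: preserve arrival order
        }
        return filo.offerLast(msg);     // resources tight: newest message will be consumed first
    }

    record TaskMessage(String id, byte[] payload) {}
}
```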
S7: the write thread writes the received task message into the first-in first-out queue, and the consumption thread takes the earliest written task message out of the first-in first-out queue for processing; that is, the first-in first-out queue is used to store the task messages, and the earliest task messages are processed first.
S8: the write-in thread writes the received task message into the first-in last-out queue, and the consumption thread takes out the latest written task message from the first-in last-out queue for processing; namely, the first-in and second-out queues are adopted to store the task messages, and the task messages which arrive after the priority processing are also adopted.
When the CPU occupancy rate is high or the processing speed is slow, if the task messages that arrived first are processed first, task processing is slow; the earlier tasks easily exceed their processing time (or are judged failed by the request end because the request timed out), causing those requests to fail, and the task messages that arrive later are also likely to be abandoned due to timeout, causing their requests to fail as well. Therefore, in this scenario the most recently received task messages are processed first and the earlier task messages are processed later, which ensures the processing success rate of the later-arriving task messages and thereby improves the overall request success rate.
Further, if the message processing speed does not keep up with the speed at which messages are written into the queue, the number of task messages in the queue grows; when the maximum capacity of the queue is reached, the queue cannot store any more messages, and the task messages written into the queue earlier are discarded or refused processing.
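An illustrative sketch of this overflow behavior, under the assumption that the chosen policy is to discard the earliest written message to make room for the new one; refusing the new message instead would simply mean returning false when offer() fails.

```java
import java.util.concurrent.BlockingQueue;

class OverflowPolicy {
    /** Drop-oldest write; the exact policy is an assumption, not mandated by the text above. */
    static boolean writeDroppingOldest(BlockingQueue<QueueRouter.TaskMessage> queue,
                                       QueueRouter.TaskMessage msg) {
        if (queue.offer(msg)) {
            return true;           // the queue still had room
        }
        queue.poll();              // queue full: discard the earliest written task message
        return queue.offer(msg);   // retry; may still fail under heavy concurrent writes
    }
}
```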
Further, for steps S5 to S8: if it is detected in the current period that the CPU occupancy rate of the previous period reached the first threshold or that the message processing speed did not reach the second threshold, the write thread immediately starts writing task messages into the first-in last-out queue, but the consumption thread still continues to process the task messages in the first-in first-out queue during the current period. Regardless of whether the consumption thread can finish consuming the task messages in the first-in first-out queue within the current period, in the next period it starts processing the task messages in the first-in last-out queue, and the task messages remaining in the first-in first-out queue are abandoned, that is, those requests fail by default. This is because the alert threshold has been reached, and priority must be given to processing the latest task messages successfully.
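The period-boundary switching just described could be sketched as follows; the Mode enum, the method names, and the failRequest() callback are illustrative assumptions, and switching back to the first-in first-out queue when the load drops is left out because the text above does not describe it. A consumer loop would then read consumeMode to decide whether to call consumeFifo() or consumeFilo() from the sketch above.

```java
import java.util.concurrent.atomic.AtomicReference;

class PeriodSwitch {
    enum Mode { FIFO, FILO }

    final AtomicReference<Mode> writeMode = new AtomicReference<>(Mode.FIFO);
    final AtomicReference<Mode> consumeMode = new AtomicReference<>(Mode.FIFO);

    /** Called at each period boundary with the metrics sampled for the period that just ended. */
    void onPeriodBoundary(double prevCpu, double prevSpeed, QueueRouter router) {
        // If the writers already switched at the previous boundary, the consumers switch now,
        // and whatever is still sitting in the first-in first-out queue is abandoned.
        if (writeMode.get() == Mode.FILO && consumeMode.get() == Mode.FIFO) {
            consumeMode.set(Mode.FILO);
            QueueRouter.TaskMessage leftover;
            while ((leftover = router.fifo.poll()) != null) {
                failRequest(leftover);       // those requests fail by default
            }
        }
        // Writers switch as soon as the period that just ended crossed the alert threshold.
        boolean overloaded = prevCpu >= QueueRouter.CPU_THRESHOLD
                || prevSpeed < QueueRouter.SPEED_THRESHOLD;
        if (overloaded) {
            writeMode.set(Mode.FILO);
        }
    }

    void failRequest(QueueRouter.TaskMessage msg) { /* report the abandoned request as failed */ }
}
```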
In this embodiment, a new self-adjusting mechanism is adopted: when a request timeout occurs at the server side or the upper limit of the task processing quantity is reached, two queues are used to store the task messages, a first-in first-out queue and a first-in last-out queue, and the choice between them is made according to the resource usage (mainly the CPU occupancy rate) or the message processing speed of the device where the server side is located. When the CPU occupancy rate is low or the message processing speed is high, task messages can continue to be received and processed, so the first-in first-out queue is used to store them for subsequent processing; because resources are sufficient, the subsequent tasks can be consumed quickly. When the CPU occupancy rate is high or the message processing speed is low, more tasks cannot continue to be processed, so the first-in last-out queue is used to store the task messages; this allows the latest task messages to be processed quickly and ensures their success rate, while the previously received task messages, whose requests would very likely fail anyway, are processed later. By processing tasks in this self-adjusting manner, the probability of task execution failure caused by insufficient resources can be greatly reduced, and the execution success rate is improved.
Example two
The present embodiment is a computer-readable storage medium corresponding to the above-mentioned embodiments, on which a computer program is stored, which when executed by a processor implements the steps of:
creating a first-in first-out queue and a first-in last-out queue;
when a request timeout occurs at the server side or the upper limit of the task processing quantity is reached, acquiring the CPU occupancy rate or the message processing speed of the device where the server side is located;
if the CPU occupancy rate does not reach a preset first threshold value or the message processing speed reaches a preset second threshold value, writing the received task message into the first-in first-out queue;
and if the CPU occupancy rate reaches a preset first threshold value or the message processing speed does not reach a preset second threshold value, writing the received task message into the first-in last-out queue.
Further, before the obtaining of the CPU occupancy rate or the message processing speed of the device where the server side is located, the steps further include:
and detecting the CPU occupancy rate of the equipment where the server side is positioned according to a preset period, or acquiring the total number of task messages processed in the period according to the preset period, and calculating to obtain the message processing speed.
Further, the steps also include:
and storing the CPU occupancy rate or the message processing speed into a local cache.
Further, after the CPU occupancy does not reach a preset first threshold or the message processing speed reaches a preset second threshold, the method further includes:
and the consumption thread takes out the earliest written task message from the first-in first-out queue for processing.
Further, after the CPU occupancy rate reaches a preset first threshold or the message processing speed does not reach a preset second threshold, the method further includes:
and the consumption thread takes out the latest written task message from the first-in last-out queue for processing.
In summary, with the request processing optimization method and the computer-readable storage medium provided by the invention, when a task message needs to be written into a queue and the CPU occupancy rate is low or the message processing speed is fast, the task message is written into the first-in first-out queue; because resources are sufficient, the earliest task messages can be processed successfully and the subsequent task messages can be consumed quickly, which ensures the request success rate. When a task message needs to be written into a queue and the CPU occupancy rate is high or the message processing speed is slow, the task message is written into the first-in last-out queue, so that the most recent task messages can be processed quickly; by ensuring the processing success rate of the latest task messages, the overall request success rate is improved. By processing tasks in this self-adjusting manner, the invention can greatly reduce the probability of task execution failure caused by insufficient resources and improve the execution success rate.
The above description gives only embodiments of the present invention and is not intended to limit the scope of the invention; all equivalent modifications made according to the description and the accompanying drawings of the present invention, whether applied directly or indirectly in related technical fields, are likewise included within the scope of the invention.
Claims (8)
1. A method for optimizing request processing, comprising:
creating a first-in first-out queue and a first-in last-out queue;
when a request timeout occurs at the server side or the upper limit of the task processing quantity is reached, acquiring the CPU occupancy rate or the message processing speed of the device where the server side is located;
if the CPU occupancy rate does not reach a preset first threshold value or the message processing speed reaches a preset second threshold value, writing the received task message into the first-in first-out queue;
if the CPU occupancy rate reaches a preset first threshold value or the message processing speed does not reach a preset second threshold value, writing the received task message into the first-in last-out queue;
before the obtaining of the CPU occupancy rate or the message processing speed of the device where the server is located, the method further includes:
detecting the CPU occupancy rate of equipment where a server is located according to a preset period, or acquiring the total number of task messages processed in the period according to the preset period, and calculating to obtain a message processing speed;
if it is detected in the current period that the CPU occupancy rate of the previous period reached the first threshold or that the message processing speed did not reach the second threshold, the write thread immediately writes the task messages into the first-in last-out queue, but the consumption thread still continues to process the task messages in the first-in first-out queue during the current period; regardless of whether the consumption thread can finish consuming the task messages in the first-in first-out queue within the current period, the consumption thread starts to process the task messages in the first-in last-out queue in the next period, and the task messages remaining in the first-in first-out queue are abandoned.
2. The method of claim 1, further comprising:
and storing the CPU occupancy rate or the message processing speed into a local cache.
3. The method of claim 1, wherein after the CPU occupancy does not reach a preset first threshold or the message processing speed reaches a preset second threshold, the method further comprises:
and the consumption thread takes out the earliest written task message from the first-in first-out queue for processing.
4. The method of claim 1, wherein after the CPU occupancy reaches a preset first threshold or the message processing speed does not reach a preset second threshold, the method further comprises:
and the consumption thread takes out the latest written task message from the first-in last-out queue for processing.
5. A computer-readable storage medium, on which a computer program is stored, which program, when executed by a processor, performs the steps of:
creating a first-in first-out queue and a first-in last-out queue;
when a request timeout occurs at the server side or the upper limit of the task processing quantity is reached, acquiring the CPU occupancy rate or the message processing speed of the device where the server side is located;
if the CPU occupancy rate does not reach a preset first threshold value or the message processing speed reaches a preset second threshold value, writing the received task message into the first-in first-out queue;
if the CPU occupancy rate reaches a preset first threshold value or the message processing speed does not reach a preset second threshold value, writing the received task message into the first-in last-out queue;
before the obtaining of the CPU occupancy rate or the message processing speed of the device where the server is located, the method further includes:
detecting the CPU occupancy rate of equipment where a server is located according to a preset period, or acquiring the total number of task messages processed in the period according to the preset period, and calculating to obtain a message processing speed;
if it is detected in the current period that the CPU occupancy rate of the previous period reached the first threshold or that the message processing speed did not reach the second threshold, the write thread immediately writes the task messages into the first-in last-out queue, but the consumption thread still continues to process the task messages in the first-in first-out queue during the current period; regardless of whether the consumption thread can finish consuming the task messages in the first-in first-out queue within the current period, the consumption thread starts to process the task messages in the first-in last-out queue in the next period, and the task messages remaining in the first-in first-out queue are abandoned.
6. The computer-readable storage medium of claim 5, further comprising:
and storing the CPU occupancy rate or the message processing speed into a local cache.
7. The computer-readable storage medium of claim 5, wherein if the CPU occupancy rate does not reach a preset first threshold or the message processing speed reaches a preset second threshold, further comprising:
and the consumption thread takes the earliest written task message from the first-in first-out queue for processing.
8. The computer-readable storage medium of claim 5, wherein if the CPU occupancy reaches a preset first threshold or the message processing speed does not reach a preset second threshold, further comprising:
and the consumption thread takes out the latest written task message from the first-in last-out queue for processing.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910806371.1A CN110659132B (en) | 2019-08-29 | 2019-08-29 | Request processing optimization method and computer-readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910806371.1A CN110659132B (en) | 2019-08-29 | 2019-08-29 | Request processing optimization method and computer-readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110659132A CN110659132A (en) | 2020-01-07 |
CN110659132B true CN110659132B (en) | 2022-09-06 |
Family
ID=69037891
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910806371.1A Active CN110659132B (en) | 2019-08-29 | 2019-08-29 | Request processing optimization method and computer-readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110659132B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111552577B (en) * | 2020-03-24 | 2023-11-03 | 福建天泉教育科技有限公司 | Method for preventing invalid request from occurring and storage medium |
CN112866145B (en) * | 2021-01-13 | 2022-11-25 | 中央财经大学 | Method, device and computer readable storage medium for setting internal parameters of node |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107391271A (en) * | 2017-05-17 | 2017-11-24 | 阿里巴巴集团控股有限公司 | A kind of delayed tasks triggering method and device based on Message Queuing system |
CN108304254A (en) * | 2017-12-29 | 2018-07-20 | 珠海国芯云科技有限公司 | Quick virtual machine process dispatch control method and device |
CN108762953A (en) * | 2018-05-25 | 2018-11-06 | 连云港杰瑞电子有限公司 | A kind of message queue implementation method |
CN108920093A (en) * | 2018-05-30 | 2018-11-30 | 北京三快在线科技有限公司 | Data read-write method, device, electronic equipment and readable storage medium storing program for executing |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7051330B1 (en) * | 2000-11-21 | 2006-05-23 | Microsoft Corporation | Generic application server and method of operation therefor |
US7170900B2 (en) * | 2001-07-13 | 2007-01-30 | Telefonaktiebolaget Lm Ericsson (Publ) | Method and apparatus for scheduling message processing |
- 2019
- 2019-08-29 CN CN201910806371.1A patent/CN110659132B/en active Active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107391271A (en) * | 2017-05-17 | 2017-11-24 | 阿里巴巴集团控股有限公司 | A kind of delayed tasks triggering method and device based on Message Queuing system |
CN108304254A (en) * | 2017-12-29 | 2018-07-20 | 珠海国芯云科技有限公司 | Quick virtual machine process dispatch control method and device |
CN108762953A (en) * | 2018-05-25 | 2018-11-06 | 连云港杰瑞电子有限公司 | A kind of message queue implementation method |
CN108920093A (en) * | 2018-05-30 | 2018-11-30 | 北京三快在线科技有限公司 | Data read-write method, device, electronic equipment and readable storage medium storing program for executing |
Non-Patent Citations (2)
Title |
---|
A Combined LIFO-Priority Scheme for Overload Control of E-commerce Web Servers; Naresh Singhmar et al.; International Infrastructure Survivability Workshop; 2006-11-17; section 2.2 *
Multi-objective dynamic priority request scheduling based on revenue-driven request classification; Chen Meimei; Computer Science (计算机科学); 2016-08-31; vol. 43, no. 8; p. 200, section 1 *
Also Published As
Publication number | Publication date |
---|---|
CN110659132A (en) | 2020-01-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106452818B (en) | Resource scheduling method and system | |
CN111338773B (en) | Distributed timing task scheduling method, scheduling system and server cluster | |
CN105468450A (en) | Task scheduling method and system | |
CN108200544A (en) | Short message delivery method and SMS platform | |
CN101102281A (en) | Data processing method when large amount of data is reported in mobile communication system | |
CN112650575B (en) | Resource scheduling method, device and cloud service system | |
CN110659132B (en) | Request processing optimization method and computer-readable storage medium | |
CN109710416B (en) | Resource scheduling method and device | |
US20190286582A1 (en) | Method for processing client requests in a cluster system, a method and an apparatus for processing i/o according to the client requests | |
CN111857992B (en) | Method and device for allocating linear resources in Radosgw module | |
CN111104257A (en) | Anti-timeout method, device, equipment and medium for backup log data | |
CN115150460B (en) | Node security registration method, device, equipment and readable storage medium | |
CN106095638A (en) | The method of a kind of server resource alarm, Apparatus and system | |
CN110795239A (en) | Application memory leakage detection method and device | |
CN106953884A (en) | Middleware message processing method, device and middleware platform | |
CN108306815A (en) | A kind of method, apparatus, equipment and computer readable storage medium obtaining message | |
CN114979169B (en) | A network resource push method, device, storage medium and electronic device | |
CN114500544B (en) | Method, system, equipment and medium for balancing load among nodes | |
CN115934845A (en) | Self-adaptive data synchronization system, method and storage medium | |
CN111131083B (en) | Method, device and equipment for data transmission between nodes and computer readable storage medium | |
CN110019372A (en) | Data monitoring method, device, server and storage medium | |
CN112291288B (en) | Container cluster expansion method and device, electronic equipment and readable storage medium | |
CN115567477A (en) | Method, equipment and storage medium for processing message accumulation | |
CN109062707B (en) | Electronic device, method for limiting inter-process communication thereof and storage medium | |
CN113127508A (en) | Method, device and system for acquiring sequence number |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |