
CN120835096A - Service request processing method and device, storage medium and electronic equipment - Google Patents

Service request processing method and device, storage medium and electronic equipment

Info

Publication number
CN120835096A
Authority
CN
China
Prior art keywords
queue
service request
bean
service
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410478212.4A
Other languages
Chinese (zh)
Inventor
王俊
刘思彦
关凯
刘柏
胡志鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202410478212.4A priority Critical patent/CN120835096A/en
Publication of CN120835096A publication Critical patent/CN120835096A/en
Pending legal-status Critical Current

Landscapes

  • Computer And Data Communications (AREA)

Abstract

The disclosure provides a service request processing method, a service request processing device, a computer storage medium and an electronic device, and relates to the technical field of computers. The method includes: receiving a plurality of service requests and adding the service requests to an ordered set queue of a Redis cache; obtaining a target service request through a unified capability interface; determining the queue type of the target service request and the Bean object corresponding to the queue type; obtaining a pre-processor of the Bean object from a pre-configured Bean registry; determining, through the pre-processor, the parameters for the service call; performing the service call with those parameters and a unified exception handling mode to process the target service request and obtain a response result; obtaining a post-processor of the Bean object from the pre-configured Bean registry; performing a conversion operation on the response result through the post-processor; and sending the converted response result to the client through the unified capability interface. The method and the device improve the processing efficiency of the server while keeping development cost low.

Description

Service request processing method and device, storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a service request processing method, a service request processing device, a computer storage medium, and an electronic device.
Background
With the rapid development of internet technology, online service systems increasingly enrich people's entertainment and daily life. For example, task-oriented Artificial Intelligence (AI) algorithms can provide specific functions such as AI drawing and AI customer service, while shopping platforms provide e-commerce functions. A service request initiated by a client is handled in either a synchronous or an asynchronous processing mode. In the synchronous mode, after a user initiates a service request through a client, the user must wait for the response result of the server. However, when the volume of concurrent service requests is large, the server is prone to congestion: processing time becomes too long, the processing capacity of the server is limited, the utilization rate of server resources is low, and the client's wait for the server's response times out, degrading the user experience.
Currently, an asynchronous processing mode is generally used to address the problems of the synchronous mode. However, existing asynchronous processing modes require additional modification of the existing server processing flow to control the queue, which increases the development cost and code maintenance cost of asynchronous access.
Therefore, a low-development-cost asynchronous access method is needed that overcomes the server-side problems of congestion, low resource utilization and low processing efficiency in high-concurrency scenarios, as well as the long waiting time on the client side.
Disclosure of Invention
The disclosure provides a service request processing method, a service request processing device, a computer storage medium and an electronic device, which alleviate server-side congestion and improve resource utilization and processing efficiency in high-concurrency scenarios at low cost, reduce the waiting time of the client, and thereby improve the user experience.
In a first aspect, an embodiment of the present disclosure provides a service request processing method. The method includes: receiving a plurality of service requests from a client and adding the plurality of service requests to an ordered set queue of a Redis cache; obtaining a target service request from the ordered set queue of the Redis cache through a unified capability interface; determining a queue type of the target service request and a Bean object corresponding to the queue type according to an index value of the target service request, where the queue types of service requests and Bean objects are in one-to-one correspondence; obtaining a pre-processor of the Bean object from a pre-configured Bean registry, so as to determine, through the pre-processor, the parameters for the service call; performing the service call with those parameters and the unified exception handling manner in the Bean object to process the target service request and obtain a response result; obtaining a post-processor of the Bean object from the pre-configured Bean registry, so as to perform a conversion operation on the response result through the post-processor; and sending the converted response result to the client through the unified capability interface.
In a second aspect, an embodiment of the disclosure provides a service request processing device, which includes: a request receiving module configured to receive a plurality of service requests from a client and add the service requests to an ordered set queue of a Redis cache; an object determining module configured to obtain a target service request from the ordered set queue of the Redis cache through a unified capability interface, and to determine a queue type of the target service request and a Bean object corresponding to the queue type according to an index value of the target service request, where the queue types of service requests and Bean objects are in one-to-one correspondence; a processor obtaining module configured to obtain a pre-processor of the Bean object from a pre-configured Bean registry, so as to determine the parameters for the service call through the pre-processor; a calling module configured to perform the service call with those parameters and the unified exception handling manner in the Bean object to obtain a response result, and to obtain a post-processor of the Bean object from the pre-configured Bean registry, so as to perform a conversion operation on the response result through the post-processor; and a sending module configured to send the converted response result to the client through the unified capability interface.
In a third aspect, an embodiment of the present disclosure provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the service request processing method as above.
In a fourth aspect, one embodiment of the present disclosure provides an electronic device comprising a processor, and a memory for storing executable instructions of the processor, wherein the processor is configured to perform the service request processing method as above via execution of the executable instructions.
In a fifth aspect, one embodiment of the present disclosure provides a computer program product having a computer program stored thereon, which when executed by a processor implements the service request processing method as above.
The technical scheme of the present disclosure has the following beneficial effects:
The service request processing method comprises the steps of receiving a plurality of service requests from a client, adding the service requests to an ordered set queue of a Redis cache, obtaining a target service request from the ordered set queue of the Redis cache through a unified capability interface, determining the queue type of the target service request and a Bean object corresponding to the queue type according to an index value of the target service request, wherein the queue type of the service request and the Bean object are in one-to-one correspondence, obtaining a preprocessor of the Bean object from a pre-configured Bean registry, determining parameters before service call through the preprocessor, performing service call through the parameters before service call and a unified exception processing mode in the Bean object, processing the target service request to obtain a response result, obtaining a post-processor of the Bean object from the pre-configured Bean registry, performing conversion operation on the response result through the post-processor, and sending the response result after the conversion operation to the client through the unified capability interface.
In the first aspect, when highly concurrent service requests occur, the server adds the large number of service requests to the ordered set queue of the Redis cache for queuing. This avoids the congestion and limited processing capacity that would result from the server handling a large number of requests at once, and thereby improves the processing performance and efficiency of the server. On the client side, the client no longer suffers the timeouts caused by waiting for the server's response, which would otherwise degrade the user experience.
In the second aspect, the target service request is obtained from the ordered set queue through the unified capability interface and the response result is sent back to the client; after the target service request is received, the processing of the request message (parameters before the service call, result conversion after the service call, the unified exception handling mode, and so on) is realized through the pre-configured Bean registry. No additional suite of queue-management programs needs to be developed: for different types of service requests, the pre-processor and post-processor are simply obtained from the Bean registry. This avoids the high development and maintenance costs incurred in related technical schemes, where a different message queue handler must be written for each type of message queue, and achieves the technical effect of improving the processing capability and performance of the server, at low development cost, while reducing the waiting time of the client.
In the third aspect, compared with the asynchronous service request processing of related technical schemes, extensive modification of the server is not needed, which reduces both the cost and the error rate of the service request processing process.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure. It will be apparent to those of ordinary skill in the art that the drawings in the following description are merely some embodiments of the present disclosure and that other drawings may be derived from these drawings without undue effort.
Fig. 1 schematically shows a system architecture diagram of a service request processing system in the present exemplary embodiment;
Fig. 2 schematically shows a flowchart of a service request processing method in the present exemplary embodiment;
FIG. 3 schematically illustrates a flow chart of a method of inserting an ordered set queue of a Redis cache in the present exemplary embodiment;
fig. 4 schematically shows a service request processing procedure in the present exemplary embodiment;
Fig. 5 schematically shows a schematic structural diagram of a service request processing apparatus in the present exemplary embodiment;
fig. 6 schematically shows a structural diagram of an electronic device in the present exemplary embodiment.
Detailed Description
Exemplary embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many different forms and should not be construed as limited to the examples set forth herein, but rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the exemplary embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the present disclosure. However, those skilled in the art will recognize that the aspects of the present disclosure may be practiced with one or more of the specific details, or with other methods, components, devices, steps, etc. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in software or in one or more hardware modules or integrated circuits or in different networks and/or processor devices and/or microcontroller devices.
The flow diagrams depicted in the figures are exemplary only and not necessarily all steps are included. For example, some steps may be decomposed, and some steps may be combined or partially combined, so that the order of actual execution may be changed according to actual situations.
In related technical schemes, with the rapid development of internet technology, online service systems increasingly enrich people's entertainment and daily life. For example, an online service system may provide AI drawing, AI customer service and other AI-algorithm functions, for which the corresponding service requests are drawing requests and session requests; an online service system may also be an e-commerce system, such as a shopping system, that permeates daily life. Taking AI algorithms as an example, most of them adopt a synchronous processing mode to call the inference interface; that is, after initiating a service request, the client must wait for the response result of the server.
However, when the volume of concurrent service requests is large, the server is prone to congestion: its processing time becomes too long, its processing capacity is limited, and the utilization rate of server resources is low; meanwhile, on the client side, the wait for the server's response times out, degrading the user experience.
In the prior art, one common approach is to use a load balancer to spread the traffic pressure on the AI inference interface. However, this solution requires substantial hardware and software resources, and a performance bottleneck remains when coping with peak traffic.
Another approach is asynchronous processing: after the client initiates the service request, the client does not need to wait for the server's response result; the server puts the background processing task (i.e., the service request) into a message queue for processing and sends the response result to the client once processing is complete. However, existing asynchronous processing modes still require modifying the server, for example to implement message queue handling, dynamic configuration of the message queue length, and control of the number of concurrent requests in the queue; moreover, different asynchronous handlers must be set up for the message queues of different types of service requests, which clearly increases development cost and subsequent program maintenance cost. In addition, the asynchronous processing flow is complex and error-prone.
In view of the above problems, an exemplary embodiment of the present disclosure provides a service request processing method, which receives a plurality of service requests from a client, and adds the plurality of service requests to an ordered set queue of a Redis cache, obtains a target service request from the ordered set queue of the Redis cache through a unified capability interface, determines a queue type of the target service request and a Bean object corresponding to the queue type according to an index value of the target service request, wherein the queue type of the service request and the Bean object are in a one-to-one correspondence, obtains a preprocessor of the Bean object from a pre-configured Bean registry, determines parameters before service call through the preprocessor, performs service call through the parameters before service call and a unified exception handling manner in the Bean object to process the target service request to obtain a response result, obtains a post-processor of the Bean object from the pre-configured Bean registry, performs a conversion operation on the response result through the post-processor, and sends the response result after the conversion operation to the client through the unified capability interface.
In the first aspect, when highly concurrent service requests occur, the server adds the large number of service requests to the ordered set queue of the Redis cache for queuing. This avoids the congestion and limited processing capacity that would result from the server handling a large number of requests at once, and thereby improves the processing performance and efficiency of the server. On the client side, the client no longer suffers the timeouts caused by waiting for the server's response, which would otherwise degrade the user experience.
In the second aspect, the target service request is obtained from the ordered set queue through the unified capability interface and the response result is sent back to the client; after the target service request is received, the processing of the request message (parameters before the service call, result conversion after the service call, the unified exception handling mode, and so on) is realized through the pre-configured Bean registry. No additional suite of queue-management programs needs to be developed: for different types of service requests, the pre-processor and post-processor are simply obtained from the Bean registry. This avoids the high development and maintenance costs incurred in related technical schemes, where a different message queue handler must be written for each type of message queue, and achieves the technical effect of improving the processing capability and performance of the server, at low development cost, while reducing the waiting time of the client.
In the third aspect, compared with the asynchronous service request processing of related technical schemes, extensive modification of the server is not needed, which reduces both the cost and the error rate of the service request processing process.
In order to solve the above-mentioned problems, the present disclosure proposes a service request processing method and apparatus, which can be applied to the system architecture of the exemplary application environment shown in fig. 1.
As shown in fig. 1, system architecture 100 may include one or more of clients 101, 102, 103, 104, a network 105, a server (also referred to as a server side) 106, and a Redis cache 107. The network 105 serves as a medium providing communication links between the clients 101, 102, 103, 104 and the server 106. The network 105 may include various connection types, such as wired links, wireless communication links, or fiber optic cables. The clients 101, 102, 103, 104 may be, for example, but not limited to, smartphones, palmtop computers (Personal Digital Assistant, PDA), notebooks, servers, desktop computers, or any other computing device with networking capability.
It should be understood that the number of terminal devices, networks and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation. For example, the server 106 may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or may be a cloud server that provides cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, a content delivery network (Content Delivery Network, CDN), and basic cloud computing services such as big data and an artificial intelligence platform.
The service request processing method provided in the embodiments of the present disclosure may be executed in the server 106, and accordingly, the service request processing apparatus is generally disposed in the server 106. The service request processing method provided by the embodiment of the disclosure may also be executed in the terminal device, and correspondingly, the service request processing apparatus may also be set in the terminal device. The service request processing method provided by the embodiment of the present disclosure may be partially executed in the server 106 and partially executed in the terminal device, and accordingly, a part of modules of the service request processing apparatus may be disposed in the server 106 and a part of modules are disposed in the terminal device.
For example, in an exemplary embodiment, a user may initiate a plurality of service requests to the server 106 through the clients 101, 102, 103 and/or 104. The server 106 receives the plurality of service requests from the clients and adds them to the ordered set queue of the Redis cache. If the target service request was initiated by the client 101, the server 106 obtains the target service request from the ordered set queue of the Redis cache through the unified capability interface, determines the queue type of the target service request and the Bean object corresponding to the queue type according to the index value of the target service request (the queue types of service requests and Bean objects being in one-to-one correspondence), obtains the pre-processor of the Bean object from the pre-configured Bean registry, determines the parameters for the service call through the pre-processor, performs the service call with those parameters and the unified exception handling manner in the Bean object to obtain a response result, obtains the post-processor of the Bean object from the pre-configured Bean registry, performs a conversion operation on the response result through the post-processor, and sends the converted response result to the client 101 through the unified capability interface.
However, it is easy to understand by those skilled in the art that the above application scenario is only for example, and the present exemplary embodiment is not limited thereto.
In the following, the server 106 is taken as the execution subject, that is, the service request processing method is described as applied to the server 106 by way of example. Fig. 2 schematically illustrates a flowchart of a service request processing method in the present exemplary embodiment. Referring to fig. 2, the service request processing method provided by the embodiment of the present disclosure includes the following steps S201 to S205:
Step S201, a plurality of service requests from a client are received, and the plurality of service requests are added to an ordered set queue of the Redis cache.
Step S202, obtaining a target service request from an ordered set queue of a Redis cache through a unified capability interface, and determining a queue type of the target service request and a Bean object corresponding to the queue type according to an index value of the target service request, wherein the queue type of the service request and the Bean object are in one-to-one correspondence.
Step S203, a preprocessor of the Bean object is obtained from a pre-configured Bean registry, so that parameters before service call are determined through the preprocessor.
And step S204, performing service call by using parameters before service call and a unified exception handling mode in the Bean object to process the target service request to obtain a response result, and acquiring a post processor of the Bean object from a pre-configured Bean registry to perform conversion operation on the response result by using the post processor.
Step S205, the response result after the conversion operation is sent to the client through the unified capability interface.
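Steps S201 to S205 can be sketched end to end as follows. This is an illustrative, in-memory Python model only: the patent itself contemplates a real Redis sorted set and Java-style Bean objects, and every name here (the `BEAN_REGISTRY` mapping, the `ai_draw` queue type, the processor lambdas) is an assumption made for the sketch, not part of the disclosure.

```python
import time

# Stand-in for a Redis sorted-set (ZSet) queue: member -> score.
# A real server would use ZADD / ZPOPMIN against Redis instead.
queue = {}

def enqueue(request_id, queue_type):
    # S201: add the request to the ordered set, scored by arrival time.
    queue[(queue_type, request_id)] = time.time()

def dequeue():
    # S202 (first half): pop the member with the lowest score, i.e. the
    # earliest-arriving request still waiting in the queue.
    if not queue:
        return None
    member = min(queue, key=queue.get)
    del queue[member]
    return member

# Pre-configured "Bean registry": queue type -> pre/post processors and a
# service callable, mirroring the one-to-one queue-type/Bean mapping.
BEAN_REGISTRY = {
    "ai_draw": {
        "pre": lambda req_id: {"prompt_id": req_id},       # S203: build call params
        "call": lambda params: {"image": f"img-{params['prompt_id']}"},
        "post": lambda result: {"status": "ok", **result}, # S204: convert result
    }
}

def process_next():
    member = dequeue()
    if member is None:
        return None
    queue_type, req_id = member            # S202 (second half): resolve the Bean
    bean = BEAN_REGISTRY[queue_type]
    params = bean["pre"](req_id)           # S203: pre-processor
    try:
        result = bean["call"](params)      # S204: service call
    except Exception as exc:               # unified exception handling
        result = {"error": str(exc)}
    return bean["post"](result)            # S204/S205: convert and return
```

Earlier-enqueued requests come out first, and each queue type is handled only by the processors registered for it, which is the property the method relies on to avoid per-queue handler code.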
In the technical scheme provided by some embodiments of the present disclosure, a plurality of service requests from a client are received and added to an ordered set queue of a Redis cache, a target service request is obtained from the ordered set queue of the Redis cache through a unified capability interface, a queue type of the target service request and a Bean object corresponding to the queue type are determined according to an index value of the target service request, the queue type of the service request and the Bean object are in a one-to-one correspondence relationship, a preprocessor of the Bean object is obtained from a pre-configured Bean registry, parameters before service call are determined through the preprocessor, service call is performed through the parameters before service call and a unified exception processing mode in the Bean object to process the target service request to obtain a response result, a post-processor of the Bean object is obtained from the pre-configured Bean registry, the response result is converted through the post-processor, and the response result after the conversion operation is sent to the client through the unified capability interface.
In the first aspect, when highly concurrent service requests occur, the server adds the large number of service requests to the ordered set queue of the Redis cache for queuing. This avoids the congestion and limited processing capacity that would result from the server handling a large number of requests at once, and thereby improves the processing performance and efficiency of the server. On the client side, the client no longer suffers the timeouts caused by waiting for the server's response, which would otherwise degrade the user experience.
In the second aspect, the target service request is obtained from the ordered set queue through the unified capability interface and the response result is sent back to the client; after the target service request is received, the processing of the request message (parameters before the service call, result conversion after the service call, the unified exception handling mode, and so on) is realized through the pre-configured Bean registry. No additional suite of queue-management programs needs to be developed: for different types of service requests, the pre-processor and post-processor are simply obtained from the Bean registry. This avoids the high development and maintenance costs incurred in related technical schemes, where a different message queue handler must be written for each type of message queue, and achieves the technical effect of improving the processing capability and performance of the server, at low development cost, while reducing the waiting time of the client.
In the third aspect, compared with the asynchronous service request processing of related technical schemes, extensive modification of the server is not needed, which reduces both the cost and the error rate of the service request processing process.
The following describes in detail the implementation of each step in the embodiment shown in fig. 2 with reference to specific embodiments:
In step S201, a plurality of service requests from a client are received, and the plurality of service requests are added to an ordered set queue of the Redis cache.
The service request can be a service request aiming at any type of service, and takes an AI algorithm service as an example, the service request can be an AI drawing request, an AI session request and the like, and the AI algorithm has the characteristics of long processing time and heavy task and realizes synchronous or asynchronous processing through the call of an inference interface. The embodiment of the disclosure does not limit the type of the service request in any particular way, and the service request processing method provided by the application can be suitable for application scenes of any type of service.
It should be noted that, the multiple service requests received by the server may be from the same client (i.e., terminal device), or may be from multiple different clients, and in a high concurrency application scenario, multiple service requests received by the server in a short time typically come from multiple different clients.
In addition, any embodiment of the present disclosure is applicable to processing a service request in any scenario, that is, not only in a high concurrency application scenario, but also in an application scenario with a smaller concurrency request amount, which is not limited in any particular manner by the embodiments of the present disclosure.
It should be explained that the ordered set queue may be an ordered set queue based on the ScoreSet (ZSet) of the Redis cache or the like. The ScoreSet is the ordered set type in the Redis cache, an upgrade of the plain set: it is a set queue of string members and does not allow duplicate service requests to exist in the queue. Redis is an open-source, log-structured key-value database written in ANSI C, released under the permissive Berkeley Software Distribution (BSD) license; it supports networking, can operate in memory with optional persistence, and provides application programming interfaces (APIs) for multiple languages.
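The deduplication and score-ordering behavior of such an ordered set can be illustrated with a minimal model of the two Redis sorted-set operations involved (ZADD and ZRANGE). The functions below are stand-ins written for this sketch, not actual Redis client calls.

```python
def zadd(zset, member, score):
    # Mirrors Redis ZADD: re-adding an existing member only updates its
    # score, so a duplicate service request cannot pile up in the queue.
    zset[member] = score

def zrange(zset):
    # Mirrors ZRANGE 0 -1: members returned in ascending score order.
    return sorted(zset, key=zset.get)

q = {}
zadd(q, "req-b", 2.0)
zadd(q, "req-a", 1.0)
zadd(q, "req-b", 3.0)   # duplicate member: score updated, not re-inserted
```

After the three calls the set still holds only two members, ordered by score: `req-a` before `req-b`.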
Illustratively, after receiving a plurality of service requests from a client, the server needs to insert the plurality of service requests into an ordered set queue of the Redis cache sequentially.
In performing step S201, it may also be implemented using the following embodiments:
In some example embodiments of the present disclosure, each service request at least includes queue information to be inserted. According to the queue information to be inserted corresponding to each service request, a target queue to be inserted corresponding to each service request is searched from the ordered set queue of the Redis cache, and the remaining queue length of the target queue to be inserted is obtained; if the queue length occupied by each service request is less than or equal to the remaining queue length of the target queue to be inserted, the plurality of service requests are sequentially added to the target queue to be inserted of the Redis cache.
Each service request at least carries queue information to be inserted corresponding to the service request, and each service request can be rapidly distributed to a target queue to be inserted, which is matched with the type of the service request, in a Redis cache through the carried queue information to be inserted.
In general, in order to improve service processing efficiency and better cope with a high concurrency application scenario, a Redis cache of a server generally includes a plurality of ordered set queues for receiving service requests from clients in parallel, and service types of service requests processed by different ordered set queues are different. When a client sends a service request to a server, the service request contains queue information to be inserted, so that the server can conveniently find a corresponding target queue to be inserted from a plurality of different ordered set queues in a Redis cache based on the queue information to be inserted, and then insert the target queue to be inserted.
Taking the AI drawing service as an example, the initiated service request includes specific service content (also referred to as a request body), where the specific service content may be described by a prompt. Since the AI request exchanges information with the inference interface, the queue information to be inserted in the Redis cache may be queried through a query inference interface (generally defined as requestAPI).
After searching the target queue to be inserted corresponding to each service request from the Redis cache, the server needs to compare the queue length occupied by the service request with the remaining queue length of the target queue to be inserted, that is, determine whether the target queue to be inserted can accept the insertion of the service request. If the queue length occupied by the service request is less than or equal to the remaining queue length of the target queue to be inserted, the service request can be directly added to the target queue to be inserted of the Redis cache. Otherwise, if the queue length occupied by the service request is greater than the remaining queue length of the target queue to be inserted, the current target queue to be inserted cannot accept the insertion; the server then sends prompt information to the client, and the client needs to continue waiting until the target queue to be inserted can accept the service request.
In this embodiment, whether the service request can be inserted into the determined target to-be-inserted queue can be quickly determined by judging the length of the queue occupied by the service request and the length of the remaining queue of the target to-be-inserted queue, so that the safety and the accuracy of the transmission process are improved.
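The admission check described above can be sketched as follows; the class and field names (maxLength, currentLength) are illustrative assumptions, not taken from the source:

```java
// Sketch of the capacity check: a request is admitted only if the queue
// length it occupies fits within the target queue's remaining length.
class QueueCapacity {
    final int maxLength;   // total capacity of the target queue to be inserted
    int currentLength;     // length already occupied

    QueueCapacity(int maxLength) {
        this.maxLength = maxLength;
    }

    // Returns true and enqueues if the request fits; returns false if the
    // queue is full, in which case the client must wait or give up.
    boolean tryAdmit(int requestLength) {
        int remaining = maxLength - currentLength;
        if (requestLength <= remaining) {
            currentLength += requestLength;
            return true;
        }
        return false;
    }
}
```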
On the basis of determining that the service request can be added to the target to-be-inserted queue in the above embodiment, adding a plurality of service requests to the ordered set queue of the Redis cache can be performed.
When the executing step adds the plurality of service requests to the ordered set queue of the Redis cache, in an optional embodiment of the disclosure, an index value corresponding to each service request may also be generated according to the UUID, and the index value and each service request are added to the ordered set queue of the Redis cache in a storage form of a key value pair.
A universally unique identifier (Universally Unique Identifier, UUID for short) is an identifier regarded as unique across all space and time, and can therefore create a unique identifier for a new service. It is a software architecture standard and is also part of the Open Software Foundation's Distributed Computing Environment, aimed at guaranteeing the uniqueness of information in a network.
By way of example, since the ordered set queue adopts the processing mode of 'first in first out', when unified queuing of each service request is realized by using the Redis cache, the time stamp of each service request added into the ordered set queue of the Redis cache can be used as the ordering value of the service request, so that sequential processing according to the ordering value is facilitated. Then, a unique index value (e.g., defined as requestId) for queuing the service request in the ordered set queue of the Redis cache is generated based on the UUID, and a one-to-one mapping relationship between the index value (requestId) and the request body (request) is constructed using the Map data structure of the Redis cache.
The Map data structure is a storage form of key-value pairs, where the index value (requestId) of the service request is taken as the key (Key) of the key-value pair, and the service request (i.e., the request body (request)) is taken as the value (Value) of the key-value pair.
After the server achieves that the service requests are added to the ordered set queue of the Redis cache, the server side can return index values (requestId) of the service requests to corresponding clients, so that in the following embodiments, the user can visually check queuing conditions in the ordered set queue through the index values (requestId).
In the embodiment, the index value and the service request are added to the ordered set queue of the Redis cache in the form of key value pair storage, so that when the server acquires the target service request from the ordered set queue, the target service request can be rapidly positioned only by searching the index value, the request processing process is simplified, and the service request processing efficiency of the server is further improved. And the server can convert the synchronous waiting process of the client into an asynchronous pushing process based on the ordered set queue of the Redis cache, so that the waiting time of the client is reduced, and the application experience of a user of the client is improved.
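The UUID-based index scheme can be sketched in plain Java; the in-process HashMap below is a stand-in for the Map structure kept in the Redis cache, and the class and method names are illustrative:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Sketch of the index-value scheme: each request body is keyed by a
// UUID-derived requestId, mirroring the (requestId -> request) mapping
// that the server stores in the Redis cache.
class RequestIndex {
    private final Map<String, String> requestsById = new HashMap<>();

    // Generate a requestId from a UUID, store the key-value pair, and
    // return the requestId so it can be sent back to the client.
    String enqueue(String requestBody) {
        String requestId = UUID.randomUUID().toString();
        requestsById.put(requestId, requestBody);
        return requestId;
    }

    // Locate the request body directly from its index value.
    String lookup(String requestId) {
        return requestsById.get(requestId);
    }
}
```

Because the requestId alone suffices to locate the request body, the server can later pop an index value from the ordered set and fetch the corresponding request in a single lookup.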
The process by which the server adds a target service request to the ordered set queue of the Redis cache will be described below in connection with FIG. 3. Fig. 3 schematically shows a flowchart of a method for inserting into the ordered set queue of the Redis cache in the present exemplary embodiment, where the method at least includes the following steps S301-S309:
First, the client initiates a service request in step S301.
For example, the client may initiate a service request by responding to a triggering operation of the user, e.g., the user initiates a service request for an AI drawing service by triggering a control for AI drawing.
According to some embodiments of the present disclosure, the server, after receiving the service request, performs step S302 to obtain the target to-be-inserted queue information.
The server may search for the target queue to be inserted of each service request from the multiple ordered set queues in the Redis cache according to the queue information to be inserted carried by each service request, and thereby determine the target queue to be inserted from the Redis cache.
The server executes step S303 to compare the queue length occupied by the service request with the remaining length of the target queue to be inserted. If the queue length occupied by the service request is greater than the remaining length of the target queue to be inserted, step S3052 is executed to prompt that the queue is full.
The server sends prompt information to the client to indicate that the target queue to be inserted is full and cannot accept the service request sent by the client. Through the prompt information, the user may choose to continue waiting or to quit: if waiting continues, the client waits until the queue length occupied by the waiting service request is less than or equal to the remaining queue length of the target queue to be inserted; otherwise, if waiting is abandoned, the service request initiated by the client is rejected.
In the embodiment opposite to step S3052, the target queue to be inserted can accept the service request sent by the client, that is, the condition that the queue length occupied by the service request is less than or equal to the remaining length of the target queue to be inserted is satisfied, and then step S3051 is executed to generate an index value according to the UUID.
That is, the server generates an index value (requestId) of the service request according to the UUID, and executes step S306 to generate a request for insertion into the target queue to be inserted based on the index value and the service request.
After the Redis cache receives the request, step S307 may be executed to insert into the target queue to be inserted and store the service request information. Namely, the Redis cache inserts the service request into the target queue to be inserted, and adds the index value and the service request to the ordered set queue of the Redis cache in the storage form of a key-value pair.
The Redis cache executes step S308 to return the key-value pair to the server, and step S309 is executed to return the index value, i.e., the server returns the index value (requestId) of the service request to the client.
In step S202, a target service request is obtained from the ordered set queue of the Redis cache through a unified capability interface, and a queue type of the target service request and a Bean object corresponding to the queue type are determined according to the target service request, where the queue type of the service request and the Bean object are in a one-to-one correspondence.
The target service request is the earliest enqueued service request in the ordered set queue of the Redis cache. The unified capability interface is an interactive interface for providing other system components with the service request processing method provided by the disclosure, and the service request processing method provided by any embodiment can be integrated through the unified capability interface and provided with services in the form of an interface.
Illustratively, the server will extract the earliest inserted target service request from the ordered set queue of the Redis cache through the unified capability interface according to the first-in first-out principle for processing. The interface for acquiring the service request and sending the final response result to the client can be realized through the unified capability interface.
In Spring Boot, a Bean is an object instantiated, managed and maintained by the Spring container. The Bean is one of the core concepts of the Spring framework and represents a component or object in an application. A Bean may be any Java object, such as a plain Java object (Plain Old Java Object, POJO for short), a service, a repository, a controller, etc. A class may be declared as a Bean by using the '@Component' annotation or one of its derivative annotations ('@Service', '@Repository', '@Controller', etc.) at the class level, or by explicit declaration in a configuration file.
For instantiation, the Spring container is responsible for instantiating Beans. When the application program is started, the Spring container finds and instantiates all classes marked as Beans according to the configuration information or the annotation scanning result, and adds them to the container; the instantiation process is handled by the Spring IoC (Inversion of Control) container.
After a Bean is instantiated, the Spring container is responsible for managing its life cycle and dependency relationships. According to the configuration file or the annotation information, the Spring container automatically resolves the dependency relationships among Beans, ensures that dependencies are correctly injected when needed, and is also responsible for destroying Beans that are no longer needed. It should be noted that the above dependency injection may be implemented by way of constructor injection, setter method injection, or field injection, where dependency injection is most commonly performed using the '@Autowired' annotation.
In the present disclosure, the Bean object corresponding to the target service request is obtained, the Bean object is encapsulated through the corresponding preprocessor, post-processor and unified processing method, the service request is obtained from outside through the unified capability interface, and the processed response result is sent to the corresponding client.
After the target service request is obtained through the unified capability interface, the queue type of the target service request and the Bean object matched with the queue type can be determined according to the target service request.
For example, a mapping relationship between the index value and the queue type may be pre-constructed, and the queue type (e.g., defined as QueueType) of the target service request may be determined according to the index value of the target service request. And Bean objects corresponding to different queue types are also different.
Before proceeding to step S202 and determining the queue type of the target service request and the Bean object corresponding to the queue type according to the target service request, the mapping relationship between the queue type and the Bean object may be constructed in advance.
In an alternative embodiment of the present disclosure, a target service request is converted into context information of a service application, and a one-to-one correspondence between a queue type and a corresponding Bean object is constructed according to the context information of the service application, so as to determine the Bean object corresponding to the queue type based on the queue type of the target service request.
Wherein the business application is a consumer that consumes the business request.
For example, after determining the queue type of the target service request, the corresponding Bean object may be determined according to the queue type. Therefore, a mapping relationship between the queue type and the corresponding Bean object needs to be constructed in advance.
The target service request is converted into context information of the service application program, Bean objects whose type is queue handler (QueueHandler) are acquired according to the context information of the service application program, and the Bean objects are traversed so as to construct the mapping relationship between the queue type and the Bean object, so that the queue type of the target service request and the Bean object corresponding to the queue type can be found.
In an optional embodiment of the present disclosure, when the step of constructing the one-to-one correspondence between the queue type and the corresponding Bean object according to the context information of the service application is performed, whether a dynamic annotation exists in the determined Bean object may be searched for according to the context information of the service application; if the dynamic annotation exists, the one-to-one correspondence between the queue type associated with the Bean object and the corresponding Bean object is constructed.
For example, all Bean objects acquired according to the context information of the service application may be traversed one by one to find out whether dynamic annotations (i.e., invokeAnnotation annotations) exist in each Bean object during the traversal. If dynamic annotations exist, the name of its associated queue type QueueType and corresponding Bean object are stored in the Map data structure. So that the preprocessor and the postprocessor corresponding to the Bean object can be obtained according to the queue type QueueType.
In the embodiment, the corresponding relation between the Bean object and the queue type is constructed by searching the Bean object with the dynamic annotation, so that the Bean object corresponding to the target service request is obtained according to the queue type of the target service request, the condition that the related technology needs an additional development program to realize the management of the queue is avoided, and the development cost and the later maintenance cost are reduced.
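The dynamic-annotation scan can be sketched with a runtime-retained annotation and reflection. This is a hypothetical illustration: the annotation name mirrors the InvokeAnnotation mentioned above, but its attribute, the handler class names, and the registry API are all assumptions, not taken from the source:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.util.HashMap;
import java.util.Map;

// Handler classes carry an annotation naming their queue type; traversing
// candidate beans and reading the annotation builds the one-to-one
// (queueType -> handler bean) mapping stored in a Map data structure.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface InvokeAnnotation {
    String queueType();
}

@InvokeAnnotation(queueType = "AI_DRAWING")
class DrawingHandler { }

@InvokeAnnotation(queueType = "AI_SESSION")
class SessionHandler { }

class HandlerRegistry {
    // Traverse the given beans one by one; only beans that carry the
    // dynamic annotation are registered under their queue type.
    static Map<String, Object> build(Object... beans) {
        Map<String, Object> byQueueType = new HashMap<>();
        for (Object bean : beans) {
            InvokeAnnotation ann = bean.getClass().getAnnotation(InvokeAnnotation.class);
            if (ann != null) {
                byQueueType.put(ann.queueType(), bean);
            }
        }
        return byQueueType;
    }
}
```

In a real Spring application, the beans to traverse would come from the application context rather than a varargs list; the annotation-reading logic is the same.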
In step S203, a preprocessor of the Bean object is acquired from a pre-configured Bean registry to determine parameters before service invocation by the preprocessor.
After the Bean object corresponding to the target service request is obtained, the preprocessor of the Bean object can be obtained from a pre-configured Bean registry. The preparation of parameters before service invocation can be realized by the preprocessor. So that a call to a service or service chain is made according to the parameter.
Before executing step S203, a Bean registry needs to be built in advance.
In an alternative embodiment of the present disclosure, the registration of Bean objects is performed in advance through an abstract factory schema to form a Bean registry.
The abstract factory pattern (Abstract Factory Pattern) is the most abstract and most general of all forms of the factory pattern. An abstract factory refers to the factory pattern used when there are multiple abstract roles. The abstract factory pattern can provide clients with an interface that enables them to create product objects in multiple product families without having to specify a particular product.
Illustratively, one or more different Beans may be created in the Spring Boot compilation stage through the abstract factory pattern, and subsequent processing of the target service request is accomplished by invoking these Beans. The registration of different Bean objects is implemented through the abstract factory pattern of Java, and the preprocessor and post-processor of each Bean object are added to the Bean registry.
The registered Beans are used to abstract the parameter information that needs to be prepared before the Bean is called and to convert the return value after the Bean is called; during execution, when an exception occurs, exception handling is performed using the unified exception handling mode.
In the embodiment, the abstract factory mode is used for registering the Bean object, so that the Bean registry can be quickly built by using the organized abstract factory mode, and the processing efficiency of the server is further improved.
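The registry of per-queue-type processors can be sketched as below. This is a hedged, simplified sketch: all class, method, and field names are illustrative, the processors are modeled as plain string functions, and the service call itself is replaced by a stand-in expression:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Sketch of a Bean registry: each queue type registers a preprocessor
// (prepares the parameters before the service call) and a post-processor
// (converts the response result after the call).
class BeanRegistry {
    static final class Handlers {
        final Function<String, String> pre;   // request -> call parameters
        final Function<String, String> post;  // raw result -> converted response
        Handlers(Function<String, String> pre, Function<String, String> post) {
            this.pre = pre;
            this.post = post;
        }
    }

    private final Map<String, Handlers> registry = new HashMap<>();

    void register(String queueType, Function<String, String> pre, Function<String, String> post) {
        registry.put(queueType, new Handlers(pre, post));
    }

    // Consume a request for the given queue type: pre-process, perform the
    // (stand-in) service call, then post-process the result.
    String process(String queueType, String request) {
        Handlers h = registry.get(queueType);
        String params = h.pre.apply(request);     // parameters before service call
        String raw = "result(" + params + ")";    // stand-in for the actual service call
        return h.post.apply(raw);                 // conversion operation on the response
    }
}
```

The consumer only needs to look up the queue type; the pre- and post-processing specific to each service type are supplied at registration time, so no per-type queue-management code is needed at the call site.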
In step S204, a service call is performed through the parameters before service call and the unified exception handling mode in the Bean object to process the target service request and obtain a response result, and the post-processor of the Bean object is obtained from the pre-configured Bean registry, so that a conversion operation is performed on the response result through the post-processor.
The parameters before service call determined by the preprocessor can be passed into the service, so that the service matching the target service request is invoked. The unified exception handling mode, which may also be called global exception handling, is triggered when an exception occurs while the program is running; it ensures the robustness and readability of the program, and makes it convenient for back-end personnel to quickly locate the point where the exception occurred.
By way of example, developer customization can be achieved through the unified exception handling mode so as to quickly locate exception conditions. For example, when an exception occurs and the unified exception handling mode is not used, the program can only throw new RuntimeException("XXX exception"). With the unified exception handling mode, several special exceptions may instead be defined. For example:
a login exception: throw new LoginException("XXX exception");
an authority exception: throw new AuthorityException("XXX exception");
a business exception: throw new BusinessException("XXX exception"); and the like. If a 'BusinessException' appears, the business exception can be rapidly located, thereby realizing rapid localization in exception handling.
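The exception hierarchy above can be sketched as follows; the classifier class and method are illustrative assumptions added to show how a single catch point distinguishes the failure kinds:

```java
// Domain-specific exceptions of the unified exception handling mode: each
// failure kind gets its own type instead of a bare RuntimeException.
class LoginException extends RuntimeException {
    LoginException(String msg) { super(msg); }
}

class AuthorityException extends RuntimeException {
    AuthorityException(String msg) { super(msg); }
}

class BusinessException extends RuntimeException {
    BusinessException(String msg) { super(msg); }
}

class UnifiedExceptionHandler {
    // A single handler classifies the failure by exception type, which is
    // what lets back-end personnel locate the problem point quickly.
    static String classify(RuntimeException e) {
        if (e instanceof LoginException) return "login";
        if (e instanceof AuthorityException) return "authority";
        if (e instanceof BusinessException) return "business";
        return "unknown";
    }
}
```

In a Spring application this classification would typically live in a '@ControllerAdvice' global handler; the sketch keeps only the type-dispatch logic.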
For example, after the parameter determination prior to the service call is implemented by the pre-processor, the service call may be performed to process the target service request. In the embodiment, the service call is performed by the unified exception handling mode in the Bean object, so that the complexity of codes is reduced, and the problem caused by improper exception handling is also reduced.
After the processing is completed, the post processor of the Bean object can be obtained from a pre-configured Bean registry, so that the obtained response result is converted according to the post processor, and the result is sent out through a unified capability interface.
It should be noted that the pre-processor and post-processor may support groovy, which is not limited in any way by the embodiments of the present disclosure.
The process can match the corresponding Bean object according to the queue type of the target service request, and further directly realize the consumption process through conversion of the corresponding parameter, the processing method and the return value of the Bean object, so that the development cost is prevented from being greatly increased caused by developing the queue management and control method matched with the Bean object for different service types, and the performance and the efficiency of the server are improved rapidly under the condition of low development cost.
In step S205, the response result after the conversion operation is sent to the client through the unified capability interface.
The response result after the conversion operation can be sent to the client through the packaged unified capability interface.
It should be explained that when the target service request is obtained from the ordered set queue in the Redis cache, the keys being queued (i.e., the index values corresponding to the service requests) may be polled via the '@Scheduled' annotation; if the queue is not empty, one requestId (index value) is popped from the ordered set queue ScoreSet and the corresponding target service request is obtained from the Map for processing.
For ease of understanding, the embodiments of the present disclosure will be described in greater detail below in conjunction with fig. 4:
Fig. 4 schematically illustrates a service request processing procedure in this exemplary embodiment, as shown in fig. 4, a target service request may be obtained from an ordered set queue through a unified capability interface, and a response result may be sent to a client, where the capability of processing the service request is encapsulated through the unified capability interface, and a developer may obtain a corresponding response result only by inputting the service request without managing the queue.
The business application (i.e., the application shown in the figure) is a consumer, and implements a corresponding capability factory by registering the corresponding dynamically annotated Bean object, loading the configuration file of the corresponding queue type, and building a Bean registry by registering the Bean object. When processing the target service request, only the preprocessor, the postprocessor and the processing method of the Bean object are needed to be obtained from a pre-configured Bean registry according to the obtained queue type (type is shown in the figure, namely the queue type QueueType), so that a task scheduling process is realized, and finally, the corresponding result is sent to the client through a unified capability interface.
The process only needs to realize service request processing of different queue types (namely service types) through the pre-configured Bean object registry, so that the technical problem of high development cost caused by developing different management and control programs aiming at different service types in the related technical scheme is avoided, the improvement of the processing capacity of a server is realized under the condition of ensuring low cost, the waiting time of a client is reduced, and the user experience is improved.
In some example embodiments of the present disclosure, when step S205 is performed, in order to improve the availability and stability of the system and to avoid situations where the network is unstable or the inference interface responds slowly, the response result after the conversion operation may also be sent to the client through the WebSocket communication protocol or in a callback manner.
For example, after the server processes the service request, the unified channel may be implemented to push the processing result to the client through WebSocket or callback.
The WebSocket or callback mode is adopted for pushing, so that the availability of the system is improved, and the result is notified by adopting the callback mode, so that the accuracy of the result can be ensured even if the network is unstable.
Furthermore, in order to improve the applicability of the system, the following technical scheme can be implemented:
In some example embodiments of the present disclosure, the ordered set queue of the Redis cache is visually displayed.
By way of example, the method can also provide configuration and visual viewing of the ordered set queue of the Redis cache once a management background is built, so that the queuing condition of the queue is visually displayed through a background page, which is convenient for management personnel to monitor.
In some example embodiments of the present disclosure, the processing order of the service requests in the ordered set queue is adjusted in response to an edit operation for the service requests in the ordered set queue of the Redis cache.
In the embodiment provided by the disclosure, the processing sequence of each service request in the ordered set queue can be manually adjusted by a manager, so that the operation of the manager is facilitated.
In some example embodiments of the present disclosure, after processing the target service request, the target service request may also be deleted from the ordered set queue of the Redis cache.
For example, after the target service request is processed, in order to improve the resource utilization rate, the target service request needs to be deleted from the ordered set queue, that is, the index value in the ordered set queue needs to be deleted.
In order to implement the service request processing method, an embodiment of the present disclosure provides a service request processing device. Fig. 5 schematically shows a schematic architecture diagram of a service request processing device.
The service request processing device 500 includes a request receiving module 501, an object determining module 502, a processor obtaining module 503, a calling module 504, and a sending module 505.
The request receiving module 501 is configured to receive a plurality of service requests from a client and add the service requests to an ordered set queue of a Redis cache; the object determining module 502 is configured to obtain a target service request from the ordered set queue of the Redis cache through a unified capability interface and determine, according to an index value of the target service request, a queue type of the target service request and a Bean object corresponding to the queue type, where the queue types of service requests and the Bean objects are in one-to-one correspondence; the processor obtaining module 503 is configured to obtain a preprocessor of the Bean object from a pre-configured Bean registry to determine parameters before service call through the preprocessor; the calling module 504 is configured to perform a service call through the parameters before service call and a unified exception handling mode in the Bean object to process the target service request and obtain a response result, and to obtain a post-processor of the Bean object from the pre-configured Bean registry to convert the response result through the post-processor; the sending module 505 is configured to send the response result after the conversion operation to the client through the unified capability interface.
In an optional embodiment of the disclosure, the apparatus further comprises a registration module, wherein the registration module is configured to register the Bean object in advance through an abstract factory mode, and form the Bean registry.
In an optional embodiment of the present disclosure, the service requests at least include queue information to be inserted, and the request receiving module 501 is specifically configured to search, according to the queue information to be inserted corresponding to each service request, a target queue to be inserted corresponding to each service request from the ordered set queue of the Redis cache, obtain a remaining queue length of the target queue to be inserted, and, if the queue length occupied by each service request is less than or equal to the remaining queue length of the target queue to be inserted, add the plurality of service requests to the target queue to be inserted of the Redis cache.
In an optional embodiment of the present disclosure, the request receiving module 501 is specifically configured to generate an index value corresponding to each service request according to a universally unique identifier (UUID), and add the index value and each service request to the ordered set queue of the Redis cache in a storage form of a key-value pair.
In an optional embodiment of the disclosure, the device further comprises an information conversion module and a construction module, wherein the information conversion module is used for converting the target service request into context information of a service application program, and the construction module is used for constructing a one-to-one correspondence between a queue type and a corresponding Bean object according to the context information of the service application program so as to determine the Bean object corresponding to the queue type based on the queue type of the target service request.
In an optional embodiment of the disclosure, the construction module is configured to search whether a dynamic annotation exists in the determined Bean object according to the context information of the service application program, and if the dynamic annotation exists, construct a queue type associated with the Bean object and a one-to-one object relationship of the corresponding Bean object.
In an optional embodiment of the present disclosure, the sending module 505 is configured to send the response result after the conversion operation to the client through WebSocket communication protocol or callback mode.
In an optional embodiment of the disclosure, the apparatus may further include a display module, where the display module is configured to visually display the ordered set queue of the Redis cache.
In an optional embodiment of the disclosure, the apparatus may further include an adjustment module, where the adjustment module is configured to adjust a processing order of each service request in the ordered set queue in response to an editing operation for each service request in the ordered set queue of the Redis cache.
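One possible reordering operation is sketched below with a plain list standing in for the queue; the method name is an assumption. In Redis itself, reordering a sorted-set member would be done by updating its score.

```java
import java.util.*;

// Illustrative sketch of adjusting the processing order in response to an
// editing operation: the edited request is moved to the front of the queue.
class QueueReorder {
    public static List<String> moveToFront(List<String> queue, String request) {
        List<String> reordered = new ArrayList<>(queue);
        if (reordered.remove(request)) { // take the edited request out of place
            reordered.add(0, request);   // and schedule it to be processed first
        }
        return reordered;
    }
}
```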
In an optional embodiment of the disclosure, the apparatus may further include a request deletion module configured to delete the target service request from the ordered set queue of the Redis cache.
The service request processing device 500 provided in the embodiments of the present disclosure may execute the technical solution of the service request processing method in any of the foregoing embodiments; its implementation principle and beneficial effects are similar to those of the method and are not described again here.
In an exemplary embodiment of the present disclosure, a computer-readable storage medium having stored thereon a program product capable of implementing the method described above in the present specification is also provided. In some possible embodiments, the aspects of the invention may also be implemented in the form of a program product comprising program code for causing a terminal device to carry out the steps according to the various exemplary embodiments of the invention as described in the "exemplary method" section of this specification, when the program product is run on the terminal device.
A program product for implementing the above method according to an embodiment of the present invention may employ a portable compact disc read-only memory (CD-ROM), include program code, and run on a terminal device such as a personal computer. However, the program product of the present invention is not limited thereto; in this document, a readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of a readable storage medium include an electrical connection having one or more wires, a portable disk, a hard disk, a random access Memory (Random Access Memory, RAM), a Read-Only Memory (ROM), an erasable programmable Read-Only Memory (Erasable Programmable Read-Only Memory, EPROM or flash Memory), an optical fiber, a portable compact disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the preceding.
The computer readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, radio Frequency (RF), etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of remote computing devices, the remote computing device may be connected to the user computing device through any kind of network, including a local area network (Local Area Network, LAN) or wide area network (Wide Area Network, WAN), or may be connected to an external computing device (e.g., connected through the internet using an internet service provider).
In an exemplary embodiment of the present disclosure, an electronic device capable of implementing the above method is also provided.
Those skilled in the art will appreciate that the various aspects of the invention may be implemented as a system, method, or program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, micro-code, etc.), or an embodiment combining hardware and software aspects, which may be referred to herein collectively as a "circuit," "module," or "system."
An electronic device 600 according to this embodiment of the invention is described below with reference to fig. 6. The electronic device 600 shown in fig. 6 is merely an example, and should not be construed as limiting the functionality and scope of use of embodiments of the present invention.
As shown in fig. 6, the electronic device 600 is in the form of a general purpose computing device. The components of the electronic device 600 may include, but are not limited to: at least one processing unit 610, at least one storage unit 620, a bus 630 connecting the different system components (including the storage unit 620 and the processing unit 610), and a display unit 640.
Wherein the storage unit stores program code that is executable by the processing unit 610 such that the processing unit 610 performs steps according to various exemplary embodiments of the present invention described in the above-described "exemplary methods" section of the present specification. For example, the processing unit 610 may perform steps S201 to S205 as shown in fig. 2.
The storage unit 620 may include readable media in the form of volatile storage units, such as Random Access Memory (RAM) 6201 and/or cache memory unit 6202, and may further include Read Only Memory (ROM) 6203.
The storage unit 620 may also include a program/utility 6204 having a set (at least one) of program modules 6205, such program modules 6205 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
Bus 630 may represent one or more of several types of bus structures, including a storage unit bus or storage unit controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures.
The electronic device 600 may also communicate with one or more external devices 1000 (e.g., keyboard, pointing device, Bluetooth device, etc.), one or more devices that enable a user to interact with the electronic device 600, and/or any device (e.g., router, modem, etc.) that enables the electronic device 600 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 650. Also, the electronic device 600 may communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet, through the network adapter 660. As shown, the network adapter 660 communicates with other modules of the electronic device 600 over the bus 630. It should be appreciated that although not shown, other hardware and/or software modules may be used in connection with the electronic device 600, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, Redundant Arrays of Independent Disks (RAID) systems, tape drives, and data backup storage systems, among others.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or may be implemented in software in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (may be a CD-ROM, a U-disk, a mobile hard disk, etc.) or on a network, including several instructions to cause a computing device (may be a personal computer, a server, a terminal device, or a network device, etc.) to perform the method according to the embodiments of the present disclosure.
Furthermore, the above-described drawings are only schematic illustrations of processes included in the method according to the exemplary embodiment of the present invention, and are not intended to be limiting. It will be readily appreciated that the processes shown in the above figures do not indicate or limit the temporal order of these processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, for example, among a plurality of modules.
It should be noted that although in the above detailed description several modules or units of a device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit in accordance with embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (13)

1. A service request processing method, comprising: receiving a plurality of service requests from a client, and adding the plurality of service requests to an ordered set queue of a Redis cache; obtaining a target service request from the ordered set queue of the Redis cache through a unified capability interface, and determining a queue type of the target service request and a Bean object corresponding to the queue type, wherein queue types of service requests and Bean objects are in one-to-one correspondence; obtaining a pre-processor of the Bean object from a pre-configured Bean registry, so as to determine parameters before service invocation through the pre-processor; performing a service call using the parameters before service invocation and a unified exception handling mode in the Bean object to process the target service request and obtain a response result, obtaining a post-processor of the Bean object from the pre-configured Bean registry, and performing a conversion operation on the response result through the post-processor; and sending the response result after the conversion operation to the client through the unified capability interface.

2. The service request processing method according to claim 1, wherein before obtaining the pre-processor of the Bean object from the pre-configured Bean registry, the method further comprises: registering Bean objects in advance through an abstract factory pattern to form the Bean registry.

3. The service request processing method according to claim 1, wherein each service request at least includes queue information to be inserted, and adding the plurality of service requests to the ordered set queue of the Redis cache comprises: searching, according to the queue information to be inserted corresponding to each service request, a target queue to be inserted corresponding to each service request from the ordered set queue of the Redis cache; obtaining a remaining queue length of the target queue to be inserted; and if the queue length occupied by each service request is less than or equal to the remaining queue length of the target queue to be inserted, adding the plurality of service requests to the target queue to be inserted of the Redis cache.

4. The service request processing method according to claim 1 or 3, wherein adding the plurality of service requests to the ordered set queue of the Redis cache comprises: generating an index value corresponding to each service request according to a universally unique identifier (UUID); and adding the index value and each service request to the ordered set queue of the Redis cache in the storage form of a key-value pair.

5. The service request processing method according to claim 1, wherein before determining, according to the target service request, the queue type of the target service request and the Bean object corresponding to the queue type, the method comprises: converting the target service request into context information of a service application program; and constructing, according to the context information of the service application program, a one-to-one correspondence between queue types and corresponding Bean objects, so as to determine the Bean object corresponding to the queue type based on the queue type of the target service request.

6. The service request processing method according to claim 5, wherein constructing, according to the context information of the service application program, the one-to-one correspondence between queue types and corresponding Bean objects comprises: searching, according to the context information of the service application program, whether a dynamic annotation exists in a determined Bean object; and if the dynamic annotation exists, constructing a one-to-one correspondence between the queue type associated with the Bean object and the corresponding Bean object.

7. The service request processing method according to claim 1, wherein sending the response result after the conversion operation to the client through the unified capability interface comprises: sending the response result after the conversion operation to the client through a WebSocket communication protocol or a callback mode.

8. The service request processing method according to claim 1, further comprising: visually displaying the ordered set queue of the Redis cache.

9. The service request processing method according to claim 1, further comprising: in response to an editing operation for each service request in the ordered set queue of the Redis cache, adjusting a processing order of each service request in the ordered set queue.

10. The service request processing method according to claim 1, wherein after the target service request is processed, the method further comprises: deleting the target service request from the ordered set queue of the Redis cache.

11. A service request processing apparatus, comprising: a request receiving module, configured to receive a plurality of service requests from a client and add the plurality of service requests to an ordered set queue of a Redis cache; an object determining module, configured to obtain a target service request from the ordered set queue of the Redis cache through a unified capability interface, and determine, according to the target service request, a queue type of the target service request and a Bean object corresponding to the queue type, wherein queue types of service requests and Bean objects are in one-to-one correspondence; a processor obtaining module, configured to obtain a pre-processor of the Bean object from a pre-configured Bean registry, so as to determine parameters before service invocation through the pre-processor; a calling module, configured to perform a service call using the parameters before service invocation and a unified exception handling mode in the Bean object to process the target service request and obtain a response result, and obtain a post-processor of the Bean object from the pre-configured Bean registry, so as to perform a conversion operation on the response result through the post-processor; and a sending module, configured to send the response result after the conversion operation to the client through the unified capability interface.

12. A computer-readable storage medium having a computer program stored thereon, wherein when the computer program is executed by a processor, the service request processing method according to any one of claims 1 to 10 is implemented.

13. An electronic device, comprising: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform, by executing the executable instructions, the service request processing method according to any one of claims 1 to 10.
CN202410478212.4A 2024-04-18 2024-04-18 Service request processing method and device, storage medium and electronic equipment Pending CN120835096A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410478212.4A CN120835096A (en) 2024-04-18 2024-04-18 Service request processing method and device, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN120835096A true CN120835096A (en) 2025-10-24

Family

ID=97399743

Country Status (1)

Country Link
CN (1) CN120835096A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination