
CN113641410B - A processing method and system for a high-performance gateway system based on Netty - Google Patents


Info

Publication number
CN113641410B
Authority
CN
China
Prior art keywords
task
executed
waiting
data
thread
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110630084.7A
Other languages
Chinese (zh)
Other versions
CN113641410A (en)
Inventor
李怀根
丘佳成
吴亮
温祖辉
连宾雄
李行龙
吴浔
黄翠仪
王旭
周宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Guangfa Bank Co Ltd
Original Assignee
China Guangfa Bank Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Guangfa Bank Co Ltd filed Critical China Guangfa Bank Co Ltd
Priority to CN202110630084.7A priority Critical patent/CN113641410B/en
Publication of CN113641410A publication Critical patent/CN113641410A/en
Application granted granted Critical
Publication of CN113641410B publication Critical patent/CN113641410B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/4401Bootstrapping
    • G06F9/4411Configuring for operating with peripheral devices; Loading of device drivers
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/5038Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/66Arrangements for connecting between networks having differing types of switching systems, e.g. gateways

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer And Data Communications (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention adopts a fully asynchronous multi-task processing model: when a task encounters a time-consuming operation such as IO during execution, it waits asynchronously, so the worker thread is not blocked while waiting and can execute other tasks in the meantime. Configuration information is loaded dynamically through a three-level cache with a lazy-loading strategy, so configuration can be modified at any time and takes effect immediately; configuration is loaded only when needed rather than at system startup, which reduces startup risk and lets the system focus on executing busy tasks. The Pipeline-Filter task processing mode gives a linear execution flow that matches developers' habits of thought: a developer only needs to implement different Filters and combine them with a Pipeline to realize a business function, which lowers development difficulty. A three-level exception fallback mechanism provides a better experience for the requesting party and guarantees system stability: an exception raised by an individual task does not affect other tasks.

Description

Processing method and system of high-performance gateway system based on Netty
Technical Field
The invention relates to the technical field of computers and networks, in particular to a Netty-based high-performance gateway system processing method and system.
Background
An API gateway is an API-oriented, serialized, centralized, strongly managed service that appears at the system boundary, i.e. the boundary of the enterprise IT system. API gateways existed before the microservice concept became popular; their main application scenario was OpenAPI, that is, open platforms facing an enterprise's external partners. After the microservice concept became popular, the API gateway became a standard component of the upper application layer.
The API gateway can be used to solve the following problems:
1) Microservices typically provide APIs at a granularity different from what clients need, so a client typically has to interact with multiple services.
2) Different clients require different data, and different types of client networks perform differently.
3) The partitioning of services may change over time, so these details need to be hidden from clients.
The main deployment positions of an API gateway are:
1) Facing Web Apps: this scenario physically resembles front-end/back-end separation; the gateway here is not a full-function web application but one customized for specific scenarios.
2) Facing Mobile Apps: the mobile app is a consumer of back-end services, and the API gateway here needs to take on part of the role of Mobile Device Management (MDM).
3) Facing Partner OpenAPI: this scenario mainly serves the external opening of business capabilities in order to build an ecosystem with an enterprise's external partners; here the API gateway needs a series of additional security controls such as quotas, flow control, and tokens.
4) Facing Partner ExternalAPI: as the internet business model gradually influences traditional enterprises, many systems rely on the capabilities of external partners, such as logging in with partner accounts or paying through third-party payment platforms, in order to bring in traffic or content; these are external capabilities from the enterprise's point of view. Here the API gateway needs to provide unified authentication, authorization, and access control so that services inside the enterprise can call external APIs in a unified way.
The API gateway system in the prior art is implemented with Zuul as the technical prototype, based on a Filter mechanism and the PRPE (PRE-ROUTING-POST-ERROR) model. Architecturally, pluggable management of business functions is achieved through a chain-of-responsibility mechanism (FilterChain) and the Java SPI mechanism, and service/configuration information is managed through a Registry module.
The data involved in the prior-art API gateway mainly comprises basic configuration information and service configuration information: the basic configuration information is stored in a local file (in Hengfeng Bank's implementation), and changes to service configuration information are delivered by subscription notifications through a Zookeeper registry.
The key technology of the gateway system in the prior art is realized as follows:
1) Platform independence: an extension loading mechanism for Filters and other extension points is realized through the SPI mechanism, and some business functions are implemented with third-party utility classes. Beyond that, the system is independent of other third-party platforms or frameworks.
2) The Filter-PRPE mechanism is an improvement on Zuul: the init method of each Filter is called in turn when the gateway starts, so the data needed at runtime is fetched from the registry and cached in memory, improving runtime efficiency. The interface class defines doPre and doPost methods to realize a bidirectional Filter mechanism: after the doPre methods of the Filters are executed in order through the FilterChain, the Filters that override doPost are executed in reverse order, which suits scenarios where resources acquired in doPre must be released in doPost.
3) Service/configuration data is managed dynamically: the gateway does not depend on persistent data in a database, but manages data through a decentralized registry. Initial business data is maintained at the management end and subscribed to by the gateway at startup; when data in the registry is updated, the gateway receives a notification in time and updates its cache.
4) A request filtering mechanism performs basic checks or processing on request information, providing message conversion, message parsing, black/white-list checking, and request parameter validation. Each function can be started or stopped dynamically on demand and supports dynamic extension.
5) A multidimensional dynamic routing mechanism dispatches a transaction to the corresponding back-end system according to parameters in the request message, providing rule parsing/checking, external service invocation, and related functions.
6) A service degradation/circuit-breaking mechanism is realized by introducing Netflix's Hystrix and its resource isolation mechanism. It prevents a single dependency from exhausting all user threads in a container, reduces system load by failing fast and shedding requests that cannot be processed in time, provides failure rollback so that failures can be made transparent to users when necessary, and uses isolation to reduce the impact of a dependent service on the whole system.
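The bidirectional Filter mechanism described in item 2) can be sketched in a few lines of Java. This is a minimal illustration under the assumptions stated in the comments, not Zuul's or the gateway's actual classes; the `Filter` interface and all names are invented for the example:

```java
import java.util.ArrayList;
import java.util.List;

public class FilterChainSketch {

    // Hypothetical filter interface: doPre runs in registration order,
    // doPost (optional) runs in reverse order, so resources acquired in
    // doPre can be released in doPost.
    interface Filter {
        void doPre(List<String> trace);
        default void doPost(List<String> trace) {}
    }

    public static List<String> run(List<Filter> chain) {
        List<String> trace = new ArrayList<>();
        for (Filter f : chain) f.doPre(trace);        // forward pass
        for (int i = chain.size() - 1; i >= 0; i--)   // reverse pass
            chain.get(i).doPost(trace);
        return trace;
    }

    public static List<String> demo() {
        Filter auth = new Filter() {
            public void doPre(List<String> t)  { t.add("auth.pre"); }
            public void doPost(List<String> t) { t.add("auth.post"); }
        };
        Filter route = new Filter() {   // does not override doPost
            public void doPre(List<String> t)  { t.add("route.pre"); }
        };
        return run(List.of(auth, route));
    }

    public static void main(String[] args) {
        System.out.println(demo());  // [auth.pre, route.pre, auth.post]
    }
}
```

Note how only the filter that overrides doPost appears in the reverse pass, matching the "reserve in doPre, release in doPost" scenario.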
However, the prior art has the following technical problems:
1) The Zuul gateway framework used by Hengfeng Bank does not perform very well overall.
2) With the Filter preloading mode, as more Filters are added later, the content to be preloaded at system startup grows, and startup becomes slower and slower.
3) Configuration information is realized through local files and a registry: initial business data is subscribed to when the gateway starts, and the gateway is notified of updates. This mode requires subscribing to a large amount of business data at startup, which slows down system startup.
4) The service degradation/circuit-breaking mechanism is Netflix's Hystrix, whose rate-limiting strategy leaves room for optimization. Rate limiting is the most basic function of a gateway system, so optimizing the rate-limiting strategy can better guarantee the stability of the gateway system.
Disclosure of Invention
The invention provides a processing method and system for a Netty-based high-performance gateway system, achieving thorough platform independence and fully exploiting the performance of the gateway system. Because the transaction frequencies of the systems the gateway connects to are uneven, and many systems go without transactions for long periods, the gateway loads configuration lazily, i.e. on demand rather than all at once at startup, so the system can concentrate on processing busy tasks. To improve runtime efficiency, a three-level cache strategy is adopted: frequently used configuration information is kept in memory, exploiting the locality principle of programs. To improve the stability of the gateway system and ensure that the gateway does not fail under huge request volumes, flow control is optimized with Alibaba's Sentinel distributed flow-control component, and a three-level exception fallback strategy ensures that every external system's request receives a response and that tasks do not affect one another. To let developers quickly understand and implement different business functions, the invention processes tasks in a Pipeline-Filter mode, whose linear processing flow is easier to understand.
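The lazy-loading idea behind the three-level cache can be sketched as follows, assuming (for illustration only) that the three levels are an in-memory map, a local store, and a remote source such as a database or registry; the class and method names are not the patent's:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Function;

public class TieredConfigCache {
    private final Map<String, String> memory = new ConcurrentHashMap<>(); // level 1: in-memory
    private final Map<String, String> local  = new ConcurrentHashMap<>(); // level 2: stand-in for a local store
    private final Function<String, String> remote;                        // level 3: remote source loader

    TieredConfigCache(Function<String, String> remote) { this.remote = remote; }

    // Lazy loading: nothing is fetched at startup; the first get() for a
    // key walks down the levels and back-fills the faster ones.
    public String get(String key) {
        String v = memory.get(key);
        if (v != null) return v;
        v = local.get(key);
        if (v == null) {
            v = remote.apply(key);   // loaded only on first use
            local.put(key, v);
        }
        memory.put(key, v);
        return v;
    }

    // Called when configuration changes, so the next get() reloads it
    // and the change takes effect immediately.
    public void invalidate(String key) {
        memory.remove(key);
        local.remove(key);
    }

    public static final AtomicInteger remoteHits = new AtomicInteger();

    public static String demo() {
        TieredConfigCache cache = new TieredConfigCache(
            key -> { remoteHits.incrementAndGet(); return "value-of-" + key; });
        cache.get("route.systemA");         // first read goes to the remote source
        return cache.get("route.systemA");  // second read served from memory
    }

    public static void main(String[] args) {
        System.out.println(demo() + " (remote hits: " + remoteHits.get() + ")");
    }
}
```

The remote loader is hit exactly once per key until invalidation, which is what lets the gateway start without loading any configuration.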
The first aspect of the present invention provides a method for processing a Netty-based high performance gateway system, including:
Receiving a connection request sent by a client, establishing a data transmission channel, invoking the Netty Server processor to process the connection request to obtain a data channel and request data, and packaging the data channel and the request data into a task placed into a to-be-executed queue;
and polling task states in the to-be-executed queue and running the tasks whose state is to-be-executed.
Further, before the polling of task states in the to-be-executed queue and the running of tasks whose state is to-be-executed, the method includes:
judging whether an idle thread exists, and if so, putting the to-be-executed task into the idle thread for execution.
Further, after the determining whether there is an idle thread, the method further includes:
and polling task states in the waiting queue; if a task in the priority-processing state exists, moving the task from the waiting queue into the to-be-executed queue and executing it preferentially, wherein the priority-processing state covers tasks waiting for time-consuming operations such as asynchronous IO.
Further, before the determining whether there is an idle thread, the method further includes:
receiving a network request, packaging the network request into a to-be-executed task, and placing the to-be-executed task into the to-be-executed queue;
the polling of task states in the to-be-executed queue and the running of tasks whose state is to-be-executed further comprises:
when an asynchronous operation finishes, synchronously updating the task state to to-be-executed, moving the task from the waiting queue into the to-be-executed queue, and executing the task when an idle thread is available.
Further, invoking an execution thread to run the task waiting to run includes:
when an execution thread runs the task waiting to run and an exception occurs, handling the exception through a three-level exception fallback mechanism, which comprises:
when an execution thread runs the task waiting to run and an exception occurs, handling the exception through the second-level exception pipeline inside the execution thread;
if an exception also occurs while the second-level exception pipeline is handling it, marking the currently executing task as being in an error state;
and processing tasks marked as being in an error state through the exception pipeline.
The second aspect of the present invention also provides a processing system of a Netty-based high performance gateway system, including:
The task receiving module is used for receiving a connection request sent by a client, establishing a data transmission channel, invoking the Netty Server processor to process the connection request to obtain a data channel and request data, packaging the data channel and the request data into a task, and placing the task into a to-be-executed queue;
and the task execution module is used for polling task states in the to-be-executed queue and running the tasks whose state is to-be-executed.
Further, the processing system of the Netty-based high-performance gateway system further comprises:
and the thread polling module is used for judging whether an idle thread exists, and if so, putting the to-be-executed task into the idle thread for execution.
Further, the processing system of the Netty-based high-performance gateway system further comprises:
and the priority processing module is used for polling task states in the waiting queue; if a task in the priority-processing state exists, it moves the task from the waiting queue into the to-be-executed queue and executes it preferentially, wherein the priority-processing state covers tasks waiting for time-consuming operations such as asynchronous IO.
Further, the processing system of the Netty-based high-performance gateway system further comprises:
the network request module is used for receiving a network request, packaging the network request into a to-be-executed task, and placing the to-be-executed task into the to-be-executed queue;
and the asynchronous operation module is used for invoking an execution thread to run the task waiting to run; if a node needs to perform an asynchronous operation, the execution thread first initiates the asynchronous operation, updates the task state to waiting, and puts the task into the waiting queue; when the asynchronous operation finishes, the task state is synchronously updated to to-be-executed, the task is moved from the waiting queue into the to-be-executed queue, and it is executed when an idle thread is available.
Further, the asynchronous operation module is further configured to:
when an execution thread runs the task waiting to run and an exception occurs, handling the exception through a three-level exception fallback mechanism, which comprises:
when an execution thread runs the task waiting to run and an exception occurs, handling the exception through the second-level exception pipeline inside the execution thread;
if an exception also occurs while the second-level exception pipeline is handling it, marking the currently executing task as being in an error state;
and processing tasks marked as being in an error state through the exception pipeline.
Compared with the prior art, the embodiments of the invention have the following beneficial effects:
The invention provides a processing method and system for a Netty-based high-performance gateway system. The method comprises: receiving a connection request sent by a client, establishing a data transmission channel, invoking the Netty Server processor to process the connection request to obtain a data channel and request data, packaging the data channel and the request data into a task and placing it into a to-be-executed queue; and polling task states in the to-be-executed queue and running the tasks whose state is to-be-executed. The invention adopts a fully asynchronous multi-task processing model: when a task encounters a time-consuming operation such as IO during execution, it waits asynchronously, so the worker thread is not blocked while waiting and can execute other tasks. Configuration information is loaded dynamically through a three-level cache with a lazy-loading strategy, so configuration can be modified at any time and takes effect immediately; configuration is loaded only when needed rather than at system startup, which reduces startup risk and lets the system focus on executing busy tasks. The local cache among the three cache levels exploits the program-locality principle well, fully bringing out system performance. The Pipeline-Filter task processing mode gives a linear execution flow that matches developers' habits of thought: a developer only needs to implement different Filters and combine them with a Pipeline to realize a business function, which lowers development difficulty.
The three-level exception fallback mechanism provides a better experience for the requester and avoids the requester timing out or receiving nothing when an exception occurs; the fallback mechanism guarantees system stability, and an exception in an individual task does not affect other tasks.
Drawings
In order to more clearly illustrate the technical solutions of the present invention, the drawings that are needed in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a method for processing a Netty-based high performance gateway system according to an embodiment of the present invention;
FIG. 2 is a flow chart of a method for processing a Netty-based high performance gateway system according to another embodiment of the present invention;
FIG. 3 is a flow chart of a method for processing a Netty-based high performance gateway system according to another embodiment of the present invention;
FIG. 4 is a flow chart of a method for processing a Netty-based high performance gateway system according to another embodiment of the present invention;
FIG. 5 is a flow chart of a method for processing a Netty-based high performance gateway system according to yet another embodiment of the present invention;
FIG. 6 is a schematic diagram of a full asynchronous gateway mode provided by an embodiment of the present invention;
FIG. 7 is a schematic diagram of a multi-tasking handoff provided by an embodiment of the present invention;
FIG. 8 is a schematic diagram of a gateway working model according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of a single task process provided by an embodiment of the present invention;
FIG. 10 is a schematic diagram of a multi-level cache provided in accordance with an embodiment of the present invention;
FIG. 11 is a flow chart of a multi-level cache provided in another embodiment of the present invention;
FIG. 12 is a flow chart of a multi-level cache provided in accordance with yet another embodiment of the present invention;
FIG. 13 is a device diagram of a processing system of a Netty-based high performance gateway system in accordance with one embodiment of the present invention;
FIG. 14 is a device diagram of a processing system of a Netty-based high performance gateway system according to another embodiment of the present invention;
FIG. 15 is a device diagram of a processing system of a Netty-based high performance gateway system according to another embodiment of the present invention;
FIG. 16 is a device diagram of a processing system of a Netty-based high performance gateway system provided in accordance with yet another embodiment of the present invention;
Fig. 17 is a block diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be understood that the step numbers used herein are for convenience of description only and are not limiting as to the order in which the steps are performed.
It is to be understood that the terminology used in the description of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
The terms "comprises" and "comprising" indicate the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
The term "and/or" refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
In a first aspect.
Referring to fig. 1, an embodiment of the present invention provides a method for processing a Netty-based high performance gateway system, including:
S10, receiving a connection request sent by a client, establishing a data transmission channel, invoking the Netty Server processor to process the connection request to obtain a data channel and request data, packaging the data channel and the request data into a task, and placing the task into the to-be-executed queue.
S30, polling task states in the to-be-executed queue and running the tasks whose state is to-be-executed.
Referring to fig. 2, in a specific embodiment, the method further includes, before the step S30:
S20, judging whether an idle thread exists, and if so, putting the to-be-executed task into the idle thread for execution.
Referring to fig. 3, in another embodiment, after the step S20, the method further includes:
S21, polling task states in the waiting queue; if a task in the priority-processing state exists, moving the task from the waiting queue into the to-be-executed queue and executing it preferentially, wherein the priority-processing state covers tasks waiting for time-consuming operations such as asynchronous IO.
Referring to fig. 4, in another embodiment, before the step S20, the method further includes:
S11, receiving a network request, packaging the network request into a to-be-executed task, and placing the to-be-executed task into the to-be-executed queue.
After the step S20, the method further includes:
S40, when an execution thread runs the task waiting to run and a node needs to perform an asynchronous operation, the execution thread first initiates the asynchronous operation, updates the task state to waiting, and puts the task into the waiting queue; when the asynchronous operation finishes, the task state is synchronously updated to to-be-executed, the task is moved from the waiting queue into the to-be-executed queue, and it waits for an idle thread to execute it.
In another specific embodiment, after the step S40, the method further includes:
S50, when an execution thread runs the task waiting to run and an exception occurs, the exception is handled through a three-level exception fallback mechanism.
Specifically, the step S50 includes:
S51, when an execution thread runs the task waiting to run and an exception occurs, the exception is handled through the second-level exception pipeline inside the execution thread.
S52, if an exception also occurs while the second-level exception pipeline is handling it, the currently executing task is marked as being in an error state.
S53, tasks marked as being in an error state are processed through the exception pipeline.
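The three fallback levels of steps S51-S53 reduce to nested exception handling; the sketch below is an illustrative reduction with hypothetical names and return values, not the gateway's actual implementation:

```java
public class ExceptionFallbackSketch {

    // Level 1: the task's normal pipeline. Level 2: a second-level
    // exception pipeline inside the worker thread. Level 3: the task is
    // marked as an error and a dedicated error pipeline builds a
    // guaranteed response, so the caller never times out with nothing.
    public static String process(Runnable pipeline, Runnable exceptionPipeline) {
        try {
            pipeline.run();                      // level 1: normal processing
            return "OK";
        } catch (RuntimeException first) {
            try {
                exceptionPipeline.run();         // level 2: exception pipeline
                return "HANDLED:" + first.getMessage();
            } catch (RuntimeException second) {
                // level 3: fallback of last resort (error pipeline)
                return "ERROR:guaranteed-response";
            }
        }
    }

    public static void main(String[] args) {
        Runnable ok   = () -> {};
        Runnable boom = () -> { throw new RuntimeException("io-failed"); };
        System.out.println(process(ok, ok));     // OK
        System.out.println(process(boom, ok));   // HANDLED:io-failed
        System.out.println(process(boom, boom)); // ERROR:guaranteed-response
    }
}
```

Whatever fails, the method returns some response, which is the "guaranteed response for every request" property the patent claims for the fallback mechanism.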
An embodiment of the present invention provides a method for processing a high performance gateway system based on Netty, including:
1. A fully asynchronous gateway implemented on Netty.
Referring to fig. 6, the I/O threads are separated from the business processing threads, so the I/O threads are never occupied by time-consuming call requests, which improves throughput. Calls between the I/O threads and the business processing threads of the back-end service system use asynchronous queue events: a request is put directly into a queue, and when the corresponding event occurs, a callback function is triggered to process it. This allows more requests to be received with fewer threads, reducing thread context-switching overhead, and essentially eliminates multithreaded blocking.
In the fully asynchronous gateway mode, the number of threads is no longer a bottleneck of the gateway system, and a slow API cannot destabilize the gateway; combined with thread-pool techniques, its impact is effectively isolated.
Explanation of fig. 6:
First, when the server Acceptor receives a connection request from a client, it takes a thread from the Reactor thread pool, establishes a data transmission channel, and invokes the Netty Server processor to process the connection request.
The Netty Server processor processes the message sent by the client, then calls the Netty Client processor through the Netty Client scheduler to send the processed message to other application systems.
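The separation of I/O threads from business threads can be illustrated with only the JDK standard library, standing in for Netty; the pool sizes, method names, and response format below are assumptions for the sketch:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsyncHandoffSketch {
    // Business thread pool, kept separate from the (Netty) I/O threads.
    static final ExecutorService workers = Executors.newFixedThreadPool(4);

    // Would be called on an I/O thread: it only enqueues the request and
    // returns immediately, so a slow back-end call never occupies the
    // I/O thread; the response arrives via a callback on the future.
    public static CompletableFuture<String> handle(String request) {
        return CompletableFuture.supplyAsync(
            () -> "response:" + request,   // time-consuming business work goes here
            workers);
    }

    public static void main(String[] args) {
        // The caller (the "I/O thread") registers a callback and moves on.
        handle("ping").thenAccept(System.out::println);
        workers.shutdown();
    }
}
```

In the real gateway Netty's event loop plays the role of the caller, and the queue dispatcher described below replaces the simple `supplyAsync` handoff.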
2. And (5) multitasking switching.
Referring to fig. 7, to ensure that the gateway experiences no thread blocking during fully asynchronous task processing, the following poller cooperates with a high-performance queue dispatcher to complete multi-task switching. When a task blocks on a time-consuming operation such as IO, its thread is released immediately to process other tasks; after the time-consuming operation completes, the task occupies a thread again to continue processing.
Explanation of the flow in fig. 7:
1) When a network request arrives at the gateway, a data Channel and request data Message are obtained after processing by the NETTY SERVER processor, then the processor generates a Task, places Channel, message and the Pipeline name for processing the Task into the Task, and places the Task into TaskQueue.
2) EventLooper polls TaskQueue if there is a task in STANDBY state, and if so, pushes the task to TaskWorker for execution.
3) While a TaskWorker executes a task, conditions such as waiting for an IO event or being rate-limited may arise. When such a condition is encountered, the task state is set to WAITING, the current execution position is recorded, the task is placed back into the TaskQueue, and the worker thread is released.
4) The worker thread performs other tasks.
5) When execution reaches the EndPoint node, the network IO message must be forwarded to another application system with the gateway acting as a client. The Task is handed to the IOWorker for Netty to process, the task state is set to WAITING, the current execution position is recorded, the task is placed into the TaskQueue to await the Netty execution result, and the worker thread is released.
6) The worker thread performs other tasks.
7) Steps 2)-6) repeat. When the wait conditions of step 3) or step 5) are resolved, or the NettyClient obtains a back-end system response, the task state is set to STANDBY and the task waits for the poller of step 2) to push it to a worker thread for execution.
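The polling loop of steps 2)-7) can be compressed into a minimal single-file sketch. The state names follow the description above, but the Task, EventLooper, and TaskWorker roles are simplified stand-ins rather than the patent's actual implementation: a task that must wait records its resume position, goes WAITING and is re-queued, and once the simulated IO result arrives it returns to STANDBY and is pushed to a worker step again.

```java
import java.util.Queue;
import java.util.concurrent.*;

public class MultiTaskSwitchDemo {

    enum State { STANDBY, WAITING, DONE }

    static final class Task {
        volatile State state = State.STANDBY;
        int position = 0;       // recorded execution position for resumption
        String result;
    }

    // One "step" of a task: the first step must wait for IO, so it records
    // its position, goes WAITING, and releases the worker; the second step
    // (after the IO result arrives) completes the task.
    static void runStep(Task t, Queue<Task> taskQueue, ScheduledExecutorService io) {
        if (t.position == 0) {
            t.position = 1;                  // remember where to resume
            t.state = State.WAITING;         // worker thread is now free
            taskQueue.add(t);
            // Simulated IO completion flips the task back to STANDBY.
            io.schedule(() -> { t.state = State.STANDBY; }, 10, TimeUnit.MILLISECONDS);
        } else {
            t.result = "done";
            t.state = State.DONE;
        }
    }

    /** "EventLooper": poll the TaskQueue and push STANDBY tasks to a worker step. */
    public static String run() throws Exception {
        Queue<Task> taskQueue = new ConcurrentLinkedQueue<>();
        ScheduledExecutorService io = Executors.newSingleThreadScheduledExecutor();
        Task task = new Task();
        taskQueue.add(task);
        while (task.state != State.DONE) {
            Task t = taskQueue.poll();
            if (t == null) { Thread.sleep(1); continue; }
            if (t.state == State.STANDBY) runStep(t, taskQueue, io);
            else taskQueue.add(t);           // still WAITING: keep it queued
        }
        io.shutdown();
        return task.result;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run());
    }
}
```

The key property mirrored here is that no thread ever sleeps inside a task: the WAITING period costs only a queue slot, not a blocked worker.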
3. Gateway working model.
For a better understanding of the multi-task switching procedure described above, refer to the gateway working model diagram below.
When the gateway receives network requests over different protocols (Socket, HTTP, etc.), the requests are wrapped into Task objects and placed into the TaskQueue. A TaskWorker takes tasks from the TaskQueue and invokes the corresponding Pipeline to process each one.
If a TaskNode in the Pipeline needs to execute an asynchronous flow, it is split into two Actions: the first Action starts the asynchronous operation, and the second Action executes after the response of the asynchronous operation is obtained. After the first Action executes, the task state is set to WAITING, the current execution position is recorded, the task is placed into the TaskQueue, and the worker thread is released (shown by the Async arrow in the figure). After the asynchronous result is obtained and the poller pushes the task to a worker thread, execution resumes from the second Action, as shown in FIG. 8.
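A minimal sketch of the two-Action split, using CompletableFuture as a stand-in for the gateway's WAITING/STANDBY machinery (the class and method names are illustrative assumptions): Action one fires the asynchronous call without blocking a worker thread, and Action two is chained to run only once the result is available.

```java
import java.util.concurrent.*;

public class TwoActionNodeDemo {

    /** Action 1: fire the async operation; no worker thread blocks on it. */
    static CompletableFuture<String> actionOne(String request, Executor ioPool) {
        return CompletableFuture.supplyAsync(() -> "backend-response:" + request, ioPool);
    }

    /** Action 2: continue processing once the async result is available. */
    static String actionTwo(String asyncResult) {
        return asyncResult.toUpperCase();
    }

    public static String process(String request) throws Exception {
        ExecutorService ioPool = Executors.newSingleThreadExecutor();
        try {
            // thenApply chains Action 2 after Action 1's result, so no thread
            // sits blocked in between (the WAITING period of the text).
            return actionOne(request, ioPool).thenApply(TwoActionNodeDemo::actionTwo).get();
        } finally {
            ioPool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(process("ping"));   // BACKEND-RESPONSE:PING
    }
}
```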
4. Single-task processing.
As shown in FIG. 9, the gateway processes a single task through a Pipeline. Individual functions such as parameter verification, security authentication, black/white lists, rate-limiting control, message conversion, and message encryption/decryption are each packaged as a Filter; Filters can be ordered by configuration according to different industry standards or partner access requirements, and hot-plugging of Filter functions is supported. During a transaction, the corresponding Pipeline is obtained from the Pipeline pool according to the connected partner, and the Pipeline then applies different Filters to the transaction according to the functional requirements.
The TaskWorker executing the Task calls the Pipeline to execute different tasks, and each individual TaskNode functional node within it is executed through a Filter. For example, the first Filter performs parameter verification, the second performs security authentication, the third performs rate-limiting control, and so on.
Most basic functions of an API gateway, such as rate-limiting control, admission control, encryption/decryption, logging, exception handling, traffic interception, field mapping, message parsing, and dynamic routing, can be implemented through Filters.
These Pipelines and Filters are reusable, so the same Pipeline or Filter can serve repeated functional requirements, avoiding the development of duplicate code blocks for the same function.
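The Pipeline-Filter organization can be sketched as follows. The Filter contents and the map-based message type are hypothetical stand-ins for the functions listed above; the point illustrated is that a Pipeline is just an ordered, reusable list of single-purpose Filters executed linearly.

```java
import java.util.*;
import java.util.function.UnaryOperator;

public class PipelineFilterDemo {

    // A Filter transforms (or validates) a message and passes it on.
    interface Filter extends UnaryOperator<Map<String, String>> {}

    static final class Pipeline {
        private final List<Filter> filters;
        Pipeline(List<Filter> filters) { this.filters = filters; }
        Map<String, String> process(Map<String, String> msg) {
            for (Filter f : filters) msg = f.apply(msg);   // unidirectional, linear
            return msg;
        }
    }

    // Two sample filters standing in for parameter verification and auth.
    static final Filter PARAM_CHECK = msg -> {
        if (!msg.containsKey("body")) throw new IllegalArgumentException("missing body");
        return msg;
    };
    static final Filter AUTH = msg -> {
        msg.put("authenticated", "true");
        return msg;
    };

    public static void main(String[] args) {
        // Pipelines are assembled per partner from the same reusable filters.
        Pipeline partnerA = new Pipeline(List.of(PARAM_CHECK, AUTH));
        Map<String, String> msg = new HashMap<>(Map.of("body", "hello"));
        System.out.println(partnerA.process(msg).get("authenticated"));   // true
    }
}
```

Hot-plugging in this scheme amounts to rebuilding the Filter list from configuration without touching the Filters themselves.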
5. Pipeline exception handling model.
A Pipeline is the processing flow a Task normally executes, but abnormal situations are unavoidable, and to keep the system stable these exceptions must be handled. The invention adopts a three-level exception fallback mechanism. Each Pipeline has a corresponding second-level exception pipeline (ExceptionPipeline), and multiple Pipelines can share the same ExceptionPipeline. When an expected error occurs in a Filter, the worker thread sets the task state to the ERROR state and places the task into the TaskQueue; when the poller polls an exception task, it invokes the exception pipeline configured at Pipeline initialization to execute the corresponding task. The third level is the DefaultExceptionPipeline: if an exception occurs while the exception pipeline itself executes, processing enters the DefaultExceptionPipeline, which performs the final exception fallback and returns an HTTP message with status code 500 to the requester. The three-level exception fallback mechanism ensures stable operation of the server; tasks are isolated from one another, and an exception raised by one task does not propagate to affect the execution of other tasks.
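The three-level fallback can be sketched as nested handlers. This is a simplification under stated assumptions: here the levels are plain function calls rather than queued ERROR-state tasks, and the pipeline types are generic functions rather than the patent's classes.

```java
import java.util.function.Function;

public class ExceptionFallbackDemo {

    public static String handle(String request,
                                Function<String, String> pipeline,
                                Function<String, String> exceptionPipeline) {
        try {
            return pipeline.apply(request);                  // level 1: normal flow
        } catch (RuntimeException first) {
            try {
                // level 2: in the patent, the task is marked ERROR and the
                // poller routes it to the configured ExceptionPipeline.
                return exceptionPipeline.apply(request);
            } catch (RuntimeException second) {
                // level 3: DefaultExceptionPipeline, the final safety net.
                return "HTTP 500 Internal Server Error";
            }
        }
    }

    public static void main(String[] args) {
        Function<String, String> broken = r -> { throw new IllegalStateException("boom"); };
        Function<String, String> alsoBroken = r -> { throw new IllegalStateException("boom2"); };
        Function<String, String> friendly = r -> "HTTP 400 " + r;

        System.out.println(handle("bad-input", broken, friendly));
        System.out.println(handle("bad-input", broken, alsoBroken));
    }
}
```

Whatever fails, the requester always receives some response, which is exactly the guarantee the three levels exist to provide.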
6. Multi-level cache
As shown in FIG. 10, to improve the throughput performance of the gateway, the financial open platform uses a three-level cache mode so that the gateway does not interact directly with the database, which would cost performance.
As shown in FIG. 10, the three-level cache mode is divided into a first-level local cache, a second-level Redis cache-center cache, and third-level database persistent storage. The local cache mainly stores frequently used configuration or configuration loaded at system initialization; the Redis cache center stores the full configuration information of the database for other application servers to use; and the database stores persistent data. Only the capability center may access the database; other application servers cannot connect to the database directly and can only obtain database data by calling the capability-center interface or from the Redis cache center.
When configuration is added, modified, or deleted, the database record is updated first, then the Redis cache-center data is updated, and the gateway reads the data from Redis.
When the gateway requests configuration information, it preferentially queries the local cache and then queries the Redis cache center. If the Redis cache center has no data, the capability-center interface is called asynchronously; after the capability center queries the configuration information in the database, it first stores the configuration information into Redis and then returns it to the gateway, as shown in FIG. 11:
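The read-through order just described can be sketched as follows. Plain maps stand in for Redis and the database, and the capability-center call is shown synchronously for brevity although the text describes an asynchronous call; all names are illustrative.

```java
import java.util.*;
import java.util.function.Function;

public class ReadThroughDemo {

    public static String getConfig(String key,
                                   Map<String, String> localCache,
                                   Map<String, String> redis,
                                   Function<String, String> capabilityCenter) {
        String v = localCache.get(key);          // 1) local cache first
        if (v != null) return v;
        v = redis.get(key);                      // 2) then the Redis cache center
        if (v == null) {
            v = capabilityCenter.apply(key);     // 3) capability center queries the DB,
            redis.put(key, v);                   //    stores into Redis first...
        }
        localCache.put(key, v);                  // ...and the gateway caches locally
        return v;
    }

    public static void main(String[] args) {
        Map<String, String> db = Map.of("rate.limit", "100");   // stand-in database
        Map<String, String> redis = new HashMap<>();
        Map<String, String> local = new HashMap<>();

        System.out.println(getConfig("rate.limit", local, redis, db::get));
        // The miss populated both Redis and the local cache.
        System.out.println(redis.get("rate.limit") + " " + local.get("rate.limit"));
    }
}
```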
The local cache uses the high-performance queue Disruptor to manage local cache events and uses two local Java data structures to manage cached data: a LinkedHashMap object cacheStandby and a HashMap object cache. cacheStandby is mainly used to implement the LRU eviction strategy, while cache holds the current local cache data. The local cache query and update logic is as follows:
1) If the local cache is not enabled, Redis is queried directly;
2) If the cache entry does not exist or has expired (current time minus cache update time exceeds the cache expiration time), the data is obtained from Redis;
3) If condition 2) is not met, the data is obtained from the local cache;
4) Whichever of steps 2) and 3) applies, the local cache must then be updated;
5) If the record is not in the local cache, it is stored at the head of the cacheStandby linked list and also stored in the cache;
6) If the record exists in the local cache but the new and old values differ, the cache is updated with the new value, the old value is deleted from cacheStandby, the new value is inserted at the head of the linked list, and the record in the cache is updated;
7) If the record exists in the local cache and the new value equals the old value, the old node in the cacheStandby linked list is moved to the list head and the cache is left untouched;
8) If the cacheStandby data exceeds the maximum limit, the tail node of the linked list is deleted and the corresponding record in the cache is deleted synchronously;
9) Every 1024 milliseconds, the data in cacheStandby is re-synchronized into the cache.
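The two-structure scheme of steps 1)-9) can be sketched with the JDK's LinkedHashMap. Its access-order mode plus removeEldestEntry is used here as an idiomatic stand-in for the head-insertion and tail-eviction described above; this is a sketch, not the patent's implementation, and the periodic 1024 ms resynchronization of step 9) is omitted.

```java
import java.util.*;

public class LocalCacheDemo {

    private final HashMap<String, String> cache = new HashMap<>();   // lookup structure
    private final LinkedHashMap<String, String> cacheStandby;        // recency structure

    public LocalCacheDemo(int maxEntries) {
        // accessOrder=true moves a touched entry to the "head" (most recent).
        cacheStandby = new LinkedHashMap<>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
                if (size() > maxEntries) {
                    cache.remove(eldest.getKey());   // step 8: evict from both
                    return true;
                }
                return false;
            }
        };
    }

    /** Steps 5-7: insert, update on change, or just refresh recency. */
    public void update(String key, String value) {
        String old = cache.get(key);
        if (old == null || !old.equals(value)) {
            cacheStandby.put(key, value);            // steps 5/6: (re)insert at head
            cache.put(key, value);
        } else {
            cacheStandby.get(key);                   // step 7: touch to move to head
        }
    }

    public String get(String key) { return cache.get(key); }

    public static void main(String[] args) {
        LocalCacheDemo c = new LocalCacheDemo(2);
        c.update("a", "1");
        c.update("b", "2");
        c.update("a", "1");   // touch "a" so "b" is now least recent
        c.update("c", "3");   // evicts "b" from both structures
        System.out.println(c.get("b") + " " + c.get("a") + " " + c.get("c"));
    }
}
```

Keeping a separate HashMap for lookups while the LinkedHashMap tracks recency matches the document's division of labor between cache and cacheStandby.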
The invention has the following beneficial effects:
1. A fully asynchronous task processing model for multi-tasking. The prior art adopts a service degradation/circuit-breaking mechanism, introducing Netflix's Hystrix: multithreaded task processing is realized through resource isolation, blocked tasks fail fast rather than queue, and a failure-rollback function is provided. The invention adopts a self-developed fully asynchronous task processing model based on Netty: a blocked task waits, no system thread is occupied during the wait, and error information is returned after a timeout.
2. The lazy-loading mode of the three-level cache brings the system performance into full play. In the prior art, configuration information is saved in a registration center and local files and is initialized at system startup. The invention stores configuration information using a three-level caching strategy and a configuration-center mode; configuration need only be obtained from the Redis cache center and stored into the local cache.
3. The local caching strategy makes full use of the locality principle of programs and improves system efficiency. The invention designs the local cache policy by reference to the three-level cache of a CPU in computer architecture.
4. Pipeline-Filter task processing. The prior art adopts a Filter-PRPE mechanism: configuration is loaded at startup, each Filter has two methods, doPre and doPost, with doPre invoked first in the forward direction and doPost afterwards in the reverse direction, realizing a bidirectional Filter mechanism. The invention adopts the Pipeline-Filter mode: each task corresponds to one Pipeline, configuration information is loaded from the cache at run time, and a unidirectional linear execution mode is used.
5. Three-level exception fallback mechanism. The fallback exception handling ensures that no exceptional condition is missed, gives the requester a better experience, and avoids timeouts or missing responses caused by exceptions; the fallback mechanism keeps the system stable, and an exception raised by an individual task does not affect other tasks.
In a second aspect.
Referring to fig. 13-16, an embodiment of the present invention provides a processing system of a Netty-based high performance gateway system, including:
The task receiving module 10 is configured to receive a connection request sent by a client, establish a data transmission channel, call the NettyServer processor to process the connection request to obtain a data channel and request data, and package the data channel and the request data into a task placed in the queue to be executed.
The task execution module 30 is configured to poll the task states in the queue to be executed and run tasks whose state is to-be-executed.
In a specific embodiment, the system further comprises:
The thread polling module 20 is configured to determine whether an idle thread exists and, if so, place the task to be executed into the idle thread for execution.
In another specific embodiment, the system further comprises:
The priority processing module 40 is configured to poll the task states in the waiting queue and, if a task in the priority processing state exists, move that task from the waiting queue into the queue to be executed and execute it preferentially, where the priority processing state includes waiting for time-consuming operations such as asynchronous IO.
In another specific embodiment, the system further comprises:
The network request module 50 is configured to receive a network request, package the network request into a task to be executed, and place the task to be executed in a queue to be executed.
The asynchronous operation module 60 is configured to call an execution task thread to run a task waiting to run; if a node needs to execute an asynchronous operation, the execution task thread first starts the asynchronous operation, updates the task state to waiting, and places the task into the waiting queue; when the asynchronous operation finishes, the task state is synchronously updated to to-be-executed, and the task is moved from the waiting queue to the queue to be executed, where it is executed once an idle thread is available.
In another specific embodiment, the asynchronous operation module 60 is further configured to:
When an execution task thread is called to run a task waiting to run, if an abnormal condition occurs, the abnormal condition is handled by the three-level exception fallback mechanism, which comprises the following steps:
when an execution task thread is called to run a task waiting to run, if an abnormal condition occurs, the abnormal condition is handled through the second-level exception pipeline in the execution task thread;
if an abnormal condition occurs while the second-level exception pipeline is processing the abnormal condition, the currently executing task is marked as being in an error state;
tasks marked as being in the error state are processed through the default exception pipeline.
In a third aspect.
The present invention provides an electronic device including:
a processor, a memory, and a bus;
The bus is used for connecting the processor and the memory;
the memory is used for storing operation instructions;
The processor is configured to execute, by invoking the operation instructions, operations corresponding to the processing method of the Netty-based high performance gateway system according to the first aspect of the present application.
In an alternative embodiment, an electronic device is provided, as shown in FIG. 17, the electronic device 5000 shown in FIG. 17 comprising a processor 5001 and a memory 5003. The processor 5001 is coupled to the memory 5003, e.g., via bus 5002. Optionally, the electronic device 5000 may also include a transceiver 5004. It should be noted that, in practical applications, the transceiver 5004 is not limited to one, and the structure of the electronic device 5000 is not limited to the embodiment of the present application.
The processor 5001 may be a CPU, general-purpose processor, DSP, ASIC, FPGA or other programmable logic device, transistor logic device, hardware components, or any combination thereof. Which may implement or perform the various exemplary logic blocks, modules and circuits described in connection with this disclosure. The processor 5001 may also be a combination of computing functions, e.g., including one or more microprocessor combinations, a combination of a DSP and a microprocessor, etc.
Bus 5002 may include a path for transferring information between the aforementioned components. Bus 5002 may be a PCI bus or an EISA bus, among others. The bus 5002 may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in FIG. 17, but this does not mean there is only one bus or one type of bus.
The memory 5003 may be, but is not limited to, ROM or another type of static storage device capable of storing static information and instructions, RAM or another type of dynamic storage device, EEPROM, CD-ROM or other optical disc storage (including compact disc, laser disc, digital versatile disc, Blu-ray disc, etc.), magnetic disk storage or other magnetic storage devices, or any other medium that can carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
The memory 5003 is used for storing application program codes for implementing the inventive arrangements and is controlled to be executed by the processor 5001. The processor 5001 is operative to execute application code stored in the memory 5003 to implement what has been shown in any of the method embodiments described previously.
Electronic devices include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and vehicle-mounted terminals (e.g., car navigation terminals), and fixed terminals such as digital TVs and desktop computers.
In a fourth aspect.
The present application provides a computer readable storage medium, on which a computer program is stored, which when executed by a processor implements a method for processing a Netty-based high performance gateway system according to the first aspect of the present application.
Yet another embodiment of the present application provides a computer-readable storage medium having stored thereon a computer program which, when run on a computer, causes the computer to perform the corresponding ones of the foregoing method embodiments.

Claims (6)

1. A method for processing a Netty-based high performance gateway system, comprising:
receiving a connection request sent by a client, establishing a data transmission channel, calling a NettyServer processor to process the connection request to obtain a data channel and request data, and packaging the data channel and the request data into a task placed in a queue to be executed;
receiving a network request, packaging the network request into a task to be executed, and placing the task to be executed into a queue to be executed;
polling task states in the queue to be executed and running tasks whose state is to-be-executed;
when an execution task thread is called to run a task waiting to run, if a node needs to execute an asynchronous operation, the execution task thread first starts the asynchronous operation, updates the state of the waiting task to waiting, and places it into a waiting queue; when the asynchronous operation finishes, the task state is synchronously updated to to-be-executed, and the task is moved from the waiting queue to the queue to be executed, where it is executed once an idle thread is available;
when an execution task thread is called to run a task waiting to run, if an abnormal condition occurs, the abnormal condition is handled through a second-level exception pipeline in the execution task thread;
if an abnormal condition occurs while the second-level exception pipeline is processing the abnormal condition, marking the currently executing task as being in an error state;
processing tasks marked as being in the error state through the default exception pipeline;
when multi-task switching is performed, the switching is completed by a preset poller together with a Disruptor; a blocked task waits, no system thread is occupied during the wait, and error information is returned after a timeout, ensuring that no thread blocking occurs during the gateway's fully asynchronous task processing;
the cache mode of the high-performance gateway system is divided into a first-level local cache storing configuration that is frequently used or loaded at system initialization, a second-level Redis cache-center cache storing the full configuration information of a database, and third-level database persistent storage storing persistent data, wherein the first-level local cache uses a Disruptor to manage local cache events and uses two local Java data structures to manage cached data;
when the system configuration undergoes addition, modification, or deletion operations, the database record is updated first, and then the Redis cache-center data is updated;
when the gateway requests configuration information, the gateway first queries the local cache and then queries the Redis cache center; if the Redis cache center has no data, the capability-center interface is called asynchronously so that the capability center queries the configuration information in the database, stores the configuration information in the Redis cache center, and returns the configuration information to the gateway.
2. The method for processing a Netty-based high performance gateway system according to claim 1, wherein before the polling of task states in the queue to be executed and the running of tasks whose state is to-be-executed, the method comprises:
Judging whether an idle thread exists or not, if so, putting the task to be executed into the idle thread for execution.
3. The method for processing a Netty-based high performance gateway system according to claim 2, further comprising, after determining whether there is an idle thread:
polling the task states in the waiting queue and, if a task in the priority processing state exists, moving the task in the priority processing state from the waiting queue into the queue to be executed and executing it preferentially, wherein the priority processing state includes waiting for an asynchronous IO operation.
4. A processing system for a Netty-based high performance gateway system, comprising:
the task receiving module, configured to receive a connection request sent by a client, establish a data transmission channel, call a NettyServer processor to process the connection request to obtain a data channel and request data, and package the data channel and the request data into a task placed in a queue to be executed;
the network request module is used for receiving a network request, packaging the network request into a task to be executed, and placing the task to be executed into a queue to be executed;
the task execution module, configured to poll the task states in the queue to be executed and run tasks whose state is to-be-executed;
the asynchronous operation module, configured to call an execution task thread to run a task waiting to run; if a node needs to execute an asynchronous operation, the execution task thread first starts the asynchronous operation, updates the task state to waiting, and places the task into a waiting queue; when the asynchronous operation finishes, the task state is synchronously updated to to-be-executed, and the task is moved from the waiting queue to the queue to be executed, where it is executed once an idle thread is available;
when an execution task thread is called to run a task waiting to run, if an abnormal condition occurs, the abnormal condition is handled through a second-level exception pipeline in the execution task thread;
when multi-task switching is performed, the switching is completed by a preset poller together with a Disruptor; a blocked task waits, no system thread is occupied during the wait, and error information is returned after a timeout, ensuring that no thread blocking occurs during the gateway's fully asynchronous task processing;
the cache mode of the high-performance gateway system is divided into a first-level local cache storing configuration that is frequently used or loaded at system initialization, a second-level Redis cache-center cache storing the full configuration information of a database, and third-level database persistent storage storing persistent data, wherein the first-level local cache uses a Disruptor to manage local cache events and uses two local Java data structures to manage cached data;
when the system configuration undergoes addition, modification, or deletion operations, the database record is updated first, and then the Redis cache-center data is updated;
when the gateway requests configuration information, the gateway first queries the local cache and then queries the Redis cache center; if the Redis cache center has no data, the capability-center interface is called asynchronously so that the capability center queries the configuration information in the database, stores the configuration information into Redis, and returns the configuration information to the gateway.
5. The processing system of a Netty-based high performance gateway system according to claim 4, further comprising:
the thread polling module, configured to determine whether an idle thread exists and, if so, place the task to be executed into the idle thread for execution.
6. The processing system of a Netty-based high performance gateway system according to claim 5, further comprising:
the priority processing module, configured to poll the task states in the waiting queue and, if a task in the priority processing state exists, move the task in the priority processing state from the waiting queue into the queue to be executed and execute it preferentially, wherein the priority processing state includes waiting for an asynchronous IO operation.
CN202110630084.7A 2021-06-07 2021-06-07 A processing method and system for a high-performance gateway system based on Netty Active CN113641410B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110630084.7A CN113641410B (en) 2021-06-07 2021-06-07 A processing method and system for a high-performance gateway system based on Netty

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110630084.7A CN113641410B (en) 2021-06-07 2021-06-07 A processing method and system for a high-performance gateway system based on Netty

Publications (2)

Publication Number Publication Date
CN113641410A CN113641410A (en) 2021-11-12
CN113641410B true CN113641410B (en) 2025-05-13

Family

ID=78416013

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110630084.7A Active CN113641410B (en) 2021-06-07 2021-06-07 A processing method and system for a high-performance gateway system based on Netty

Country Status (1)

Country Link
CN (1) CN113641410B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114640719B (en) * 2022-03-22 2024-12-03 康键信息技术(深圳)有限公司 Data processing method, device, equipment and storage medium based on Netty framework
CN115065588B (en) * 2022-05-31 2024-04-05 浪潮云信息技术股份公司 API fusing degradation realization method and system based on back-end error code
CN115051987B (en) * 2022-06-06 2024-04-16 瞳见科技有限公司 Mobile terminal service distribution system and method for multiple nodes
CN115118590B (en) * 2022-06-22 2024-05-10 平安科技(深圳)有限公司 Method, device, system, equipment and storage medium for managing configuration data
CN117221374B (en) * 2023-09-11 2024-05-24 广州Tcl互联网小额贷款有限公司 API (application program interface) calling method and system based on API gateway
CN119449530A (en) * 2024-10-10 2025-02-14 浪潮云信息技术股份公司 A method and system for implementing enterprise-level API gateway based on Netty, Nacos and Disruptor

Citations (3)

Publication number Priority date Publication date Assignee Title
CN106126354A (en) * 2016-06-21 2016-11-16 中国建设银行股份有限公司 A kind of asynchronous batch processing method and system
CN109849935A (en) * 2019-02-20 2019-06-07 百度在线网络技术(北京)有限公司 A kind of method of controlling security, device and storage medium
CN112148500A (en) * 2020-05-18 2020-12-29 南方电网数字电网研究院有限公司 Netty-based remote data transmission method

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
CN101286980B (en) * 2008-05-14 2012-03-28 华中科技大学 A distributed media access control method for increasing the capacity of wireless local area network
CN111209467B (en) * 2020-01-08 2023-05-26 中通服咨询设计研究院有限公司 Data real-time query system in multi-concurrency multi-channel environment
CN111277672B (en) * 2020-03-31 2022-03-11 上海积成能源科技有限公司 Energy Internet of things data acquisition method based on non-blocking input and output model
CN111309501A (en) * 2020-04-02 2020-06-19 无锡弘晓软件有限公司 High availability distributed queues
CN112527519A (en) * 2020-11-26 2021-03-19 福州智象信息技术有限公司 High-performance local cache method, system, equipment and medium
CN112650706A (en) * 2020-12-31 2021-04-13 鲸灵科技股份有限公司 Method for realizing high situation perception capability under big data technology system

Patent Citations (3)

Publication number Priority date Publication date Assignee Title
CN106126354A (en) * 2016-06-21 2016-11-16 中国建设银行股份有限公司 A kind of asynchronous batch processing method and system
CN109849935A (en) * 2019-02-20 2019-06-07 百度在线网络技术(北京)有限公司 A kind of method of controlling security, device and storage medium
CN112148500A (en) * 2020-05-18 2020-12-29 南方电网数字电网研究院有限公司 Netty-based remote data transmission method

Also Published As

Publication number Publication date
CN113641410A (en) 2021-11-12

Similar Documents

Publication Publication Date Title
CN113641410B (en) A processing method and system for a high-performance gateway system based on Netty
US11159411B2 (en) Distributed testing service
US9553944B2 (en) Application server platform for telecom-based applications using an actor container
CN107729139B (en) Method and device for concurrently acquiring resources
US11716264B2 (en) In situ triggered function as a service within a service mesh
US7962566B2 (en) Optimized session management for fast session failover and load balancing
CN106161537B (en) Method, device and system for processing remote procedure call and electronic equipment
US9231995B2 (en) System and method for providing asynchrony in web services
US7689660B2 (en) Application server architecture
US9749445B2 (en) System and method for updating service information for across-domain messaging in a transactional middleware machine environment
CN110727507B (en) Message processing method and device, computer equipment and storage medium
US20090199208A1 (en) Queued message dispatch
CN110413822B (en) Offline image structured analysis method, device and system and storage medium
CN108475220B (en) System and method for integrating a transactional middleware platform with a centralized auditing framework
CN114928579A (en) Data processing method and device, computer equipment and storage medium
CN111200606A (en) Deep learning model task processing method, system, server and storage medium
Rosa et al. INSANE: a unified middleware for QoS-aware network acceleration in edge cloud computing
US20100153565A1 (en) Connection management in line-of-business
JP2009516296A (en) Asynchronous just-in-time compilation
CN119690694A (en) Asynchronous-to-synchronous method, device, equipment, storage medium and product
CN106997304B (en) Input and output event processing method and device
US7587399B2 (en) Integrated software toolset for a web server
CN101482816B (en) Intermediary software bridging system and method
CN115878290A (en) Job processing method and device, electronic equipment and computer readable medium
US20250097181A1 (en) Claim check mechanism for a message payload in an enterprise messaging system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant