
CN114978813A - Gateway implementation method based on response type thread pool - Google Patents


Info

Publication number: CN114978813A (granted as CN114978813B)
Application number: CN202210383552.XA
Authority: CN (China)
Prior art keywords: gateway, request, server, load, user
Inventors: 陈成, 陈廷梁
Assignee (current and original): Zhejiang Shuxin Network Co ltd
Legal status: Active (application granted)
Other languages: Chinese (zh)

Classifications

    • H04L 12/66: Arrangements for connecting between networks having differing types of switching systems, e.g. gateways
    • G06F 9/505: Allocation of resources (e.g. of the CPU) to service a request, the resource being a machine such as a CPU, server, or terminal, considering the load
    • G06F 9/5061: Partitioning or combining of resources
    • H04L 67/1008: Server selection for load balancing based on parameters of servers, e.g. available memory or workload
    • H04L 67/101: Server selection for load balancing based on network conditions
    • H04L 67/1019: Random or heuristic server selection
    • H04L 69/18: Multiprotocol handlers, e.g. single devices capable of handling multiple protocols
    • H04L 69/22: Parsing or analysis of headers
    • G06F 2209/5011: Indexing scheme relating to resource allocation: pool
    • G06F 2209/5018: Indexing scheme relating to resource allocation: thread allocation


Abstract

The invention provides a gateway implementation method based on a reactive thread pool, relating to the technical field of proxy services, and comprising the following steps: establishing a TCP connection between every microservice and the gateway; the gateway receiving a user request and performing frame parsing to obtain request contents of different protocol types; the gateway assigning a server of the corresponding microservice to each request content according to a preset load strategy; and the gateway pushing the request contents of the different protocol types to corresponding threads for processing, each thread connecting to its assigned server to obtain the request result. The gateway supports multiple protocol types and achieves IO multiplexing based on the reactive thread pool. Compared with existing gateways, the reactive thread pool gives the gateway a higher performance ceiling: it avoids deep object-level parsing of the various protocols, eliminates a large amount of memory creation and destruction, prevents lock waits and similar conditions from consuming large amounts of server resources, accelerates the forwarding of user request content, and improves server invocation.

Description

A Gateway Implementation Method Based on a Reactive Thread Pool

Technical Field

The present invention relates to the field of communication technologies, and in particular to a gateway implementation method based on a reactive thread pool.

Background

In modern enterprise software development, microservice-based architecture is currently a popular architectural design approach. In a microservice architecture, services are split into very fine-grained pieces, which reduces coupling but also makes unified management of the services more difficult.

Under the earlier SOA (service-oriented architecture) design approach, common functions such as authentication, rate limiting, logging, and monitoring had to be implemented separately in each service, leaving system maintainers without a global view from which to manage these functions uniformly.

In a microservice framework, each externally facing service is deployed independently, and the external APIs or service addresses all differ. Internally this is simple: services can discover one another automatically through a registry. In most cases, however, services are provided for external systems to call, and it is impossible to share a registry with them; moreover, the internal microservices sit on an internal network that is not reachable from outside. Even if every microservice were opened to the outside, callers would face different addresses and parameters for different services, which complicates consumer clients. Microservices may also be implemented on different technology stacks (some over HTTP, RPC, WebSocket, and so on), which further increases the difficulty of calling them.

Therefore, there is usually an API gateway that routes requests to different services according to the requested URL. With a unified entry point, operations such as authentication, logging, and traffic splitting can also be performed centrally: the API gateway takes over these common functions, improves the scalability of the system, lets each service focus on its own domain, and cleanly isolates service callers from service providers.

At present, the mainstream API gateways are Netflix's Zuul and Kong. Zuul follows a synchronous, blocking gateway design: for each incoming request, Zuul allocates a dedicated thread to process it and forwards it to the back-end service, which in turn uses a thread to handle the request; while the back end is processing, the gateway thread blocks. When the request volume is high, the thread pool easily fills up and new requests can no longer be accepted. In addition, when users upload files, Zuul performs no special asynchronous IO handling internally, so the throughput of the whole system plummets.

Kong is a high-performance web platform based on Nginx and Lua, integrating a large number of well-crafted Lua libraries, third-party modules, and most of its dependencies. It makes it easy to build highly scalable dynamic web applications, web services, and dynamic gateways that can handle very high concurrency. This gateway performs well, but its business logic and related plugins depend on Lua, a scripting language that currently lacks good development and debugging tools, so custom development is relatively difficult for enterprises adopting the technology.

Summary of the Invention

In view of the above problems, the present invention provides a gateway implementation method based on a reactive thread pool. Using the reactive thread pool approach, it provides a brand-new gateway implementation that has no external dependencies, reduces the time during which gateway threads cannot provide service, and enables the deployment of a distributed high-availability (HA) solution.

To achieve the above purpose, the gateway implementation method based on a reactive thread pool provided by the present invention comprises:

establishing a TCP connection between every microservice and the gateway;

the gateway receiving a user request and performing frame parsing to obtain request contents of different protocol types;

the gateway assigning a server of the corresponding microservice to each request content according to a preset load strategy; and

the gateway pushing the request contents of the different protocol types to corresponding threads for processing, each thread establishing a connection with its assigned server to obtain the request result.

As a further improvement of the present invention, on startup every microservice reports to the gateway the number of open interfaces, the URL address of each interface, and the request parameters of each interface;

the gateway assigns a server of the corresponding microservice to the request content according to the preset load strategy and obtains the URL address and request parameters of the microservice interface; and

the thread establishes a connection with the server according to the URL address and request parameters of the microservice interface and obtains the request result.

As a further improvement of the present invention,

the gateway pushes the URL address and request parameters of the microservice interface directly to the corresponding thread; and

the thread forwards the request content directly to the corresponding server and obtains the request result.

As a further improvement of the present invention, the gateway receives the user request, performs frame parsing, serializes the request contents of the different protocol types obtained from frame parsing, and stores them in memory objects of the thread pool.

As a further improvement of the present invention, the request contents of different protocol types correspond to different microservice protocols, including HTTP, TCP, UDP, WebSocket, and RPC.

As a further improvement of the present invention, after startup every microservice periodically reports to the gateway the basic load of the server it runs on, including CPU, memory, network, and the application's current session count.

As a further improvement of the present invention, the load strategies include average load, random load, and priority load;

average load distributes the user's request content evenly across all servers of the microservices matching the request content;

random load distributes the user's request content randomly across all servers of the microservices matching the request content; and

priority load computes a weighted score from the basic load of each microservice's server, as reported periodically, and routes the user's request content to the server with the lowest score.

As a further improvement of the present invention, priority load is the default load strategy, and the weighted scoring formula is:

S = CPU usage × 40 × CPU load + memory usage × 40 + (NIC rate in Mb / 400 Mb) × 5 + (application's current session count / 2000) × 15.

As a further improvement of the present invention,

the gateway supports session persistence, assigning a server according to the load strategy when a user initiates the user request for the first time;

the user's subsequent requests are forwarded directly to the previously assigned server for processing; and

if that server goes offline or becomes abnormal, a new server is assigned for the user's subsequent requests.

As a further improvement of the present invention, the gateway receiving a user request and performing frame parsing to obtain request contents of different protocol types comprises:

the gateway listening on the TCP/UDP ports of each microservice and performing frame parsing of all traffic packet by packet to obtain the request contents of the different protocol types.

Compared with the prior art, the beneficial effects of the present invention are as follows:

The gateway of the present invention supports multiple protocol types and achieves IO multiplexing based on the reactive thread pool. Compared with existing gateways it is more broadly applicable, and the reactive thread pool gives the gateway a higher performance ceiling: it avoids deep object-level parsing of the various protocols, eliminates a large amount of memory creation and destruction, and prevents lock waits and similar conditions from consuming large amounts of server resources, thereby avoiding the application-service unavailability and abnormal user exits after wait timeouts that such consumption causes. It accelerates the forwarding of user request content and server invocation, and supports the deployment of a distributed high-availability (HA) solution.

The gateway of the present invention distributes user request content according to a preset load strategy; in particular, the priority-load strategy makes full use of network resources while improving server processing efficiency.

Brief Description of the Drawings

Fig. 1 is a flowchart of the gateway implementation method based on a reactive thread pool disclosed in an embodiment of the present invention;

Fig. 2 is a schematic diagram of the gateway implementation process based on a reactive thread pool disclosed in an embodiment of the present invention.

Detailed Description

To make the purposes, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are some, but not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative work fall within the protection scope of the present invention.

The present invention is described in further detail below with reference to the accompanying drawings.

As shown in Fig. 1, the gateway implementation method based on a reactive thread pool provided by the present invention comprises:

S1. Establish a TCP connection between every microservice and the gateway.

Wherein:

This step localizes the registration of the microservices.

Further,

on startup, every microservice reports to the gateway the number of open interfaces, the URL address of each interface, and the request parameters of each interface;

after startup, every microservice periodically (e.g. every 3 s by default) reports to the gateway the basic load of the server it runs on, including CPU, memory, network, and the application's current session count.
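For illustration, the periodic heartbeat described above could be framed as a small serialized message sent over the microservice's persistent TCP link to the gateway. This is a minimal sketch only: the JSON framing, the `LoadReport` type, and all field names are assumptions, not part of the patent; only the reported quantities (CPU, memory, network, current session count) come from the text.

```python
import dataclasses
import json

@dataclasses.dataclass
class LoadReport:
    """Hypothetical heartbeat payload; field names are illustrative."""
    service: str
    cpu_usage: float    # CPU utilisation, 0.0-1.0
    cpu_load: float     # normalised load average
    mem_usage: float    # memory utilisation, 0.0-1.0
    nic_rate_mb: float  # current NIC throughput, Mb/s
    sessions: int       # application's current session count

    def to_frame(self) -> bytes:
        # Newline-delimited JSON as an assumed wire framing.
        return (json.dumps(dataclasses.asdict(self)) + "\n").encode()

def parse_frame(frame: bytes) -> LoadReport:
    """Gateway-side decoding of one heartbeat frame."""
    return LoadReport(**json.loads(frame.decode()))
```

A real deployment would emit such a frame every few seconds (the 3 s default mentioned above) over the TCP connection established in S1, and the gateway would keep the latest report per server for the load strategy in S3.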

S2. The gateway receives the user request and performs frame parsing to obtain request contents of different protocol types.

Wherein:

The gateway listens on the TCP/UDP ports of each microservice and performs frame parsing of all traffic packet by packet to obtain the request contents of the different protocol types; the request contents obtained from frame parsing are serialized and stored in memory objects of the thread pool. Multiple service protocols can thus be supported, realizing IO multiplexing.

Further,

the request contents of different protocol types correspond to different microservice protocols, including HTTP, TCP, UDP, WebSocket, and RPC.
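As an illustration of how frame parsing might distinguish these protocol families, here is a hedged byte-level classifier. The heuristics (HTTP method tokens, the `Upgrade: websocket` header, an assumed `RPC\x00` magic prefix) are purely illustrative; the patent does not specify how frames are recognized.

```python
# Hypothetical protocol classifier for the gateway's frame parser.
_HTTP_METHODS = {b"GET", b"POST", b"PUT", b"DELETE", b"HEAD", b"OPTIONS", b"PATCH"}

def classify_frame(data: bytes) -> str:
    head = data[:512]  # inspect only the start of the frame
    if head.split(b" ", 1)[0] in _HTTP_METHODS:
        # A WebSocket handshake is an HTTP request carrying an Upgrade header.
        if b"Upgrade: websocket" in head:
            return "websocket"
        return "http"
    if head.startswith(b"RPC\x00"):  # assumed magic prefix for RPC frames
        return "rpc"
    # Anything else is treated as an opaque TCP payload; UDP datagrams
    # would already be tagged by the listening socket that received them.
    return "tcp"
```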

S3. The gateway assigns a server of the corresponding microservice to each request content according to the preset load strategy.

Wherein:

The load strategies include average load, random load, and priority load; priority load is the default.

Specifically:

Average load distributes the user's request content evenly across the servers of all microservices matching the request content;

random load distributes the user's request content randomly across the servers of all microservices matching the request content; and

priority load computes a weighted score from the basic load of each microservice's server, as reported periodically, and routes the user's request content to the server with the lowest score. The weighted scoring formula is:

S = CPU usage × 40 × CPU load + memory usage × 40 + (NIC rate in Mb / 400 Mb) × 5 + (application's current session count / 2000) × 15.
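The three strategies and the scoring formula above can be sketched directly. Only the formula itself comes from the patent; the dict-based server reports, the field names, and the round-robin approximation of "average load" are illustrative assumptions.

```python
import itertools
import random

_round_robin = itertools.count()  # shared counter approximating even spread

def score(report: dict) -> float:
    """Priority-load weighted score from the patent's formula:
    S = CPU usage * 40 * CPU load + memory usage * 40
        + (NIC rate Mb / 400 Mb) * 5 + (current sessions / 2000) * 15.
    Lower is better; the request goes to the lowest-scoring server."""
    return (report["cpu_usage"] * 40 * report["cpu_load"]
            + report["mem_usage"] * 40
            + (report["nic_rate_mb"] / 400.0) * 5
            + (report["sessions"] / 2000.0) * 15)

def pick_server(servers: list, strategy: str = "priority") -> dict:
    if strategy == "average":        # round-robin over matching servers
        return servers[next(_round_robin) % len(servers)]
    if strategy == "random":
        return random.choice(servers)
    return min(servers, key=score)   # default: priority load
```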

Further,

the gateway assigns a server of the corresponding microservice to the request content according to the preset load strategy and obtains the URL address and request parameters of the microservice interface.

S4. The gateway pushes the request contents of the different protocol types to the corresponding threads for processing; each thread establishes a connection with its assigned server and obtains the request result.

Wherein:

The gateway pushes the URL address and request parameters of the microservice interface directly to the corresponding thread, reducing memory-copy calls;

the thread establishes a connection with the server according to the URL address and request parameters of the microservice interface, forwards the request content directly to the corresponding server to perform the call, and obtains the request result.
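Step S4's non-blocking hand-off can be sketched with an ordinary worker pool standing in for the reactive thread pool: the gateway submits the request and immediately regains control instead of blocking while the upstream server works (the behaviour contrasted with Zuul in the background section). `forward` here is a placeholder for the real upstream connection and call, not part of the patent.

```python
from concurrent.futures import Future, ThreadPoolExecutor

POOL = ThreadPoolExecutor(max_workers=8)  # stand-in for the reactive thread pool

def forward(server_url: str, params: bytes) -> bytes:
    # Placeholder upstream call: a real gateway would reuse a pooled
    # connection to `server_url` and send the interface's request parameters.
    return b"result-from:" + server_url.encode()

def dispatch(protocol: str, server_url: str, params: bytes) -> Future:
    """Push the request content to a pool thread and return a Future,
    so the accepting thread never blocks on the back-end call."""
    return POOL.submit(forward, server_url, params)
```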

In the present application, the gateway supports session persistence: when a user initiates a request for the first time, a server is assigned according to the load strategy;

the user's subsequent requests are forwarded directly to the previously assigned server for processing; and

if that server goes offline or becomes abnormal, a new server is assigned for the user's subsequent requests.

Specifically:

HTTP session persistence is implemented by hashing information such as the user's User-Agent plus IP address, storing that string together with the number of the assigned machine in the server-side cache, and attempting to write it into an HttpOnly cookie on the user side.

Session persistence for TCP-based protocols (raw TCP, RPC, sockets, and so on) is implemented by hashing the IP address plus port number and storing the result in the server-side cache.

For network forwarding based on the UDP protocol, session-persistence configuration does not take effect; that is, no session persistence is performed when forwarding the UDP protocol family.
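The session-persistence rules above can be sketched as follows. The choice of SHA-256 is an assumption; the patent only states that the UA+IP string (HTTP) or IP+port string (TCP family) is hashed and cached alongside the assigned machine, and that UDP forwarding skips persistence entirely.

```python
import hashlib

def http_session_key(user_agent: str, ip: str) -> str:
    """HTTP affinity key: hash of UA + IP (algorithm assumed)."""
    return hashlib.sha256(f"{user_agent}|{ip}".encode()).hexdigest()

def tcp_session_key(ip: str, port: int) -> str:
    """TCP/RPC/socket affinity key: hash of IP + port (algorithm assumed)."""
    return hashlib.sha256(f"{ip}:{port}".encode()).hexdigest()

def route(key: str, cache: dict, alive: list, pick) -> str:
    """Sticky routing with failover: reuse the cached server unless it has
    gone offline or abnormal, in which case re-pick and refresh the cache."""
    server = cache.get(key)
    if server not in alive:
        server = pick(alive)
        cache[key] = server
    return server
```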

Advantages of the present invention:

(1) The gateway of the present invention supports multiple protocol types and achieves IO multiplexing based on the reactive thread pool. Compared with existing gateways it is more broadly applicable, and the reactive thread pool gives the gateway a higher performance ceiling: it avoids deep object-level parsing of the various protocols, eliminates a large amount of memory creation and destruction, and prevents lock waits and similar conditions from consuming large amounts of server resources, thereby avoiding the application-service unavailability and abnormal user exits after wait timeouts that such consumption causes. It accelerates the forwarding of user request content and server invocation, and supports the deployment of a distributed high-availability (HA) solution.

(2) The gateway of the present invention distributes user request content according to a preset load strategy; in particular, the priority-load strategy makes full use of network resources while improving server processing efficiency.

The above are only preferred embodiments of the present invention and are not intended to limit it. For those skilled in the art, the present invention may have various modifications and changes. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included within its protection scope.

Claims (10)

1. A gateway implementation method based on a responsive thread pool, characterized by comprising:
establishing a TCP connection between every microservice and the gateway;
the gateway receiving a user request and performing frame parsing to obtain request content of different protocol types;
the gateway allocating, according to a preset load policy, a server of the corresponding microservice for all request content;
the gateway pushing the request content of the different protocol types to corresponding threads for processing, each thread establishing a connection with the corresponding server and obtaining the request result.

2. The gateway implementation method according to claim 1, characterized in that: on startup, every microservice reports to the gateway the number of open interfaces, the URL address of each interface, and the request parameter content of each interface;
the gateway allocates a server of the corresponding microservice for the request content according to the preset load policy, and obtains the URL address and the request parameter content of the microservice interface;
the thread establishes a connection with the server according to the URL address and the request parameter content of the microservice interface, and obtains the request result.

3. The gateway implementation method according to claim 2, characterized in that:
the gateway directly pushes the URL address and the request parameter content of the microservice interface to the corresponding thread;
the thread directly forwards the request content to the corresponding server and obtains the request result.

4. The gateway implementation method according to claim 1, characterized in that: the gateway receives the user request and performs frame parsing, serializes the request content of the different protocol types obtained by frame parsing, and stores it in a memory object of the thread pool.

5. The gateway implementation method according to claim 1, characterized in that: the request content of the different protocol types corresponds to different microservice protocols, including http, tcp, udp, websocket and RPC.

6. The gateway implementation method according to claim 1, characterized in that: after startup, every microservice periodically reports to the gateway the basic load of the server on which it runs, including CPU, memory, network and the application's current session count.

7. The gateway implementation method according to claim 6, characterized in that the load policy comprises: average load, random load and priority load;
the average load evenly distributes the user's request content across all servers of the microservices matching the request content;
the random load randomly distributes the user's request content across all servers of the microservices matching the request content;
the priority load computes a weighted score from the basic server load periodically uploaded by each microservice, and routes the user's request content to the server with the lowest score.

8. The gateway implementation method according to claim 7, characterized in that: the priority load is the default load policy, and the weighted scoring formula is:
S = CPU usage × 40 × CPU load + memory usage × 40 + (NIC rate in Mb / 400 Mb) × 5 + (application's current session count / 2000) × 15.

9. The gateway implementation method according to claim 1, characterized in that:
the gateway supports session keeping; when a user initiates the user request for the first time, a server is allocated according to the load policy;
the user's subsequent user requests are forwarded directly to the previously allocated server for processing;
if that server goes offline or becomes abnormal, a server is reallocated for the user's subsequent user requests.

10. The gateway implementation method according to claim 1, characterized in that the gateway receiving the user request and performing frame parsing to obtain the request content of different protocol types comprises:
the gateway listening on the TCP/UDP ports of each microservice, and performing packet-by-packet frame parsing of all traffic to obtain the request content of the different protocol types.
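Claims 1 and 4 describe the core flow: parse the incoming frame, serialize the request content into a thread-pool memory object, and push it to a worker thread by protocol type. A minimal Python sketch of that dispatch, assuming JSON-encoded frames with a `protocol` field; the `ReactiveGateway` class, the handler registry, and the frame format are illustrative stand-ins, not details from the patent:

```python
import json
from concurrent.futures import ThreadPoolExecutor


class ReactiveGateway:
    def __init__(self, workers=8):
        # worker threads stand in for the responsive thread pool
        self.pool = ThreadPoolExecutor(max_workers=workers)
        self.handlers = {}  # protocol type -> handler callable

    def register(self, protocol, handler):
        """Register the handler responsible for one protocol type."""
        self.handlers[protocol] = handler

    def handle(self, raw_frame):
        """Frame-parse a request, serialize it, and push it to a worker."""
        request = json.loads(raw_frame)          # frame parsing stand-in
        payload = json.dumps(request)            # serialized pool-held object (claim 4)
        handler = self.handlers[request["protocol"]]
        return self.pool.submit(handler, payload)  # push to a worker thread
```

The returned `Future` plays the role of the "request result" the worker thread obtains after contacting the allocated server.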
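Claim 8's weighted scoring formula for the default priority-load policy translates directly into code. A sketch, assuming CPU and memory usage are fractions in [0, 1] and the NIC rate is given in Mb; the function and parameter names are illustrative:

```python
def priority_load_score(cpu_usage, cpu_load, mem_usage, nic_rate_mb, sessions):
    """Claim 8: S = CPU usage*40*CPU load + memory usage*40
                  + (NIC rate Mb/400Mb)*5 + (current sessions/2000)*15."""
    return (cpu_usage * 40 * cpu_load
            + mem_usage * 40
            + (nic_rate_mb / 400) * 5
            + (sessions / 2000) * 15)


def pick_server(load_reports):
    """Claim 7's priority load: route to the server with the lowest score.

    load_reports maps server name -> (cpu_usage, cpu_load, mem_usage,
    nic_rate_mb, sessions), as periodically reported per claim 6.
    """
    return min(load_reports,
               key=lambda name: priority_load_score(*load_reports[name]))
```

For example, a lightly loaded server (low CPU/memory usage, few sessions) scores lower than a busy one and therefore receives the next request.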
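Claim 9's session keeping (first request load-balanced, later requests pinned to the same server, reallocation when that server goes offline or abnormal) can be sketched as a small routing table; `SessionRouter` and the `allocate` callback are illustrative names, not from the patent:

```python
class SessionRouter:
    def __init__(self, allocate):
        self.allocate = allocate  # load-policy callback: user -> server
        self.sessions = {}        # user -> pinned server

    def route(self, user, healthy):
        """Return the server for this user's request.

        `healthy` is the set of servers currently online and normal.
        """
        server = self.sessions.get(user)
        if server is None or server not in healthy:
            # first request, or pinned server offline/abnormal: (re)allocate
            server = self.allocate(user)
            self.sessions[user] = server
        return server
```

The `allocate` callback would apply whichever load policy (average, random or priority) the gateway is configured with.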
CN202210383552.XA 2022-04-12 2022-04-12 Gateway implementation method based on responsive thread pool Active CN114978813B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210383552.XA CN114978813B (en) 2022-04-12 2022-04-12 Gateway implementation method based on responsive thread pool

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210383552.XA CN114978813B (en) 2022-04-12 2022-04-12 Gateway implementation method based on responsive thread pool

Publications (2)

Publication Number Publication Date
CN114978813A true CN114978813A (en) 2022-08-30
CN114978813B CN114978813B (en) 2024-09-03

Family

ID=82977970

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210383552.XA Active CN114978813B (en) 2022-04-12 2022-04-12 Gateway implementation method based on responsive thread pool

Country Status (1)

Country Link
CN (1) CN114978813B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN119094279A (en) * 2024-10-29 2024-12-06 成都星联芯通科技有限公司 Service gateway implementation method, device, beam network node and readable storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111737022A (en) * 2019-09-30 2020-10-02 北京沃东天骏信息技术有限公司 A microservice-based interface calling method, system, device and medium
CN112202872A (en) * 2020-09-28 2021-01-08 华云数据控股集团有限公司 Data forwarding method, API gateway and message service system
CN112261061A (en) * 2020-11-03 2021-01-22 合沃物联技术(南京)有限公司 Equipment multi-protocol analysis method based on industrial Internet of things gateway
CN114285857A (en) * 2021-12-31 2022-04-05 中企云链(北京)金融信息服务有限公司 Load balancing method, device and system


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
PENG Yongyong; ZENG Qiang: "Practice of Enterprise Unified Identity Gateway Technology Based on the Internet Application Mode", Network Security Technology & Application, no. 02, 15 February 2018 (2018-02-15) *
WEN Xin; FAN Jingwen; WANG Fuqiang: "Design and Implementation of an API Gateway System Based on the OpenResty Platform", Informatization Research, no. 03, 20 June 2020 (2020-06-20), pages 1-7 *


Also Published As

Publication number Publication date
CN114978813B (en) 2024-09-03

Similar Documents

Publication Publication Date Title
US11777790B2 (en) Communications methods and apparatus for migrating a network interface and/or IP address from one Pod to another Pod in a Kubernetes system
JP6600373B2 (en) System and method for active-passive routing and control of traffic in a traffic director environment
US8635265B2 (en) Communicating between a server and clients
US7003574B1 (en) Session load balancing and use of VIP as source address for inter-cluster traffic through the use of a session identifier
Yang et al. Efficient Support for Content-based Routing in Web Server Clusters.
CN101523866B (en) Systems and methods for hierarchical global load balancing
EP2566135B1 (en) Cloud-based mainframe integration system and method
CN101632067B (en) Systems and methods for end-user experience monitoring
US10476800B2 (en) Systems and methods for load balancing virtual connection traffic
Jiang et al. Design, implementation, and performance of a load balancer for SIP server clusters
US20060167883A1 (en) System and method for the optimization of database acess in data base networks
US8612601B2 (en) Management method and management device for network address translation
US8082580B1 (en) Session layer pinhole management within a network security device
WO2020236806A1 (en) Network traffic steering with programmatically generated proxy auto-configuration files
US20100080241A1 (en) System and method for providing timer affinity through engine polling within a session-based server deployment
CN110868323B (en) Bandwidth control method, device, equipment and medium
CN117544624A (en) Cluster load processing method and device, storage medium and electronic equipment
CN114978813B (en) Gateway implementation method based on responsive thread pool
EP2701358B1 (en) Method, device, and system for implementing multimedia data recording
US10791088B1 (en) Methods for disaggregating subscribers via DHCP address translation and devices thereof
US11272014B2 (en) Systems and methods for reducing connection setup latency
CN119967045B (en) Communication methods, communication systems and computer program products
CN119484529A (en) Data collection method, system and related device
Chanda Content delivery in software defined networks
Yang et al. Efficient Support for Content-based Routing in Web Server Clusters

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A gateway implementation method based on responsive thread pool

Granted publication date: 20240903

Pledgee: Industrial and Commercial Bank of China Limited Hangzhou Yuhang sub branch

Pledgor: Zhejiang Shuxin Network Co.,Ltd.

Registration number: Y2025980002929
