CN113535362B - Distributed scheduling system architecture and micro-service workflow scheduling method - Google Patents
- Publication number
- CN113535362B CN113535362B CN202110841580.7A CN202110841580A CN113535362B CN 113535362 B CN113535362 B CN 113535362B CN 202110841580 A CN202110841580 A CN 202110841580A CN 113535362 B CN113535362 B CN 113535362B
- Authority
- CN
- China
- Prior art keywords
- node
- scheduling
- service
- task
- micro
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/485—Task life-cycle, e.g. stopping, restarting, resuming execution
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/461—Saving or restoring of program or task context
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/465—Distributed object oriented systems
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
The invention relates to a distributed scheduling system architecture and a micro-service workflow scheduling method, and belongs to the field of computers. The invention adopts a distributed architecture to design the control center for micro-service scheduling, separates control logic from service logic and service logic from execution results, and adopts an asynchronous task mode, so that the time an execution thread is occupied is greatly shortened and the response speed of the micro-services is greatly improved. Distributed locks are adopted to realize ordered scheduling and fault tolerance of the background micro-service workflow, and with the same number of thread-pool threads the concurrency and high availability of the scheduling center are significantly improved.
Description
Technical Field
The invention belongs to the field of computers, and particularly relates to a distributed scheduling system architecture and a micro-service workflow scheduling method.
Background
Mainstream micro-service systems control and manage micro-service applications through a micro-service gateway product. The micro-service gateway is responsible for handling interface service calls among the micro-service modules and can take on scheduling work such as security, routing, proxying, monitoring, logging and rate limiting, forming a centralized scheduling system architecture. In this centralized architecture all API interface services are registered in the micro-service gateway, which effectively wraps the original business-service API interfaces in an additional layer and then publishes proxy services. Calls to all micro-service interface services can therefore be intercepted in the micro-service gateway. The security, logging, rate-limiting, routing and other scheduling capabilities implemented by the micro-service gateway are all based on this interception, and each capability can be configured as an independent plug-in within the interception process.
As the API entry point for all services, the centralized micro-service gateway easily becomes a performance bottleneck as the service scale grows. Whenever a user request to a background application involves interaction among services, it is routed through the micro-service gateway; once there are a large number of services, the interaction calls among internal services pile up on the micro-service gateway, overloading it and slowing the response of background services. Another problem is that once the micro-service gateway itself fails, the whole cluster is left without a coordinator and collapses, i.e. the gateway is a single point of failure.
To solve this problem, some solutions adopt an architecture of multiple gateway instances behind a load balancer to realize load sharing and high availability, but scheduling control in this mode is not flexible enough. Some micro-service systems also provide a decentralized architecture, such as a Service Mesh: an SDK package with control functions is implanted into each service so that background services interact directly point to point, and the actual service call requests and data streams do not pass through a control center. The disadvantage is that SDK packages have to be designed for and implanted into the micro-services, which requires a large amount of work, and the approach is not suitable for complex scheduling of workflow tasks.
Disclosure of Invention
First, the technical problem to be solved
The invention aims to solve the technical problem of how to provide a distributed scheduling system architecture and a micro-service workflow scheduling method so as to solve the problem that a centralized micro-service gateway is easy to suffer from performance bottlenecks.
(II) technical scheme
In order to solve the technical problems, the invention provides a distributed scheduling system architecture, which comprises a micro-service registration center eureka, a scheduling center, an execution node, a scheduling database and a service database, wherein the scheduling center comprises a plurality of scheduling nodes;
the dispatching node and the executing node are distributed and deployed in a micro-service mode; the roles and API address information of all nodes are registered on a micro-service center eureka, and are uniformly maintained by the micro-service registration center eureka;
the dispatching node comprises a remote call controller, a callback controller, a management runtime and a core dispatcher; wherein the core scheduler is built based on quartz; the management runtime is used for realizing various management functions; the core communication between the dispatching node and the execution node comprises remote call (RMS) and callback (Callback), an execution instruction is sent to the execution node through a remote call controller, a job operation result returned from an executor of the execution node is received through a callback controller, and a complex job flow sequence can be received from a job chain module of the execution node through a job management component;
the scheduling database is connected with the scheduling center and is used for persistently storing scheduling-related data;
the execution node is an execution module embedded in each micro-service and comprises an executor, a job chain and a service bean; the executor executes tasks and returns results to the scheduling center through the callback interface; the job chain combines the execution order and dependency relationships of tasks to meet the requirements of complex job scheduling; the service bean is the carrier through which the execution node is embedded in the micro-service;
the service database is connected with the execution node and is used for persistently storing the server-side data of the micro-service application.
Further, the management functions of the dispatching node comprise job management, monitoring management, log management, configuration management, trigger management and dispatching log, and a Restful interface and web page dynamic display are provided.
Further, the data stored in the scheduling database includes task sequences, monitoring data, log data and configuration data.
Further, the communication between the dispatching node and the executing node carries out remote call and result callback through an API interface of an http protocol.
Further, the dispatching node sends synchronous or asynchronous execution instructions to the execution node through the remote call controller, and the executor supports synchronous and asynchronous execution tasks and returns the result to the dispatching center through the callback interface.
The invention also provides a micro-service workflow scheduling method based on the distributed scheduling system architecture, which is characterized by comprising the following steps,
s1, a scheduling node acquires an idle thread from a task scheduling thread pool, and accesses a scheduling database through a new thread to acquire a task; if the task needs to be executed, entering a step S2; otherwise, entering a dormant state until the step is restarted after awakening;
s2, the scheduling node acquires a flow lock through competition; the flow lock is distributed to the optimal node in the distributed scheduling center, namely, the optimal node is elected to be a management node, and the node which does not acquire the lock is blocked until the flow lock is acquired;
s3, the management node starts the transaction, takes out a first task from the task database, judges the type of the instruction, submits the first task to the task queue to remotely call the execution node, then the management node deletes the task in the task database, closes the transaction, and records log information;
s4, the management node releases the flow lock, releases the thread resource, and returns the thread to the thread pool for the next task scheduling.
Further, the scheduling node calls the execution node in a non-blocking manner, and the flow lock and the thread can be released without waiting for the execution node to return its result via callback.
Further, when the instruction acquired by the management node or the execution node is "resource insufficient", the blocked thread is suspended, and the suspended resource-insufficient flow is later awakened and executed again.
Further, when the management node fails or loses heartbeat due to network jitter, the following management node fault-tolerant flow is executed:
(1) The dispatching center monitors the fault event of the management node and triggers a fault tolerance mechanism;
(2) The available scheduling nodes compete for the fault-tolerant lock; the scheduling node that obtains the fault-tolerant lock becomes the fault-tolerant management node, which broadcasts a fault-tolerance alarm notification and records log information;
(3) The fault-tolerant management node queries the task instances whose call source is the original failed node, updates the call source of those instances to Null, and generates a new task instruction;
(4) And releasing the fault-tolerant lock, and completing fault tolerance.
Further, after fault tolerance is completed, the method further comprises: the scheduling center performs thread scheduling again, and the new management node takes over the monitored and newly submitted tasks according to their states; for a task in the "running" state, the status of its task instance is monitored; for a task in the "submitted successfully" state, it is determined whether the task already exists in the task queue: if so, the status of the task instance is likewise monitored; if not, the task is re-submitted.
(III) beneficial effects
The invention provides a distributed scheduling system architecture and a micro-service workflow scheduling method. The control center for micro-service scheduling is designed with a distributed architecture, control logic is separated from service logic and service logic from execution results, and an asynchronous task mode is adopted, so that the time an execution thread is occupied is greatly shortened and the response speed of the micro-services is greatly improved. Distributed locks are adopted to realize ordered scheduling and fault tolerance of the background micro-service workflow, and with the same number of thread-pool threads the concurrency and high availability of the scheduling center are significantly improved.
Drawings
FIG. 1 is a diagram of a distributed scheduling system architecture of the present invention;
FIG. 2 is a flow chart of a distributed scheduling embodiment of the present invention;
FIG. 3 is a flow chart of the distributed scheduling fault tolerance of the present invention.
Detailed Description
To make the objects, contents and advantages of the present invention more apparent, the following detailed description of the present invention will be given with reference to the accompanying drawings and examples.
The invention aims to provide a distributed solution for micro-service scheduling management, which realizes the decentralization and high availability of a control center by independently stripping out the control functions in the traditional micro-service gateway to form a distributed control center. In the distributed scheduling flow, control logic is separated from business logic, and the ordered scheduling and fault tolerance of control nodes are guaranteed through distributed lock design, so that the scheduling of highly flexible and complex workflow tasks is realized.
The invention adopts a distributed architecture to design the control center for micro-service scheduling, separates control logic from service logic and service logic from execution results, and adopts an asynchronous task mode, so that the time an execution thread is occupied is greatly shortened and the response speed of the micro-services is greatly improved. Distributed locks are adopted to realize ordered scheduling and fault tolerance of the background micro-service workflow, and with the same number of thread-pool threads the concurrency and high availability of the scheduling center are significantly improved.
The architecture diagram of the distributed scheduling system provided by the invention is shown in figure 1. The system architecture comprises a micro-service registration center eureka, a dispatching center, an execution node, a dispatching database and a service database. The dispatch center includes a plurality of dispatch nodes.
The whole architecture is built on top of micro-services, and the scheduling nodes and execution nodes are deployed in a distributed manner in the form of micro-services. The roles and API address information of each node are registered on the micro-service registration center eureka and are uniformly maintained by it. This enables decentralized scheduling of services and high availability of the cluster.
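As an illustration only (the patent does not mandate a particular client framework), a node built with Spring Cloud could register its role and API address with eureka roughly as follows; the class name and the role metadata key are assumptions made for this sketch.

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;

@SpringBootApplication
@EnableDiscoveryClient   // registers this node with the eureka registry on startup
public class SchedulingNodeApplication {
    public static void main(String[] args) {
        // eureka.client.serviceUrl.defaultZone and a role marker such as
        // eureka.instance.metadata-map.role=scheduler (an assumed key) are
        // supplied in application.yml; execution nodes register the same way
        // with role=executor
        SpringApplication.run(SchedulingNodeApplication.class, args);
    }
}
```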
The scheduling center is composed of a plurality of scheduling nodes. A single scheduling node includes a remote call controller, a callback controller, a management runtime and a core scheduler. The core scheduler is built on quartz; since quartz natively supports clustering, complex task triggering and scheduling can be realized.
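The following minimal quartz sketch shows how such a core scheduler could be bootstrapped; the job and trigger names are illustrative, and cluster mode would additionally require a clustered JDBC job store configured in quartz.properties.

```java
import org.quartz.*;
import org.quartz.impl.StdSchedulerFactory;

public class CoreSchedulerBootstrap {

    // Illustrative quartz job: pulls pending work and hands it to the
    // scheduling flow (steps S1 to S4 described below)
    public static class DispatchJob implements Job {
        @Override
        public void execute(JobExecutionContext context) {
            // delegate to the task-scheduling thread pool of the scheduling node
        }
    }

    public static void main(String[] args) throws SchedulerException {
        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();

        JobDetail job = JobBuilder.newJob(DispatchJob.class)
                .withIdentity("workflow-dispatch", "scheduling")
                .build();

        // A simple repeating trigger; cron or calendar triggers can express
        // more complex triggering rules
        Trigger trigger = TriggerBuilder.newTrigger()
                .withIdentity("dispatch-trigger", "scheduling")
                .withSchedule(SimpleScheduleBuilder.simpleSchedule()
                        .withIntervalInSeconds(10)
                        .repeatForever())
                .build();

        scheduler.scheduleJob(job, trigger);
        scheduler.start();
    }
}
```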
the management function is realized through a management runtime component of modularized deployment, supports functions such as job management, monitoring management, log management, configuration management, trigger management, log scheduling and the like, and provides a Restful interface and web page dynamic display.
The core communication between the scheduling node and the execution node comprises remote calls (RMS) and callbacks: synchronous or asynchronous execution instructions are sent to the execution node through the remote call controller, and job running results returned from the executor of the execution node are received through the callback controller. A complex job flow sequence may also be received by the job management component from the job chain module of an execution node.
The scheduling database is connected with the scheduling center and persistently stores scheduling-related task sequences, monitoring data, log data, configuration data and the like.
The execution node is an execution module embedded in each micro-service and is responsible for accepting scheduling from the scheduling center; it includes an executor, a job chain and a service bean. The executor supports both synchronous and asynchronous task execution and returns results to the scheduling center through a callback interface; the job chain can combine the execution order and dependency relationships of tasks to meet the requirements of complex job scheduling; the service bean is the carrier through which the execution node is embedded in the micro-service.
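A minimal sketch of such an executor follows; the ServiceBean and CallbackClient interfaces are illustrative stand-ins for the service bean and the http callback interface, not names defined by the patent.

```java
import java.util.concurrent.CompletableFuture;

public class ExecutorSketch {
    /** Business-logic carrier that the execution node wraps ("service bean"). */
    public interface ServiceBean { String run(String payload); }

    /** Callback interface back to the scheduling center (http in the patent). */
    public interface CallbackClient { void report(String taskId, String result); }

    private final CallbackClient callback;

    public ExecutorSketch(CallbackClient callback) { this.callback = callback; }

    // Synchronous mode: the caller waits for the result before it returns.
    public String executeSync(String taskId, ServiceBean bean, String payload) {
        String result = bean.run(payload);
        callback.report(taskId, result);
        return result;
    }

    // Asynchronous mode: return at once and report via callback when done, so
    // the scheduling thread and flow lock can be released without waiting.
    public void executeAsync(String taskId, ServiceBean bean, String payload) {
        CompletableFuture.supplyAsync(() -> bean.run(payload))
                .thenAccept(result -> callback.report(taskId, result));
    }
}
```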
The service database is connected with the execution node and persistently stores the server-side data of the micro-service application.
The scheduling node and the execution node are separated: the scheduling node is responsible only for scheduling and the execution node only for business. Communication between the nodes consists mainly of remote calls and result callbacks over http API interfaces, so the scheduling node and the execution node are completely decoupled, which enhances the extensibility of the whole system.
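For illustration, the http contract between the two sides could look roughly like the following Spring controllers; the endpoint paths, parameters and payload shapes are assumptions, not part of the patent.

```java
import org.springframework.web.bind.annotation.*;

// Exposed by the execution node: receives execution instructions from the
// remote call controller of a scheduling node.
@RestController
public class ExecutionNodeController {

    @PostMapping("/executor/run")
    public String run(@RequestParam String taskId,
                      @RequestParam(defaultValue = "async") String mode,
                      @RequestBody String payload) {
        // hand the request to the embedded executor; in async mode return
        // immediately and let the executor call back with the result later
        return "accepted";
    }
}

// Exposed by the scheduling node: receives job results posted back by executors.
@RestController
class SchedulerCallbackController {

    @PostMapping("/scheduler/callback/{taskId}")
    public void onResult(@PathVariable String taskId, @RequestBody String result) {
        // persist the result in the scheduling database and, if the task is
        // part of a job chain, advance the workflow to the next task
    }
}
```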
The method for distributed scheduling provided by the invention is shown in fig. 2, and comprises the following steps:
s1, the scheduling node acquires an idle thread from a task scheduling thread pool, and accesses a scheduling database through a new thread to acquire a task. If the task needs to be executed, entering a step S2; otherwise, entering a dormant state until the step is restarted after awakening;
s2, the scheduling node acquires the flow lock through competition. The flow lock will be assigned to the optimal node in the distributed scheduling center, i.e. elected to be the management node, the nodes that do not acquire the lock will block until the flow lock is acquired.
S3, the management node starts a transaction, takes the first task out of the task database, judges the instruction type and submits the task to the task queue so as to remotely call the execution node. The management node then deletes the task from the task database, closes the transaction and records log information.
S4, the management node releases the flow lock, releases the thread resource, and returns the thread to the thread pool for the next task scheduling.
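Steps S1 to S4 can be condensed into the following sketch; the FlowLock, TaskStore and TaskQueue abstractions are illustrative (in practice the flow lock is a distributed lock and the task store is the scheduling database).

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.locks.Lock;

public class SchedulingFlow {
    public interface FlowLock { Lock flowLock(); }                 // distributed flow lock
    public interface TaskStore { Task pollFirst(); void delete(Task t); }
    public interface TaskQueue { void submit(Task t); }            // leads to the remote call
    public record Task(String id, String instructionType) {}

    private final ExecutorService threadPool = Executors.newFixedThreadPool(8); // task scheduling thread pool
    private final FlowLock locks;
    private final TaskStore store;
    private final TaskQueue queue;

    public SchedulingFlow(FlowLock locks, TaskStore store, TaskQueue queue) {
        this.locks = locks; this.store = store; this.queue = queue;
    }

    public void onTaskAvailable() {
        threadPool.submit(() -> {                  // S1: take an idle thread from the pool
            Lock flowLock = locks.flowLock();
            flowLock.lock();                       // S2: compete for the flow lock; the winner acts as management node
            try {
                Task task = store.pollFirst();     // S3: open transaction, take the first task
                if (task != null) {
                    // S3: the instruction type decides how the task is dispatched
                    queue.submit(task);            // S3: submit to the task queue (non-blocking remote call)
                    store.delete(task);            // S3: delete from the task database, log, close transaction
                }
            } finally {
                flowLock.unlock();                 // S4: release the flow lock; the thread returns to the pool
            }
        });
    }
}
```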
In the above flow, the remote call initiated by the management node to the execution node is typically asynchronous, and the flow lock and thread can be released without waiting for the execution node to return its result via callback. In this working mode the scheduling node calls the execution node in a non-blocking manner, so the performance impact of the task's business logic is avoided and system performance improves.
However, in some stateful application scenarios the tasks have strict ordering constraints and can only be executed synchronously, i.e. the execution node completes the task through a synchronous executor, and only after the result is returned to the management node can the management node release the flow lock and the thread resource. If sub-flows are nested, the thread pool may run short of threads, producing circular waiting and deadlock. The solution is to add a "resource insufficient" instruction type: when the instruction acquired by the management node or execution node is "resource insufficient", the blocked thread is suspended so that a thread becomes available in the pool again, and the suspended resource-insufficient flow is later awakened and executed again.
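A minimal sketch of this suspend and wake handling is shown below; the class and method names are assumptions, and persistence of the suspended flows is omitted.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ResourceInsufficientHandler {
    private final Map<String, Runnable> suspendedFlows = new ConcurrentHashMap<>();

    /** Called when a call returns the "resource insufficient" instruction type:
     *  park the flow instead of letting it hold a pool thread. */
    public void suspend(String flowId, Runnable resumeAction) {
        suspendedFlows.put(flowId, resumeAction);
    }

    /** Called when resources recover: the parked flow is executed again,
     *  typically by re-submitting it to the task scheduling thread pool. */
    public void wake(String flowId) {
        Runnable resume = suspendedFlows.remove(flowId);
        if (resume != null) {
            resume.run();
        }
    }
}
```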
In a dispatch system, a management node may fail or lose its heartbeat due to network jitter. The distributed scheduling system of the invention can realize high availability of clusters through fault tolerance, and a management node fault tolerance flow chart is shown in figure 3.
(1) The dispatching center monitors the fault event of the management node and triggers a fault-tolerant mechanism.
(2) The available scheduling nodes compete for the fault-tolerant lock; the scheduling node that obtains the fault-tolerant lock becomes the fault-tolerant management node, which broadcasts a fault-tolerance alarm notification and records log information.
(3) The fault-tolerant management node queries the task instances whose call source is the original failed node, updates the call source of those instances to Null, and generates new task instructions.
(4) And releasing the fault-tolerant lock, and completing fault tolerance.
(5) After fault tolerance is completed, the scheduling center performs thread scheduling again, and the new management node takes over the monitored and newly submitted tasks according to their states. For a task in the "running" state, the status of its task instance is monitored; for a task in the "submitted successfully" state, it is determined whether the task already exists in the task queue: if so, the status of the task instance is likewise monitored; if not, the task is re-submitted.
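The fault-tolerance flow (1) to (5) can be sketched as follows; FaultToleranceFlow, TaskInstanceDao and Alarm are illustrative abstractions rather than components named by the patent.

```java
import java.util.List;
import java.util.concurrent.locks.Lock;

public class FaultToleranceFlow {
    public interface TaskInstanceDao {
        List<String> findInstanceIdsByCallSource(String failedNode);
        void clearCallSourceAndReissue(String instanceId);  // set call source to null, create a new task instruction
    }
    public interface Alarm { void broadcast(String message); }

    private final Lock faultTolerantLock;   // distributed fault-tolerant lock
    private final TaskInstanceDao dao;
    private final Alarm alarm;

    public FaultToleranceFlow(Lock faultTolerantLock, TaskInstanceDao dao, Alarm alarm) {
        this.faultTolerantLock = faultTolerantLock;
        this.dao = dao;
        this.alarm = alarm;
    }

    /** Triggered by the scheduling center when a management node fails or loses its heartbeat (1). */
    public void onManagementNodeFailure(String failedNode) {
        faultTolerantLock.lock();                               // (2) compete for the fault-tolerant lock
        try {
            alarm.broadcast("fault-tolerance takeover for " + failedNode);   // (2) broadcast alarm, record log
            for (String id : dao.findInstanceIdsByCallSource(failedNode)) {  // (3) instances of the failed node
                dao.clearCallSourceAndReissue(id);              // (3) clear the call source, reissue the task
            }
        } finally {
            faultTolerantLock.unlock();                         // (4) release the lock, fault tolerance complete
        }
        // (5) the scheduling center then re-runs thread scheduling and the new
        // management node takes over running / successfully submitted tasks
    }
}
```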
The foregoing is merely a preferred embodiment of the present invention, and it should be noted that modifications and variations could be made by those skilled in the art without departing from the technical principles of the present invention, and such modifications and variations should also be regarded as being within the scope of the invention.
Claims (7)
1. The micro-service workflow scheduling method based on the distributed scheduling system architecture is characterized in that the system architecture comprises a micro-service registration center eureka, a scheduling center, an execution node, a scheduling database and a service database, and the scheduling center comprises a plurality of scheduling nodes;
the dispatching node and the executing node are distributed and deployed in a micro-service mode; the roles and API address information of all nodes are registered on a micro-service center eureka, and are uniformly maintained by the micro-service registration center eureka;
the dispatching node comprises a remote call controller, a callback controller, a management runtime and a core dispatcher; wherein the core scheduler is built based on quartz; the management runtime is used for realizing various management functions; the core communication between the dispatching node and the execution node comprises remote call (RMS) and Callback (Callback), an execution instruction is sent to the execution node through a remote call controller, a job operation result returned from an executor of the execution node is received through a Callback controller, and a complex job flow sequence can be received from a job chain module of the execution node through a job management component;
the scheduling database is connected with the scheduling center and is used for persistently storing scheduling-related data;
the execution node is an execution module embedded in each micro-service and comprises an executor, a job chain and a service bean; the executor executes tasks and returns results to the scheduling center through the callback interface; the job chain combines the execution order and dependency relationships of tasks to meet the requirements of complex job scheduling; the service bean is the carrier through which the execution node is embedded in the micro-service;
the service database is connected with the execution node and is used for persistently storing the server-side data of the micro-service application;
wherein,
the dispatching node sends synchronous or asynchronous execution instructions to the execution node through a remote call controller, an executor supports synchronous and asynchronous execution tasks, and a result is returned to a dispatching center through a callback interface;
the micro-service workflow scheduling method includes the steps of,
s1, a scheduling node acquires an idle thread from a task scheduling thread pool, and accesses a scheduling database through a new thread to acquire a task; if the task needs to be executed, entering a step S2; otherwise, entering a dormant state until the step is restarted after awakening;
s2, the scheduling node acquires a flow lock through competition; the flow lock is distributed to the optimal node in the distributed scheduling center, namely, the optimal node is elected to be a management node, and the node which does not acquire the lock is blocked until the flow lock is acquired;
s3, the management node starts the transaction, takes out a first task from the task database, judges the type of the instruction, submits the first task to the task queue to remotely call the execution node, then the management node deletes the task in the task database, closes the transaction, and records log information;
s4, the management node releases the flow lock, releases thread resources, and the thread returns to the thread pool for the next task scheduling;
when the management node fails or loses heartbeat due to network jitter, the following management node fault-tolerant flow is executed:
(1) The dispatching center monitors the fault event of the management node and triggers a fault tolerance mechanism;
(2) The available scheduling nodes compete for the fault-tolerant lock; the scheduling node that obtains the fault-tolerant lock becomes the fault-tolerant management node, which broadcasts a fault-tolerance alarm notification and records log information;
(3) The fault-tolerant management node queries the task instances whose call source is the original failed node, updates the call source of those instances to Null, and generates a new task instruction;
(4) And releasing the fault-tolerant lock, and completing fault tolerance.
2. The method for dispatching micro-service workflow based on distributed dispatching system architecture according to claim 1, wherein the dispatching node management functions include job management, monitoring management, log management, configuration management, trigger management and dispatching log, and providing Restful interface and web page dynamic presentation.
3. The method for scheduling micro-service workflows based on a distributed scheduling system architecture according to claim 1, wherein the data stored in the scheduling database includes a task sequence, monitoring data, log data, and configuration data.
4. The method for scheduling micro-service workflow based on distributed scheduling system architecture according to claim 1, wherein the communication between the scheduling node and the executing node is performed with remote call and result callback through an API interface of http protocol.
5. The method for scheduling micro-service workflow based on distributed scheduling system architecture according to claim 1, wherein the scheduling node calls the execution node in a non-blocking manner, and releases the flow lock and thread without waiting for the execution node to return its result via callback.
6. The method of claim 1, wherein when the instruction acquired by the management node/execution node is "resource deficient", the blocked thread is suspended, and the suspended flow is awakened to execute again.
7. The method for scheduling micro-service workflow based on distributed scheduling system architecture as claimed in claim 1, further comprising the following steps after fault tolerance is completed: the scheduling center performs thread scheduling again, and the new management node takes over the monitored and newly submitted tasks according to their states; for a task in the "running" state, the status of its task instance is monitored; for a task in the "submitted successfully" state, it is determined whether the task already exists in the task queue: if so, the status of the task instance is likewise monitored; if not, the task is re-submitted.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110841580.7A CN113535362B (en) | 2021-07-26 | 2021-07-26 | Distributed scheduling system architecture and micro-service workflow scheduling method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110841580.7A CN113535362B (en) | 2021-07-26 | 2021-07-26 | Distributed scheduling system architecture and micro-service workflow scheduling method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113535362A CN113535362A (en) | 2021-10-22 |
CN113535362B (en) | 2023-07-28
Family
ID=78120719
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110841580.7A Active CN113535362B (en) | 2021-07-26 | 2021-07-26 | Distributed scheduling system architecture and micro-service workflow scheduling method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113535362B (en) |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114138435A (en) * | 2021-11-19 | 2022-03-04 | 浪潮通用软件有限公司 | Distributed scheduling method, device and medium under a microservice architecture |
CN114691233B (en) * | 2022-03-16 | 2024-10-29 | 中国电子科技集团公司第五十四研究所 | Remote sensing data processing plug-in distributed scheduling method based on workflow engine |
CN114625520B (en) * | 2022-05-16 | 2022-08-30 | 中博信息技术研究院有限公司 | Distributed task scheduling gateway scheduling method based on current limiting |
CN114995768B (en) * | 2022-06-24 | 2024-11-08 | 杭州谐云科技有限公司 | A method and system for improving distributed storage performance in container scenarios |
CN115129452A (en) * | 2022-07-13 | 2022-09-30 | 广州市百果园信息技术有限公司 | Cooperative control system, method, equipment, storage medium and product |
CN115357403A (en) * | 2022-10-20 | 2022-11-18 | 智己汽车科技有限公司 | Micro-service system for task scheduling and task scheduling method |
CN118590538A (en) * | 2024-05-30 | 2024-09-03 | 中国人民解放军61660部队 | A third-party system distributed scheduling system based on the server |
CN120301863B (en) * | 2025-06-09 | 2025-09-02 | 富盛科技股份有限公司 | Method for realizing automatic service discovery based on xxl-job transformation of service registration center |
CN120276832B (en) * | 2025-06-11 | 2025-09-19 | 深圳海规网络科技有限公司 | Task scheduling control method and system and electronic equipment |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10061627B2 (en) * | 2014-09-16 | 2018-08-28 | Oracle International Corporation | System and method for supporting waiting thread notification offloading in a distributed data grid |
CN106445648A (en) * | 2016-10-21 | 2017-02-22 | 天津海量信息技术股份有限公司 | System for achieving multi-worker coordination based on redis |
CN112527476B (en) * | 2019-09-19 | 2024-03-26 | 华为技术有限公司 | Resource scheduling method and electronic equipment |
CN111400053B (en) * | 2020-03-17 | 2023-12-15 | 畅捷通信息技术股份有限公司 | Database access system, method, apparatus and computer readable storage medium |
CN111752696B (en) * | 2020-06-25 | 2023-09-12 | 武汉众邦银行股份有限公司 | Distributed timing task scheduling method based on RPC and thread lock |
CN111897646A (en) * | 2020-08-13 | 2020-11-06 | 银联商务股份有限公司 | Asynchronous distributed lock implementation method and device, storage medium and electronic device |
CN112148436B (en) * | 2020-09-23 | 2023-06-20 | 厦门市易联众易惠科技有限公司 | Decentralized TCC transaction management method, device, equipment and system |
CN112162841B (en) * | 2020-09-30 | 2024-09-06 | 重庆长安汽车股份有限公司 | Big data processing oriented distributed scheduling system, method and storage medium |
CN112486695A (en) * | 2020-12-07 | 2021-03-12 | 浪潮云信息技术股份公司 | Distributed lock implementation method under high concurrency service |
CN113157447B (en) * | 2021-04-13 | 2023-08-29 | 中南大学 | RPC load balancing method based on intelligent network card |
- 2021-07-26: Application CN202110841580.7A filed in China (CN); granted as patent CN113535362B, status Active
Also Published As
Publication number | Publication date |
---|---|
CN113535362A (en) | 2021-10-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113535362B (en) | Distributed scheduling system architecture and micro-service workflow scheduling method | |
CN102346460B (en) | Transaction-based service control system and method | |
CN103442049B (en) | The mixed clouds operating system architecture of a kind of component-oriented and communication means thereof | |
CN108804238B (en) | Soft bus communication method based on remote procedure call | |
CN102521044B (en) | Distributed task scheduling method and system based on messaging middleware | |
CN111506412A (en) | Distributed asynchronous task construction and scheduling system and method based on Airflow | |
CA3168286A1 (en) | Data flow processing method and system | |
US20110004701A1 (en) | Provisioning highly available services for integrated enterprise and communication | |
CN113515356B (en) | Lightweight distributed resource management and task scheduler and method | |
CN101694709A (en) | Service-oriented distributed work flow management system | |
CN110532074A (en) | A kind of method for scheduling task and system of multi-tenant Mode S aaS service cluster environment | |
CN107436806A (en) | A kind of resource regulating method and system | |
CN101719852B (en) | Method and device for monitoring performance of middleware | |
US10498817B1 (en) | Performance tuning in distributed computing systems | |
JPH0563821B2 (en) | ||
CA2614976A1 (en) | Application server distributing the jobs to virtual environments running on different computers | |
WO2011137672A1 (en) | Method and device for task execution based on database | |
CN108073414B (en) | Implementation method for merging multithreading concurrent requests and submitting and distributing results in batches based on Jedis | |
CN114138434A (en) | Big data task scheduling system | |
WO2021043124A1 (en) | Kbroker distributed operating system, storage medium, and electronic device | |
CN114615308A (en) | RPC-based asynchronous multithreading concurrent network communication method and device | |
CN108563495A (en) | The cloud resource queue graded dispatching system and method for data center's total management system | |
CN115061814A (en) | A Distributed High Concurrency Scheduling System Based on Decentralized Job Tasks | |
CN113347430B (en) | Distributed scheduling device of hardware transcoding acceleration equipment and use method thereof | |
CN101018192A (en) | Grid workflow virtual service scheduling method based on the open grid service architecture |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||