
CN113204418A - Task scheduling method and device, electronic equipment and storage medium

Info

Publication number
CN113204418A
Authority
CN
China
Prior art keywords
task
scheduling
node
task processing
tasks
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110547571.7A
Other languages
Chinese (zh)
Inventor
吴想想
王思梦
郑峥
秦瑞雄
赵金鑫
胡智
王博
马晓恒
熊威
董华强
花薇薇
谭雨婷
李浩宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Construction Bank Corp
Original Assignee
China Construction Bank Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Construction Bank Corp
Priority to CN202110547571.7A
Publication of CN113204418A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 Task transfer initiation or dispatching
    • G06F 9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F 9/505 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer And Data Communications (AREA)

Abstract



The invention discloses a task scheduling method, device, electronic device and storage medium, and relates to the field of automatic program design. A specific implementation of the method includes: receiving a task scheduling request, querying the configuration parameter table of the scheduling node based on the address information of the scheduling node, and obtaining the state value of the scheduling node; judging whether the state value is consistent with a preset target value; if not, ignoring the task scheduling request; if so, obtaining the task identifier in the task scheduling request, querying the configuration parameter table based on the address information and the task identifier to obtain the scheduling task amount of the scheduling node, acquiring from the task library a number of to-be-processed tasks corresponding to the task identifier equal to the scheduling task amount, and distributing the to-be-processed tasks to one or more task processing nodes. This implementation solves the problem that triggered batch tasks land on only one task processing server, which easily prevents the other task processing servers in the cluster from running at full load and results in an unbalanced load.


Description

Task scheduling method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of automatic program design, and in particular, to a method and an apparatus for task scheduling, an electronic device, and a storage medium.
Background
In large trading systems, a large number of batch tasks often need to be processed to meet the requirements of the trading system applications. In the prior art, batch tasks are triggered at fixed times, and when a task is triggered, one task processing server is called from a cluster of task processing servers to process it. However, because the triggered batch tasks land on only one task processing server, the other task processing servers in the cluster cannot run at full load, and the load becomes unbalanced.
Disclosure of Invention
In view of this, embodiments of the present invention provide a method and an apparatus for task scheduling, an electronic device, and a storage medium, which can solve the problem that triggered batch tasks land on only one task processing server while the other task processing servers in the cluster cannot run at full load, resulting in an unbalanced load.
To achieve the above object, according to an aspect of an embodiment of the present invention, a method for task scheduling is provided.
The task scheduling method of the embodiment of the invention comprises the following steps: receiving a task scheduling request, and inquiring a configuration parameter table of the scheduling node based on the address information of the scheduling node to obtain a state value of the scheduling node; judging whether the state value is consistent with a preset target value or not; if not, ignoring the task scheduling request; if yes, acquiring a task identifier in the task scheduling request, inquiring the configuration parameter table to obtain a scheduling task amount of the scheduling node based on the address information and the task identifier, acquiring a plurality of to-be-processed tasks corresponding to the scheduling task amount from a task library, and distributing the to-be-processed tasks to one or a plurality of task processing nodes.
In one embodiment, the allocating tasks to be processed to one or more of the task processing nodes includes:
inquiring the concurrent batch number corresponding to the address information and the task identifier, the number of the task processing nodes and the address information of each task processing node;
according to the number of the task processing nodes, calling an allocation model to determine the tasks to be processed corresponding to each task processing node, and splitting the tasks to be processed corresponding to each task processing node into a number of batch tasks equal to the number of concurrent batches;
and sending the corresponding batch task to each task processing node based on the address information of each task processing node.
In another embodiment, the querying the number of the task processing nodes and the address information of each of the task processing nodes includes:
querying configuration information of each task processing node to obtain a state identifier of each task processing node;
judging whether the state identifier of each task processing node is a preset identifier or not;
if yes, determining the task processing node as an effective processing node; if not, determining the task processing node as an invalid processing node;
and inquiring the number of the effective processing nodes and the address information of each effective task processing node.
In another embodiment, said querying the configuration parameter table of the scheduling node comprises:
and sending a query request to a scheduling parameter node so that the scheduling parameter node queries a configuration parameter table of the scheduling node based on the address information of the scheduling node in the query request and returns the configuration parameter table to the scheduling node.
To achieve the above object, according to an aspect of the embodiments of the present invention, there is provided yet another method for task scheduling.
The embodiment of the invention also discloses a task scheduling method, which is used for task processing nodes and comprises the following steps:
receiving a task processing request sent by a scheduling node, and acquiring a task to be processed in the task processing request;
acquiring a state identifier of the task processing node from a parameter configuration table of the task processing node based on the address information of the task processing node to judge whether the state identifier of the task processing node is a preset identifier;
if not, discarding the task to be processed;
if so, inquiring the operation parameter value of the task processing node, judging whether the operation parameter value is smaller than a preset operation parameter threshold value, if not, discarding the task to be processed, and if so, locking the task to be processed.
In one embodiment, the querying an operation parameter value of the task processing node, and determining whether the operation parameter value is smaller than a preset operation parameter threshold value includes:
and inquiring the thread usage number of the task processing node, and judging whether the thread usage number is smaller than a preset operation parameter threshold value.
In yet another embodiment, the locking the pending task comprises:
acquiring the quantity of the concurrent batches of the tasks to be processed in the task processing request;
calculating a difference value between the thread usage number and the preset threshold value;
judging whether the difference value is larger than the quantity of the concurrent batches or not;
if so, locking the task to be processed;
and if not, screening a number of batch tasks equal to the difference value from the tasks to be processed, determining the screened batch tasks as target tasks, locking the target tasks, and discarding the other tasks except the target tasks in the tasks to be processed.
To achieve the above object, according to another aspect of the embodiments of the present invention, an apparatus for task scheduling is provided.
The task scheduling device of the embodiment of the invention is arranged at a scheduling node and comprises: the receiving unit is used for receiving a task scheduling request, inquiring a configuration parameter table of the scheduling node based on the address information of the scheduling node, and obtaining a state value of the scheduling node; the judging unit is used for judging whether the state value is consistent with a preset target value or not; the ignoring unit is used for ignoring the task scheduling request if the state value is not consistent with the preset target value; and the scheduling unit is used for, if the state value is consistent with the preset target value, acquiring a task identifier in the task scheduling request, inquiring the configuration parameter table to obtain the scheduling task amount of the scheduling node based on the address information and the task identifier, acquiring a plurality of to-be-processed tasks corresponding to the scheduling task amount from a task library, and distributing the to-be-processed tasks to one or more task processing nodes.
In an embodiment, the scheduling unit is specifically configured to:
inquiring the concurrent batch number corresponding to the address information and the task identifier, the number of the task processing nodes and the address information of each task processing node;
according to the number of the task processing nodes, calling an allocation model to determine the tasks to be processed corresponding to each task processing node, and splitting the tasks to be processed corresponding to each task processing node into a number of batch tasks equal to the number of concurrent batches;
and sending the corresponding batch task to each task processing node based on the address information of each task processing node.
In another embodiment, the scheduling unit is specifically configured to:
querying configuration information of each task processing node to obtain a state identifier of each task processing node;
judging whether the state identifier of each task processing node is a preset identifier or not;
if yes, determining the task processing node as an effective processing node; if not, determining the task processing node as an invalid processing node;
and inquiring the number of the effective processing nodes and the address information of each effective task processing node.
In yet another embodiment, the receiving unit is specifically configured to send an inquiry request to a scheduling parameter node, so that the scheduling parameter node inquires a configuration parameter table of the scheduling node based on address information of the scheduling node in the inquiry request, and returns the configuration parameter table to the scheduling node.
To achieve the above object, according to another aspect of the embodiments of the present invention, an apparatus for task scheduling is provided.
The task scheduling device provided by the embodiment of the invention is arranged at a task processing node and comprises: the receiving unit is used for receiving a task processing request sent by the scheduling node and acquiring a task to be processed in the task processing request; the judging unit is used for acquiring the state identifier of the task processing node from a parameter configuration table of the task processing node based on the address information of the task processing node so as to judge whether the state identifier of the task processing node is a preset identifier; the discarding unit is used for discarding the task to be processed if the state identifier is not the preset identifier; and the processing unit is used for, if the state identifier is the preset identifier, inquiring the operation parameter value of the task processing node and judging whether the operation parameter value is smaller than a preset operation parameter threshold value, discarding the task to be processed if not, and locking the task to be processed if so.
In another embodiment, the processing unit is specifically configured to:
and inquiring the thread usage number of the task processing node, and judging whether the thread usage number is smaller than a preset operation parameter threshold value.
In another embodiment, the processing unit is specifically configured to:
acquiring the quantity of the concurrent batches of the tasks to be processed in the task processing request;
calculating a difference value between the thread usage number and the preset operation parameter threshold value;
judging whether the difference value is larger than the quantity of the concurrent batches or not;
if so, locking the task to be processed;
and if not, screening a number of batch tasks equal to the difference value from the tasks to be processed, determining the screened batch tasks as target tasks, locking the target tasks, and discarding the other tasks except the target tasks in the tasks to be processed.
To achieve the above object, according to still another aspect of an embodiment of the present invention, there is provided an electronic apparatus.
An electronic device of an embodiment of the present invention includes: one or more processors; the storage device is used for storing one or more programs, and when the one or more programs are executed by the one or more processors, the one or more processors implement the method for task scheduling provided by the embodiment of the invention.
To achieve the above object, according to still another aspect of an embodiment of the present invention, there is provided a computer-readable medium.
A computer readable medium of an embodiment of the present invention stores thereon a computer program, and the computer program, when executed by a processor, implements the method for task scheduling provided by an embodiment of the present invention.
One embodiment of the above invention has the following advantages or benefits: in the embodiment of the invention, after receiving the task scheduling request, the scheduling node first queries its own state value and determines whether it is the master node, i.e. whether it may perform task scheduling, by judging whether the state value matches the preset target value. After the state value is determined to be consistent with the preset target value, the scheduling node queries its scheduling task amount, obtains the tasks to be processed from the task library, and distributes them to the task processing nodes; after a task processing node obtains the tasks to be processed, it locks them once it determines that it can process them, so that the same tasks are not distributed to other task processing nodes. In other words, the embodiment of the invention sets up a scheduling node that schedules and distributes the tasks to be processed to each task processing node, and each task processing node, after receiving the tasks, judges based on its operation parameter value whether it can process them. This avoids the load imbalance caused by concentrating batch tasks on one task processing node while the other task processing nodes cannot run at full load, and improves task processing efficiency.
Further effects of the above-mentioned non-conventional alternatives will be described below in connection with the embodiments.
Drawings
The drawings are included to provide a better understanding of the invention and are not to be construed as unduly limiting the invention. Wherein:
FIG. 1 is a schematic diagram of a system architecture of a system for task scheduling according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of one main flow of a method of task scheduling according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of yet another major flow of a method of task scheduling according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of yet another major flow of a method of task scheduling according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of the main elements of an apparatus for task scheduling according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of the main elements of an apparatus for task scheduling according to an embodiment of the present invention;
FIG. 7 is a diagram of yet another exemplary system architecture to which embodiments of the present invention may be applied;
FIG. 8 is a schematic block diagram of a computer system suitable for use in implementing embodiments of the present invention.
Detailed Description
Exemplary embodiments of the present invention are described below with reference to the accompanying drawings, in which various details of embodiments of the invention are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict.
The embodiment of the invention provides a task scheduling system, which can be used in a batch task processing scene, and particularly can be used in a scene of scheduling timed batch tasks. Specifically, the task scheduling system may include a scheduling node cluster formed by a plurality of scheduling nodes, and a task processing node cluster formed by a plurality of task processing nodes, where the scheduling node cluster may further include a scheduling parameter node for storing a scheduling parameter, and the task processing node cluster may further include a task information node for storing task information. Fig. 1 is a schematic structural diagram of a task scheduling system according to an embodiment of the present invention. As shown in fig. 1, the scheduling nodes are disposed in the scheduling engine servers, each scheduling engine server forms a scheduling node cluster, the task processing nodes are disposed in the batch task processing servers, each batch task processing server forms a task processing node cluster, the scheduling parameter nodes are disposed in the scheduling parameter servers, and the task information nodes are disposed in the task information servers. Each scheduling node in the scheduling node cluster can call the task processing node, the scheduling node and the task processing node can inquire data such as configuration parameters from the scheduling parameter server, and the scheduling node and the task processing node can inquire data such as task information from the task information server.
In the embodiment of the invention, timed batch tasks can be defined in the scheduling node cluster using Quartz or another clustered timed-task framework. When a timed batch task is triggered, a task scheduling request can be sent to each scheduling node, but only one scheduling node is triggered to execute task scheduling so as to avoid contention over task scheduling, and the triggered scheduling node is determined as the master node. The scheduling parameter server stores a configuration parameter table for each scheduling node and indicates which scheduling node is the master node through the master-node state value configured in that table. After receiving a task scheduling request, each scheduling node can therefore query the state value in its configuration parameter table to judge whether it is the master node, and only if it is does it perform task scheduling.
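As a rough illustration only: the paragraph above names Quartz as one possible clustered timed-task framework, so the sketch below shows how a timed batch task might be registered so that its trigger produces task scheduling requests. The job class, group names and cron expression are assumptions made for the example and are not prescribed by the patent.

    import org.quartz.CronScheduleBuilder;
    import org.quartz.Job;
    import org.quartz.JobBuilder;
    import org.quartz.JobDetail;
    import org.quartz.JobExecutionContext;
    import org.quartz.Scheduler;
    import org.quartz.Trigger;
    import org.quartz.TriggerBuilder;
    import org.quartz.impl.StdSchedulerFactory;

    public class TimedBatchTaskSetup {

        // Illustrative job body: when the trigger fires, a task scheduling request
        // carrying the task identifier would be delivered to the scheduling nodes.
        public static class BatchTransferSchedulingJob implements Job {
            @Override
            public void execute(JobExecutionContext context) {
                // placeholder: build and send the task scheduling request
            }
        }

        public static void main(String[] args) throws Exception {
            Scheduler scheduler = new StdSchedulerFactory().getScheduler();

            JobDetail job = JobBuilder.newJob(BatchTransferSchedulingJob.class)
                    .withIdentity("batchTransfer", "timedBatchTasks")
                    .build();

            // Fire the timed batch task every day at 02:00 (cron value is illustrative).
            Trigger trigger = TriggerBuilder.newTrigger()
                    .withIdentity("batchTransferTrigger", "timedBatchTasks")
                    .withSchedule(CronScheduleBuilder.cronSchedule("0 0 2 * * ?"))
                    .build();

            scheduler.scheduleJob(job, trigger);
            scheduler.start();
        }
    }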
The scheduling parameter server may be configured to store the parameter information of the scheduling nodes, specifically in data tables, for example configuration parameter tables. The configuration parameter table of a scheduling node may include the address information (IP address) of the scheduling node, a server name, a state value indicating whether the node is the master node, a task identifier, a number of concurrent batches, a number of batches, and the like. The configuration parameter table of a task processing node may include the address information of the task processing node, a server name, a state identifier indicating whether the node is valid, and the like. The configuration parameters of the scheduling node can be used by the scheduling node, when it schedules tasks, to acquire the tasks to be processed (namely the tasks that need to be scheduled) and to determine the specific manner of task scheduling; the configuration parameters of a task processing node can be used to judge whether the task processing node is a valid processing node, whether a task to be processed can be executed, and the like.
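For concreteness, the two configuration parameter tables described above could be modelled in memory roughly as follows; the field names simply mirror the parameters listed in this paragraph and are assumptions, not a structure fixed by the patent.

    // Hypothetical in-memory model of the configuration parameter tables.
    public class ConfigTables {

        /** One row of the scheduling-node configuration parameter table. */
        public static class SchedulingNodeConfig {
            public String ipAddress;          // address information of the scheduling node
            public String serverName;
            public int masterStateValue;      // compared with the preset target value
            public String taskId;             // task identifier handled on a trigger
            public int concurrentBatchCount;  // number of concurrent batches
            public int schedulingTaskAmount;  // number of tasks fetched per scheduling round
        }

        /** One row of the task-processing-node configuration parameter table. */
        public static class TaskNodeConfig {
            public String ipAddress;          // address information of the task processing node
            public String serverName;
            public int validStateFlag;        // compared with the preset (valid) identifier
        }
    }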
It should be noted that, among the configuration parameters, the task identifier may be used to identify a task and also to indicate the processing step to which the current task to be processed corresponds. Taking the batch transfer service as an example, the service corresponds to three processing steps, namely account checking, account transferring and result generating, so a timed task and a corresponding task identifier are established for each processing step, and the processing step to which a current task to be processed corresponds can be queried through the task identifier. The number of concurrent batches indicates into how many batches the scheduling node splits the tasks to be processed assigned to a task processing node. When the scheduling node allocates tasks to be processed to each task processing node, it can split those tasks into a number of batch tasks equal to the number of concurrent batches and then send the split batch tasks to the task processing node. For example, if 10 tasks to be processed are allocated to a certain task processing node and the number of concurrent batches is 2, the scheduling node divides the 10 tasks into 2 batches, for example 5 tasks per batch, and sends each batch to the task processing node as a whole, as illustrated in the sketch below. The valid/invalid state identifier indicates whether a task processing node is valid, i.e. whether the node is able to process tasks to be processed. In the embodiment of the invention, the number of valid task processing nodes in the task processing node cluster can be adjusted in real time through this state identifier, which avoids two problems: when there are few tasks to be processed, too many valid task processing nodes would leave each node running below full load; when there are many tasks to be processed, too few valid task processing nodes would overload each node so that tasks cannot be processed in time.
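The worked example above (10 tasks to be processed, 2 concurrent batches, 5 tasks per batch) can be reproduced with a small splitting routine along the following lines; this is a sketch of one possible splitting rule, not the algorithm the patent requires.

    import java.util.ArrayList;
    import java.util.List;

    public class BatchSplitter {

        // Splits the tasks assigned to one task processing node into
        // `concurrentBatchCount` batches of roughly equal size.
        public static <T> List<List<T>> split(List<T> tasks, int concurrentBatchCount) {
            List<List<T>> batches = new ArrayList<>();
            int batchSize = (int) Math.ceil((double) tasks.size() / concurrentBatchCount);
            for (int start = 0; start < tasks.size(); start += batchSize) {
                batches.add(new ArrayList<>(
                        tasks.subList(start, Math.min(start + batchSize, tasks.size()))));
            }
            return batches;
        }

        public static void main(String[] args) {
            List<Integer> tasks = new ArrayList<>();
            for (int i = 1; i <= 10; i++) tasks.add(i);
            // Prints two batches of five tasks each, matching the example above.
            System.out.println(split(tasks, 2));
        }
    }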
In the embodiment of the present invention, a task library may be set up in the task information server, and the scheduling node may obtain tasks to be processed from it. The task library may store the task information of each task to be processed, which may specifically include a batch number, a batch type, a task identifier, a detail number, and a batch state. The batch type may include in-line batches, out-of-line batches and the like; classifying tasks by batch is one way of categorizing them. The task identifier represents the processing step to which the task currently corresponds, the detail number points to the detail data required for executing the task, and the batch state represents the execution state of the task and may include states such as waiting for processing, in processing (locked), and completed. When the scheduling node acquires tasks to be processed, it can query the tasks that need scheduling through the task identifier and then take the tasks whose batch state is waiting for processing as the tasks to be scheduled.
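A hypothetical in-memory rendering of the task library columns listed above, together with the query for tasks whose batch state is waiting for processing. In the patent the task library lives on the task information server, so the record shape and the lookup below are illustrative assumptions only.

    import java.util.List;
    import java.util.stream.Collectors;

    public class TaskLibrarySketch {

        public enum BatchState { WAITING, PROCESSING, COMPLETED }

        /** One entry of the task library: batch number, batch type, task identifier,
         *  detail number and batch state, mirroring the columns listed above. */
        public record TaskRecord(String batchNumber, String batchType, String taskId,
                                 String detailNumber, BatchState batchState) { }

        /** Fetch up to `schedulingTaskAmount` waiting tasks for one task identifier. */
        public static List<TaskRecord> fetchPending(List<TaskRecord> library,
                                                    String taskId,
                                                    int schedulingTaskAmount) {
            return library.stream()
                    .filter(t -> t.taskId().equals(taskId))
                    .filter(t -> t.batchState() == BatchState.WAITING)
                    .limit(schedulingTaskAmount)
                    .collect(Collectors.toList());
        }
    }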
An embodiment of the present invention provides a method for task scheduling, where the method may be executed by a scheduling node in a system architecture shown in fig. 1, and as shown in fig. 2, the method includes:
S201: receiving a task scheduling request, and inquiring a configuration parameter table of the scheduling node based on the address information of the scheduling node to obtain a state value of the scheduling node.
After receiving the scheduling request, the scheduling node may first query a configuration parameter table of the scheduling node based on address information thereof to obtain a state value of the scheduling node.
The configuration parameter table of the scheduling node may be stored in the scheduling parameter server, and in this step, the scheduling node may send a query request including address information to the scheduling parameter server, so as to query the corresponding configuration parameter table through the address information, thereby obtaining the state value. Or the scheduling parameter server may periodically send the configuration parameter table to the scheduling node, and the scheduling node may store the configuration parameter table, so that the state value of the scheduling node is directly queried from the stored configuration parameter table based on the address information in this step. The status value is used to indicate whether the scheduling node is the master node, and in the embodiment of the present invention, the status value may be configured through the scheduling parameter table to indicate which scheduling node is the master node.
S202: judging whether the state value is consistent with a preset target value or not.
The preset target value is a state value when the scheduling node is the master node, so that in the step, after the state value of the scheduling node is inquired, whether the scheduling node is the master node or not is judged by judging whether the state value is consistent with the preset target value or not. If the state value is consistent with the preset target value, the scheduling node is a main node, and task scheduling can be carried out; if the state value is not consistent with the preset target value, the scheduling node is not the main node and cannot perform task scheduling.
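Steps S201 and S202 reduce to a lookup plus a comparison, sketched below. The configuration parameter table is collapsed into a map from address information to state value, and using 1 as the preset target value is an assumed convention for the example.

    import java.util.Map;

    public class MasterNodeCheck {

        private static final int PRESET_TARGET_VALUE = 1; // assumed "master" value

        // Returns true only for the scheduling node whose configured state value
        // matches the preset target value; every other node ignores the request.
        public static boolean isMasterNode(Map<String, Integer> stateValueByAddress,
                                           String schedulingNodeAddress) {
            Integer stateValue = stateValueByAddress.get(schedulingNodeAddress);
            return stateValue != null && stateValue == PRESET_TARGET_VALUE;
        }
    }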
It should be noted that, in the embodiment of the present invention, in order to avoid the contention or the repeated scheduling of tasks during task scheduling, only one scheduling node is configured as a master node in a scheduling node cluster at the same time.
S203: if not, ignoring the task scheduling request; if so, acquiring the task identifier in the task scheduling request, inquiring the configuration parameter table based on the address information and the task identifier to obtain the scheduling task amount of the scheduling node, acquiring from the task library a number of tasks to be processed corresponding to the task identifier equal to the scheduling task amount, and distributing the tasks to be processed to one or more task processing nodes.
If not, the scheduling node is not the master node, the task scheduling request may not be processed, and the task scheduling request is ignored. If so, the scheduling node is indicated to be the master node and needs to execute task scheduling, so that a task identifier in a task scheduling request can be obtained first to determine which processing step corresponds to the task to be processed, and then a configuration parameter table can be queried based on the address information and the task identifier to obtain the scheduling task amount of the scheduling node, that is, the number of the tasks to be processed which the scheduling node needs to schedule at this time is queried. After the task identifier and the scheduling task amount are inquired, the tasks to be processed can be obtained from the task library, the tasks to be processed are tasks corresponding to the task identifier, and the number of the tasks to be processed is the scheduling task amount. After the scheduling node acquires the tasks to be processed, the tasks to be processed can be distributed to each task processing node, so that the task processing nodes can execute the tasks to be processed conveniently.
In the embodiment of the present invention, the to-be-processed task allocated to each task processing node by the scheduling node may be specifically executed as: inquiring the number of concurrent batches, the number of task processing nodes and the address information of each task processing node corresponding to the address information and the task identification; according to the number of the task processing nodes, calling an allocation model to determine the tasks to be processed corresponding to the task processing nodes, and splitting the tasks to be processed corresponding to the task processing nodes into a plurality of batches of tasks in a concurrent batch; and sending the corresponding batch tasks to the task processing nodes based on the address information of the task processing nodes.
After the scheduling node acquires the tasks to be processed, the scheduling node needs to determine the tasks to be processed corresponding to each task processing node, namely the tasks to be processed which are required to be processed by each task processing node. For example, the distribution model may use an average distribution algorithm, that is, to-be-processed tasks are averagely distributed to each task processing node, so that the to-be-processed tasks may be determined for each task processing node in turn. After the to-be-processed tasks corresponding to the task processing nodes are determined, the to-be-processed tasks corresponding to the task processing nodes can be split based on the number of concurrent batches, and the number of concurrent batches of tasks is obtained. The configuration parameter table of the dispatching node may include the number of the concurrent batches, and in this step, the number of the concurrent batches may be queried from the configuration parameter table of the dispatching node, so that the to-be-processed task of each task processing node may be split. After the batch task corresponding to each task processing node is obtained, the batch task can be sent to each task processing node based on the address information of the task processing node. The scheduling node can send the corresponding task to be processed to the task processing node through the task processing request.
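A sketch of the allocation step just described, assuming the average-distribution model mentioned in the text: pending tasks are dealt round-robin over the task processing nodes, each node's share is split into the configured number of concurrent batches, and a placeholder method stands in for sending the task processing request.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class TaskAllocator {

        public static Map<String, List<List<String>>> allocate(List<String> pendingTasks,
                                                               List<String> nodeAddresses,
                                                               int concurrentBatchCount) {
            // 1. Average distribution: deal the pending tasks round-robin over the nodes.
            Map<String, List<String>> perNode = new HashMap<>();
            for (String address : nodeAddresses) perNode.put(address, new ArrayList<>());
            for (int i = 0; i < pendingTasks.size(); i++) {
                perNode.get(nodeAddresses.get(i % nodeAddresses.size())).add(pendingTasks.get(i));
            }

            // 2. Split each node's share into the configured number of concurrent batches
            //    and dispatch the batches to that node.
            Map<String, List<List<String>>> batchesPerNode = new HashMap<>();
            perNode.forEach((address, tasks) -> {
                List<List<String>> batches = new ArrayList<>();
                int batchSize = Math.max(1, (int) Math.ceil((double) tasks.size() / concurrentBatchCount));
                for (int start = 0; start < tasks.size(); start += batchSize) {
                    batches.add(new ArrayList<>(tasks.subList(start, Math.min(start + batchSize, tasks.size()))));
                }
                batchesPerNode.put(address, batches);
                sendTaskProcessingRequest(address, batches);
            });
            return batchesPerNode;
        }

        private static void sendTaskProcessingRequest(String address, List<List<String>> batches) {
            // Placeholder: a real system would send a task processing request to the
            // task processing node at `address` (for example over HTTP).
            System.out.println("dispatch " + batches.size() + " batch(es) to " + address);
        }
    }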
In the embodiment of the invention, when the number and the address information of the task processing nodes are queried, the valid task processing nodes, namely the task processing nodes that will process the tasks to be processed, can be determined first. To this end, the configuration information of each task processing node can be queried first, specifically from the scheduling parameter node, to obtain the state identifier of each task processing node. It is then judged whether the state identifier of each task processing node equals a preset identifier, the preset identifier being the state identifier of a valid task processing node. If the state identifier is the preset identifier, the task processing node is determined to be a valid task processing node, namely an effective processing node; if not, it is determined to be an invalid processing node. After the effective processing nodes are determined, only their number and address information need to be acquired, so that the tasks to be processed are distributed to valid task processing nodes only.
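The valid-node filtering described above amounts to keeping the nodes whose state identifier equals the preset identifier and reading off their number and addresses; the record shape and the preset identifier value of 1 below are assumptions of the sketch.

    import java.util.List;
    import java.util.stream.Collectors;

    public class ValidNodeFilter {

        /** Queried configuration of one task processing node. */
        public record NodeConfig(String address, int stateFlag) { }

        private static final int PRESET_VALID_FLAG = 1; // assumed "valid" identifier

        // Addresses of the effective (valid) processing nodes only.
        public static List<String> validNodeAddresses(List<NodeConfig> allNodes) {
            return allNodes.stream()
                    .filter(n -> n.stateFlag() == PRESET_VALID_FLAG)
                    .map(NodeConfig::address)
                    .collect(Collectors.toList());
        }

        // Number of effective processing nodes.
        public static int validNodeCount(List<NodeConfig> allNodes) {
            return validNodeAddresses(allNodes).size();
        }
    }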
It should be noted that, in the embodiment of the present invention, the configuration parameter table of the task processing node may further include other configuration parameters of the task processing node, such as a maximum thread number, a maximum processing task number, a processable task type, and the like. The scheduling node may also query the configuration parameters of each task processing node when allocating the to-be-processed task to each task processing node, so that the to-be-processed task may be allocated based on the configuration parameters.
In the embodiment of the invention, the scheduling of the tasks to be processed is realized through the scheduling node, the tasks to be processed are distributed to the task processing nodes, and the task processing nodes can judge whether to process the tasks to be processed or not based on the operation parameter values after receiving the tasks to be processed, so that the problem of unbalanced load caused by the fact that batch tasks are concentrated in one task processing node for processing and other task processing nodes cannot be fully loaded for operation can be avoided, and the task processing efficiency is improved.
Referring to the system architecture shown in fig. 1, an embodiment of the present invention provides a method for task scheduling, where the method is executable by a task processing node in the system architecture shown in fig. 1, and as shown in fig. 3, the method includes:
S301: receiving a task processing request sent by the scheduling node, and acquiring a task to be processed in the task processing request.
After the scheduling node determines the to-be-processed task corresponding to the task processing node, the to-be-processed task can be sent to the task processing node through the task processing request. After receiving the task processing request, the task processing node may obtain a task to be processed in the task processing request. The specific task to be processed may include various information of the task to be processed, such as a batch number, a batch type, a task identifier, a detail number, and the like of the task to be processed, so that the task processing node may execute the task to be processed.
S302: acquiring the state identifier of the task processing node from a parameter configuration table of the task processing node based on the address information of the task processing node, so as to judge whether the state identifier of the task processing node is a preset identifier.
After the task processing node acquires the task to be processed, it first needs to judge whether it may process the task, so the state identifier of the task processing node can be acquired from the parameter configuration table of the task processing node based on its address information, and whether the task processing node is a valid task processing node is then determined by judging whether the state identifier is the preset identifier.
If the state identifier of the task processing node is a preset identifier, the task processing node is a valid task processing node, namely a valid processing node; and if the state identifier of the task processing node is not the preset identifier, indicating that the task processing node is not a valid task processing node, namely an invalid processing node.
It should be noted that the configuration parameter table of the task processing node is stored in the scheduling parameter server, and in this step, the task processing node may send a query request including address information to the scheduling parameter server, so as to obtain the state identifier by querying the corresponding configuration parameter table. Or the scheduling parameter server may periodically send the configuration parameter table to the task processing node, and the task processing node may store the configuration parameter table, so that the state identifier of the task processing node is directly queried from the stored configuration parameter table based on the address information in this step.
S303: if not, discarding the task to be processed; if so, inquiring the operation parameter value of the task processing node, judging whether the operation parameter value is smaller than a preset operation parameter threshold value, if not, discarding the task to be processed, and if so, locking the task to be processed.
If not, the task processing node is not an effective processing node, cannot process the task to be processed, and may directly discard it. If so, the task processing node is an effective processing node and may process the task to be processed; at this point it still needs to be determined whether the node currently has the capacity to process the task. In the embodiment of the invention, whether the task processing node has that capacity is determined by whether its operation parameter value is smaller than the preset operation parameter threshold value. If the operation parameter value is smaller than the preset operation parameter threshold value, the task processing node can still process tasks, and the task to be processed can be locked; if the operation parameter value is not smaller than the preset operation parameter threshold value, the task processing node cannot process any more tasks, and the task to be processed can be directly discarded.
In the embodiment of the present invention, the operation parameter value may be a thread usage number, and the preset operation parameter threshold value may be a maximum thread number of the task processing node, so the step of querying the operation parameter value of the task processing node and determining whether the operation parameter value is smaller than the preset operation parameter threshold value may be specifically performed as: and inquiring the thread use number of the task processing nodes, and judging whether the thread use number is smaller than a preset operation parameter threshold value. The operating parameter values of the task processing nodes may be obtained from operating parameters stored in the task processing nodes.
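Taking the thread usage number as the operation parameter value, the check above might look like the following; reading the count from a ThreadPoolExecutor is only one possible way to obtain it and is not something the patent specifies.

    import java.util.concurrent.ThreadPoolExecutor;

    public class ThreadUsageCheck {

        // Accept new pending tasks only while the node's thread usage number is
        // below its configured maximum thread count (the preset threshold).
        public static boolean hasSpareThreads(ThreadPoolExecutor workerPool, int maxThreadCount) {
            int threadsInUse = workerPool.getActiveCount();
            return threadsInUse < maxThreadCount;
        }
    }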
It should be noted that, after it is determined that the thread usage number is smaller than the preset operation parameter threshold value, it may further be determined whether the task processing node can process all of the received tasks to be processed. In general, the scheduling node sends tasks to be processed to a task processing node in batches, and the task processing node uses one thread to process each batch of tasks. Therefore, after the thread usage number is determined to be smaller than the preset operation parameter threshold value, the number of concurrent batches of the tasks to be processed in the task processing request is obtained to determine how many threads are needed to process them; the difference between the preset threshold value and the thread usage number is then calculated to determine how many unused threads the task processing node still has; and whether the node can process all of the tasks to be processed is judged by comparing the difference value with the number of concurrent batches. If the difference value is larger than the number of concurrent batches, all of the tasks to be processed can be handled, and they are locked. If not, not all of the tasks to be processed can be handled, so a number of batch tasks equal to the difference value is screened out of the tasks to be processed and determined as target tasks, i.e. the batch tasks that can actually be processed are selected; the task processing node then locks the target tasks and discards the other tasks to be processed, namely the ones it cannot process.
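A sketch of the capacity check just described: the spare-thread count (threshold minus threads in use) is compared with the number of received batches, and only as many batches as fit are kept as target tasks; treating a spare count equal to the batch count as sufficient is an assumption of this sketch.

    import java.util.ArrayList;
    import java.util.List;

    public class BatchCapacityCheck {

        // Returns the batches the node should lock; anything not returned is discarded
        // (and, being unlocked, can be rescheduled later). Batches are modelled as
        // lists of task identifiers.
        public static List<List<String>> selectBatchesToLock(List<List<String>> receivedBatches,
                                                             int threadsInUse,
                                                             int maxThreadCount) {
            int spareThreads = maxThreadCount - threadsInUse;
            if (spareThreads >= receivedBatches.size()) {
                return receivedBatches;                 // every batch fits, lock them all
            }
            // Keep only the batches that can actually be run on the spare threads.
            return new ArrayList<>(receivedBatches.subList(0, Math.max(0, spareThreads)));
        }
    }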
It should be noted that, since a task processing node discards tasks to be processed when it determines that it cannot process them, operation and maintenance staff can query the processing completion rate of each task processing node (the number of processed tasks divided by the number of all received tasks to be processed). If the processing completion rate is too low, task processing nodes can be added to the task processing node cluster at any time, and the added nodes are registered with the scheduling parameter server; if the processing completion rate is too high, task processing nodes can be removed from the cluster at any time, so that the number of task processing nodes in the cluster can be adjusted dynamically.
The mode of locking the to-be-processed tasks by the task processing node may be to send a to-be-processed task locking instruction to the task library to update the batch state of the corresponding to-be-processed tasks in the task library to be in processing, so as to avoid the tasks from being repeatedly scheduled and executed.
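Locking by updating the batch state can be sketched as an atomic waiting-to-processing transition, for example over an in-memory map as below; in the patent the update is a lock instruction sent to the task library on the task information server, so this structure is only illustrative.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class TaskLocker {

        public enum BatchState { WAITING, PROCESSING, COMPLETED }

        private final Map<String, BatchState> batchStateByNumber = new ConcurrentHashMap<>();

        public void register(String batchNumber) {
            batchStateByNumber.put(batchNumber, BatchState.WAITING);
        }

        /** Atomically flips WAITING -> PROCESSING; returns true only for the caller
         *  that actually acquired the lock, so a batch cannot be executed twice. */
        public boolean lock(String batchNumber) {
            return batchStateByNumber.replace(batchNumber, BatchState.WAITING, BatchState.PROCESSING);
        }

        public void complete(String batchNumber) {
            batchStateByNumber.put(batchNumber, BatchState.COMPLETED);
        }
    }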
In the embodiment of the invention, the task to be processed is scheduled and distributed to each task processing node through the scheduling node, and each task processing node can judge and execute the processing of the task to be processed based on the operation parameter value after receiving the task to be processed, so that the problem of load imbalance caused by the fact that batch tasks are concentrated in one task processing node for processing and other task processing nodes cannot be operated fully can be avoided, and the task processing efficiency is improved.
Based on the embodiments shown in fig. 2 and fig. 3, the present invention provides an implementation manner to explain a specific execution process of task scheduling. As shown in fig. 4, the method includes:
S401: the scheduling node receives the task scheduling request, inquires the configuration parameter table of the scheduling node based on the address information of the scheduling node, and obtains the state value of the scheduling node.
S402: the scheduling node judges whether the state value is consistent with a preset target value or not.
S403: if yes, the scheduling node obtains the task identifier in the task scheduling request, inquires the configuration parameter table based on the address information and the task identifier to obtain the scheduling task amount of the scheduling node, obtains from the task library a number of to-be-processed tasks corresponding to the task identifier equal to the scheduling task amount, and distributes the to-be-processed tasks to the task processing nodes.
In the embodiment of the present invention, the description takes as an example the case where the state value is consistent with the preset target value; when the state value is not consistent with the preset target value, processing may be performed as described in the embodiment shown in fig. 2.
S404: the task processing node receives the task processing request sent by the scheduling node and acquires the task to be processed in the task processing request.
S405: the task processing node acquires the state identifier of the task processing node from the parameter configuration table of the task processing node based on the address information of the task processing node so as to judge whether the state identifier of the task processing node is a preset identifier.
S406: if so, the task processing node inquires the operation parameter value of the task processing node, judges whether the operation parameter value is smaller than a preset operation parameter threshold value, and locks the task to be processed if so.
In the embodiment of the present invention, the description takes as an example the case where the state identifier of the task processing node is the preset identifier and the operation parameter value is smaller than the preset operation parameter threshold value; when the state identifier is not the preset identifier, or the operation parameter value is not smaller than the preset operation parameter threshold value, processing may be performed as described in the embodiment shown in fig. 3.
It should be noted that, in the embodiment of the present invention, data processing of each step may be as described in the embodiment shown in fig. 2 and fig. 3, and is not described herein again.
In the embodiment of the invention, the task to be processed is scheduled and distributed to each task processing node through the scheduling node, and each task processing node can judge and execute the processing of the task to be processed based on the operation parameter value after receiving the task to be processed, so that the problem of unbalanced load caused by the fact that each task processing node cannot run fully can be avoided, and the task processing efficiency is improved.
In order to solve the problems in the prior art, an embodiment of the present invention provides a device 500 for task scheduling, which is disposed at a scheduling node, and as shown in fig. 5, the device 500 includes:
a receiving unit 501, configured to receive a task scheduling request, and query a configuration parameter table of the scheduling node based on address information of the scheduling node to obtain a state value of the scheduling node;
a determining unit 502, configured to determine whether the state value is consistent with a preset target value;
an ignoring unit 503, configured to ignore the task scheduling request if the state value is not consistent with the preset target value;
and the scheduling unit 504 is configured to, if yes, obtain a task identifier in the task scheduling request, query the configuration parameter table to obtain a scheduling task amount of the scheduling node based on the address information and the task identifier, obtain a plurality of to-be-processed tasks corresponding to the scheduling task amount from a task library, and allocate the to-be-processed tasks to one or more task processing nodes.
It should be understood that the manner of implementing the embodiment of the present invention is the same as the manner of implementing the embodiment shown in fig. 2, and the description thereof is omitted.
In an implementation manner of the embodiment of the present invention, the scheduling unit 504 is specifically configured to:
inquiring the concurrent batch number corresponding to the address information and the task identifier, the number of the task processing nodes and the address information of each task processing node;
according to the number of the task processing nodes, calling an allocation model to determine the tasks to be processed corresponding to each task processing node, and splitting the tasks to be processed corresponding to each task processing node into a number of batch tasks equal to the number of concurrent batches;
and sending the corresponding batch task to each task processing node based on the address information of each task processing node.
In another implementation manner of the embodiment of the present invention, the scheduling unit 504 is specifically configured to:
querying configuration information of each task processing node to obtain a state identifier of each task processing node;
judging whether the state identifier of each task processing node is a preset identifier or not;
if yes, determining the task processing node as an effective processing node; if not, determining the task processing node as an invalid processing node;
and querying the number of the effective processing nodes and the address information of each effective task processing node.
In another implementation manner of the embodiment of the present invention, the receiving unit 501 is configured to send a query request to a scheduling parameter node, so that the scheduling parameter node queries a configuration parameter table of the scheduling node based on address information of the scheduling node in the query request, and returns the configuration parameter table to the scheduling node.
It should be understood that the manner of implementing the embodiment of the present invention is the same as the manner of implementing the embodiment shown in fig. 2, and the description thereof is omitted.
In the embodiment of the invention, the task to be processed is scheduled and distributed to each task processing node through the scheduling node, and each task processing node can judge and execute the processing of the task to be processed based on the operation parameter value after receiving the task to be processed, so that the problem of unbalanced load caused by the fact that each task processing node cannot run fully can be avoided, and the task processing efficiency is improved.
In order to solve the problems in the prior art, an embodiment of the present invention provides a device 600 for task scheduling, which is disposed at a task processing node, and as shown in fig. 6, the device 600 includes:
a receiving unit 601, configured to receive a task processing request sent by a scheduling node, and obtain a to-be-processed task in the task processing request;
a determining unit 602, configured to obtain a state identifier of a task processing node from a parameter configuration table of the task processing node based on address information of the task processing node, so as to determine whether the state identifier of the task processing node is a preset identifier;
a discarding unit 603, configured to discard the to-be-processed task if the state identifier of the task processing node is not the preset identifier;
a processing unit 604, configured to, if the state identifier of the task processing node is the preset identifier, query the operation parameter value of the task processing node and judge whether the operation parameter value is smaller than a preset operation parameter threshold, discard the to-be-processed task if not, and lock the to-be-processed task if so.
It should be understood that the manner of implementing the embodiment of the present invention is the same as the manner of implementing the embodiment shown in fig. 3, and the description thereof is omitted.
In an implementation manner of the embodiment of the present invention, the processing unit 604 is specifically configured to:
and inquiring the thread usage number of the task processing node, and judging whether the thread usage number is smaller than a preset operation parameter threshold value.
In another implementation manner of the embodiment of the present invention, the processing unit 604 is specifically configured to:
acquiring the quantity of the concurrent batches of the tasks to be processed in the task processing request;
calculating a difference value between the thread usage number and the preset threshold value;
judging whether the difference value is larger than the quantity of the concurrent batches or not;
if so, locking the task to be processed;
and if not, screening a number of batch tasks equal to the difference value from the tasks to be processed, determining the screened batch tasks as target tasks, locking the target tasks, and discarding the other tasks except the target tasks in the tasks to be processed.
It should be understood that the embodiment of the present invention is implemented in the same manner as the embodiment shown in fig. 3 or fig. 4, and is not repeated herein.
In the embodiment of the invention, the task to be processed is scheduled and distributed to each task processing node through the scheduling node, and each task processing node can judge and execute the processing of the task to be processed based on the operation parameter value after receiving the task to be processed, so that the problem of unbalanced load caused by the fact that each task processing node cannot run fully can be avoided, and the task processing efficiency is improved.
According to an embodiment of the present invention, an electronic device and a readable storage medium are also provided.
The electronic device of the embodiment of the invention comprises: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the processor, and the instructions are executed by the at least one processor to cause the at least one processor to execute the method for task scheduling provided by the embodiment of the invention.
Fig. 7 shows an exemplary system architecture 700 to which the method or apparatus for task scheduling according to an embodiment of the present invention may be applied.
As shown in fig. 7, the system architecture 700 may include terminal devices 701, 702, 703, a network 704, and a server 705. The network 704 serves to provide a medium for communication links between the terminal devices 701, 702, 703 and the server 705. Network 704 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
A user may use the terminal devices 701, 702, 703 to interact with a server 705 over a network 704, to receive or send messages or the like. Various client applications may be installed on the terminal devices 701, 702, 703.
The terminal devices 701, 702, 703 may be, but are not limited to, smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 705 may be a server that provides various services; for example, it may analyze and process received data such as batch tasks and feed back a processing result (e.g., a task execution result, merely an example) to the terminal devices.
It should be noted that the method for task scheduling provided by the embodiment of the present invention is generally executed by the server 705, and accordingly, the device for task scheduling is generally disposed in the server 705.
It should be understood that the number of terminal devices, networks, and servers in fig. 7 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Referring now to FIG. 8, a block diagram of a computer system 800 suitable for use in implementing embodiments of the present invention is shown. The computer system illustrated in FIG. 8 is only one example and should not impose any limitations on the scope of use or functionality of embodiments of the invention.
As shown in fig. 8, the computer system 800 includes a Central Processing Unit (CPU)801 that can perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)802 or a program loaded from a storage section 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data necessary for the operation of the system 800 are also stored. The CPU 801, ROM 802, and RAM 803 are connected to each other via a bus 804. An input/output (I/O) interface 805 is also connected to bus 804.
The following components are connected to the I/O interface 805: an input section 806 including a keyboard, a mouse, and the like; an output section 807 including a display such as a cathode ray tube (CRT) or a liquid crystal display (LCD), a speaker, and the like; a storage section 808 including a hard disk and the like; and a communication section 809 including a network interface card such as a LAN card, a modem, or the like. The communication section 809 performs communication processing via a network such as the Internet. A drive 810 is also connected to the I/O interface 805 as necessary. A removable medium 811, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 810 as necessary, so that a computer program read out therefrom is installed into the storage section 808 as necessary.
In particular, according to the embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication section 809 and/or installed from the removable medium 811. The computer program executes the above-described functions defined in the system of the present invention when executed by the Central Processing Unit (CPU) 801.
It should be noted that the computer readable medium shown in the present invention can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present invention, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a unit, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present invention may be implemented by software or hardware. The described units may also be provided in a processor, which may be described as: a processor comprising a receiving unit, a judging unit, an ignoring unit, and a scheduling unit. The names of these units do not, in some cases, constitute a limitation of the units themselves; for example, the receiving unit may also be described as "a unit that receives a task scheduling request".
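For the scheduling-node side, the unit decomposition mentioned here could be expressed along the following lines; this is only a sketch under assumed names (SchedulingNode, fetch, task_amount) and not a disclosed implementation:

```python
# Illustrative decomposition of a scheduling node into the four units named above.
# Class, method, and key names are assumptions for this sketch, not disclosed identifiers.

class SchedulingNode:
    def __init__(self, address, config_table, task_library, target_value="ENABLED"):
        self.address = address
        self.config_table = config_table     # configuration parameter table of this node
        self.task_library = task_library     # any object exposing fetch(task_id, amount)
        self.target_value = target_value     # preset target value

    def receive(self, request):
        """Receiving unit: query this node's state value from its configuration parameter table."""
        return self.config_table[self.address]["state"]

    def judge(self, state_value):
        """Judging unit: check the state value against the preset target value."""
        return state_value == self.target_value

    def handle(self, request):
        state_value = self.receive(request)
        if not self.judge(state_value):
            return []                        # ignoring unit: drop the task scheduling request
        # Scheduling unit: look up the scheduling task amount for this task identifier and
        # fetch that many pending tasks; they can then be spread over the task processing
        # nodes, for instance with the distribute() sketch shown earlier.
        amount = self.config_table[self.address]["task_amount"][request["task_id"]]
        return self.task_library.fetch(request["task_id"], amount)
```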
As another aspect, the present invention also provides a computer-readable medium that may be contained in the apparatus described in the above embodiments; or may be separate and not incorporated into the device. The computer readable medium carries one or more programs which, when executed by a device, cause the device to perform the method for task scheduling provided by the present invention.
The above-described embodiments should not be construed as limiting the scope of the invention. Those skilled in the art will appreciate that various modifications, combinations, sub-combinations, and substitutions can occur, depending on design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (16)

1. A method for task scheduling, applied to a scheduling node, comprising: receiving a task scheduling request, and querying a configuration parameter table of the scheduling node based on address information of the scheduling node to obtain a state value of the scheduling node; determining whether the state value is consistent with a preset target value; if not, ignoring the task scheduling request; and if so, obtaining a task identifier in the task scheduling request, querying the configuration parameter table based on the address information and the task identifier to obtain a scheduling task amount of the scheduling node, obtaining, from a task library, the scheduling task amount of to-be-processed tasks corresponding to the task identifier, and allocating the to-be-processed tasks to one or more task processing nodes.

2. The method according to claim 1, wherein the allocating the to-be-processed tasks to one or more task processing nodes comprises: querying a concurrent batch quantity corresponding to the address information and the task identifier, the number of the task processing nodes, and address information of each task processing node; calling an allocation model according to the number of the task processing nodes to determine the to-be-processed tasks corresponding to each task processing node, and splitting the to-be-processed tasks corresponding to each task processing node into the concurrent batch quantity of batch tasks; and sending the corresponding batch tasks to each task processing node based on the address information of each task processing node.

3. The method according to claim 2, wherein the querying the number of the task processing nodes and the address information of each task processing node comprises: querying configuration information of each task processing node to obtain a state identifier of each task processing node; determining whether the state identifier of each task processing node is a preset identifier; if so, determining the task processing node as a valid processing node, and if not, determining the task processing node as an invalid processing node; and querying the number of the valid processing nodes and the address information of each valid processing node.

4. The method according to claim 1, wherein the querying the configuration parameter table of the scheduling node comprises: sending a query request to a scheduling parameter node, so that the scheduling parameter node queries the configuration parameter table of the scheduling node based on the address information of the scheduling node in the query request and returns a result to the scheduling node.

5. A method for task scheduling, applied to a task processing node, comprising: receiving a task processing request sent by a scheduling node, and obtaining to-be-processed tasks in the task processing request; obtaining a state identifier of the task processing node from a parameter configuration table of the task processing node based on address information of the task processing node, to determine whether the state identifier of the task processing node is a preset identifier; if not, discarding the to-be-processed tasks; and if so, querying an operation parameter value of the task processing node and determining whether the operation parameter value is smaller than a preset operation parameter threshold; if not, discarding the to-be-processed tasks, and if so, locking the to-be-processed tasks.

6. The method according to claim 5, wherein the querying the operation parameter value of the task processing node and determining whether the operation parameter value is smaller than the preset operation parameter threshold comprises: querying a thread usage number of the task processing node, and determining whether the thread usage number is smaller than the preset operation parameter threshold.

7. The method according to claim 6, wherein the locking the to-be-processed tasks comprises: obtaining a concurrent batch quantity of the to-be-processed tasks in the task processing request; calculating a difference between the thread usage number and the preset operation parameter threshold; determining whether the difference is greater than the concurrent batch quantity; if so, locking the to-be-processed tasks; and if not, selecting from the to-be-processed tasks a number of batch tasks equal to the difference as target tasks, locking the target tasks, and discarding the tasks other than the target tasks among the to-be-processed tasks.

8. A device for task scheduling, disposed at a scheduling node, comprising: a receiving unit, configured to receive a task scheduling request, and query a configuration parameter table of the scheduling node based on address information of the scheduling node to obtain a state value of the scheduling node; a judging unit, configured to determine whether the state value is consistent with a preset target value; an ignoring unit, configured to ignore the task scheduling request if not; and a scheduling unit, configured to, if so, obtain a task identifier in the task scheduling request, query the configuration parameter table based on the address information and the task identifier to obtain a scheduling task amount of the scheduling node, obtain, from a task library, the scheduling task amount of to-be-processed tasks corresponding to the task identifier, and allocate the to-be-processed tasks to one or more task processing nodes.

9. The device according to claim 8, wherein the scheduling unit is specifically configured to: query a concurrent batch quantity corresponding to the address information and the task identifier, the number of the task processing nodes, and address information of each task processing node; call an allocation model according to the number of the task processing nodes to determine the to-be-processed tasks corresponding to each task processing node, and split the to-be-processed tasks corresponding to each task processing node into the concurrent batch quantity of batch tasks; and send the corresponding batch tasks to each task processing node based on the address information of each task processing node.

10. The device according to claim 9, wherein the scheduling unit is specifically configured to: query configuration information of each task processing node to obtain a state identifier of each task processing node; determine whether the state identifier of each task processing node is a preset identifier; if so, determine the task processing node as a valid processing node, and if not, determine the task processing node as an invalid processing node; and query the number of the valid processing nodes and the address information of each valid processing node.

11. The device according to claim 8, wherein the receiving unit is specifically configured to send a query request to a scheduling parameter node, so that the scheduling parameter node queries the configuration parameter table of the scheduling node based on the address information of the scheduling node in the query request and returns a result to the scheduling node.

12. A device for task scheduling, disposed at a task processing node, comprising: a receiving unit, configured to receive a task processing request sent by a scheduling node and obtain to-be-processed tasks in the task processing request; a judging unit, configured to obtain a state identifier of the task processing node from a parameter configuration table of the task processing node based on address information of the task processing node, to determine whether the state identifier of the task processing node is a preset identifier; a discarding unit, configured to discard the to-be-processed tasks if not; and a processing unit, configured to, if so, query an operation parameter value of the task processing node and determine whether the operation parameter value is smaller than a preset operation parameter threshold, discard the to-be-processed tasks if not, and lock the to-be-processed tasks if so.

13. The device according to claim 12, wherein the processing unit is specifically configured to: query a thread usage number of the task processing node, and determine whether the thread usage number is smaller than the preset operation parameter threshold.

14. The device according to claim 13, wherein the processing unit is specifically configured to: obtain a concurrent batch quantity of the to-be-processed tasks in the task processing request; calculate a difference between the thread usage number and the preset operation parameter threshold; determine whether the difference is greater than the concurrent batch quantity; if so, lock the to-be-processed tasks; and if not, select from the to-be-processed tasks a number of batch tasks equal to the difference as target tasks, lock the target tasks, and discard the tasks other than the target tasks among the to-be-processed tasks.

15. An electronic device, comprising: one or more processors; and a storage device configured to store one or more programs, wherein when the one or more programs are executed by the one or more processors, the one or more processors implement the method according to any one of claims 1-7.

16. A computer-readable medium on which a computer program is stored, wherein when the program is executed by a processor, the method according to any one of claims 1-7 is implemented.