
WO2014134757A1 - Rule engine of parallel business and implementation method therefor - Google Patents


Info

Publication number
WO2014134757A1
WO2014134757A1 (PCT/CN2013/000365)
Authority
WO
WIPO (PCT)
Prior art keywords
parallel
engine
business
branch
mapping
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2013/000365
Other languages
French (fr)
Chinese (zh)
Inventor
徐国庆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to PCT/CN2014/000218 priority Critical patent/WO2014154016A1/en
Publication of WO2014134757A1 publication Critical patent/WO2014134757A1/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 8/00: Arrangements for software engineering
    • G06F 8/40: Transformation of program code
    • G06F 8/41: Compilation
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44: Arrangements for executing specific programs


Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Devices For Executing Special Programs (AREA)

Abstract

Provided are a parallel business rule engine and an implementation method therefor, comprising: declaring four branch-selection modes (selection, union, competition, and simultaneous) in a parallel execution body of the engine; having the parallel execution body generate and find branches by mapping; executing the branches; and returning a result. The invention adds a parallel capability on top of the existing rule engine; parallel computation can be configured quickly by the engine user, meeting needs such as backup, system integration, and high-performance computing. The technique also applies to a parallel workflow engine.

Description

Parallel business rule engine and implementation method therefor

Technical field

[0001] The present invention relates to the field of computer software development and applies to system integration, software system configuration, fault tolerance, and high-performance computing. Specifically, the design provides effective support for parallel computing by adding a parallel execution body to the rule engine and declaring, in that body, how branches are selected and how they are generated and found.

Background

[0002] A rule engine represents the rules of a business process as easily understood scripts. The scripts need no compilation; they are read and interpreted by the rule engine framework while the program runs, and the application invokes the rule engine interface to trigger execution of the corresponding rules, thereby separating business logic from its programmed implementation.

[0003] A rule engine differs from a rule discovery engine: the rule engine belongs to the program configuration category, whereas a rule discovery engine discovers regularities in a business process through pattern matching and belongs to the pattern recognition category.

[0004] A rule engine comprises the following parts: rule scripts (requiring no compilation), the rule engine framework, and the applications that invoke the rule engine. A typical rule script comprises execution bodies and rule bodies; a rule body contains a trigger condition and a call to an execution body and the method to be executed.

[0005] A workflow is a series of consecutively connected steps. A workflow engine represents the relationships between these steps in an easily understood script; the workflow engine framework reads and interprets the steps, and at run time the application invokes the corresponding workflow steps through the workflow engine interface and obtains the results. The workflow engine thus separates the workflow from the concrete programming.

[0006] Parallel computing (operation) in the present invention means the following: because data is distributed across different logical addresses, it must be operated on separately by the same or different subroutine modules. A combination of these different subroutines or different logical addresses is called a branch. One operation (computation) on the data must be selected from one or more possible branches; the operations (or computations) are parallel in space, hence the term parallel computing.

[0007] Business rule engines and workflow engines have largely achieved effective configuration of the variable parts of a system, but so far no engine conveniently and quickly provides effective support for parallel computing such as system data backup and joint (union) queries.

Summary of the invention

[0008] To fill this gap in parallel computing for business rule engines, the parallel business rule engine uses parallel execution bodies in its configuration file. One parallel operation on data can be achieved by using different subroutines or different logical addresses; the applicant calls a combination of these different subroutines or different logical addresses a branch.

[0009] A parallel execution body operates on data in two ways: data reading, and data writing (including modification and deletion). Data reading supports the branch-selection modes union, selection, and competition: a union read reads data from a series of branches and finally assembles it into one result set; a selection read assumes every branch holds the same data, and the execution body (randomly) selects one branch to read the result set from; a competition read also assumes every branch holds the same data, reads from every branch, but keeps only the fastest reply and discards the rest. Data writing supports the branch-selection modes union and simultaneous: a union write splits the data set across the branches according to a rule, so each branch receives incomplete data but the sum of all branch data is the complete data; a simultaneous write does not split the data and saves it in full to every branch.
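As an illustrative sketch only (Python, with hypothetical in-memory branches; the patent itself prescribes no implementation language), the five branch-selection modes might look like:

```python
from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

# Hypothetical branches: each name maps to a reader callable.
branches = {
    "b1": lambda: [1, 2],
    "b2": lambda: [3, 4],
}

def read_union(branches):
    """Union read: read every branch and assemble one result set."""
    out = []
    for fn in branches.values():
        out.extend(fn())
    return out

def read_select(branches):
    """Selection read: branches hold identical data; pick any one."""
    fn = next(iter(branches.values()))
    return fn()

def read_compete(branches):
    """Competition read: query all branches, keep the fastest reply."""
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(fn) for fn in branches.values()]
        done, _ = wait(futures, return_when=FIRST_COMPLETED)
        return next(iter(done)).result()

def write_union(records, sinks):
    """Union write: split the data set across branches (round-robin rule)."""
    for i, rec in enumerate(records):
        sinks[i % len(sinks)].append(rec)

def write_simultaneous(records, sinks):
    """Simultaneous write: save the full data set to every branch."""
    for sink in sinks:
        sink.extend(records)
```

Note that in every mode the caller names no branch explicitly; which branches participate is decided by the execution body, which is what lets the engine swap modes by configuration alone.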

[0010] Branches are found and generated as follows: the branch address or subroutine name is obtained by mapping the parameters passed into the engine, and a participating parameter value may be "" or null. The mapping rule is: given i variables, with {Xi} as the set of these variables and the branch address or subroutine name being F({Xi}), find an F({Xi}) such that some or all of the required parts of {Xi} can be recovered from F({Xi}) by inverse mapping. This rule engine selects branches through variable mapping rules rather than through conditional judgments, and thereby gains support for multiple data read and write modes; this is a feature of the present invention.
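A minimal sketch of this mapping rule, using the F(X)=Log|MMDD formula that appears in the log-file scenario of the detailed description (function names here are illustrative, not from the patent):

```python
def branch_name(params):
    """F({Xi}): map engine parameters to a branch name.
    Here F(mm, dd) = "Log" + MM + DD, one branch per day."""
    return "Log%02d%02d" % (params["mm"], params["dd"])

def inverse(name):
    """Inverse mapping: recover the parameters from the branch name."""
    return {"mm": int(name[3:5]), "dd": int(name[5:7])}
```

Because the branch name is computed, not chosen by an if/else chain, the set of branches can grow at run time simply by passing new parameter values.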

[0011] Parallel computing is realized by the parallel execution body. The parameters of a parallel execution body are of three types: constants; variables passed in by the application when it calls the rule engine framework; and parameters mapped from constants and the other parameters. Methods or execution bodies are referenced in two ways: as constants, or as mapped from constants and the other parameters. The second parameter type maps logical-address branches; the second way of referencing a method or execution body maps subroutine branches.

[0012] Because the parallel execution body selects branches through mapping rather than conditional judgment, the scheme of the present invention can also be applied to a workflow engine. Specifically: the parallel execution body serves as one execution node of the workflow engine; the preceding node, or the caller of the workflow engine, passes in the corresponding parameters and invokes that node; the node maps out its branches and returns the final result to the next node, or, if it is the engine's last node, to the caller of the workflow engine. It should be noted that workflow engines such as JBPM use fork nodes and join nodes, with AND, OR, and XOR expressing branch selections such as "all satisfied", "exactly one satisfied", and "at least one satisfied"; nevertheless, JBPM and the present invention are different technologies, in three respects: 1. Both data reading and data writing comprise three steps: forking, computing on the branches, and merging. A JBPM fork node expresses only the fork and a JBPM join node only the join, so the branch concept of the present invention differs from JBPM's. 2. JBPM's AND, OR, and XOR only decide whether a branch node executes; they express neither whether read results are merged or taken as a single complete result, nor whether written data is split up or saved in full. 3. With the fork and join nodes of engines such as JBPM, branches can only be configured as fixed paths between a fork node and a join node, one path per branch, so branches cannot be added, removed, or changed at run time; the parallel execution body of the present invention dynamically generates and configures any number of branches through mapping, which is very flexible.

Brief description of the drawings

[0013] FIG. 1 explains the structure of the parallel business rule engine. The engine requires configuring these components: the execution body and the rule body, which ordinary business rule engines already have, and the parallel rule body, which is unique to this scheme. Both the parameters of a parallel rule body and its "method or execution body reference" part can map branches independently; through combinations of these branch mappings, the parallel business rule engine achieves very good scalability.

[0014] FIG. 2 explains the application of a parallel rule body in a workflow engine. In the workflow engine, the node preceding the parallel node (if any), or the application via the engine framework, invokes the parallel node; the parallel execution body implicitly executes the branches through mapping, then obtains the final result of the parallel operation and hands the result and control to the next node (if any); if there is no next node, the result is returned to the application that invoked the engine through the engine framework.

Detailed description

[0015] A business rule engine is implemented in the following steps:

1. The rule engine is interpreted and executed by the rule engine framework; the application reads in the rule engine framework.

2. Business rules are configured in a configuration file; the application reads in the configuration file and interprets the rules through the framework.

3. At the appropriate places, the application calls the framework SPI, passes in the corresponding parameters, executes the rules, and finally obtains the result.

4. If the requirements at such a place change while the parameters stay the same, the programmer can achieve the new behavior simply by reconfiguring the business rules.
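The four steps above might be sketched as follows, under the assumption of a toy in-memory rule script (a real engine would read it from a configuration file; the rule name and fields here are hypothetical):

```python
# A toy rule script: rules are plain data (no compilation), the
# framework interprets them, and the application triggers them
# through a single SPI call.
RULES = {
    # rule name -> (trigger condition on params, execution body)
    "discount": (lambda p: p["total"] > 100, lambda p: p["total"] * 0.9),
}

def execute(rule_name, params):
    """Framework SPI: look up the rule, test its trigger, run the body."""
    condition, body = RULES[rule_name]
    if condition(params):
        return body(params)
    return params["total"]
```

Changing the behavior at a call site then means editing the entry in `RULES`, not the application code, which is the separation of business logic from implementation that step 4 describes.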

[0016] The parallel business rule engine and the parallel business workflow engine are configured as follows:

1. First define the methods of the programs that are to be invoked as branches; if the branch programs' methods differ, further encapsulate each method as an execution body.

2. Define the parallel execution body, declare it as a read or write type, and declare its union, selection, competition, or simultaneous mode.

3. Configure the incoming parameters and the "method and execution body references" in the parallel execution body. A parameter or a "method or execution body reference" may be generated by a mapping; the mapping result can serve as a parameter, or as the name of an execution body defined in step 1 (from which parameters are then obtained), or as the name of an execution body defined in step 1 that is cited by the "method or execution body reference".

4. The application calls the engine framework, executes the parallel execution body, and performs the parallel operation.

[0017] The invention is further clarified below with concrete application scenarios; it can be applied to, but is not limited to, the following situations.

[0018] Suppose a logging system must generate one log file per day and be able to query the logs of all days.

A developer can develop and configure it as follows:

1. Compile a generic method in the program for writing a log entry, generating the log file if the corresponding file does not yet exist.

2. Configure parallel execution body 1 for parallel operations on the log files. Its "method or execution body reference" part is configured as a reference to the method of step 1, and the incoming parameters match the parameters of step 1.

3. Parallel execution body 1 is of write type and is declared as the simultaneous mode, to suit the need for parallel writes.

4. The log file name parameter in parallel execution body 1 is configured by mapping, with the mapping formula F(X)=Log|MMDD, where MM is the month and DD the day; MM and DD are passed in as fixed parameters representing the current time, so each write operation of parallel execution body 1 executes only one fixed branch.

5. Compile a generic method in the program for reading the logs.

6. Configure parallel execution body 2 for parallel operations on the log files. Its "method or execution body reference" part is configured as a reference to the read method of step 5, with incoming parameters consistent with those of step 1.

7. The log file name parameter in parallel execution body 2 is configured by mapping, with the mapping formula F(X)=Log|MMDD, where MM is the month and DD the day.

8. Parallel execution body 2 is of read type and is declared as the union mode, to suit the need for a joint query of data from all branches.
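A hedged sketch of this scenario, assuming plain files on disk and the F(X)=Log|MMDD mapping (the helper names are illustrative, not from the patent):

```python
import os
import tempfile

def log_branch(mm, dd):
    """Mapping F(X) = Log + MM + DD: one log-file branch per day."""
    return "Log%02d%02d" % (mm, dd)

def write_log(directory, mm, dd, line):
    """Step 1's generic write method: append a line, creating the
    day's log file if it does not yet exist."""
    path = os.path.join(directory, log_branch(mm, dd))
    with open(path, "a") as fh:
        fh.write(line + "\n")

def read_all_logs(directory):
    """Union read (parallel execution body 2): assemble the lines
    of every day's branch into one result set."""
    out = []
    for name in sorted(os.listdir(directory)):
        with open(os.path.join(directory, name)) as fh:
            out.extend(fh.read().splitlines())
    return out
```

Each write lands in exactly one mapped branch (the current day), while the union read reassembles all branches, matching steps 4 and 8.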

[0019] Suppose an application system must access two databases holding exactly the same data, one used for backup and one for normal business access; the two databases differ only in address.

A developer can develop and configure it as follows:

1- 在引擎脚本中先配置两个执行体,分别返冋两个数据厍的地址,现假设这两个执行体名称为 DB0001 和 DB0002;  1- In the engine script, first configure two executables, returning the addresses of the two data ports respectively. Now assume that the two executables are named DB0001 and DB0002;

2. 在程序中编译一个通用的方法, 用丁对两个数据库进行读取操作, 该方法的参数应包含数据库地 址。  2. Compile a generic method in the program, and use D to read the two databases. The parameters of the method should include the database address.

3. 配置并行执行体 1, 用于对两个数据库进行并行读取操作。 并行执行体的 "方法 执行体引 部分配置为对步骤 2的方法的引用, 传入参数与步骤 2中的参数一致。  3. Configure parallel execution body 1 for parallel read operations on both databases. The method execution part of the parallel execution body is configured as a reference to the method of step 2, and the incoming parameters are consistent with the parameters in step 2.

4. 并行规则体 1中的数据库地址参数被配置为对执行体 DB000X的引用,所引用的执行体名称是由映 射 F (X) =DB | 000X得来的 (如步骤 1 )。  4. The database address parameter in parallel rule body 1 is configured as a reference to the executable DB000X, and the referenced executable name is derived from the mapping F (X) = DB | 000X (as in step 1).

5. 并行执行体 1选择为读取类 , 声明为选择方式, 以适应从备份中读一个分支数据的需求。 5. Parallel Execution 1 Select to read the class and declare it as a selection method to accommodate the need to read a branch of data from the backup.

6. 在程序中编译一个通 ffl的方法, 用于对两个数据厍进行写入操作, 该方法的参数应包含数据库地 址。 6. Compile a pass ffl method in the program to write to the two data files. The parameters of the method should contain the database address.

7. 配置并行执行体 2, 对两个数据库进行并行写入操作。 并行执行体的 "方法成执行体引 ΜΓ 部分配置为对步骤 2的方法的引用, 传入参数 步骤 6中的参数一致。  7. Configure parallel execution 2 to perform parallel write operations on both databases. The "Method into Execution" section of the parallel execution body is configured to reference the method of step 2, and the parameters in the incoming parameter step 6 are consistent.

8. 并行规则体 2中的数据/ ΐ地址参数被配置为对执行体 DB000X的引川,所引 的执行体名称 ^由映 射 F (X) =DB | 000X得来的 (如步骤 1 )。  8. The data/ΐ address parameter in the parallel rule body 2 is configured to be the source of the executable DB000X, and the referenced executable name ^ is derived from the mapping F (X) = DB | 000X (as in step 1).

9. Parallel executable 2 is of the write type and is declared with the simultaneous mode, so that the data is written to both branches (primary and backup) at the same time.
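The nine steps above can be sketched roughly as follows. This is a minimal illustration, not the patent's implementation: the executable registry, the method names (`read_db`, `write_db`), and the address strings are placeholders invented for this sketch.

```python
import threading

# Hypothetical executable registry standing in for the engine-script
# configuration of step 1; names follow the mapping F(X) = DB|000X.
EXECUTABLES = {
    "DB0001": lambda: "addr-of-primary-db",
    "DB0002": lambda: "addr-of-backup-db",
}

def branch_names(n):
    # Apply the mapping F(X) = DB|000X to enumerate all branch names.
    return ["DB%04d" % x for x in range(1, n + 1)]

def read_db(addr, key):
    # Generic read method of step 2 (stubbed): its parameters include
    # the database address.
    return {"addr": addr, "key": key}

def write_db(addr, key, value):
    # Generic write method of step 6 (stubbed).
    return True

def parallel_read_select(key, n=2, pick=0):
    # Read type, "selection" mode (step 5): read from just one chosen
    # branch, e.g. the backup.
    name = branch_names(n)[pick]
    return read_db(EXECUTABLES[name](), key)

def parallel_write_simultaneous(key, value, n=2):
    # Write type, "simultaneous" mode (step 9): write to every branch
    # in parallel and wait for all of them.
    threads = [
        threading.Thread(target=write_db,
                         args=(EXECUTABLES[name](), key, value))
        for name in branch_names(n)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```

For example, `parallel_read_select("k", pick=1)` reads only the backup branch DB0002, while `parallel_write_simultaneous("k", "v")` touches both branches.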

[0020] A ticket-grabbing system buys tickets from different companies. Suppose there are N companies; each company's ticket-purchase interface has the same parameter format and return data (or they can be converted to the same), and only the concrete implementations differ. The ticket-grabbing system needs to obtain the first ticket purchased; once one ticket is grabbed, the remaining orders are cancelled and discarded.

Developers can develop and configure it as follows:

1. In the rule engine script, first configure N executables whose parameters are the interface parameters of each company's system and whose implementations are the respective companies' implementation methods; the executables are named ZXT1 through ZXTN.

2. Configure a parallel executable for the parallel ticket-purchase operation across the N companies. Since the parameter formats of the N companies are identical, the parameters of the parallel executable are the same as those in step 1. The "method or executable reference" part of the parallel executable is configured as references to the N executables; the referenced executable names are derived from the mapping F(X) = ZXT|N (see step 1).

The selection rule of the parallel executable is declared as the competition mode, so that the branches compete with each other and the system obtains a ticket as fast as possible.
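A rough sketch of the competition (race) mode described above: the per-company purchase functions below are simulated stand-ins for the ZXT1..ZXTN executables, with invented delays, not the patent's actual interfaces.

```python
import concurrent.futures
import time

def make_purchaser(company_id, delay):
    # Hypothetical stand-in for one company's ZXT executable; all share
    # the same signature and return format, as the example assumes.
    def purchase(order):
        time.sleep(delay)  # simulated network latency
        return {"company": company_id, "ticket": "T-%d" % company_id}
    return purchase

def race_purchase(purchasers, order):
    # "Competition" mode: all branches run in parallel; the first result
    # wins, and the rest are cancelled if still pending.
    with concurrent.futures.ThreadPoolExecutor(len(purchasers)) as pool:
        futures = [pool.submit(p, order) for p in purchasers]
        done, pending = concurrent.futures.wait(
            futures, return_when=concurrent.futures.FIRST_COMPLETED)
        for f in pending:
            f.cancel()  # "withdraw and discard" the losing orders
        return next(iter(done)).result()

purchasers = [make_purchaser(i, delay=0.05 * i) for i in range(1, 4)]
winner = race_purchase(purchasers, order={"route": "A-B"})  # fastest wins
```

Note one design limitation of this sketch: Python's `Future.cancel` only withdraws branches that have not started running, so a real "cancel the losing orders" step would need an explicit compensating action (e.g. a refund call) per losing branch.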

Claims

[0001] A rule engine represents the rules of a business process as easily readable code that does not need to be compiled and is read and executed while the program is running. Existing business rule engines only solve the requirement of "what to execute under what conditions". The parallel business rule engine and its scheme are characterized in that: according to the needs of the business, the parallel business rule engine uses parallel executables to obtain all branches by mapping and performs selective operations on the data in the branches; the selection modes include union, selection, competition, and simultaneous, thereby conveniently and flexibly meeting various parallel computing needs. [0002] The parallel business rule engine and scheme according to claim [0001], characterized in that: the parallel business rule engine finds and generates different branches by variable mapping, unlike the traditional way of finding and generating branches through conditional statements or conditional trigger statements. The mapping rule is as follows: given i variables, let {Xi} be the set of these variables and let the branch addresses or subprogram names be {Y}; then find a mapping {Y} = F({Xi}) and map {Xi} to F({Xi}) by computing the mapping.
[0003] The parallel business rule engine and scheme according to claim [0001], characterized in that: parallel executables include two types, read and write, and are declared with the simultaneous, union, competition, or selection mode, so as to meet the parallel operation needs of different application scenarios. [0004] The parallel business rule engine and scheme according to claim [0001], characterized in that: the scheme can be used for a parallel workflow engine, in which parallel executables map operations on parallel nodes to parallel operations, thereby conveniently obtaining good performance. A workflow is a series of consecutive and interrelated steps; a workflow engine uses easily readable scripts to represent the relationships between these steps. The workflow engine framework reads and parses these steps; at run time, the application invokes and executes the corresponding workflow steps through the workflow engine interface and obtains the results. The workflow engine separates the workflow from the concrete program code.
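The mapping rule of claim [0002] can be sketched in a few lines. This is an illustration only; reading "|" in the patent's F(X) = DB|000X as string concatenation with zero padding is an assumption about the notation.

```python
def branch_name(prefix, x):
    # A sketch of the claim's mapping {Y} = F({Xi}): the branch name is
    # computed directly from the variables, rather than chosen by
    # conditional statements.
    return "%s%04d" % (prefix, x)

# The branches are enumerated by applying the mapping; no if/else chain
# or conditional trigger selects among them.
branches = [branch_name("DB", x) for x in (1, 2)]
```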
PCT/CN2013/000365 2013-03-07 2013-03-29 Rule engine of parallel business and implementation method therefor Ceased WO2014134757A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2014/000218 WO2014154016A1 (en) 2013-03-29 2014-03-10 Parallel database management system and design scheme

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201310073436.9A CN103116498B (en) 2013-03-07 2013-03-07 Parallel Business Rule Engine and Its Realization Method
CN201310073436.9 2013-03-07

Publications (1)

Publication Number Publication Date
WO2014134757A1 true WO2014134757A1 (en) 2014-09-12

Family

ID=48414884

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2013/000365 Ceased WO2014134757A1 (en) 2013-03-07 2013-03-29 Rule engine of parallel business and implementation method therefor

Country Status (2)

Country Link
CN (2) CN107291464B (en)
WO (1) WO2014134757A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115063096A (en) * 2022-05-11 2022-09-16 广东金赋科技股份有限公司 Intelligent office system

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107291464B (en) * 2013-03-07 2020-10-27 环球雅途集团有限公司 Parallel business rule engine branch infinite solution
WO2014154016A1 (en) * 2013-03-29 2014-10-02 深圳市并行科技有限公司 Parallel database management system and design scheme
CN104239008B (en) * 2013-06-07 2017-09-29 深圳市并行科技有限公司 Parallel database management system and design
CN115185616B (en) * 2022-09-14 2022-12-13 深圳依时货拉拉科技有限公司 Business rule engine device and processing method of business rule engine

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060253421A1 (en) * 2005-05-06 2006-11-09 Fang Chen Method and product for searching title metadata based on user preferences
CN101110022A (en) * 2007-08-30 2008-01-23 济南卓信智能科技有限公司 Method for implementing workflow model by software
CN101571810A (en) * 2009-05-31 2009-11-04 清华大学 Method for implementing program, method for verifying program result, devices and system
CN102722355A (en) * 2012-06-04 2012-10-10 南京中兴软创科技股份有限公司 Workflow mechanism-based concurrent ETL (Extract, Transform and Load) conversion method

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050021540A1 (en) * 2003-03-26 2005-01-27 Microsoft Corporation System and method for a rules based engine
US7539974B2 (en) * 2003-10-24 2009-05-26 Microsoft Corporation Scalable synchronous and asynchronous processing of monitoring rules
GB2419974A (en) * 2004-11-09 2006-05-10 Finsoft Ltd Calculating the quality of a data record
CN101609531A (en) * 2009-07-29 2009-12-23 金蝶软件(中国)有限公司 Data processing method in a kind of enterprise resource planning and device
CN102360291B (en) * 2011-10-07 2013-11-13 云南爱迪科技有限公司 Service-oriented business rule design method based on business rule engine
CN102509171B (en) * 2011-10-24 2014-11-12 浙江大学 Flow mining method facing to rule execution log
CN102542414B (en) * 2011-12-28 2016-03-30 焦点科技股份有限公司 A kind of operation flow of rule-based engine and the loosely coupled method of business data processing and system
CN107291464B (en) * 2013-03-07 2020-10-27 环球雅途集团有限公司 Parallel business rule engine branch infinite solution

Also Published As

Publication number Publication date
CN103116498B (en) 2017-06-20
CN107291464B (en) 2020-10-27
CN103116498A (en) 2013-05-22
CN107291464A (en) 2017-10-24

Similar Documents

Publication Publication Date Title
US9182957B2 (en) Method and system for automated improvement of parallelism in program compilation
JP4339907B2 (en) Optimal code generation method and compiling device for multiprocessor
US6636880B1 (en) Automatic conversion of units in a computer program
WO2014134757A1 (en) Rule engine of parallel business and implementation method therefor
Francez et al. A linear-history semantics for languages for distributed programming
JP2018510445A (en) Domain-specific system and method for improving program performance
JP6003699B2 (en) Test data generation program, method and apparatus
CN116523023A (en) Operator fusion method and device, electronic equipment and storage medium
US8612954B2 (en) Fine slicing: generating an executable bounded slice for program
CN118409758A (en) Method, apparatus, medium and program product for compiling kernel functions
US9766866B2 (en) Techniques for determining instruction dependencies
JP6651977B2 (en) Information processing apparatus, compiling method, and compiling program
JPH103391A (en) Register allocation method and system using multiple interference graphs
JP2002297399A (en) METHOD FOR GIVING ϕ FUNCTION FOR PERFORMING STATIC SINGLE SUBSTITUTION
JP4830108B2 (en) Program processing apparatus, program processing method, parallel processing program compiler, and recording medium storing parallel processing program compiler
CN103942035A (en) Instruction processing method, compiler and instruction processor
Tetzel et al. Efficient compilation of regular path queries
Francez et al. A linear history semantics for distributed languages extended abstract
Altisen et al. Squeezing streams and composition of self-stabilizing algorithms
CN110187882B (en) Register pair allocation method and storage medium oriented to instruction source operand
CN112825031B (en) Process description method and device based on JSON format
Rodrigues et al. Towards an engine for coordination-based architectural reconfigurations
Boley RELFUN: A relational/functional integration with valued clauses
US9772827B2 (en) Techniques for determining instruction dependencies
Disney et al. Game semantics for type soundness

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13877400

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13877400

Country of ref document: EP

Kind code of ref document: A1