CN106814994A - A parallel system optimization method for big data - Google Patents
- Publication number: CN106814994A (application CN201710045825.9A)
- Authority
- CN
- China
- Prior art keywords
- formula
- generated
- data
- parallel system
- intensive
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS › G06—COMPUTING OR CALCULATING; COUNTING › G06F—ELECTRIC DIGITAL DATA PROCESSING › G06F9/00—Arrangements for program control, e.g. control units
- G06F9/3001—Arithmetic instructions (under G06F9/30—Arrangements for executing machine instructions)
- G06F9/3893—Concurrent instruction execution using a plurality of independent parallel functional units controlled in tandem, e.g. multiplier-accumulator (under G06F9/38—Concurrent instruction execution, e.g. pipeline or look ahead)
- G06F9/465—Distributed object oriented systems (under G06F9/46—Multiprogramming arrangements)
Abstract
Description
Technical Field

The present invention relates to a parallel system optimization method for big data.

Background Art

Data-intensive complex formulas are formulas that must be computed over large amounts of data and that have complex dependency structures; most involve repeated summation and repeated multiplication, and evaluating such a formula consumes a large amount of time. Data-intensive complex formulas underlie existing big data analysis and have very important applications in the data analysis field. The prior art has the following problems:

1. Existing platforms provide only basic operations; for example, Hadoop provides only the Map and Reduce operations. This model is very difficult for inexperienced programmers.

2. Existing toolkits provide only the formula computation methods of specific existing algorithms; they cannot provide a universally applicable method for computing formulas.

3. Under the prior art, data-intensive complex computations can only be completed through multiple rounds of data redistribution, which greatly increases running time.

A data-intensive (complex) formula is one that contains multiple statistical operations such as repeated summation and repeated multiplication, and that computes over a large volume of data.
Summary of the Invention

The present invention proposes a parallel system optimization method for big data to solve the problems that the prior art targets only particular algorithms in parallel systems, does not handle complex computational expressions, and is computationally time-consuming.

A parallel system optimization method for big data is implemented by the following steps:

Step 1: abstract the data-intensive formula;

Step 2: generate the formula semantic tree from the data-intensive formula abstracted in Step 1;

Step 3: simplify the semantic tree generated in Step 2 and generate the formula dependency graph;

Step 4: layer the formula dependency graph generated in Step 3 and generate the task sequence;

Step 5: generate task dependencies in the parallel system according to the task sequence generated in Step 4; after execution, the computed result of the data-intensive formula is obtained.

Effects of the invention:

1. Experiments show that the larger the data volume, the better the optimization effect of the algorithm. At the GB scale, computation time is reduced by 57.3% on average.

2. Experiments show that under this algorithm the computation time of a formula does not depend on the formula's complexity but on the number of tasks the formula generates.

3. The algorithm is universal and can be applied to different parallel platforms such as Hadoop and Spark. It requires no programming experience from the user: given a complex expression, it produces an optimized computation.
Brief Description of the Drawings

Figure 1 is a flowchart of the present invention.

Figure 2 is the semantic tree structure diagram. In the figure, "/" denotes division, "-" denotes subtraction, "*" denotes multiplication, "sum" denotes repeated summation, "pow" denotes exponentiation, "avg" denotes the mean, "count" denotes counting, and "x", "y", and "2" denote operand variables and operands.

Figure 3 is a diagram of the simplification process.

Figure 4 is a layering diagram.

Figure 5 is the generated task sequence diagram. The task sequence in the figure runs from bottom to top: MapReduce:avg() denotes one round of map-reduce performing a mean computation; MapReduce:avg(),count() denotes one round performing mean and count computations; MapReduce:sum() denotes one round performing a repeated-summation computation. In total there are four rounds of MapReduce (map-reduce operations).

Figure 6 is the task sequence scheduling diagram.
Figure 7 is the configuration diagram for a simple aggregate operation. In the figure, the input variable x is mapped into key:value pairs, which the reduce phase aggregates into results: value_sum holds the repeated-summation result and value_count the counting result. The configured operators, sum and count, perform the summation and counting during the reduce phase.
Figure 8 is the configuration diagram for a complex aggregate operation.

Figure 9 compares the running times of CCA, MCA, and ACF before optimization.

Figure 10 compares the running times of CCA, MCA, and ACF after optimization.

Figure 11 compares ACF before and after optimization.

Figure 12 compares MCA before and after optimization.

Figure 13 compares CCA before and after optimization.
Detailed Description

Embodiment 1: A parallel system optimization method for big data comprises the following steps:

Step 1: abstract the data-intensive complex formula;

Step 2: generate the formula semantic tree from the data-intensive formula abstracted in Step 1;

Step 3: simplify the semantic tree generated in Step 2 and generate the formula dependency graph;

Step 4: layer the formula dependency graph generated in Step 3 and generate the task sequence;

Step 5: generate task dependencies in the parallel system according to the task sequence generated in Step 4; after execution, the computed result of the data-intensive formula is obtained.
Embodiment 2: This embodiment differs from Embodiment 1 in the abstraction of the data-intensive formula in Step 1, which is specifically:

The sub-operations of the data-intensive formula are classified into two kinds, simple computations and aggregate computations; each aggregate operation is completed by one round of MapReduce, and the data-intensive formula is functionally abstracted. The simple computations are the four arithmetic operations, exponentiation, and root extraction; the aggregate computations are statistical operations. MapReduce is a programming model for parallel computation over large-scale data sets (larger than 1 TB).
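The classification above can be sketched as follows. This is a minimal illustration under assumed names — the operator sets, `classify`, and `mapreduce_rounds` are demonstration choices, not the patent's implementation.

```python
# Classify sub-operations of a formula into simple computations and
# aggregate computations. Operator sets here are illustrative assumptions.

SIMPLE_OPS = {"+", "-", "*", "/", "pow", "sqrt"}   # four arithmetic ops, power, root
AGGREGATE_OPS = {"sum", "avg", "count", "prod"}    # statistical (aggregate) operations

def classify(op: str) -> str:
    """Return 'simple' or 'aggregate' for an operator token."""
    if op in SIMPLE_OPS:
        return "simple"
    if op in AGGREGATE_OPS:
        return "aggregate"
    raise ValueError(f"unknown operator: {op}")

# Each aggregate operation in a formula costs one round of MapReduce,
# so the aggregate count gives the number of rounds a naive plan needs.
def mapreduce_rounds(ops):
    return sum(1 for op in ops if classify(op) == "aggregate")

# e.g. the functional abstraction sum(pow(x, 2)) contains one aggregate op:
print(mapreduce_rounds(["sum", "pow"]))  # 1
```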
The other steps and parameters are the same as in Embodiment 1.
Embodiment 3: This embodiment differs from Embodiment 1 or 2 in the specific process by which Step 2 generates the formula semantic tree from the data-intensive formula abstracted in Step 1:

Extract the variables of the data-intensive formula and identify its sub-formulas. Take the operators in each sub-formula (addition, subtraction, multiplication, division, repeated summation, repeated multiplication, etc.) as parent nodes and the variables they operate on as child nodes to generate the formula semantic tree; in this semantic tree, every path from a leaf node to the root node contains only one aggregate operation. A sub-formula is, for example, a term such as ∑x² within the formula.
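A minimal semantic tree for the ∑x² example, with the one-aggregate-per-path property checked explicitly, might look like the sketch below. The `Node` class and the aggregate set are assumptions; the patent does not specify a concrete data structure.

```python
# Build a formula semantic tree: operators become parent nodes and the
# variables they operate on become child nodes.

class Node:
    def __init__(self, label, children=()):
        self.label = label             # operator symbol or variable/operand name
        self.children = list(children)

    def paths_to_leaves(self):
        """Yield the labels along every root-to-leaf path."""
        if not self.children:
            yield [self.label]
            return
        for child in self.children:
            for path in child.paths_to_leaves():
                yield [self.label] + path

# Semantic tree for the sub-formula sum(pow(x, 2)), i.e. ∑x²:
tree = Node("sum", [Node("pow", [Node("x"), Node("2")])])

AGGREGATES = {"sum", "avg", "count"}   # assumed aggregate-operator set

# The tree is well-formed when each leaf-to-root path holds at most one aggregate.
valid = all(sum(lbl in AGGREGATES for lbl in p) <= 1
            for p in tree.paths_to_leaves())
print(valid)  # True
```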
The other steps and parameters are the same as in Embodiment 1 or 2.
Embodiment 4: This embodiment differs from Embodiments 1 to 3 in the specific process by which Step 3 simplifies the semantic tree generated in Step 2 and generates the formula dependency graph:

Merge all nodes of the semantic tree that correspond to the same variable into a single node, and merge nodes that perform the same computation on the same variable into a single node.
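Both merges can be realized by hash-consing subtree signatures: identical variables get one entry, and identical computations over identical inputs get one entry. This sketch and its names are assumptions for illustration, not the patent's code.

```python
# Merge semantic-tree nodes into a dependency graph: nodes for the same
# variable collapse to one, and nodes performing the same computation on
# the same inputs collapse to one.

def canonical(expr, table):
    """expr is a variable name (str) or a nested tuple like
    ('sum', ('pow', 'x', '2')). Returns a canonical signature; equal
    sub-computations share a single entry in `table`."""
    if isinstance(expr, str):                       # same variable -> same node
        sig = expr
    else:                                           # same computation -> same node
        op, *args = expr
        sig = (op,) + tuple(canonical(a, table) for a in args)
    table.setdefault(sig, sig)
    return sig

# Two occurrences of sum(x^2) in a formula end up as one graph node each for
# x, 2, pow(x,2), and sum(pow(x,2)) -- 4 nodes instead of 8.
graph = {}
canonical(("sum", ("pow", "x", "2")), graph)
canonical(("sum", ("pow", "x", "2")), graph)
print(len(graph))  # 4
```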
The other steps and parameters are the same as in Embodiments 1 to 3.
Embodiment 5: This embodiment differs from Embodiments 1 to 4 in the specific process by which Step 4 layers the formula dependency graph generated in Step 3 and generates the task sequence:

Layer the formula dependency graph according to the distance between variables and operators: take any variable as the initial node and the number of nodes on the path from the variable to an operator as that operator's layer, where each operator is one node; when there are multiple paths between a variable and an operator, the path passing through the most nodes prevails.

Extract the aggregate operations on the same variables in each layer and generate the task sequence in order from the initial node to the terminal node. In each layer, aggregate operations on different variables are placed into one round of MapReduce and executed in parallel, while aggregate operations on the same variable are placed serially into one round of MapReduce for execution.
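The layering rule (longest path from a variable wins) is a longest-path labeling over the dependency graph, which can be sketched as below. The graph, node names, and `layer_of` helper are illustrative assumptions.

```python
# Layering the dependency graph: an operator's layer is the number of
# nodes on the longest path from a variable to it; variables are layer 0.

def layer_of(node, deps, memo=None):
    """deps maps an operator node to the nodes it consumes; nodes absent
    from deps are variables (initial nodes). Longest path prevails."""
    if memo is None:
        memo = {}
    if node not in deps:                 # a variable: the initial node
        return 0
    if node not in memo:
        memo[node] = 1 + max(layer_of(p, deps, memo) for p in deps[node])
    return memo[node]

# Example dependency graph: avg(x), avg(y), and sum(pow(x,2)) feed "/".
deps = {
    "pow(x,2)": ["x"],
    "sum":      ["pow(x,2)"],
    "avg_x":    ["x"],
    "avg_y":    ["y"],
    "/":        ["sum", "avg_x", "avg_y"],
}
layers = {n: layer_of(n, deps) for n in deps}
# avg_x and avg_y share layer 1 (one parallel MapReduce round); sum lands
# in layer 2 because its longest path from x passes through pow(x,2).
print(layers["avg_x"], layers["sum"], layers["/"])  # 1 2 3
```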
The other steps and parameters are the same as in Embodiments 1 to 4.
Example 1: As shown in Figure 1, the steps of a parallel system optimization method for big data are:

1. Abstracting the formula structure

The sub-operations of the formula are divided into simple computations and aggregate computations; each aggregate operation is completed by one round of MapReduce. The formula is functionally abstracted, e.g. the repeated-summation symbol is expressed as sum(), and the next step proceeds on the abstracted form. The formula above is thus abstracted into a functional expression.

2. Generating the formula semantic tree

The semantic tree structure shown in Figure 2 is generated according to the dependency relationships of the formula.

3. Simplifying and generating the formula dependency graph

The semantic tree is simplified according to two principles:

All-to-1: all nodes corresponding to the same variable are merged into one node, eliminating redundant nodes.

Same-to-1: nodes performing the same computation on the same variable are merged into one node. The same computation can then be carried out within a single task, reducing the data redistribution process.

Applying these two principles simplifies the example, as shown in Figure 3. The simplification yields the formula dependency graph, from which the formula's task sequence is generated.

4. Generating the task order

A computation task plan is generated from the formula dependency graph and used for task allocation. First, the graph is layered according to the distance between operators and variables, as shown in Figure 4.

MapReduce tasks are generated layer by layer; identical operations within the same layer generate the same task. The generated task sequence is shown in Figure 5.

5. Parallel system execution
The task sequence is converted into a job dependency sequence and submitted to an existing big data parallel system for execution, as shown in Figure 6. The formula generates three MapReduce tasks that execute in parallel.
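Respecting the job dependency sequence means each job runs only after its prerequisites finish, while independent jobs may run concurrently. A sequential in-memory sketch of that ordering, with assumed job names and results rather than Hadoop job objects:

```python
# Execute a job dependency sequence: run each job's prerequisites first.
# In a real parallel system, jobs with no mutual dependencies (avg_x and
# sum_x2 here) would be dispatched concurrently.

def run_jobs(jobs, deps):
    """jobs: name -> callable; deps: name -> prerequisite job names.
    Returns the order in which jobs were executed."""
    done, order = set(), []
    def run(name):
        if name in done:
            return
        for pre in deps.get(name, ()):   # satisfy dependencies first
            run(pre)
        jobs[name]()
        done.add(name)
        order.append(name)
    for name in jobs:
        run(name)
    return order

results = {}
jobs = {
    "avg_x":  lambda: results.setdefault("avg_x", 2.0),    # stand-in aggregate
    "sum_x2": lambda: results.setdefault("sum_x2", 14.0),  # stand-in aggregate
    "final":  lambda: results.setdefault(
        "final", results["sum_x2"] / results["avg_x"]),    # simple computation
}
order = run_jobs(jobs, {"final": ["avg_x", "sum_x2"]})
print(order[-1], results["final"])  # final 7.0
```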
After execution, the final result of the formula is obtained. Experiments show that the algorithm greatly improves computational efficiency.

The algorithm was implemented in Hadoop, with system configuration performed through the Job Configure file. The configuration process is divided into simple aggregate operations and complex aggregate operations. A simple aggregate operation is configured directly in the Reduce phase, as shown in Figure 7.
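As an illustration of the simple aggregate configuration of Figure 7 — the input x mapped to key:value pairs, with the configured operators sum and count applied in the reduce phase — here is an in-memory analogue. It is a sketch, not Hadoop code; all names are assumptions.

```python
# Minimal in-memory analogue of a simple aggregate MapReduce configuration.
from collections import defaultdict

def map_phase(xs):
    # Every value shares one key so a single reducer aggregates them all.
    return [("x", v) for v in xs]

def reduce_phase(pairs, operators):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    # Apply each configured aggregate operator to every group,
    # mirroring value_sum and value_count in Figure 7.
    return {key: {name: fn(vals) for name, fn in operators.items()}
            for key, vals in groups.items()}

operators = {"sum": sum, "count": len}   # the configured operators
result = reduce_phase(map_phase([1.0, 2.0, 3.0]), operators)
print(result)  # {'x': {'sum': 6.0, 'count': 3}}
```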
Configuring a complex aggregate operation requires building a tree in the Mapper to determine the computation process and produce the result, as shown in Figure 8.

The method of the present invention can be applied on existing parallel platforms such as Spark and Hyracks; only minor modifications to the configuration are needed to adapt it to those systems.

As a basic optimization algorithm for data analysis, the method of the present invention can be applied to data analysis domains such as business analytics, finance, industry, and agriculture.
Complex correlation analysis (CCA):

Matrix correlation analysis (MCA):

Arbitrary complex formula (ACF):
As the running time comparisons in Figures 9 to 13 show, running time is greatly reduced after optimization, and the larger the data volume, the more pronounced the optimization effect.
Claims (5)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201710045825.9A CN106814994B (en) | 2017-01-20 | 2017-01-20 | A parallel system optimization method for big data |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN106814994A true CN106814994A (en) | 2017-06-09 |
| CN106814994B CN106814994B (en) | 2019-02-19 |
Family
ID=59111200
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201710045825.9A Active CN106814994B (en) | 2017-01-20 | 2017-01-20 | A parallel system optimization method for big data |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN106814994B (en) |
Cited By (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN107885587A (en) * | 2017-11-17 | 2018-04-06 | 清华大学 | A kind of executive plan generation method of big data analysis process |
| CN108255689A (en) * | 2018-01-11 | 2018-07-06 | 哈尔滨工业大学 | A kind of Apache Spark application automation tuning methods based on historic task analysis |
| CN115511662A (en) * | 2022-09-20 | 2022-12-23 | 昆明能讯科技有限责任公司 | Method for realizing parallel computation of power scheduling mass data and storage medium |
| WO2024065525A1 (en) * | 2022-09-29 | 2024-04-04 | Intel Corporation | Method and apparatus for optimizing deep learning computation graph |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20120137300A1 (en) * | 2010-11-30 | 2012-05-31 | Ryuji Sakai | Information Processor and Information Processing Method |
| CN102591712A (en) * | 2011-12-30 | 2012-07-18 | 大连理工大学 | Decoupling parallel scheduling method for rely tasks in cloud computing |
| US8977898B1 (en) * | 2012-09-24 | 2015-03-10 | Emc Corporation | Concurrent access to data during replay of a transaction log |
- 2017-01-20: application CN201710045825.9A filed in China; granted as CN106814994B (active)
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| Karloff et al. | A model of computation for mapreduce | |
| CN109993299B (en) | Data training method and device, storage medium, electronic device | |
| TW202123092A (en) | Circuit, method and non-transitory machine-readable storage devices for performing neural network computations | |
| CN106814994A (en) | A kind of parallel system optimization method towards big data | |
| CN107861916A (en) | A kind of method and apparatus for being used to perform nonlinear operation for neutral net | |
| CN103327092A (en) | Cell discovery method and system on information networks | |
| CN109903162B (en) | A ReRAM that accelerates random selection of blockchain MCMC and its working method | |
| CN108139898A (en) | Data processing schema compiler | |
| CN105117430B (en) | A kind of iterative task process discovery method based on equivalence class | |
| CN109815021B (en) | Resource key tree method and system for recursive tree modeling program | |
| CN119576557A (en) | A cloud resource dynamic scheduling method based on secure reinforcement learning | |
| CN103019852A (en) | MPI (message passing interface) parallel program load problem three-dimensional visualized analysis method suitable for large-scale cluster | |
| Amziani et al. | Formal modeling and evaluation of service-based business process elasticity in the cloud | |
| Zheng et al. | On the PATHGROUPS approach to rapid small phylogeny | |
| CN102799564A (en) | Fast fourier transformation (FFT) parallel method based on multi-core digital signal processor (DSP) platform | |
| CN109739649B (en) | Resource management method, device, device and computer-readable storage medium | |
| CN104462020B (en) | A kind of matrix increment reduction method of knowledge based granularity | |
| Jaiganesh et al. | B3: Fuzzy‐Based Data Center Load Optimization in Cloud Computing | |
| CN109412865A (en) | A kind of virtual network resource allocation method, system and electronic equipment | |
| CN104915187A (en) | Graph model calculation method and device | |
| Khudhair et al. | An innovative fractal architecture model for implementing MapReduce in an open multiprocessing parallel environment | |
| Bliss et al. | Solving polynomial systems in the cloud with polynomial homotopy continuation | |
| Rogala et al. | Datalogra: datalog with recursive aggregation in the spark RDD model | |
| Mehrjoo et al. | Mapreduce based particle swarm optimization for large scale problems | |
| Ingole et al. | Instruction set design for elementary set in tensilica xtensa |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |