
CN111190892B - A method and device for processing abnormal data in data backfilling - Google Patents


Info

Publication number
CN111190892B
CN111190892B
Authority
CN
China
Prior art keywords
data
abnormal data
field information
task flow
time field
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010004932.9A
Other languages
Chinese (zh)
Other versions
CN111190892A (en)
Inventor
陈琪瑶
Current Assignee
Sipic Technology Co Ltd
Original Assignee
Sipic Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Sipic Technology Co Ltd filed Critical Sipic Technology Co Ltd
Priority to CN202010004932.9A priority Critical patent/CN111190892B/en
Publication of CN111190892A publication Critical patent/CN111190892A/en
Application granted granted Critical
Publication of CN111190892B publication Critical patent/CN111190892B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06F16/215: Improving data quality; Data cleansing, e.g. de-duplication, removing invalid entries or correcting typographical errors
    • G06F16/182: Distributed file systems
    • G06F16/2471: Distributed queries
    • G06F16/283: Multi-dimensional databases or data warehouses, e.g. MOLAP or ROLAP


Abstract

The invention discloses a method and device for processing abnormal data during data backfill, in the field of computer technology. In one embodiment, the method comprises: when abnormal data exists, determining the metadata and time-field information of the abnormal data; based on stored lineage data, determining a target task flow and its execution order from the metadata of the abnormal data, where the lineage data expresses at least the association relationships between data; and executing the target task flow according to the execution order and the time-field information of the abnormal data. In a data-backfill scenario, the method automatically handles abnormal data based on the lineage between data and tasks, without manual intervention, greatly improving processing efficiency and reducing labor cost.

Description

A method and device for processing abnormal data in data backfilling

Technical Field

The invention relates to the field of computer technology, and in particular to a method and device for processing abnormal data in data backfilling.

Background

Airflow is a programmable workflow platform for scheduling and monitoring. Based on a directed acyclic graph (DAG), Airflow can define a set of dependent tasks and execute them in dependency order. Airflow provides a rich set of command-line tools for system management, and its web interface makes it easy to manage and control scheduled tasks and to monitor their running status in real time, which simplifies operation and maintenance. Unlike schedulers that define workflows in XML or text files, Airflow defines workflows in Python files, so users can fully customize their workflows in code. Airflow's main features include workflow definition, task scheduling, task dependencies, variables, pools, and distributed task execution. The backfill feature lets Airflow re-populate data for multiple tasks in a DAG over a given time period, following their dependencies. The clear feature lets Airflow reset the state of tasks over a given time period so that a task in the DAG can be re-run.
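As a rough illustration of the dependency ordering described above, the following sketch (deliberately not using the Airflow API; all task names are hypothetical) shows how a scheduler can derive a run order for a set of dependent tasks in a DAG:

```python
# Minimal sketch, not using the Airflow API: how a scheduler like Airflow
# derives a run order for a set of dependent tasks in a DAG.
# The task names below are hypothetical.

def topological_order(dag):
    """Return tasks so that every task runs after all of its upstreams.

    `dag` maps each task to the list of upstream tasks it depends on.
    Assumes the graph is acyclic, as a DAG is by definition.
    """
    order, seen = [], set()

    def visit(task):
        if task in seen:
            return
        seen.add(task)
        for upstream in dag.get(task, []):
            visit(upstream)
        order.append(task)

    for task in dag:
        visit(task)
    return order

# A toy extract -> clean -> aggregate flow expressed as a dependency map.
dag = {
    "extract_logs": [],
    "clean_data": ["extract_logs"],
    "aggregate": ["clean_data"],
}
```

Backfill and clear then amount to re-running a slice of this ordered task list over a chosen time window.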

As a task scheduling system, Airflow cannot itself detect anomalies in data collection; data can only be re-run after a data engineer notices the anomaly and writes scripts to trigger the backfill and clear functions, so going from finding a problem to solving it is time-consuming. Moreover, both backfill and clear operate only within a single DAG, so tasks cannot be re-run across DAGs according to their dependencies. In a real data-collection scenario, once data becomes abnormal and must be backfilled, handling relies on data developers' experience to compile scripts by hand, wasting human resources, even though the data lineage available from metadata could drive full automation.

Summary of the Invention

In view of this, embodiments of the present invention provide a method and device for processing abnormal data in data backfilling, which automatically handle abnormal data in a backfill scenario without human intervention, greatly improving processing efficiency and reducing labor cost.

To achieve the above object, according to one aspect of the embodiments of the present invention, a method for processing abnormal data in data backfilling is provided.

The method for processing abnormal data in data backfilling according to the embodiment of the present invention includes: when abnormal data exists, determining the metadata and time-field information of the abnormal data; based on stored lineage data, determining a target task flow and its execution order from the metadata of the abnormal data, where the lineage data expresses at least the association relationships between data; and executing the target task flow according to the execution order and the time-field information of the abnormal data.

Optionally, before determining the metadata and time-field information of the abnormal data, the method further includes: parsing the raw data stored in the distributed file system to obtain its time-field information; judging whether the time-field information of the raw data meets a normal-range condition; and, when it does not, determining that abnormal data exists in the raw data.
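A minimal sketch of this anomaly check follows, assuming a hypothetical `ts` time field and a 24-hour normal-range window (the source only requires some normal-range condition on the time field):

```python
import json
from datetime import datetime, timedelta

def find_abnormal(raw_lines, now, max_delay_hours=24):
    """Split raw JSON log lines into normal and abnormal records.

    A record is abnormal when its time field lies outside the expected
    window: far in the past (delayed historical data) or in the future.
    The field name "ts" and the 24-hour window are illustrative assumptions.
    """
    normal, abnormal = [], []
    lower = now - timedelta(hours=max_delay_hours)
    for line in raw_lines:
        record = json.loads(line)
        ts = datetime.fromisoformat(record["ts"])
        (normal if lower <= ts <= now else abnormal).append(record)
    return normal, abnormal
```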

Optionally, determining the target task flow and its execution order based on the stored lineage data and the metadata of the abnormal data includes: backwashing the historical data into the corresponding time-partition tables according to the time-field information of the raw data, and determining the target table name of the abnormal data's time-partition table according to the time-field information of the abnormal data, where the metadata includes at least the table name; and, based on the stored lineage data, determining the target task flow and its execution order according to the target table name of the abnormal data.
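The mapping from abnormal time fields to target partition tables might look like the following sketch; the table name `ods_log` and the `dt=YYYY-MM-DD` day-partition convention are assumptions for illustration, since the source only states that the time-field information determines the target time-partition table:

```python
from datetime import datetime

def affected_partitions(abnormal_timestamps, table="ods_log"):
    """Map abnormal records' time fields to the day partitions they fall in.

    The table name and the dt=YYYY-MM-DD partition convention are
    hypothetical; any partitioning keyed on the time field would do.
    """
    days = sorted({ts.date().isoformat() for ts in abnormal_timestamps})
    return [f"{table}/dt={d}" for d in days]
```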

Optionally, the time-partition table is a Hive table; and/or,

after executing the target task flow according to the execution order and the time-field information of the abnormal data, the method further includes: updating the data in the Hive table to the corresponding database.

Optionally, after executing the target task flow according to the execution order and the time-field information of the abnormal data, the method further includes: confirming that no task in the target task flow failed to execute, and judging whether subsequent delayed data exists; if so, confirming that abnormal data exists.

Optionally, the method further includes: determining that the abnormal data is duplicate data, and filtering out the duplicate data.

To achieve the above object, according to another aspect of the embodiments of the present invention, a device for processing abnormal data in data backfilling is provided.

The device for processing abnormal data in data backfilling according to the embodiment of the present invention includes:

a metadata determination module, configured to determine the metadata and time-field information of abnormal data when abnormal data exists;

a task flow determination module, configured to determine a target task flow and its execution order based on stored lineage data and the metadata of the abnormal data, where the lineage data expresses at least the association relationships between data; and

an execution module, configured to execute the target task flow according to the execution order and the time-field information of the abnormal data.

Optionally, the device further includes an anomaly detection module, configured to parse the raw data stored in the distributed file system to obtain its time-field information; to judge whether that time-field information meets the normal-range condition; and, when it does not, to determine that abnormal data exists in the raw data.

Optionally, the task flow determination module is further configured to backwash the historical data into the corresponding time-partition tables according to the time-field information of the raw data, and to determine the target table name of the abnormal data's time-partition table according to the time-field information of the abnormal data, where the metadata includes at least the table name; and, based on the stored lineage data, to determine the target task flow and its execution order according to the target table name of the abnormal data.

Optionally, the device further includes an execution result determination module, configured to confirm that no task in the target task flow failed to execute and to judge whether subsequent delayed data exists; if so, the presence of abnormal data is confirmed.

Optionally, the device further includes a duplicate exception processing module, configured to determine that the abnormal data is duplicate data and to filter out the duplicate data.

To achieve the above object, according to yet another aspect of the embodiments of the present invention, an electronic device is provided.

An electronic device according to an embodiment of the present invention includes: one or more processors; and a storage device storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement any of the above methods for processing abnormal data in data backfilling.

To achieve the above object, according to yet another aspect of the embodiments of the present invention, a computer-readable medium is provided, on which a computer program is stored; when executed by a processor, the program implements any of the above methods for processing abnormal data in data backfilling.

One embodiment of the above invention has the following advantage or beneficial effect: in a data-backfill scenario, based on the lineage between data and tasks, the execution order of the task flow is determined and the running status of the tasks is tracked, so that abnormal data problems are handled automatically without human intervention, greatly improving processing efficiency and reducing labor cost.

Further effects of the above non-conventional alternatives will be described below in conjunction with the specific embodiments.

Brief Description of the Drawings

The accompanying drawings are provided for a better understanding of the present invention and do not constitute an improper limitation of it. In the drawings:

FIG. 1 is a schematic diagram of the main flow of a method for processing abnormal data in data backfilling according to an embodiment of the present invention;

FIG. 2 is a schematic diagram of a method for processing abnormal data in data backfilling according to an embodiment of the present invention;

FIG. 3 is a schematic diagram of the main modules of a device for processing abnormal data in data backfilling according to an embodiment of the present invention;

FIG. 4 is a diagram of an exemplary system architecture to which an embodiment of the present invention can be applied;

FIG. 5 is a schematic structural diagram of a computer system suitable for implementing a terminal device or server according to an embodiment of the present invention.

Detailed Description

Exemplary embodiments of the present invention are described below with reference to the accompanying drawings, including various details of the embodiments to aid understanding; these should be regarded as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the invention. Likewise, descriptions of well-known functions and structures are omitted from the following description for clarity and conciseness.

FIG. 1 is a schematic diagram of the main flow of a method for processing abnormal data in data backfilling according to an embodiment of the present invention. As shown in FIG. 1, the method mainly includes:

Step S101: when abnormal data exists, determine the metadata and time-field information of the abnormal data.

Step S102: based on the stored lineage data, determine the target task flow and its execution order from the metadata of the abnormal data, where the lineage data expresses at least the association relationships between data.

Step S103: execute the target task flow according to the execution order and the time-field information of the abnormal data.
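The three steps above can be sketched as follows; the record and lineage shapes and all helper names are hypothetical, standing in for the modules described later:

```python
# Sketch of steps S101-S103. The record/lineage shapes and helper names
# are hypothetical; the patent only specifies the three steps.

def process_backfill_exception(record, lineage, run_task):
    """Handle one batch of abnormal data.

    S101: take the metadata (here just a table name) and time-field info.
    S102: look up the target task flow, already ordered, from lineage data.
    S103: execute the task flow over the abnormal data's time range.
    """
    table = record["table"]             # S101: metadata of the abnormal data
    window = record["time_range"]       # S101: time-field information
    task_flow = lineage.get(table, [])  # S102: ordered target task flow
    for task in task_flow:              # S103: execute in that order
        run_task(task, window)
    return task_flow
```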

In this embodiment of the present invention, in a data-backfill scenario, the execution order of the task flow is determined and the running status of the tasks is tracked based on the lineage between data and tasks; abnormal data problems are handled automatically without human intervention, greatly improving processing efficiency and reducing labor cost.

Preferably, before determining the metadata and time-field information of the abnormal data, the raw data stored in the distributed file system is parsed to obtain its time-field information, and it is judged whether that time-field information meets the normal-range condition; when it does not, it is determined that abnormal data exists in the raw data.

Preferably, in determining the target task flow and its execution order based on the stored lineage data and the metadata of the abnormal data, the historical data is backwashed into the corresponding time-partition tables according to the time-field information of the raw data, and the target table name of the abnormal data's time-partition table is determined from the time-field information of the abnormal data, where the metadata includes at least the table name. Based on the stored lineage data, the target task flow and its execution order are then determined from the target table name of the abnormal data.

Preferably, the time-partition table is a Hive table. After the target task flow is executed according to the execution order and the time-field information of the abnormal data, the method further includes updating the data in the Hive table to the corresponding database.

Preferably, after the target task flow is executed according to the execution order and the time-field information of the abnormal data, it is confirmed that no task in the target task flow failed, and it is judged whether subsequent delayed data exists. If so, the presence of abnormal data is confirmed.

Preferably, in this embodiment of the present invention, after the abnormal data is determined to be duplicate data, the duplicate data is filtered out.

In the prior art, Airflow, as a task scheduling system, cannot proactively detect anomalies in data collection: data can only be re-run after a data engineer discovers the anomaly and writes scripts to trigger the backfill and clear functions, so going from finding a problem to solving it is slow and inefficient. Furthermore, because Airflow provides no dependencies between DAGs, and both backfill and clear operate only within a single DAG, tasks cannot be re-run across DAGs according to their dependencies. Therefore, in a data-collection scenario, once data is delayed and must be backfilled, the correct execution order of the data-backwash tasks cannot be derived from dependencies; handling must rely on the experience of data developers, which is time-consuming and labor-intensive. Alternatively, scripts that encode the data-cleaning order can be prepared for each specific scenario, but this likewise requires writing and invoking a dedicated processing script per cleaning scenario. Anomalies in data collection are generally handled by raising an alarm and then notifying a data engineer to deal with them.

To address these problems in the prior art, the embodiments of the present invention use Hive as the unified data storage technology together with a single data processing technology, so that data-flow relationships are easy to obtain. Hive is a Hadoop-based data warehouse tool that maps structured data files to database tables and provides full SQL query capabilities, converting SQL statements into MapReduce jobs for execution. Second, the data warehouse model is also unified: each business module shares the same basic data model, so the data-flow relationships are very clear. Finally, at the script level the same set of data-cleaning templates is used and the time parameter is passed in as the same macro parameter, which lays the foundation for automation when tasks are re-run. On this basis, the embodiments of the present invention re-execute historical tasks according to the direction of the data flow, i.e., the data lineage obtained from the metadata, and update the latest data in the Hive tables to the OLAP engine Kylin. OLAP (online analytical processing) is a solution for multidimensional analytical queries; Kylin is a MOLAP system in the Hadoop ecosystem, a distributed OLAP analysis engine supporting data volumes from terabytes to petabytes. In addition, the embodiments of the present invention use the metadata system Atlas to associate the data flow with the tasks in Airflow, and determine the order of the data cleaning and aggregation tasks from the direction of the data flow. Atlas is a scalable and extensible collection of core governance services that enables enterprises to meet compliance requirements in Hadoop effectively and efficiently, and allows integration with the entire enterprise data ecosystem. Apache Atlas provides organizations with open metadata management and governance capabilities to build a catalog of their data assets, classify and manage those assets, and collaborate around them.
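As an example of obtaining lineage from Atlas, a table entity's downstream lineage can be requested from the Apache Atlas v2 REST API; the base URL, guid, and depth below are placeholder values:

```python
def atlas_lineage_url(base_url, guid, direction="OUTPUT", depth=3):
    """Build the Apache Atlas v2 REST lineage endpoint for an entity.

    The endpoint path follows Atlas's public v2 REST API; the base URL,
    entity guid, and depth used here are placeholders.
    """
    return (f"{base_url}/api/atlas/v2/lineage/{guid}"
            f"?direction={direction}&depth={depth}")

# The JSON response maps entity guids to lineage relations, from which the
# downstream tables (and the tasks associated with them) can be read.
```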

FIG. 2 is a schematic diagram of a method for processing abnormal data in data backfilling according to an embodiment of the present invention.

As shown in FIG. 2, the raw JSON (JavaScript Object Notation) files in the distributed file system HDFS are first parsed to obtain the time-field information, and that information is used to judge whether the time falls within a reasonable range; if not, the data is abnormal, otherwise it is normal. If it is abnormal, it is judged whether the anomaly is duplicated historical data or delayed historical data. If data has been collected repeatedly, the duplicates are filtered out and the normal data-cleaning process proceeds. If the collected data contains delayed historical data, the affected time range is obtained from the time distribution: it is the time period of the data affected by the delay, i.e., the value range determined by the time-field information of the delayed data. The historical data is then backwashed into the corresponding time-partition tables, and the names of the directly affected Hive tables and the affected time range are obtained.

The corresponding task names are then looked up in the lineage of the metadata system Atlas using the affected table names. The data-cleaning tasks are re-executed over the affected time range, waiting for all tasks to succeed; if some tasks fail, the failed tasks are resubmitted for execution and awaited until they succeed. Once all tasks have succeeded, it is judged whether there are further affected Hive tables downstream. If so, the corresponding task names are again looked up in the Atlas lineage by affected table name, and the previous operation is repeated until there is no next affected Hive table. Finally, the corresponding cubes in Kylin are updated according to the list and data of all previously affected tables.
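The iteration just described (walk downstream tables through the lineage, re-run each table's tasks, and resubmit failures until everything succeeds) can be sketched as follows; all table and task names are illustrative:

```python
# Sketch of the re-run loop; all table and task names are illustrative.

def rerun_affected(start_tables, lineage, run_tasks):
    """Walk downstream through the lineage and re-run each table's tasks.

    `lineage` maps table -> (tasks_to_rerun, downstream_tables).
    `run_tasks` submits a list of tasks and returns those that failed;
    failures are resubmitted until the list is empty.
    Returns the full list of affected tables (e.g. for refreshing cubes).
    """
    affected, queue = [], list(start_tables)
    while queue:
        table = queue.pop(0)
        if table in affected:
            continue
        affected.append(table)
        tasks, downstream = lineage.get(table, ([], []))
        pending = list(tasks)
        while pending:               # resubmit failed tasks until success
            pending = run_tasks(pending)
        queue.extend(downstream)
    return affected
```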

The embodiments of the present invention can analyze the raw log files in HDFS and backwash the data according to how it was collected. In practice, this can be connected to a data-quality monitoring system: once a problem appears in the data-collection pipeline, the data issue can be handled automatically without human intervention, which is achieved mainly by the anomaly judgments of the first two steps. Likewise, if the statistical logic of the data is modified and all tasks involved in the whole data model must be re-executed, this program can automatically track and run them. By using lineage, the execution order of tasks is judged correctly and their running status is tracked, improving the efficiency of handling data problems and freeing up manpower.

FIG. 3 is a schematic diagram of the main modules of a device for processing abnormal data in data backfilling according to an embodiment of the present invention. As shown in FIG. 3, the device 300 for processing abnormal data in data backfilling according to an embodiment of the present invention includes a metadata determination module 301, a task flow determination module 302 and an execution module 303.

The metadata determination module 301 is configured to determine the metadata and time-field information of abnormal data when abnormal data exists.

The task flow determination module 302 is configured to determine the target task flow and its execution order based on the stored lineage data and the metadata of the abnormal data, where the lineage data expresses at least the association relationships between data. The task flow determination module is further configured to backwash the historical data into the corresponding time-partition tables according to the time-field information of the raw data, and to determine the target table name of the abnormal data's time-partition table from the time-field information of the abnormal data, where the metadata includes at least the table name; and, based on the stored lineage data, to determine the target task flow and its execution order from the target table name of the abnormal data.

The execution module 303 is configured to execute the target task flow according to the execution order and the time field information of the abnormal data.

The apparatus for processing abnormal data in data backfilling according to the embodiment of the present invention further includes an anomaly detection module configured to parse the original data stored in the distributed file system to obtain time field information of the original data, to judge whether the time field information of the original data meets a normal range condition, and to determine that abnormal data exists in the original data when the time field information does not meet the normal range condition.
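The normal-range check on the time field can be sketched as below. The concrete thresholds (no future timestamps, nothing older than seven days) are illustrative assumptions, not values specified by the embodiment:

```python
from datetime import datetime, timedelta, timezone

def is_abnormal(event_ts_ms, now, max_lag_days=7):
    """A record's time field is abnormal if it lies in the future or is
    older than `max_lag_days`; both bounds are illustrative assumptions."""
    ts = datetime.fromtimestamp(event_ts_ms / 1000, tz=timezone.utc)
    return ts > now or ts < now - timedelta(days=max_lag_days)

now = datetime(2020, 1, 3, tzinfo=timezone.utc)
recent = int(datetime(2020, 1, 2, tzinfo=timezone.utc).timestamp() * 1000)
stale = int(datetime(2019, 12, 1, tzinfo=timezone.utc).timestamp() * 1000)
print(is_abnormal(recent, now))  # False: within the normal range
print(is_abnormal(stale, now))   # True: too old, flagged for backfill handling
```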

The apparatus for processing abnormal data in data backfilling according to the embodiment of the present invention further includes an execution result determination module and a duplicate exception processing module. The execution result determination module is configured to determine that no task in the target task flow has failed to execute, and to judge whether subsequent delayed data exists; if so, it confirms that abnormal data exists. The duplicate exception processing module is configured to determine that the abnormal data is duplicate data and to filter out the duplicate data.
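Filtering duplicate records, as the duplicate exception processing module does, can be sketched with a seen-set keyed on the fields that identify an event. The key fields below are hypothetical; in practice you would pick whatever uniquely identifies a record in your logs:

```python
def filter_duplicates(records, key_fields=("device_id", "event", "ts")):
    """Keep only the first record for each key; the key fields are
    illustrative assumptions, not fields defined by the embodiment."""
    seen = set()
    out = []
    for r in records:
        k = tuple(r[f] for f in key_fields)
        if k not in seen:
            seen.add(k)
            out.append(r)
    return out

rows = [
    {"device_id": "a", "event": "play", "ts": 1},
    {"device_id": "a", "event": "play", "ts": 1},  # duplicate from a re-sent batch
    {"device_id": "b", "event": "play", "ts": 1},
]
print(len(filter_duplicates(rows)))  # 2
```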

In the embodiments of the present invention, in the data backfilling scenario, the execution order of the task flow is determined and the running status of the tasks is tracked based on the lineage relationships between data and tasks, so that abnormal-data problems are handled automatically without human intervention, which greatly improves processing efficiency and reduces labor costs.

FIG. 4 shows an exemplary system architecture 400 to which the method or apparatus for processing abnormal data in data backfilling of an embodiment of the present invention can be applied.

As shown in FIG. 4, the system architecture 400 may include terminal devices 401, 402 and 403, a network 404 and a server 405. The network 404 is the medium used to provide communication links between the terminal devices 401, 402, 403 and the server 405, and may include various connection types, such as wired or wireless communication links, or fiber-optic cables.

A user may use the terminal devices 401, 402, 403 to interact with the server 405 through the network 404 to receive or send messages and the like. Various communication client applications may be installed on the terminal devices 401, 402 and 403, such as shopping applications, web browser applications, search applications, instant messaging tools, email clients and social platform software (examples only).

The terminal devices 401, 402, 403 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smartphones, tablet computers, laptop computers, desktop computers and the like.

The server 405 may be a server that provides various services, for example a background management server that supports shopping websites browsed by users on the terminal devices 401, 402 and 403 (an example only). The background management server may analyze and otherwise process received data such as product information query requests, and feed the processing results back to the terminal devices.

It should be noted that the method for processing abnormal data in data backfilling provided by the embodiments of the present invention is generally performed by the server 405; accordingly, the apparatus for processing abnormal data in data backfilling is generally provided in the server 405.

It should be understood that the numbers of terminal devices, networks and servers in FIG. 4 are merely illustrative. There may be any number of terminal devices, networks and servers according to implementation needs.

Referring now to FIG. 5, a schematic structural diagram of a computer system 500 suitable for implementing a terminal device of an embodiment of the present invention is shown. The terminal device shown in FIG. 5 is only an example and should not impose any limitation on the functions or scope of use of the embodiments of the present invention.

As shown in FIG. 5, the computer system 500 includes a central processing unit (CPU) 501, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 502 or a program loaded from a storage section 508 into a random access memory (RAM) 503. The RAM 503 also stores various programs and data required for the operation of the system 500. The CPU 501, ROM 502 and RAM 503 are connected to one another through a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.

The following components are connected to the I/O interface 505: an input section 506 including a keyboard, a mouse and the like; an output section 507 including a cathode-ray tube (CRT) or liquid-crystal display (LCD), a speaker and the like; a storage section 508 including a hard disk and the like; and a communication section 509 including a network interface card such as a LAN card or modem. The communication section 509 performs communication processing via a network such as the Internet. A drive 510 is also connected to the I/O interface 505 as needed. A removable medium 511, such as a magnetic disk, optical disk, magneto-optical disk or semiconductor memory, is mounted on the drive 510 as needed, so that a computer program read therefrom can be installed into the storage section 508 as needed.

In particular, according to the disclosed embodiments of the present invention, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, the disclosed embodiments include a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for performing the method illustrated in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication section 509, and/or installed from the removable medium 511. When the computer program is executed by the central processing unit (CPU) 501, the above-described functions defined in the system of the present invention are performed.

It should be noted that the computer-readable medium of the present invention may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present invention, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus or device. A computer-readable signal medium, by contrast, may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take a variety of forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the foregoing. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate or transport the program for use by or in connection with the instruction execution system, apparatus or device. Program code embodied on a computer-readable medium may be transmitted using any suitable medium, including but not limited to wireless, wireline, optical-fiber cable, RF, etc., or any suitable combination of the foregoing.

The flowcharts and block diagrams in the figures illustrate the architecture, functionality and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowcharts or block diagrams may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions. It should also be noted that in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may in fact be executed substantially concurrently, or sometimes in the reverse order, depending on the functionality involved. It should also be noted that each block of the block diagrams or flowcharts, and combinations of such blocks, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.

The modules involved in the embodiments of the present invention may be implemented in software or in hardware. The described modules may also be provided in a processor; for example, a processor may be described as including a metadata determination module, a task flow determination module and an execution module. The names of these modules do not, in some cases, limit the modules themselves; for example, the metadata determination module may also be described as "a module for determining metadata and time field information of abnormal data when abnormal data exists".

As another aspect, the present invention further provides a computer-readable medium, which may be included in the device described in the above embodiments or may exist alone without being assembled into the device. The computer-readable medium carries one or more programs which, when executed by the device, cause the device to: determine metadata and time field information of abnormal data when abnormal data exists; determine, based on stored lineage data and according to the metadata of the abnormal data, a target task flow and the execution order of the target task flow, the lineage data being at least used to express the association relationships between data; and execute the target task flow according to the execution order and the time field information of the abnormal data.

In the embodiments of the present invention, in the data backfilling scenario, the execution order of the task flow is determined and the running status of the tasks is tracked based on the lineage relationships between data and tasks, so that abnormal-data problems are handled automatically without human intervention, greatly improving processing efficiency and reducing labor costs.

The above specific embodiments do not limit the protection scope of the present invention. Those skilled in the art should understand that various modifications, combinations, sub-combinations and substitutions may occur depending on design requirements and other factors. Any modifications, equivalent replacements, improvements and the like made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (8)

1. A method for processing abnormal data in data backfilling, comprising:
when abnormal data exists, determining metadata and time field information of the abnormal data;
determining, based on stored lineage data, a target task flow and an execution order of the target task flow according to the metadata of the abnormal data, wherein the lineage data is at least used to express association relationships between data; and
executing the target task flow according to the execution order and the time field information of the abnormal data,
wherein determining the target task flow and the execution order of the target task flow according to the metadata of the abnormal data based on the stored lineage data comprises:
backfilling historical data into a corresponding time partition table according to time field information of original data stored in a distributed file system, and determining a target table name of the time partition table of the abnormal data according to the time field information of the abnormal data, wherein the metadata includes at least a table name; and
determining the target task flow and the execution order of the target task flow according to the target table name of the abnormal data based on the stored lineage data.
2. The method of claim 1, further comprising, before determining the metadata and time field information of the abnormal data when abnormal data exists:
parsing original data stored in a distributed file system to obtain time field information of the original data; and
judging whether the time field information of the original data meets a normal range condition, and determining that abnormal data exists in the original data when the time field information of the original data does not meet the normal range condition.
3. The method of claim 1, wherein the time partition table is a hive table; and/or,
after executing the target task flow according to the execution order and the time field information of the abnormal data, the method further comprises: updating the data in the hive table to a corresponding database.
4. An apparatus for processing abnormal data in data backfilling, comprising:
a metadata determination module configured to determine metadata and time field information of abnormal data when abnormal data exists;
a task flow determination module configured to determine, based on stored lineage data, a target task flow and an execution order of the target task flow according to the metadata of the abnormal data, wherein the lineage data is at least used to express association relationships between data; and
an execution module configured to execute the target task flow according to the execution order and the time field information of the abnormal data,
wherein the task flow determination module is further configured to backfill historical data into a corresponding time partition table according to time field information of original data stored in a distributed file system, determine a target table name of the time partition table of the abnormal data according to the time field information of the abnormal data, wherein the metadata includes at least a table name, and determine the target task flow and the execution order of the target task flow according to the target table name of the abnormal data based on the stored lineage data.
5. The apparatus of claim 4, further comprising an anomaly detection module configured to parse raw data stored in the distributed file system to obtain time field information of the raw data, judge whether the time field information of the raw data meets a normal range condition, and determine that abnormal data exists in the raw data when the time field information of the raw data does not meet the normal range condition.
6. The apparatus of claim 4, further comprising an execution result determination module configured to determine that no task in the target task flow has failed to execute, and to judge whether subsequent delayed data exists; if so, to confirm that abnormal data exists.
7. An electronic device, comprising:
one or more processors;
a storage device storing one or more programs,
which, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-3.
8. A computer-readable medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1-3.
CN202010004932.9A 2020-01-03 2020-01-03 A method and device for processing abnormal data in data backfilling Active CN111190892B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010004932.9A CN111190892B (en) 2020-01-03 2020-01-03 A method and device for processing abnormal data in data backfilling

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010004932.9A CN111190892B (en) 2020-01-03 2020-01-03 A method and device for processing abnormal data in data backfilling

Publications (2)

Publication Number Publication Date
CN111190892A CN111190892A (en) 2020-05-22
CN111190892B true CN111190892B (en) 2022-07-12

Family

ID=70708612

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010004932.9A Active CN111190892B (en) 2020-01-03 2020-01-03 A method and device for processing abnormal data in data backfilling

Country Status (1)

Country Link
CN (1) CN111190892B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111782190A (en) * 2020-06-29 2020-10-16 京东数字科技控股有限公司 Abnormal location method and device, storage medium and electronic device
CN111858065B (en) * 2020-07-28 2023-02-03 中国平安财产保险股份有限公司 Data processing method, device, storage medium and device
CN114584802B (en) * 2020-11-30 2024-03-08 腾讯科技(深圳)有限公司 Multimedia processing method, device, medium and electronic equipment
CN112965793B (en) * 2021-01-21 2023-11-21 中国互联网络信息中心 A data warehouse task scheduling method and system for identification parsing data
CN113138895A (en) * 2021-04-19 2021-07-20 建信金融科技有限责任公司 Monitoring method and device of cloud desktop equipment

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060020939A1 (en) * 2004-06-10 2006-01-26 International Business Machines Corporation Exception handling in the processing of proposal requests in a grid computing environment
CN107688504A (en) * 2016-08-05 2018-02-13 中兴通讯股份有限公司 Data management abnormal means to save the situation and system

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060020939A1 (en) * 2004-06-10 2006-01-26 International Business Machines Corporation Exception handling in the processing of proposal requests in a grid computing environment
CN107688504A (en) * 2016-08-05 2018-02-13 中兴通讯股份有限公司 Data management abnormal means to save the situation and system

Also Published As

Publication number Publication date
CN111190892A (en) 2020-05-22


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 215123 building 14, Tengfei Innovation Park, 388 Xinping street, Suzhou Industrial Park, Suzhou City, Jiangsu Province

Applicant after: Sipic Technology Co.,Ltd.

Address before: 215123 building 14, Tengfei Innovation Park, 388 Xinping street, Suzhou Industrial Park, Suzhou City, Jiangsu Province

Applicant before: AI SPEECH Co.,Ltd.

GR01 Patent grant