
CN111104514B - Training method and device for document tag model - Google Patents


Info

Publication number: CN111104514B (granted publication; earlier application publication CN111104514A)
Application number: CN201911338269.XA
Authority: CN (China)
Language: Chinese (zh)
Legal status: Active
Inventors: 刘呈祥, 何伯磊, 肖欣延
Assignee: Beijing Baidu Netcom Science and Technology Co Ltd

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/35 Clustering; Classification

Abstract

The application discloses a training method and device for a document tag model, relating to the technical field of document tag prediction. The scheme is as follows: obtain a pretrained document tag model, which is pretrained on general training data drawn from all application scenarios; obtain scene training data for the target application scenario, comprising multiple documents in that scenario and their corresponding tag information; obtain the sub-models of the document tag model that are relevant to the target scenario; and train those sub-models with the scene training data to obtain the trained document tag model. This reduces the training data required to adapt the document tag model to a new application scenario, lowering training cost while ensuring the model's accuracy.

Description

Training Method and Device for a Document Tag Model

Technical Field

The present application relates to the field of data processing technology, specifically to the field of document tag prediction, and in particular to a training method and device for a document tag model.

Background

At present, document tag prediction is an important part of document content understanding. For a new document tag prediction scenario, there are two main approaches. The first is to train a general document tag model, ignoring the differences between scenarios during training and using the same general model in every scenario. The second is to train a dedicated document tag model, preparing training data separately for the new scenario.

With the first approach, the trained model lacks scenario or domain specificity, so its prediction accuracy in any single scenario is low. With the second, a large amount of training data must be prepared, making training costly.

Summary

This application proposes a training method and device for a document tag model: based on scene training data for the target application scenario, the sub-models of a pretrained document tag model that are relevant to that scenario are trained, reducing the training cost of the document tag model in the target scenario while ensuring its accuracy.

An embodiment of one aspect of this application proposes a training method for a document tag model, comprising:

obtaining a pretrained document tag model, the document tag model being pretrained on general training data from all application scenarios;

obtaining scene training data for the target application scenario, the scene training data comprising multiple documents in the target scenario and their corresponding tag information;

obtaining the sub-models of the document tag model relevant to the target application scenario; and

training the sub-models with the scene training data to obtain a trained document tag model.

In one embodiment of this application, the document tag model comprises a preprocessing layer, a candidate recall layer, a coarse ranking layer, and a fine ranking layer.

The candidate recall layer comprises, in parallel: a keyword recall sub-model, a multi-label classification recall sub-model, an explicit recall sub-model, and an implicit recall sub-model.

The coarse ranking layer comprises, in parallel: a rule sub-model and a semantic matching sub-model.

The sub-models relevant to the target application scenario comprise the semantic matching sub-model, plus any one or more of the following: the multi-label classification recall sub-model, the explicit recall sub-model, and the implicit recall sub-model.

In one embodiment of this application, when the sub-models relevant to the target application scenario comprise the semantic matching, multi-label classification recall, explicit recall, and implicit recall sub-models, training the sub-models with the scene training data to obtain the trained document tag model comprises:

for each document in the scene training data, feeding the document separately into the multi-label classification recall sub-model, the explicit recall sub-model, and the implicit recall sub-model, and merging their outputs to obtain a candidate tag result;

feeding the document and the candidate tag result into the semantic matching sub-model to obtain the relevance between the document and each candidate tag in the candidate tag result; and

adjusting the coefficients of the semantic matching, multi-label classification recall, explicit recall, and implicit recall sub-models according to those relevance scores and the tag information corresponding to the document, to obtain the trained document tag model.

In one embodiment of this application, the scene training data further comprises a tag set containing the tags the document tag model may predict, so that the document tag model predicts tags for the documents in the scene training data against this tag set.

In one embodiment of this application, before training the sub-models with the scene training data to obtain the trained document tag model, the method further comprises:

initializing the coefficients of the multi-label classification recall sub-model, the explicit recall sub-model, and the implicit recall sub-model in the document tag model.

With the training method of this embodiment, a pretrained document tag model is obtained, pretrained on general training data from all application scenarios; scene training data for the target scenario is obtained, comprising multiple documents and their corresponding tag information; the sub-models relevant to the target scenario are obtained; and those sub-models are trained with the scene training data to obtain the trained document tag model. This reduces the training data needed to train the document tag model for the target scenario, lowering training cost while ensuring the model's accuracy.

An embodiment of another aspect of this application proposes a training device for a document tag model, comprising:

an acquisition module, configured to obtain a pretrained document tag model, the document tag model being pretrained on general training data from all application scenarios;

the acquisition module being further configured to obtain scene training data for the target application scenario, the scene training data comprising multiple documents in the target scenario and their corresponding tag information;

the acquisition module being further configured to obtain the sub-models of the document tag model relevant to the target application scenario; and

a training module, configured to train the sub-models with the scene training data to obtain a trained document tag model.

In one embodiment of this application, the document tag model comprises a preprocessing layer, a candidate recall layer, a coarse ranking layer, and a fine ranking layer.

The candidate recall layer comprises, in parallel: a keyword recall sub-model, a multi-label classification recall sub-model, an explicit recall sub-model, and an implicit recall sub-model.

The coarse ranking layer comprises, in parallel: a rule sub-model and a semantic matching sub-model.

The sub-models relevant to the target application scenario comprise the semantic matching sub-model, plus any one or more of the following: the multi-label classification recall sub-model, the explicit recall sub-model, and the implicit recall sub-model.

In one embodiment of this application, when the sub-models relevant to the target application scenario comprise the semantic matching, multi-label classification recall, explicit recall, and implicit recall sub-models, the training module is specifically configured to:

for each document in the scene training data, feed the document separately into the multi-label classification recall sub-model, the explicit recall sub-model, and the implicit recall sub-model, and merge their outputs to obtain a candidate tag result;

feed the document and the candidate tag result into the semantic matching sub-model to obtain the relevance between the document and each candidate tag in the candidate tag result; and

adjust the coefficients of the semantic matching, multi-label classification recall, explicit recall, and implicit recall sub-models according to those relevance scores and the tag information corresponding to the document, to obtain the trained document tag model.

In one embodiment of this application, the scene training data further comprises a tag set containing the tags the document tag model may predict, so that the document tag model predicts tags for the documents in the scene training data against this tag set.

In one embodiment of this application, the device further comprises an initialization module, configured to initialize the coefficients of the multi-label classification recall sub-model, the explicit recall sub-model, and the implicit recall sub-model in the document tag model.

With the training device of this embodiment, a pretrained document tag model is obtained, pretrained on general training data from all application scenarios; scene training data for the target scenario is obtained, comprising multiple documents and their corresponding tag information; the sub-models relevant to the target scenario are obtained; and those sub-models are trained with the scene training data to obtain the trained document tag model. This reduces the training data needed to train the document tag model for the target scenario, lowering training cost while ensuring the model's accuracy.

An embodiment of another aspect of this application proposes an electronic device, comprising: at least one processor; and a memory communicatively connected to the at least one processor. The memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can perform the training method for a document tag model of the embodiments of this application.

An embodiment of another aspect of this application proposes a non-transitory computer-readable storage medium storing computer instructions, the computer instructions being used to cause a computer to perform the training method for a document tag model of the embodiments of this application.

Other effects of the above optional implementations will be described below in conjunction with specific embodiments.

Brief Description of the Drawings

The accompanying drawings are provided for a better understanding of the solution and do not limit this application. In the drawings:

Fig. 1 is a schematic diagram according to the first embodiment of this application;

Fig. 2 is a schematic diagram of the structure of the document tag model;

Fig. 3 is a schematic diagram according to the second embodiment of this application;

Fig. 4 is a schematic diagram according to the third embodiment of this application;

Fig. 5 is a block diagram of an electronic device for implementing the training method for a document tag model of the embodiments of this application.

Detailed Description

Exemplary embodiments of this application are described below with reference to the accompanying drawings. Various details of the embodiments are included to aid understanding and should be regarded as merely exemplary; those of ordinary skill in the art will recognize that various changes and modifications can be made to the embodiments described here without departing from the scope and spirit of this application. Likewise, descriptions of well-known functions and structures are omitted below for clarity and conciseness.

The training method and device for a document tag model of the embodiments of this application are described below with reference to the drawings.

Fig. 1 is a schematic diagram according to the first embodiment of this application. It should be noted that the method of this embodiment is executed by a training device for the document tag model. The device may be implemented in software and/or hardware and may be deployed in a terminal device or in a server; this embodiment does not specifically limit the deployment.

As shown in Fig. 1, the training method for the document tag model may comprise the following steps.

Step 101: obtain a pretrained document tag model, the document tag model being pretrained on general training data from all application scenarios.

In this application, the structure of the document tag model may be as shown in Fig. 2: a preprocessing layer, a candidate recall layer, a coarse ranking layer, and a fine ranking layer. The preprocessing layer segments the document into paragraphs and sentences, performs word segmentation, part-of-speech (POS) tagging, and named entity recognition (NER), and outputs the corresponding preprocessing results: the paragraph segmentation, sentence segmentation, and word segmentation results, the POS tags, and the recognized named entities.
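The preprocessing layer can be sketched as follows. This is a minimal toy illustration of the outputs listed above, not the patent's implementation: the regex splitting and the capitalization-based POS/NER stand-ins are assumptions, where a real system would use trained tokenizer, tagger, and NER models.

```python
import re
from dataclasses import dataclass

@dataclass
class Preprocessed:
    paragraphs: list  # paragraph segmentation result
    sentences: list   # sentence segmentation result
    tokens: list      # word segmentation result
    pos_tags: list    # part-of-speech (POS) tags
    entities: list    # named entity recognition (NER) result

def preprocess(document: str) -> Preprocessed:
    paragraphs = [p for p in document.split("\n\n") if p.strip()]
    sentences = [s for p in paragraphs
                   for s in re.split(r"(?<=[.!?])\s+", p) if s]
    tokens = [t for s in sentences for t in re.findall(r"\w+", s)]
    # Toy stand-ins for trained taggers: capitalized tokens are treated
    # as proper nouns / entities, everything else as a generic word.
    pos_tags = [(t, "NNP" if t[0].isupper() else "X") for t in tokens]
    entities = [t for t in tokens if t[0].isupper()]
    return Preprocessed(paragraphs, sentences, tokens, pos_tags, entities)

result = preprocess("Baidu hosted a summit.\n\nThe event drew many users.")
print(len(result.paragraphs), len(result.sentences), result.entities)
```

All five result fields then flow into the recall sub-models downstream.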

The candidate recall layer comprises, in parallel, a keyword recall sub-model, a multi-label classification recall sub-model, an explicit recall sub-model, and an implicit recall sub-model. Each of the four recall sub-models takes the document and its preprocessing results as input and outputs multiple candidate tags; the outputs of the four sub-models are merged to obtain the candidate tag result. The keyword recall sub-model determines candidate tags by analyzing the document's semantic structure and statistical features. The multi-label classification recall sub-model determines candidate tags via neural-network-based multi-label classification. The explicit recall sub-model determines candidate tags based on literal matching and frequency filtering. The implicit recall sub-model determines candidate tags based on principal and secondary component analysis.
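The parallel-then-merge structure of the candidate recall layer can be sketched like this. Only that structure follows the text; the four sub-model bodies are toy stubs standing in for the trained models described above.

```python
def keyword_recall(doc):           # semantic structure + statistics (stub)
    return {w for w in doc.split() if len(w) > 6}

def multilabel_recall(doc):        # NN multi-label classifier (stub)
    return {"technology"} if "model" in doc else set()

def explicit_recall(doc, tagset):  # literal match + frequency filter (stub)
    return {t for t in tagset if t in doc}

def implicit_recall(doc):          # principal/secondary component analysis (stub)
    return {doc.split()[0].lower()} if doc else set()

def candidate_recall(doc, tagset):
    merged = set()
    for recalled in (keyword_recall(doc),
                     multilabel_recall(doc),
                     explicit_recall(doc, tagset),
                     implicit_recall(doc)):
        merged |= recalled  # union = merge of the four sub-model outputs
    return merged

candidates = candidate_recall("Baidu released a new language model",
                              tagset={"language model", "search"})
print(sorted(candidates))
```

Because the sub-models run independently, the merge is a simple deduplicating union of their candidate sets.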

The coarse ranking layer comprises, in parallel, a rule sub-model and a semantic matching sub-model. The rule sub-model determines, according to preset rules, which candidate tags in the candidate tag result should be filtered out. The semantic matching sub-model determines the text relevance between the document and each candidate tag, that is, their semantic-level similarity, and determines candidates to filter according to that relevance. The flagged candidates are removed from the candidate tag result, yielding the filtered candidate tag result.
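A hedged sketch of the coarse ranking layer follows. The single-character rule and the token-overlap relevance score are invented placeholders (the patent does not specify the rules, and a real system would use the learned semantic matching sub-model); only the flag-then-filter structure mirrors the text.

```python
def rule_filter(candidates):
    # An example preset rule: drop single-character tags.
    return {c for c in candidates if len(c) < 2}

def semantic_filter(doc, candidates, threshold=0.3):
    # Toy "text relevance": token overlap between document and tag.
    doc_tokens = set(doc.lower().split())
    def relevance(tag):
        tag_tokens = set(tag.lower().split())
        return len(tag_tokens & doc_tokens) / len(tag_tokens)
    return {c for c in candidates if relevance(c) < threshold}

def coarse_rank(doc, candidates):
    to_drop = rule_filter(candidates) | semantic_filter(doc, candidates)
    return candidates - to_drop  # filtered candidate tag result

kept = coarse_rank("baidu alliance summit opens",
                   {"baidu alliance", "x", "football"})
print(sorted(kept))
```

Both sub-models only nominate tags to drop; the actual removal happens once, on the union of their nominations.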

The fine ranking layer sorts the candidates in the filtered candidate tag result by text relevance, tag heat, and tag granularity, and predicts the tag information corresponding to the document from the ranking. Tag heat refers to the degree of user attention a candidate tag attracts, such as its search popularity. Tag granularity is computed from the word types and length of the candidate tag: the more specific the tag's content, the finer its granularity. Ordered by granularity, for example: Baidu -> Baidu Alliance -> Baidu Alliance Summit; entertainment -> entertainment star.
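The fine ranking step can be sketched as a multi-key sort. The particular ordering (relevance descending, then heat descending, then finer granularity first) and the numeric scores are assumptions for illustration; the patent only names the three ranking signals.

```python
def fine_rank(candidates):
    # candidates: (tag, relevance, heat, granularity) tuples, where a
    # smaller granularity value means a more specific tag, e.g.
    # "baidu alliance summit" is finer than "baidu".
    ranked = sorted(candidates,
                    key=lambda c: (-c[1], -c[2], c[3]))
    return [tag for tag, *_ in ranked]

ranked = fine_rank([
    ("entertainment",         0.70, 0.9, 3),
    ("entertainment star",    0.70, 0.9, 2),  # same relevance/heat, finer
    ("baidu alliance summit", 0.95, 0.6, 1),
])
print(ranked)
```

With these assumed scores, relevance dominates, and granularity only breaks the tie between the two entertainment tags.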

In this application, application scenarios include, for example, entity-focused tag prediction for long documents, accuracy-focused tag prediction for question answering, and recall-focused tag prediction for user-generated content. Prediction targets may include long documents, question-answer pairs, user-generated content, and so on. Prediction requirements include, for example, an emphasis on recall, on accuracy, on entities, on classification, or on high commercial value.

In this application, the general training data of all application scenarios may, for example, be the training data obtained by combining the training data of the individual scenarios. Before the target application scenario is determined, a large amount of such general training data can be used to pretrain the initial document tag model, so that once the target scenario is determined, less scenario-specific training data is needed.
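The combination described above is simply a pooling of the per-scenario training sets; a trivial sketch (the scenario names and data shapes are illustrative assumptions):

```python
def build_general_data(per_scene):
    # per_scene: {scene_name: [(document, gold_tags), ...]}
    # The general training data is the concatenation of all scenes' data.
    return [pair for pairs in per_scene.values() for pair in pairs]

general = build_general_data({
    "long_document": [("doc a", {"entity"})],
    "qa":            [("doc b", {"accurate"}), ("doc c", {"recall"})],
})
print(len(general))
```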

Step 102: obtain scene training data for the target application scenario, the scene training data comprising multiple documents in the target scenario and their corresponding tag information.

Step 103: obtain the sub-models of the document tag model relevant to the target application scenario.

In this application, the sub-models relevant to the target application scenario comprise the semantic matching sub-model, plus any one or more of the following: the multi-label classification recall sub-model, the explicit recall sub-model, and the implicit recall sub-model. Sub-models can be selected from these for retraining or fine-tuning according to the specific target scenario.
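A minimal sketch of this selection step: the semantic matching sub-model is always retrained, any subset of the three recall sub-models may be added, and everything else stays frozen. The boolean trainable-flag mechanism is an assumption; in a deep learning framework this would correspond to toggling each sub-model's parameters between frozen and trainable.

```python
ALWAYS_TRAIN = {"semantic_matching"}
OPTIONAL = {"multilabel_recall", "explicit_recall", "implicit_recall"}

def select_trainable(all_submodels, scene_relevant):
    # Returns {sub-model name: True if it should be retrained/fine-tuned}.
    chosen = ALWAYS_TRAIN | (set(scene_relevant) & OPTIONAL)
    return {name: (name in chosen) for name in all_submodels}

submodels = ["keyword_recall", "multilabel_recall", "explicit_recall",
             "implicit_recall", "rule", "semantic_matching"]
trainable = select_trainable(submodels, scene_relevant={"explicit_recall"})
print(trainable)
```

Note that the keyword recall and rule sub-models are never in the trainable set, matching the lists above.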

Step 104: train the sub-models with the scene training data to obtain a trained document tag model.

In this application, when the sub-models relevant to the target application scenario comprise the semantic matching, multi-label classification recall, explicit recall, and implicit recall sub-models, the training device may perform step 104 as follows: for each document in the scene training data, feed the document separately into the multi-label classification recall, explicit recall, and implicit recall sub-models, and merge their outputs to obtain a candidate tag result; feed the document and the candidate tag result into the semantic matching sub-model to obtain the relevance between the document and each candidate tag in the candidate tag result; and adjust the coefficients of the semantic matching, multi-label classification recall, explicit recall, and implicit recall sub-models according to those relevance scores and the tag information corresponding to the document, obtaining the trained document tag model.
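The training step above can be sketched as follows. The recall stubs, the substring relevance score, and the error-driven coefficient update are all toy placeholders, not the patent's method; only the flow (recall, merge, score with semantic matching, adjust coefficients against the gold tags) follows the text.

```python
def multilabel_recall(doc):  # stubs standing in for the trained sub-models
    return {"tech"} if "model" in doc else set()

def explicit_recall(doc):
    return {w for w in doc.split() if w.istitle()}

def implicit_recall(doc):
    return {doc.split()[-1]} if doc else set()

def relevance(doc, tag):     # toy semantic-matching relevance score
    return 1.0 if tag.lower() in doc.lower() else 0.0

def train(scene_data, lr=0.1):
    coeffs = {"multilabel": 1.0, "explicit": 1.0,
              "implicit": 1.0, "semantic": 1.0}
    for doc, gold in scene_data:
        recalled = {"multilabel": multilabel_recall(doc),
                    "explicit": explicit_recall(doc),
                    "implicit": implicit_recall(doc)}
        # Merge the three recall outputs into the candidate tag result.
        candidates = sorted(set().union(*recalled.values()))
        for tag in candidates:
            score = coeffs["semantic"] * relevance(doc, tag)
            target = 1.0 if tag in gold else 0.0
            err = target - score
            coeffs["semantic"] += lr * err
            for name, tags in recalled.items():
                if tag in tags:          # credit the sub-models that proposed it
                    coeffs[name] += lr * err
    return coeffs

coeffs = train([("Baidu released a model", {"Baidu", "tech"})])
print(coeffs)
```

Sub-models whose candidates match the document's gold tags have their coefficients nudged up; sub-models that propose spurious tags are nudged down.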

In this application, to improve the accuracy of the trained document tag model, the scene training data may further include a tag set containing the tags the document tag model may predict, so that the model predicts tags for the documents in the scene training data against this tag set.

In this application, before step 104, the method may further comprise initializing the coefficients of the multi-label classification recall, explicit recall, and implicit recall sub-models in the document tag model, so that the pretrained coefficients of these sub-models do not interfere with training in the target scenario, further improving the model's accuracy in that scenario.
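A sketch of that initialization step: the coefficients of the three recall sub-models are reset while the semantic matching sub-model keeps its pretrained values. Representing each sub-model as a flat coefficient list and using small seeded random re-initialization are assumed choices; the text only says the coefficients are initialized.

```python
import random

RESET = ("multilabel_recall", "explicit_recall", "implicit_recall")

def init_submodel_coeffs(model, seed=0):
    # model: {sub-model name: list of coefficients}
    rng = random.Random(seed)
    for name in RESET:
        model[name] = [rng.uniform(-0.1, 0.1) for _ in model[name]]
    return model

model = {"multilabel_recall": [0.7, -1.2],
         "explicit_recall":   [2.3],
         "implicit_recall":   [0.0, 0.5, 0.5],
         "semantic_matching": [1.0]}   # left untouched by the reset
model = init_submodel_coeffs(model)
print(model["semantic_matching"])
```

After the reset, scene training (step 104) starts the three recall sub-models from fresh coefficients.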

With the training method of the embodiments of this application, a pretrained document tag model is obtained, pretrained on general training data from all application scenarios; scene training data for the target scenario is obtained, comprising multiple documents and their corresponding tag information; the sub-models relevant to the target scenario are obtained; and those sub-models are trained with the scene training data to obtain the trained document tag model. This reduces the training data needed to train the document tag model for the target scenario, lowering training cost while ensuring the model's accuracy.

To implement the above embodiments, an embodiment of this application further provides a training device for a document tag model.

Fig. 3 is a schematic diagram according to the second embodiment of this application. As shown in Fig. 3, the training device 100 for the document tag model comprises:

an acquisition module 110, configured to obtain a pretrained document tag model, the document tag model being pretrained on general training data from all application scenarios;

the acquisition module 110 being further configured to obtain scene training data for the target application scenario, the scene training data comprising multiple documents in the target scenario and their corresponding tag information;

the acquisition module 110 being further configured to obtain the sub-models of the document tag model relevant to the target application scenario; and

a training module 120, configured to train the sub-models with the scene training data to obtain a trained document tag model.

In one embodiment of this application, the document tag model comprises a preprocessing layer, a candidate recall layer, a coarse ranking layer, and a fine ranking layer.

The candidate recall layer comprises, in parallel: a keyword recall sub-model, a multi-label classification recall sub-model, an explicit recall sub-model, and an implicit recall sub-model.

The coarse ranking layer comprises, in parallel: a rule sub-model and a semantic matching sub-model.

The sub-models relevant to the target application scenario comprise the semantic matching sub-model, plus any one or more of the following: the multi-label classification recall sub-model, the explicit recall sub-model, and the implicit recall sub-model.

在本申请一个实施例中,在与所述待适用的应用场景相关的子模型包括:语义匹配子模型、多标签分类召回子模型、显式召回子模型和隐式召回子模型时,所述训练模块120具体用于,In an embodiment of the present application, when the sub-models related to the application scenario to be applied include the semantic matching sub-model, the multi-label classification recall sub-model, the explicit recall sub-model and the implicit recall sub-model, the training module 120 is specifically configured to:

针对所述场景训练数据中的每个文档,将所述文档分别输入多标签分类召回子模型、显式召回子模型和隐式召回子模型,并将各个输出结果进行合并,得到候选标签结果;For each document in the scene training data, input the document into the multi-label classification recall sub-model, the explicit recall sub-model and the implicit recall sub-model respectively, and merge the output results to obtain candidate label results;

将所述文档以及所述候选标签结果输入所述语义匹配子模型,获取所述文档与所述候选标签结果中各个候选标签的相关度;Input the document and the candidate label result into the semantic matching sub-model, and obtain the correlation between the document and each candidate label in the candidate label result;

根据所述文档与所述候选标签结果中各个候选标签的相关度,以及所述文档对应的标签信息,对语义匹配子模型、多标签分类召回子模型、显式召回子模型和隐式召回子模型的系数进行调整,得到训练好的文档标签模型。According to the correlation between the document and each candidate label in the candidate label result, and the label information corresponding to the document, the coefficients of the semantic matching sub-model, the multi-label classification recall sub-model, the explicit recall sub-model and the implicit recall sub-model are adjusted to obtain a trained document label model.
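The three training steps above (recall and merge, semantic scoring, coefficient adjustment) can be sketched as a minimal fine-tuning loop. This is a hypothetical reduction for illustration: each sub-model is collapsed to a single scalar coefficient over a hand-made feature, the "semantic match" is their weighted sum, and the gradient-style update rule is an assumption, not the patent's actual training procedure.

```python
def recall_candidates(doc, recall_fns):
    """Step 1: run the parallel recall sub-models and merge their outputs."""
    candidates = set()
    for fn in recall_fns:
        candidates |= set(fn(doc))
    return candidates

def fine_tune(weights, docs_with_labels, recall_fns, features, lr=0.5, epochs=30):
    """Steps 2-3: score each candidate, then adjust only the
    scenario-relevant coefficients, pushing gold-label scores toward 1
    and other candidates toward 0."""
    for _ in range(epochs):
        for doc, gold in docs_with_labels:
            for label in sorted(recall_candidates(doc, recall_fns)):
                score = sum(w * f(doc, label) for w, f in zip(weights, features))
                grad = score - (1.0 if label in gold else 0.0)
                for i, f in enumerate(features):
                    weights[i] -= lr * grad * f(doc, label)
    return weights

# Toy scenario training data: one document whose gold label is "python".
recall_fns = [lambda d: ["python"], lambda d: ["java"]]  # stand-in recall sub-models
features = [lambda d, l: 1.0 if l in d else 0.0,         # label appears in the text
            lambda d, l: 1.0]                            # bias term
weights = fine_tune([0.0, 0.0], [("deep learning with python", {"python"})],
                    recall_fns, features)
score = lambda d, l: sum(w * f(d, l) for w, f in zip(weights, features))
print(score("deep learning with python", "python"))  # close to 1.0
print(score("deep learning with python", "java"))    # close to 0.0
```

After fine-tuning, the gold label scores near 1 while the spurious recall scores near 0, mirroring how adjusting only the scenario-relevant coefficients adapts a pre-trained model with little scenario data.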

在本申请一个实施例中,所述场景训练数据还包括:标签集合,所述标签集合包括:文档标签模型可以预测的标签,以便文档标签模型结合所述标签集合对场景训练数据中的文档进行标签预测。In an embodiment of the present application, the scene training data further includes a label set, the label set including the labels that the document label model can predict, so that the document label model performs label prediction on the documents in the scene training data in combination with the label set.
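The constraint described above — predictions restricted to a scenario's known label set — amounts to a simple membership filter over the model's raw predictions. A minimal sketch (the function name is illustrative):

```python
def constrain_to_label_set(predicted, label_set):
    """Keep only the predicted labels that appear in the scenario's
    allowed label set, preserving the original prediction order."""
    return [p for p in predicted if p in label_set]

# Raw predictions from the model, filtered by the scenario's label set.
print(constrain_to_label_set(["python", "cooking", "ml"], {"python", "ml"}))
# ['python', 'ml']
```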

在本申请一个实施例中,结合参考图4,所述的装置还包括:初始化模块130,用于对所述文档标签模型中的多标签分类召回子模型、显式召回子模型和隐式召回子模型的系数进行初始化操作。In an embodiment of the present application, with reference to FIG. 4, the apparatus further includes an initialization module 130, configured to initialize the coefficients of the multi-label classification recall sub-model, the explicit recall sub-model and the implicit recall sub-model in the document label model.

其中,需要说明的是,前述对文档标签模型的训练方法的解释说明也适用于本实施例的文档标签模型的训练装置,此处不再赘述。It should be noted that the foregoing explanation of the training method for the document label model also applies to the training apparatus for the document label model in this embodiment, and is not repeated here.

本申请实施例的文档标签模型的训练装置,通过获取经过预训练的文档标签模型,文档标签模型采用各个应用场景的通用训练数据进行预训练得到;获取待适用的应用场景的场景训练数据,场景训练数据包括:待适用的应用场景下的多个文档以及对应的标签信息;获取文档标签模型中与待适用的应用场景相关的子模型;采用场景训练数据对子模型进行训练,得到训练好的文档标签模型,从而能够减少待适用的应用场景下训练文档标签模型所需要的训练数据,在确保文档标签模型的准确度的情况下降低训练成本。With the training apparatus for a document label model in the embodiments of the present application, a pre-trained document label model is obtained, the document label model being pre-trained with general training data from various application scenarios; scene training data of the application scenario to be applied is obtained, the scene training data including multiple documents in that application scenario and the corresponding label information; the sub-models related to the application scenario to be applied are obtained from the document label model; and the sub-models are trained with the scene training data to obtain a trained document label model. This reduces the training data required to train the document label model in the application scenario to be applied, lowering the training cost while ensuring the accuracy of the document label model.

根据本申请的实施例,本申请还提供了一种电子设备和一种可读存储介质。According to the embodiments of the present application, the present application also provides an electronic device and a readable storage medium.

如图5所示,是根据本申请实施例的文档标签模型的训练方法的电子设备的框图。电子设备旨在表示各种形式的数字计算机,诸如,膝上型计算机、台式计算机、工作台、个人数字助理、服务器、刀片式服务器、大型计算机、和其它适合的计算机。电子设备还可以表示各种形式的移动装置,诸如,个人数字处理、蜂窝电话、智能电话、可穿戴设备和其它类似的计算装置。本文所示的部件、它们的连接和关系、以及它们的功能仅仅作为示例,并且不意在限制本文中描述的和/或者要求的本申请的实现。FIG. 5 is a block diagram of an electronic device for the method of training a document label model according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are by way of example only, and are not intended to limit the implementations of the application described and/or claimed herein.

如图5所示,该电子设备包括:一个或多个处理器301、存储器302,以及用于连接各部件的接口,包括高速接口和低速接口。各个部件利用不同的总线互相连接,并且可以被安装在公共主板上或者根据需要以其它方式安装。处理器可以对在电子设备内执行的指令进行处理,包括存储在存储器中或者存储器上以在外部输入/输出装置(诸如,耦合至接口的显示设备)上显示GUI的图形信息的指令。在其它实施方式中,若需要,可以将多个处理器和/或多条总线与多个存储器一起使用。同样,可以连接多个电子设备,各个设备提供部分必要的操作(例如,作为服务器阵列、一组刀片式服务器、或者多处理器系统)。图5中以一个处理器301为例。As shown in FIG. 5, the electronic device includes one or more processors 301, a memory 302, and interfaces for connecting the components, including high-speed and low-speed interfaces. The components are interconnected by different buses and can be mounted on a common motherboard or otherwise as desired. The processor may process instructions executed within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output device (such as a display device coupled to an interface). In other implementations, multiple processors and/or multiple buses may be used together with multiple memories, if desired. Likewise, multiple electronic devices may be connected, with each device providing some of the necessary operations (e.g., as a server array, a set of blade servers, or a multi-processor system). One processor 301 is taken as an example in FIG. 5.

存储器302即为本申请所提供的非瞬时计算机可读存储介质。其中,所述存储器存储有可由至少一个处理器执行的指令,以使所述至少一个处理器执行本申请所提供的文档标签模型的训练方法。本申请的非瞬时计算机可读存储介质存储计算机指令,该计算机指令用于使计算机执行本申请所提供的文档标签模型的训练方法。The memory 302 is the non-transitory computer-readable storage medium provided in this application. The memory stores instructions executable by at least one processor, so that the at least one processor executes the method for training a document label model provided in this application. The non-transitory computer-readable storage medium of the present application stores computer instructions, which are used to cause a computer to execute the method for training a document label model provided in the present application.

存储器302作为一种非瞬时计算机可读存储介质,可用于存储非瞬时软件程序、非瞬时计算机可执行程序以及模块,如本申请实施例中的文档标签模型的训练方法对应的程序指令/模块(例如,附图3所示的获取模块110、训练模块120,附图4所示的初始化模块130)。处理器301通过运行存储在存储器302中的非瞬时软件程序、指令以及模块,从而执行服务器的各种功能应用以及数据处理,即实现上述方法实施例中的文档标签模型的训练方法。As a non-transitory computer-readable storage medium, the memory 302 can be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as the program instructions/modules corresponding to the method for training a document label model in the embodiments of the present application (for example, the acquisition module 110 and the training module 120 shown in FIG. 3, and the initialization module 130 shown in FIG. 4). The processor 301 executes the various functional applications and data processing of the server by running the non-transitory software programs, instructions, and modules stored in the memory 302, that is, implements the method for training a document label model in the above method embodiments.

存储器302可以包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需要的应用程序;存储数据区可存储根据文档标签模型的训练的电子设备的使用所创建的数据等。此外,存储器302可以包括高速随机存取存储器,还可以包括非瞬时存储器,例如至少一个磁盘存储器件、闪存器件、或其他非瞬时固态存储器件。在一些实施例中,存储器302可选包括相对于处理器301远程设置的存储器,这些远程存储器可以通过网络连接至文档标签模型的训练的电子设备。上述网络的实例包括但不限于互联网、企业内部网、局域网、移动通信网及其组合。The memory 302 may include a program storage area and a data storage area, where the program storage area may store an operating system and the application program required by at least one function, and the data storage area may store data created from the use of the electronic device for training the document label model, and the like. In addition, the memory 302 may include a high-speed random access memory, and may also include a non-transitory memory, such as at least one magnetic disk storage device, a flash memory device, or another non-transitory solid-state storage device. In some embodiments, the memory 302 may optionally include memories remotely located relative to the processor 301, and these remote memories may be connected over a network to the electronic device for training the document label model. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.

文档标签模型的训练的方法的电子设备还可以包括:输入装置303和输出装置304。处理器301、存储器302、输入装置303和输出装置304可以通过总线或者其他方式连接,图5中以通过总线连接为例。The electronic device for the method of training a document label model may further include an input device 303 and an output device 304. The processor 301, the memory 302, the input device 303, and the output device 304 may be connected by a bus or in other ways; connection by a bus is taken as an example in FIG. 5.

输入装置303可接收输入的数字或字符信息,以及产生与文档标签模型的训练的电子设备的用户设置以及功能控制有关的键信号输入,例如触摸屏、小键盘、鼠标、轨迹板、触摸板、指示杆、一个或者多个鼠标按钮、轨迹球、操纵杆等输入装置。输出装置304可以包括显示设备、辅助照明装置(例如,LED)和触觉反馈装置(例如,振动电机)等。该显示设备可以包括但不限于,液晶显示器(LCD)、发光二极管(LED)显示器和等离子体显示器。在一些实施方式中,显示设备可以是触摸屏。The input device 303 can receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device for training the document label model; examples include a touch screen, a keypad, a mouse, a trackpad, a touchpad, a pointing stick, one or more mouse buttons, a trackball, and a joystick. The output device 304 may include a display device, auxiliary lighting devices (e.g., LEDs), tactile feedback devices (e.g., vibration motors), and the like. The display device may include, but is not limited to, a liquid crystal display (LCD), a light-emitting diode (LED) display, and a plasma display. In some implementations, the display device may be a touch screen.

此处描述的系统和技术的各种实施方式可以在数字电子电路系统、集成电路系统、专用ASIC(专用集成电路)、计算机硬件、固件、软件、和/或它们的组合中实现。这些各种实施方式可以包括:实施在一个或者多个计算机程序中,该一个或者多个计算机程序可在包括至少一个可编程处理器的可编程系统上执行和/或解释,该可编程处理器可以是专用或者通用可编程处理器,可以从存储系统、至少一个输入装置、和至少一个输出装置接收数据和指令,并且将数据和指令传输至该存储系统、该至少一个输入装置、和该至少一个输出装置。Various implementations of the systems and techniques described herein can be realized in digital electronic circuitry, integrated circuit systems, application-specific integrated circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various implementations may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special-purpose or general-purpose, and which can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.

这些计算程序(也称作程序、软件、软件应用、或者代码)包括可编程处理器的机器指令,并且可以利用高级过程和/或面向对象的编程语言、和/或汇编/机器语言来实施这些计算程序。如本文使用的,术语"机器可读介质"和"计算机可读介质"指的是用于将机器指令和/或数据提供给可编程处理器的任何计算机程序产品、设备、和/或装置(例如,磁盘、光盘、存储器、可编程逻辑装置(PLD)),包括,接收作为机器可读信号的机器指令的机器可读介质。术语"机器可读信号"指的是用于将机器指令和/或数据提供给可编程处理器的任何信号。These computer programs (also referred to as programs, software, software applications, or code) include machine instructions for a programmable processor, and can be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic disks, optical disks, memories, programmable logic devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as machine-readable signals. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.

为了提供与用户的交互,可以在计算机上实施此处描述的系统和技术,该计算机具有:用于向用户显示信息的显示装置(例如,CRT(阴极射线管)或者LCD(液晶显示器)监视器);以及键盘和指向装置(例如,鼠标或者轨迹球),用户可以通过该键盘和该指向装置来将输入提供给计算机。其它种类的装置还可以用于提供与用户的交互;例如,提供给用户的反馈可以是任何形式的传感反馈(例如,视觉反馈、听觉反馈、或者触觉反馈);并且可以用任何形式(包括声输入、语音输入或者触觉输入)来接收来自用户的输入。To provide for interaction with a user, the systems and techniques described herein can be implemented on a computer having a display device (e.g., a CRT (cathode-ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user, and a keyboard and a pointing device (e.g., a mouse or a trackball) through which the user can provide input to the computer. Other kinds of devices can also be used to provide for interaction with the user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual, auditory, or tactile feedback), and input from the user can be received in any form, including acoustic, speech, or tactile input.

可以将此处描述的系统和技术实施在包括后台部件的计算系统(例如,作为数据服务器)、或者包括中间件部件的计算系统(例如,应用服务器)、或者包括前端部件的计算系统(例如,具有图形用户界面或者网络浏览器的用户计算机,用户可以通过该图形用户界面或者该网络浏览器来与此处描述的系统和技术的实施方式交互)、或者包括这种后台部件、中间件部件、或者前端部件的任何组合的计算系统中。可以通过任何形式或者介质的数字数据通信(例如,通信网络)来将系统的部件相互连接。通信网络的示例包括:局域网(LAN)、广域网(WAN)和互联网。The systems and techniques described herein can be implemented in a computing system that includes a back-end component (e.g., as a data server), or a computing system that includes a middleware component (e.g., an application server), or a computing system that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described herein), or a computing system that includes any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network (LAN), a wide area network (WAN), and the Internet.

计算机系统可以包括客户端和服务器。客户端和服务器一般远离彼此并且通常通过通信网络进行交互。通过在相应的计算机上运行并且彼此具有客户端-服务器关系的计算机程序来产生客户端和服务器的关系。A computer system may include clients and servers. Clients and servers are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by computer programs running on the respective computers and having a client-server relationship to each other.

应该理解,可以使用上面所示的各种形式的流程,重新排序、增加或删除步骤。例如,本申请中记载的各步骤可以并行地执行也可以顺序地执行也可以不同的次序执行,只要能够实现本申请公开的技术方案所期望的结果,本文在此不进行限制。It should be understood that steps may be reordered, added, or deleted using the various forms of flow shown above. For example, the steps described in the present application may be executed in parallel, sequentially, or in a different order, as long as the desired results of the technical solutions disclosed in the present application can be achieved; no limitation is imposed herein.

上述具体实施方式,并不构成对本申请保护范围的限制。本领域技术人员应该明白的是,根据设计要求和其他因素,可以进行各种修改、组合、子组合和替代。任何在本申请的精神和原则之内所作的修改、等同替换和改进等,均应包含在本申请保护范围之内。The above specific implementation methods are not intended to limit the protection scope of the present application. It should be apparent to those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made depending on design requirements and other factors. Any modifications, equivalent replacements and improvements made within the spirit and principles of this application shall be included within the protection scope of this application.

Claims (10)

1.一种文档标签模型的训练方法,其特征在于,包括:1. A training method for a document label model, characterized by comprising:
获取经过预训练的文档标签模型,所述文档标签模型采用各个应用场景的通用训练数据进行预训练得到;obtaining a pre-trained document label model, the document label model being pre-trained with general training data from various application scenarios;
获取待适用的应用场景的场景训练数据,所述场景训练数据包括:所述待适用的应用场景下的多个文档以及对应的标签信息;obtaining scene training data of an application scenario to be applied, the scene training data including multiple documents in the application scenario to be applied and the corresponding label information;
获取所述文档标签模型中与所述待适用的应用场景相关的子模型;obtaining sub-models related to the application scenario to be applied from the document label model; and
采用所述场景训练数据对所述子模型进行训练,得到训练好的文档标签模型;training the sub-models with the scene training data to obtain a trained document label model;
其中,在与所述待适用的应用场景相关的子模型包括:语义匹配子模型、多标签分类召回子模型、显式召回子模型和隐式召回子模型时,所述采用所述场景训练数据对所述子模型进行训练,得到训练好的文档标签模型,包括:wherein, when the sub-models related to the application scenario to be applied include the semantic matching sub-model, the multi-label classification recall sub-model, the explicit recall sub-model and the implicit recall sub-model, training the sub-models with the scene training data to obtain a trained document label model includes:
针对所述场景训练数据中的每个文档,将所述文档分别输入多标签分类召回子模型、显式召回子模型和隐式召回子模型,并将各个输出结果进行合并,得到候选标签结果;for each document in the scene training data, inputting the document into the multi-label classification recall sub-model, the explicit recall sub-model and the implicit recall sub-model respectively, and merging the respective outputs to obtain a candidate label result;
将所述文档以及所述候选标签结果输入所述语义匹配子模型,获取所述文档与所述候选标签结果中各个候选标签的相关度;inputting the document and the candidate label result into the semantic matching sub-model to obtain the correlation between the document and each candidate label in the candidate label result; and
根据所述文档与所述候选标签结果中各个候选标签的相关度,以及所述文档对应的标签信息,对语义匹配子模型、多标签分类召回子模型、显式召回子模型和隐式召回子模型的系数进行调整,得到训练好的文档标签模型。adjusting the coefficients of the semantic matching sub-model, the multi-label classification recall sub-model, the explicit recall sub-model and the implicit recall sub-model according to the correlation between the document and each candidate label in the candidate label result and the label information corresponding to the document, to obtain the trained document label model.

2.根据权利要求1所述的方法,其特征在于,所述文档标签模型包括:预处理层、候选召回层、粗排层和精排层;2. The method according to claim 1, wherein the document label model includes a preprocessing layer, a candidate recall layer, a rough sorting layer and a fine sorting layer;
所述候选召回层包括:并联的关键词召回子模型、多标签分类召回子模型、显式召回子模型和隐式召回子模型;the candidate recall layer includes a keyword recall sub-model, a multi-label classification recall sub-model, an explicit recall sub-model and an implicit recall sub-model in parallel;
所述粗排层包括:并联的规则子模型和语义匹配子模型;the rough sorting layer includes a rule sub-model and a semantic matching sub-model in parallel; and
与所述待适用的应用场景相关的子模型包括:语义匹配子模型,以及以下子模型中的任意一个或者多个:多标签分类召回子模型、显式召回子模型和隐式召回子模型。the sub-models related to the application scenario to be applied include the semantic matching sub-model, and any one or more of the multi-label classification recall sub-model, the explicit recall sub-model and the implicit recall sub-model.

3.根据权利要求1所述的方法,其特征在于,所述场景训练数据还包括:标签集合,所述标签集合包括:文档标签模型可以预测的标签,以便文档标签模型结合所述标签集合对场景训练数据中的文档进行标签预测。3. The method according to claim 1, wherein the scene training data further includes a label set, the label set including the labels that the document label model can predict, so that the document label model performs label prediction on the documents in the scene training data in combination with the label set.

4.根据权利要求1所述的方法,其特征在于,所述采用所述场景训练数据对所述子模型进行训练,得到训练好的文档标签模型之前,还包括:对所述文档标签模型中的多标签分类召回子模型、显式召回子模型和隐式召回子模型的系数进行初始化操作。4. The method according to claim 1, wherein, before training the sub-models with the scene training data to obtain the trained document label model, the method further includes: initializing the coefficients of the multi-label classification recall sub-model, the explicit recall sub-model and the implicit recall sub-model in the document label model.

5.一种文档标签模型的训练装置,其特征在于,包括:5. A training apparatus for a document label model, characterized by comprising:
获取模块,用于获取经过预训练的文档标签模型,所述文档标签模型采用各个应用场景的通用训练数据进行预训练得到;an acquisition module, configured to acquire a pre-trained document label model, the document label model being pre-trained with general training data from various application scenarios,
所述获取模块,还用于获取待适用的应用场景的场景训练数据,所述场景训练数据包括:所述待适用的应用场景下的多个文档以及对应的标签信息;the acquisition module being further configured to acquire scene training data of an application scenario to be applied, the scene training data including multiple documents in the application scenario to be applied and the corresponding label information,
所述获取模块,还用于获取所述文档标签模型中与所述待适用的应用场景相关的子模型;the acquisition module being further configured to acquire sub-models related to the application scenario to be applied from the document label model; and
训练模块,用于采用所述场景训练数据对所述子模型进行训练,得到训练好的文档标签模型;a training module, configured to train the sub-models with the scene training data to obtain a trained document label model;
其中,在与所述待适用的应用场景相关的子模型包括:语义匹配子模型、多标签分类召回子模型、显式召回子模型和隐式召回子模型时,所述训练模块具体用于,wherein, when the sub-models related to the application scenario to be applied include the semantic matching sub-model, the multi-label classification recall sub-model, the explicit recall sub-model and the implicit recall sub-model, the training module is specifically configured to:
针对所述场景训练数据中的每个文档,将所述文档分别输入多标签分类召回子模型、显式召回子模型和隐式召回子模型,并将各个输出结果进行合并,得到候选标签结果;for each document in the scene training data, input the document into the multi-label classification recall sub-model, the explicit recall sub-model and the implicit recall sub-model respectively, and merge the respective outputs to obtain a candidate label result;
将所述文档以及所述候选标签结果输入所述语义匹配子模型,获取所述文档与所述候选标签结果中各个候选标签的相关度;input the document and the candidate label result into the semantic matching sub-model to obtain the correlation between the document and each candidate label in the candidate label result; and
根据所述文档与所述候选标签结果中各个候选标签的相关度,以及所述文档对应的标签信息,对语义匹配子模型、多标签分类召回子模型、显式召回子模型和隐式召回子模型的系数进行调整,得到训练好的文档标签模型。adjust the coefficients of the semantic matching sub-model, the multi-label classification recall sub-model, the explicit recall sub-model and the implicit recall sub-model according to the correlation between the document and each candidate label in the candidate label result and the label information corresponding to the document, to obtain the trained document label model.

6.根据权利要求5所述的装置,其特征在于,所述文档标签模型包括:预处理层、候选召回层、粗排层和精排层;6. The apparatus according to claim 5, wherein the document label model includes a preprocessing layer, a candidate recall layer, a rough sorting layer and a fine sorting layer;
所述候选召回层包括:并联的关键词召回子模型、多标签分类召回子模型、显式召回子模型和隐式召回子模型;the candidate recall layer includes a keyword recall sub-model, a multi-label classification recall sub-model, an explicit recall sub-model and an implicit recall sub-model in parallel;
所述粗排层包括:并联的规则子模型和语义匹配子模型;the rough sorting layer includes a rule sub-model and a semantic matching sub-model in parallel; and
与所述待适用的应用场景相关的子模型包括:语义匹配子模型,以及以下子模型中的任意一个或者多个:多标签分类召回子模型、显式召回子模型和隐式召回子模型。the sub-models related to the application scenario to be applied include the semantic matching sub-model, and any one or more of the multi-label classification recall sub-model, the explicit recall sub-model and the implicit recall sub-model.

7.根据权利要求5所述的装置,其特征在于,所述场景训练数据还包括:标签集合,所述标签集合包括:文档标签模型可以预测的标签,以便文档标签模型结合所述标签集合对场景训练数据中的文档进行标签预测。7. The apparatus according to claim 5, wherein the scene training data further includes a label set, the label set including the labels that the document label model can predict, so that the document label model performs label prediction on the documents in the scene training data in combination with the label set.

8.根据权利要求5所述的装置,其特征在于,还包括:初始化模块,用于对所述文档标签模型中的多标签分类召回子模型、显式召回子模型和隐式召回子模型的系数进行初始化操作。8. The apparatus according to claim 5, further comprising an initialization module configured to initialize the coefficients of the multi-label classification recall sub-model, the explicit recall sub-model and the implicit recall sub-model in the document label model.

9.一种电子设备,其特征在于,包括:9. An electronic device, characterized by comprising:
至少一个处理器;以及 at least one processor; and
与所述至少一个处理器通信连接的存储器;其中,a memory communicatively connected to the at least one processor, wherein
所述存储器存储有可被所述至少一个处理器执行的指令,所述指令被所述至少一个处理器执行,以使所述至少一个处理器能够执行权利要求1-4中任一项所述的方法。the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the method according to any one of claims 1-4.

10.一种存储有计算机指令的非瞬时计算机可读存储介质,其特征在于,所述计算机指令用于使所述计算机执行权利要求1-4中任一项所述的方法。10. A non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are used to cause a computer to execute the method according to any one of claims 1-4.
CN201911338269.XA 2019-12-23 2019-12-23 Training method and device for document tag model Active CN111104514B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911338269.XA CN111104514B (en) 2019-12-23 2019-12-23 Training method and device for document tag model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911338269.XA CN111104514B (en) 2019-12-23 2019-12-23 Training method and device for document tag model

Publications (2)

Publication Number Publication Date
CN111104514A CN111104514A (en) 2020-05-05
CN111104514B true CN111104514B (en) 2023-04-25

Family

ID=70423892

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911338269.XA Active CN111104514B (en) 2019-12-23 2019-12-23 Training method and device for document tag model

Country Status (1)

Country Link
CN (1) CN111104514B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111581545B (en) * 2020-05-12 2023-09-19 腾讯科技(深圳)有限公司 Method for sorting recall documents and related equipment
CN111783448B (en) * 2020-06-23 2024-03-15 北京百度网讯科技有限公司 Document dynamic adjustment method, device, equipment and readable storage medium
CN111782949A (en) * 2020-06-30 2020-10-16 北京百度网讯科技有限公司 Method and apparatus for generating information
CN111858895B (en) * 2020-07-30 2024-04-05 阳光保险集团股份有限公司 Sequencing model determining method, sequencing device and electronic equipment
CN112149733B (en) * 2020-09-23 2024-04-05 北京金山云网络技术有限公司 Model training method, model quality determining method, model training device, model quality determining device, electronic equipment and storage medium
CN112580706B (en) * 2020-12-11 2024-05-17 北京地平线机器人技术研发有限公司 Training data processing method and device applied to data management platform and electronic equipment
CN112560402B (en) * 2020-12-28 2024-08-02 北京百度网讯科技有限公司 Model training method, device and electronic equipment
CN112784033B (en) * 2021-01-29 2023-11-03 北京百度网讯科技有限公司 Aging grade identification model training and application method and electronic equipment
CN113011490B (en) * 2021-03-16 2024-03-08 北京百度网讯科技有限公司 Model training methods, devices and electronic equipment
CN113239128B (en) * 2021-06-01 2022-03-18 平安科技(深圳)有限公司 Data pair classification method, device, equipment and storage medium based on implicit characteristics
CN114255743A (en) * 2021-12-13 2022-03-29 北京声智科技有限公司 Training method of voice recognition model, voice recognition method and device
CN117456416B (en) * 2023-11-03 2024-06-07 北京饼干科技有限公司 Method and system for intelligently generating material labels
CN117932497B (en) * 2024-03-19 2024-06-25 腾讯科技(深圳)有限公司 Model determination method and related device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015187155A1 (en) * 2014-06-04 2015-12-10 Waterline Data Science, Inc. Systems and methods for management of data platforms
CN108153856A (en) * 2017-12-22 2018-06-12 北京百度网讯科技有限公司 For the method and apparatus of output information
CN108304439A (en) * 2017-10-30 2018-07-20 腾讯科技(深圳)有限公司 A kind of semantic model optimization method, device and smart machine, storage medium
CN108733779A (en) * 2018-05-04 2018-11-02 百度在线网络技术(北京)有限公司 The method and apparatus of text figure
CN109376222A (en) * 2018-09-27 2019-02-22 国信优易数据有限公司 Question and answer matching degree calculation method, question and answer automatic matching method and device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10127214B2 (en) * 2014-12-09 2018-11-13 Sansa Al Inc. Methods for generating natural language processing systems
US9836450B2 (en) * 2014-12-09 2017-12-05 Sansa AI Inc. Methods and systems for providing universal portability in machine learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
谢晨阳. 基于层次监督的多标签文档分类问题研究. 《中国优秀硕士学位论文全文数据库 信息科技辑》. 2019, 全文. (Xie Chenyang. Research on Multi-label Document Classification Based on Hierarchical Supervision. China Master's Theses Full-text Database, Information Science & Technology. 2019.) *

Also Published As

Publication number Publication date
CN111104514A (en) 2020-05-05

Similar Documents

Publication Publication Date Title
CN111104514B (en) Training method and device for document tag model
CN112560912B (en) Classification model training methods, devices, electronic equipment and storage media
CN111522967B (en) Knowledge graph construction method, device, equipment and storage medium
CN111967262B (en) Method and device for determining entity tags
CN111125435B (en) Method, device and computer equipment for determining video label
CN111488740B (en) Causal relationship judging method and device, electronic equipment and storage medium
CN112487814B (en) Entity classification model training method, entity classification device and electronic equipment
CN111143561B (en) Intention recognition model training method and device and electronic equipment
CN112001180A (en) Multi-mode pre-training model acquisition method and device, electronic equipment and storage medium
CN112036509A (en) Method and apparatus for training image recognition models
CN111507104A (en) Method and device for establishing label labeling model, electronic equipment and readable storage medium
CN111241234B (en) Text classification method and device
CN110705460A (en) Image category identification method and device
CN111522944B (en) Method, apparatus, device and storage medium for outputting information
CN111460791B (en) Text classification method, device, equipment and storage medium
CN111931500B (en) Search information processing method and device
CN111078878B (en) Text processing method, device, device and computer-readable storage medium
CN111127191B (en) Risk assessment method and risk assessment device
CN113836925A (en) Training method, device, electronic device and storage medium for pre-trained language model
CN112380847B (en) Point of interest processing method, device, electronic device and storage medium
CN111984774B (en) Searching method, searching device, searching equipment and storage medium
CN111241302B (en) Position information map generation method, device, equipment and medium
CN114429633B (en) Text recognition method, training method and device of model, electronic equipment and medium
CN113312451B (en) Text label determination method and device
CN111984775A (en) Question and answer quality determination method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant