
CN107679225B - Reply generation method based on keywords - Google Patents


Info

Publication number
CN107679225B
CN107679225B
Authority
CN
China
Prior art keywords
keyword
decoder
prediction result
context vector
sent
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201710986821.0A
Other languages
Chinese (zh)
Other versions
CN107679225A (en)
Inventor
张伟男
朱庆福
宋皓宇
刘挺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Heilongjiang Radio And Television Station
Original Assignee
Harbin Institute of Technology Shenzhen
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Institute of Technology Shenzhen filed Critical Harbin Institute of Technology Shenzhen
Priority to CN201710986821.0A priority Critical patent/CN107679225B/en
Publication of CN107679225A publication Critical patent/CN107679225A/en
Application granted granted Critical
Publication of CN107679225B publication Critical patent/CN107679225B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33 Querying
    • G06F16/332 Query formulation
    • G06F16/3329 Natural language query formulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Human Computer Interaction (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Machine Translation (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

A keyword-based reply generation method. The invention addresses the problems that existing methods are inflexible and prone to semantic loss, and that sequence-to-sequence models tend to generate generic, universal replies. The method comprises: (1) generating keywords from the input message; (2) converting the input message into a context vector and feeding the first keyword together with the context vector into the decoder. If the resulting prediction matches the first keyword, the second keyword and the context vector are fed into the decoder; if it does not, the first keyword and the context vector are fed in again until the prediction matches the first keyword, after which the second keyword and the context vector are fed in, and so on until all keywords have been fed into the decoder in order and the prediction result is obtained. The invention is used in the field of chatbot reply generation.

Figure 201710986821

Description

Reply generation method based on keywords
Technical Field
The invention relates to the field of reply generation of a chat robot (also called a man-machine conversation system) in the field of computer artificial intelligence, in particular to a reply generation method based on keywords.
Background
A chat robot is a computer program that simulates human interaction and converses with humans using natural language processing techniques. The origin of chat robots is usually traced back to the article "Computing Machinery and Intelligence" published by Turing in Mind in 1950, which presented the classic "Turing test", regarded for decades as the ultimate goal of artificial intelligence. Within a chat robot, reply generation is a core module. In recent years, reply generation using neural networks has attracted increasing interest. LSTM-based sequence-to-sequence (Seq2Seq) models are a class of neural generation models that maximize the generation probability of a reply given the previous dialogue turn, so that successive dialogue turns form a one-to-one mapping. Similar models include dialogue models based on neural machine translation (NMT).
The Seq2Seq model maps one sequence to another and is widely applied in open-domain chat robots, machine translation, syntactic analysis, question-answering systems, and the like; its basic structure is shown in fig. 1. The Seq2Seq model adopts an Encoder-Decoder framework, which can be regarded as a general research pattern in text processing. For a sentence pair <I, O>, the model takes an input sentence I and aims to generate a target sentence O through the Encoder-Decoder framework. I and O may be in the same language (as in question answering and chat) or in two different languages (as in machine translation). Both are word sequences: I = <i1, i2, ..., im> and O = <o1, o2, ..., on>. As the name implies, the Encoder encodes the input sentence I, transforming it by a nonlinear transformation into an intermediate semantic representation C:

C = f(i1, i2, ..., im)

For the Decoder, the task is to generate the word oi at time i from the intermediate semantic representation C and the previously generated history:

oi = g(C, o1, o2, ..., oi-1)

Each output word oi is generated in turn, and the whole system thus generates the target sentence O from the input sentence I.
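The Encoder-Decoder factorization above, C = f(i1, ..., im) followed by oi = g(C, o1, ..., oi-1), can be sketched as follows. The scoring function is a deterministic toy standing in for a trained decoder, and the names `encode`, `decode_step`, and `generate` are ours; only the data flow (encode once, then decode step by step until the end symbol) mirrors the model.

```python
# Toy illustration of the Encoder-Decoder data flow, not a trained model.

def encode(tokens):
    """Fold the input sequence into a single context representation C."""
    # A real encoder would be an RNN/LSTM; here C is simply the token tuple.
    return tuple(tokens)

def decode_step(context, history, vocab):
    """Pick the next output word from C and the words generated so far."""
    def score(w):
        # Deterministic toy score standing in for the decoder's softmax.
        return (sum(map(ord, w)) * (len(history) + 1) + len(context)) % 97
    return max(vocab, key=score)

def generate(tokens, vocab, eos="<eos>", max_len=5):
    """Generate output words one at a time until <eos> or the length cap."""
    context = encode(tokens)
    output = []
    while len(output) < max_len:
        word = decode_step(context, output, vocab)
        if word == eos:
            break
        output.append(word)
    return output
```

Run on any token list and small vocabulary, e.g. `generate(["hello", "there"], ["yes", "no", "<eos>"])`.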
A chat robot can be built with the Seq2Seq model under the following modeling: for the <I, O> pair above, the user's input sentences are modeled as I and the chat robot's reply sentences as O. After a user inputs a message, the Encoder encodes it into the intermediate semantic representation C, and the Decoder then generates the chat robot's reply sentence from C. Thus, for each different message the user inputs, the chat robot generates a corresponding new reply, forming a working conversation system.
When the Seq2Seq model is applied to the chat-robot setting, the structural units of the Encoder and Decoder are generally RNNs (recurrent neural networks), the most common deep learning model for linear sequences such as text. In practice the improved RNN variants, the LSTM and GRU models, are now used more often: both clearly outperform the plain RNN on long sentences. To further improve performance, a multi-layer Seq2Seq network is often used, as shown in fig. 2.
However, current reply generation technology generally suffers from vague replies: models tend to generate generic universal replies such as "I don't know" or "Me too". Li et al. (2015) propose replacing the cross-entropy loss with a maximum mutual information objective, and Serban et al. (2016) introduce a random variable into the generation process. Vlad Serban et al. (2016) propose a keyword method that strengthens the implicit expression of keyword information through a keyword sub-model, and Mou et al. (2016) propose generating a keyword first and then generating the rest of the reply forward and backward from that keyword.
The existing single-keyword technique allows only one keyword, but the number of keywords varies across replies, which is a problem; the multi-keyword approach, on the other hand, compresses the information of multiple keywords and therefore cannot guarantee that the keywords appear explicitly in the final reply.
Disclosure of Invention
The invention aims to provide a keyword-based reply generation method, in order to solve the problems that existing methods are inflexible and prone to semantic loss, and that sequence-to-sequence models tend to generate generic universal replies.
A reply generation method based on keywords comprises the following steps:
Step one: generating keywords according to the input message;
Step two: taking the message input in step one and the generated keywords as input, and decoding.
Converting the message input in the step one into a context vector, sending the first keyword and the context vector generated in the step one into a decoder to obtain a prediction result, and if the obtained prediction result is consistent with the first keyword, sending the second keyword and the context vector into the decoder; if the obtained prediction result is inconsistent with the first keyword, the first keyword and the context vector are still sent to the decoder until the obtained prediction result is consistent with the first keyword, and then the second keyword and the context vector are sent to the decoder until all the keywords are sent to the decoder in sequence to obtain the prediction result.
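The decoding loop of step two can be sketched as follows. Here `decode_step` stands for the trained decoder's one-step prediction given the context vector and the current keyword; the choice to condition on the end symbol once all keywords are consumed follows our reading of the keyword transfer rule (fig. 3), and all function names are illustrative.

```python
def generate_with_keywords(decode_step, context, keywords, eos="<eos>", max_len=30):
    """Keyword transfer rule: keep conditioning the decoder on the current
    keyword until the decoder actually emits it, then advance to the next;
    after the last keyword, condition on the end symbol."""
    reply, j = [], 0
    for _ in range(max_len):
        current = keywords[j] if j < len(keywords) else eos
        word = decode_step(context, reply, current)
        if word == eos:
            break
        reply.append(word)
        if j < len(keywords) and word == keywords[j]:
            j += 1  # the prediction matched the current keyword: move on
    return reply
```

With a toy decoder that emits the conditioning keyword on every other step, the loop produces a reply containing all keywords in order.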
The invention has the beneficial effects that:
On the test-set side, the English data uses the Ubuntu dataset, drawn from Ubuntu chat rooms, with 2.9 million pairs; the Chinese data uses a microblog dataset, drawn from Sina Weibo posts and their corresponding comments, with 1.1 million pairs.
On the evaluation side, automatic evaluation uses Embedding Metrics, including the Average method (mean pooling), the Greedy method (which considers alignment information), and the Extrema method (max pooling). The results of the automatic evaluation are given in the following table:
Figure GDA0002633744730000031
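The Average, Greedy, and Extrema embedding metrics named above can be sketched as follows; the tiny 2-d vectors in the test stand in for real word embeddings, and the exact tie-breaking and out-of-vocabulary handling are simplifying assumptions.

```python
import math

def _cos(a, b):
    """Cosine similarity between two vectors (0.0 for a zero vector)."""
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return sum(x * y for x, y in zip(a, b)) / (na * nb) if na and nb else 0.0

def average_metric(hyp, ref, emb):
    """Average: mean-pool the word vectors of each sentence, then cosine."""
    def mean(sent):
        vecs = [emb[w] for w in sent if w in emb]
        return [sum(col) / len(vecs) for col in zip(*vecs)]
    return _cos(mean(hyp), mean(ref))

def greedy_metric(hyp, ref, emb):
    """Greedy: align each word with its closest counterpart, then symmetrize."""
    def one_way(src, dst):
        return sum(max(_cos(emb[w], emb[v]) for v in dst if v in emb)
                   for w in src if w in emb) / len(src)
    return 0.5 * (one_way(hyp, ref) + one_way(ref, hyp))

def extrema_metric(hyp, ref, emb):
    """Extrema: max-pool each dimension (by magnitude) per sentence, then cosine."""
    def extrema(sent):
        vecs = [emb[w] for w in sent if w in emb]
        return [max(col, key=abs) for col in zip(*vecs)]
    return _cos(extrema(hyp), extrema(ref))
```

All three compare a generated reply against a reference in embedding space rather than by exact word overlap.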
In manual evaluation, 0 indicates grammatical errors or disfluency; +1 indicates relevance to the scene; +2 indicates no grammar or fluency problems, independent of the scene. For the Ubuntu dataset, accuracy of domain expertise was not considered. The results of the manual evaluation are shown in the following table:
Figure GDA0002633744730000032
drawings
FIG. 1 is a diagram of the basic structure of Seq2Seq; A, B, C, W, X, Y, Z denote words, <go> denotes the start symbol, <eos> denotes the end symbol;
FIG. 2 is a diagram of the multi-layer Seq2Seq model; LSTM is the long short-term memory network, in is input, out is output; 1, 2, 3 denote the 1st, 2nd, 3rd layers of the network;
FIG. 3 is a diagram illustrating the keyword transfer rules; EOS is the end symbol;
FIG. 4 is a schematic diagram of the overall structure of the model of the invention; O_wi is the word embedding vector of the ith word, p_wi is the probability that the ith word is predicted, and |V| is the size of the vocabulary.
Detailed Description
The first embodiment is as follows: as shown in fig. 3 and 4, a keyword-based reply generation method includes the steps of:
Step one: generating a plurality of keywords according to the input message;
Step two: taking the message input in step one and the generated keywords as input, and decoding.
Converting the message input in the step one into a context vector, sending the first keyword and the context vector generated in the step one into a decoder to obtain a prediction result, and if the obtained prediction result is consistent with the first keyword, sending the second keyword and the context vector into the decoder; if the obtained prediction result is inconsistent with the first keyword, the first keyword and the context vector are still sent to the decoder until the obtained prediction result is consistent with the first keyword, and then the second keyword and the context vector are sent to the decoder until all the keywords are sent to the decoder in sequence to obtain the prediction result.
The following table gives examples of replies generated by the present invention:
Message Keyword Reply
Not visible daily, e.g. trilateral Three-month peach blossom fairy tale Vanished March flower
One music a day, one second grid! Clothes with rigid mark The clothes is like my
Story telling of small girl in French with super-lovely big eyes French girl future French girl's good lovely
Forgiving me that the fries are in a low point Smiling life Defining smile points for solitary life
Universal color matching reference The color modeling is a bit The color and the shape are beautiful
The second specific embodiment: this embodiment differs from the first embodiment in that: in step one, generating keywords from the input message is divided into two cases (because a reference answer exists during training but not during prediction, keyword generation differs between the two processes):
In the first case, during training, a part-of-speech tagging tool tags the reference answer, and all words tagged as nouns in the result are selected as keywords.
In the second case, during prediction, to stay consistent with training, the selection range of keywords is restricted to all nouns in the decoder vocabulary, which serve as candidate words. For all words in the input message, the pointwise mutual information (PMI) with each candidate word is computed as that candidate's score. All candidate words with mutual information values greater than 0 are kept and sorted in descending order of score. The final keywords are the top N_k of the screened candidates, where N_k is a manually set hyperparameter giving the upper limit on the number of keywords.
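The prediction-time selection can be sketched as follows. Summing PMI over the message words is our reading of "using all words in the input message"; the co-occurrence estimation from (message-word, reply-noun) pairs is likewise an illustrative assumption.

```python
import math
from collections import Counter

def pmi_table(pairs):
    """Estimate PMI(m, c) from (message-word, reply-noun) co-occurrence pairs."""
    joint = Counter(pairs)
    left = Counter(m for m, _ in pairs)
    right = Counter(c for _, c in pairs)
    n = len(pairs)
    return {(m, c): math.log(cnt * n / (left[m] * right[c]))
            for (m, c), cnt in joint.items()}

def predict_keywords(message, candidate_nouns, pmi, n_k):
    """Score each candidate noun by its PMI with the message words (summed),
    keep candidates with score > 0, sort descending, return the top n_k."""
    scores = {c: sum(pmi.get((w, c), 0.0) for w in message)
              for c in candidate_nouns}
    ranked = sorted((c for c in candidate_nouns if scores[c] > 0),
                    key=lambda c: scores[c], reverse=True)
    return ranked[:n_k]
```

Candidates negatively associated with the message receive scores below 0 and are filtered out before the top-N_k cut.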
Other steps and parameters are the same as those in the first embodiment.
The third specific embodiment: this embodiment differs from the first or second embodiment in that: in step two, feeding the keywords and the context vector into the decoder to obtain the prediction result is realized by the following formulas.
Another core of the invention is the introduction of a keyword gate: the embedding information of the keyword is added to the existing gates, a keyword gate is added, and the computation of the new memory is modified as follows:
z_i = σ(W_z E y_i + U_z s_{i-1} + C_z c_i + V_z t_j)
r_i = σ(W_r E y_i + U_r s_{i-1} + C_r c_i + V_r t_j)
k_i = σ(W_k E y_i + U_k s_{i-1} + C_k c_i + V_k t_j)
s̃_i = tanh(W E y_i + U (r_i ∘ s_{i-1}) + C c_i + V (k_i ∘ t_j))
s_i = (1 - z_i) ∘ s_{i-1} + z_i ∘ s̃_i
where z_i is the update gate, σ is a nonlinear activation function, W_z, U_z, C_z, V_z, W_r, U_r, C_r, V_r, W_k, U_k, C_k, V_k, W, U, C, V are learnable parameters, E is the word vector matrix, y_i is the one-hot representation of the prediction at time i, s_{i-1} is the decoder hidden-state vector at time i-1, c_i is the context vector at time i (representing the input message), t_j is the one-hot representation of the jth keyword, r_i is the forget gate, k_i is the keyword gate, s̃_i is the new memory vector at time i, ∘ is element-wise multiplication, and tanh is the activation function.
The overall structure of the model of the invention is shown in fig. 4.
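A dependency-free sketch of one step of the keyword-gated cell. Plain Python lists stand in for tensors; how the keyword gate k_i enters the candidate state is our reading of the equations, and a real implementation would use a framework such as PyTorch.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def matvec(W, v):
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def add(*vectors):
    return [sum(xs) for xs in zip(*vectors)]

def had(a, b):  # element-wise (Hadamard) product, the ∘ in the equations
    return [x * y for x, y in zip(a, b)]

def keyword_gru_step(p, Ey, s_prev, c, t):
    """One decoder step with the extra keyword gate k.

    p: dict of weight matrices; Ey: embedded previous prediction E*y_i;
    s_prev: previous hidden state s_{i-1}; c: context vector c_i;
    t: keyword representation t_j.
    """
    z = [sigmoid(v) for v in add(matvec(p["Wz"], Ey), matvec(p["Uz"], s_prev),
                                 matvec(p["Cz"], c), matvec(p["Vz"], t))]
    r = [sigmoid(v) for v in add(matvec(p["Wr"], Ey), matvec(p["Ur"], s_prev),
                                 matvec(p["Cr"], c), matvec(p["Vr"], t))]
    k = [sigmoid(v) for v in add(matvec(p["Wk"], Ey), matvec(p["Uk"], s_prev),
                                 matvec(p["Ck"], c), matvec(p["Vk"], t))]
    s_new = [math.tanh(v) for v in add(matvec(p["W"], Ey),
                                       matvec(p["U"], had(r, s_prev)),
                                       matvec(p["C"], c),
                                       matvec(p["V"], had(k, t)))]
    # s_i = (1 - z) ∘ s_{i-1} + z ∘ s̃_i
    return add(had([1 - zi for zi in z], s_prev), had(z, s_new))
```

Because s̃_i passes through tanh and the update is a convex combination, each hidden-state component stays in (-1, 1) when the previous state does.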
Other steps and parameters are the same as those in the first or second embodiment.
The fourth concrete implementation mode: the method is applied to a reply generation process based on the keywords in a man-machine conversation system of a computer artificial intelligent chat robot.
The following examples were used to demonstrate the beneficial effects of the present invention:
the first embodiment is as follows:
the invention can be directly applied to the chat robot system in the open domain, and is a core module of the chat robot. The application carrier is a chatting robot 'stupid' developed by the social computing and information retrieval research center of Harbin Industrial university.
First, the module predicts several keywords from the input; then, combining the input and the keywords, it decodes a reply sentence, completing the keyword-based reply generation task.
In terms of deployment, the invention can be deployed as an independent computing node on cloud computing platforms such as Alibaba Cloud (Aliyun) or Meituan Cloud, communicating with other modules by binding an IP address and port number.
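The deployment idea of a standalone node bound to an IP address and port can be sketched as below. The JSON request shape, the port, and `generate_reply` are all illustrative stand-ins, not part of the patent; `http.server` substitutes for whatever RPC layer the real system uses.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def generate_reply(message: str) -> str:
    """Stub standing in for the keyword-based generation model."""
    return f"(reply to: {message})"

class ReplyHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read a JSON body like {"message": "..."} and answer with a reply.
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        message = json.loads(body)["message"]
        payload = json.dumps({"reply": generate_reply(message)}).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

def serve(host="0.0.0.0", port=8080):
    """Bind the node to an address and port so other modules can reach it."""
    HTTPServer((host, port), ReplyHandler).serve_forever()
```

Calling `serve()` blocks and listens; other modules then POST messages to the bound address.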
Because the invention uses deep learning techniques, a corresponding deep learning framework is required: the experiments for this technique are implemented on the open-source framework PyTorch. If necessary, other frameworks can be substituted, such as the likewise open-source TensorFlow, or PaddlePaddle as used inside enterprises, etc.
The present invention is capable of other embodiments and its several details are capable of modifications in various obvious respects, all without departing from the spirit and scope of the present invention.

Claims (4)

1. A keyword-based reply generation method, characterized in that the method comprises the following steps:
Step one: generating keywords according to the input message;
Step two: taking the message input in step one and the generated keywords as input, and decoding:
converting the message input in step one into a context vector, and feeding the first keyword generated in step one together with the context vector into the decoder to obtain a prediction result; if the obtained prediction result is consistent with the first keyword, feeding the second keyword and the context vector into the decoder; if the obtained prediction result is inconsistent with the first keyword, still feeding the first keyword and the context vector into the decoder until the obtained prediction result is consistent with the first keyword, then feeding the second keyword and the context vector into the decoder, until all keywords have been fed into the decoder in order and the prediction result is obtained.
2. The keyword-based reply generation method according to claim 1, characterized in that generating keywords from the input message in step one is divided into two cases:
in the first case, during training, all nouns are extracted from the reference answer as keywords;
in the second case, during prediction, keywords are predicted from the input message using mutual information values.
3. The keyword-based reply generation method according to claim 2, characterized in that in step two, feeding the keywords and the context vector into the decoder to obtain the prediction result is realized by the following formulas:
z_i = σ(W_z E y_i + U_z s_{i-1} + C_z c_i + V_z t_j)
r_i = σ(W_r E y_i + U_r s_{i-1} + C_r c_i + V_r t_j)
k_i = σ(W_k E y_i + U_k s_{i-1} + C_k c_i + V_k t_j)
s̃_i = tanh(W E y_i + U (r_i ∘ s_{i-1}) + C c_i + V (k_i ∘ t_j))
s_i = (1 - z_i) ∘ s_{i-1} + z_i ∘ s̃_i
where z_i is the update gate, σ is a nonlinear activation function, W_z, U_z, C_z, V_z, W_r, U_r, C_r, V_r, W_k, U_k, C_k, V_k, W, U, C, V are learnable parameters, E is the word vector matrix, y_i is the one-hot representation of the prediction result at time i, s_{i-1} is the decoder hidden-state vector at time i-1, c_i is the context vector at time i, t_j is the one-hot representation of the jth keyword, r_i is the forget gate, k_i is the keyword gate, s̃_i is the new memory vector at time i, ∘ is element-wise multiplication, and tanh is the activation function.
4. An application of the method of claim 1, characterized in that the method is applied to the keyword-based reply generation process in the man-machine dialogue system of a computer artificial-intelligence chat robot.
CN201710986821.0A 2017-10-20 2017-10-20 Reply generation method based on keywords Expired - Fee Related CN107679225B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710986821.0A CN107679225B (en) 2017-10-20 2017-10-20 Reply generation method based on keywords

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710986821.0A CN107679225B (en) 2017-10-20 2017-10-20 Reply generation method based on keywords

Publications (2)

Publication Number Publication Date
CN107679225A CN107679225A (en) 2018-02-09
CN107679225B true CN107679225B (en) 2021-03-09

Family

ID=61141800

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710986821.0A Expired - Fee Related CN107679225B (en) 2017-10-20 2017-10-20 Reply generation method based on keywords

Country Status (1)

Country Link
CN (1) CN107679225B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108509539B (en) * 2018-03-16 2021-08-17 联想(北京)有限公司 Information processing method and electronic device
CN108491514B (en) * 2018-03-26 2020-12-01 清华大学 Method and device for questioning in dialogue system, electronic device, and computer-readable medium
CN110472198B (en) * 2018-05-10 2023-01-24 腾讯科技(深圳)有限公司 Keyword determination method, text processing method and server
CN109543017B (en) * 2018-11-21 2022-12-13 广州语义科技有限公司 Legal question keyword generation method and system
CN110738026B (en) * 2019-10-23 2022-04-19 腾讯科技(深圳)有限公司 Method and apparatus for generating description text
CN113239169B (en) * 2021-06-01 2023-12-05 平安科技(深圳)有限公司 Answer generation method, device, equipment and storage medium based on artificial intelligence
CN115470325B (en) * 2021-06-10 2024-05-10 腾讯科技(深圳)有限公司 Message reply method, device and equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104063483A (en) * 2014-07-03 2014-09-24 无锡市崇安区科技创业服务中心 Method for complementing contexts of key word in self-adaptive mode
US9064005B2 (en) * 2001-05-09 2015-06-23 Nuance Communications, Inc. System and method of finding documents related to other documents and of finding related words in response to a query to refine a search
CN105912712A (en) * 2016-04-29 2016-08-31 华南师范大学 Big data-based robot conversation control method and system
CN106856092A (en) * 2015-12-09 2017-06-16 中国科学院声学研究所 Chinese speech keyword retrieval method based on feedforward neural network language model
CN106934068A (en) * 2017-04-10 2017-07-07 江苏东方金钰智能机器人有限公司 The method that robot is based on the semantic understanding of environmental context

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9064005B2 (en) * 2001-05-09 2015-06-23 Nuance Communications, Inc. System and method of finding documents related to other documents and of finding related words in response to a query to refine a search
CN104063483A (en) * 2014-07-03 2014-09-24 无锡市崇安区科技创业服务中心 Method for complementing contexts of key word in self-adaptive mode
CN106856092A (en) * 2015-12-09 2017-06-16 中国科学院声学研究所 Chinese speech keyword retrieval method based on feedforward neural network language model
CN105912712A (en) * 2016-04-29 2016-08-31 华南师范大学 Big data-based robot conversation control method and system
CN106934068A (en) * 2017-04-10 2017-07-07 江苏东方金钰智能机器人有限公司 The method that robot is based on the semantic understanding of environmental context

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Intelligent customer-service system for mobile e-commerce based on purchase intention; Cai Zhiwen et al.; Science and Technology Management Research; 2015-09-20; pp. 179-183 *

Also Published As

Publication number Publication date
CN107679225A (en) 2018-02-09

Similar Documents

Publication Publication Date Title
CN107679225B (en) Reply generation method based on keywords
US11409945B2 (en) Natural language processing using context-specific word vectors
TWI732271B (en) Human-machine dialog method, device, electronic apparatus and computer readable medium
CN108334487B (en) Missing semantic information completion method and device, computer equipment and storage medium
CN106919646B (en) Chinese text abstract generating system and method
CN111460132B (en) Generation type conference abstract method based on graph convolution neural network
CN112818107B (en) Conversation robot for daily life and chat method thereof
CN113127624B (en) Question answering model training method and device
CN108153913B (en) Training method of reply information generation model, reply information generation method and device
CN107423284B (en) Construction method and system of sentence representation fused with internal structure information of Chinese words
CN108052512A (en) A kind of iamge description generation method based on depth attention mechanism
CN108595436A (en) The generation method and system of emotion conversation content, storage medium
CN109522545A (en) A kind of appraisal procedure that more wheels are talked with coherent property amount
CN112364148B (en) A generative chatbot based on deep learning method
CN113420111B (en) Intelligent question answering method and device for multi-hop reasoning problem
CN111382568B (en) Training method and device of word segmentation model, storage medium and electronic equipment
CN110795549A (en) Short text dialogue method, device, device and storage medium
CN111522924A (en) Emotional chat type reply generation method with theme perception
CN111046157B (en) Universal English man-machine conversation generation method and system based on balanced distribution
CN112131368A (en) Dialog generation method and device, electronic equipment and storage medium
CN114510576A (en) An Entity Relation Extraction Method Based on BERT and BiGRU Fusion Attention Mechanism
CN108364066A (en) Artificial neural network chip and its application process based on N-GRAM and WFST models
CN110955765A (en) Corpus construction method and apparatus of intelligent assistant, computer device and storage medium
CN110297894A (en) A kind of Intelligent dialogue generation method based on auxiliary network
CN115906863B (en) Emotion analysis method, device, equipment and storage medium based on contrast learning

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20230110

Address after: No. 333, Hanshui Road, Nangang District, Harbin, Heilongjiang Province, 150001

Patentee after: HEILONGJIANG RADIO AND TELEVISION STATION

Address before: 150001 No. 92 West straight street, Nangang District, Heilongjiang, Harbin

Patentee before: HARBIN INSTITUTE OF TECHNOLOGY

CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20210309