CN107562911A - Multi-round interaction probability model training method and automatic response method - Google Patents
- Publication number
- CN107562911A (application number CN201710816017.8A)
- Authority
- CN
- China
- Prior art keywords
- turn
- round
- srd
- questions
- interaction
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
The invention discloses a multi-round interaction probability model training method and an automatic response method. An N-ary multi-round interaction probability model is trained on a multi-round dialog corpus; the model contains the binary through N-ary interaction probabilities of the questions and answers in each single-round dialog. During automatic response, the answer with the highest probability for the question posed in real time is selected as that question's answer. The method achieves high precision and effectively solves the problems of existing interactive question-answering technology, namely its lack of broad applicability and its time-consuming, labor-intensive reliance on manually specified interaction rules. No manually configured rules are required, and the method is applicable to a wide variety of interaction scenarios.
Description
Technical Field
The invention relates to a natural language processing technology, in particular to a multi-round interactive probability model training method and an automatic response method.
Background
The problem to be solved is how to realize interactive question answering between a computer and a person; in particular, the following functions need to be supported:
(1) When a question asked by the questioner requires multiple conditions to answer and the questioner has not fully provided them, the computer should guide the questioner to supply these conditions. For example, booking a flight requires several conditions such as the departure date, departure city, and arrival city; if the questioner fails to provide them, the computer needs to actively ask questions such as "Where are you going?" to obtain the required information from the questioner.
(2) When the matter being asked about must be completed step by step, the computer should guide the questioner through the whole dialog in steps. For example, in automated medical diagnosis, determining whether a questioner has diabetes may require step-by-step questioning: first ask whether there is a family medical history; if so, further ask whether a relative has the corresponding disease; otherwise, ask whether there is an examination record, and so on. Throughout this process the computer must interact continuously with the questioner, following both the questioner's replies and the steps of the matter being asked about, in order to complete the conversation.
Current solutions are interaction-rule-based methods. The basic idea is to manually define interaction rules that are fixed for specific conditions, and to respond directly with a rule when its condition is met. In the technology disclosed in patent application 201610012448.4, "An intelligent-robot-oriented question-answer interaction method and system", the question-answering system automatically invokes a corresponding answer template in four situations (not heard, not understood, active inquiry, and no answer available) in order to interact with the questioner. This approach is easy to implement, but its disadvantages include: (1) interaction is possible only in scenes that meet specific conditions, and since the scenes requiring interaction cannot be exhaustively enumerated, the method lacks broad applicability and can only handle a small number of scenes; (2) the interaction rules must be specified manually, which is time-consuming and labor-intensive.
Disclosure of Invention
The invention provides a multi-round interaction probability model training method and an automatic response method, aiming to solve the problems that existing interactive automatic question-answering technology lacks broad applicability and that its interaction rules must be manually specified, which is time-consuming and labor-intensive. The invention is realized by the following technical scheme:
a multi-round interaction probability model training method based on a multi-round dialog corpus, the multi-round dialog corpus comprising a plurality of multi-round dialogs, each multi-round dialog comprising a plurality of single-round dialogs, each single-round dialog comprising a question posed by a questioner and an answer made by a responder, the question and the answer each comprising at least one word, the method comprising:
Step A: performing information compression on each single-round dialog;
Step B: performing information generalization on the questions and answers in each single-round dialog after information compression;
Step C: performing word vectorization on the questions and answers in each single-round dialog after information generalization;
Step D: calculating the binary through N-ary interaction probabilities of the questions and answers in each single-round dialog after word vectorization, to obtain an N-ary multi-round interaction probability model based on the multi-round dialog corpus.
Further, the step A includes:
Step A1: performing dependency syntactic analysis on the question and the answer in each single-round dialog to extract the words serving as core syntactic components, the core syntactic components comprising predicates, subjects, and objects;
Step A2: calculating, for each word in each single-round dialog, its tf-idf value with respect to the single-round dialog to which it belongs;
Step A3: deleting from the questions and answers in each single-round dialog the words whose tf-idf value is smaller than a preset threshold and that do not serve as core syntactic components.
Further, the step B includes:
performing named entity recognition on all words in the questions and answers of each single-round dialog after information compression, and generalizing each word that belongs to a named entity into its corresponding named entity type.
Further, denote the i-th multi-round dialog in the multi-round dialog corpus as $mrd_i$ and the j-th single-round dialog within it as $srd_j^i$, with $mrd_i = \langle srd_1^i, srd_2^i, \ldots, srd_m^i \rangle$, where $m$ is the number of single-round dialogs in the i-th multi-round dialog; $srd_j^i = \langle q_j^i, a_j^i \rangle$, where $q_j^i$ is the question asked by the questioner in $srd_j^i$ and $a_j^i$ is the answer given by the responder in $srd_j^i$; and let $w_k$ be the k-th word in $q_j^i$ or $a_j^i$. The tf-idf value of $w_k$ with respect to $srd_j^i$ is then calculated as:

$$\mathrm{tfidf}(w_k, srd_j^i) = \frac{N_{k,i}}{N_i} \times \log \frac{|MRDC|}{|\{\, l : w_k \in mrd_l \,\}|}$$

where $N_{k,i}$ is the frequency of $w_k$ in the multi-round dialog $mrd_i$, $N_i$ is the total number of words in $mrd_i$, $|MRDC|$ is the number of multi-round dialogs in the multi-round dialog corpus, and $|\{\, l : w_k \in mrd_l \,\}|$ is the number of multi-round dialogs in the corpus that contain $w_k$.
Further, the method for word vectorization of $q_j^i$ and $a_j^i$ is:
performing word vectorization on every word in $q_j^i$ and in $a_j^i$ respectively, to obtain $vec(q_j^i)$ and $vec(a_j^i)$;
$srd_j^{i\,\prime} = \langle vec(q_j^i), vec(a_j^i) \rangle$ is the new single-round dialog obtained after word vectorization, where $vec(q_j^i)$ is $q_j^i$ after word vectorization and $vec(a_j^i)$ is $a_j^i$ after word vectorization.
An automatic response method based on a multi-round interaction probability model comprises the following steps:
reading the question posed by the questioner in real time;
performing information compression, information generalization, and word vectorization in turn on the question posed in real time;
selecting, as the answer to the question posed in real time, the answer with the highest probability for that question, using an N-ary multi-round interaction probability model trained as described above.
Compared with the prior art, the invention has the following advantages and beneficial effects:
the automatic response method provided by the invention is characterized in that an N-element multi-round interaction probability model is obtained based on multi-round dialogue corpus training, the N-element multi-round interaction probability model comprises binary to N-element interaction probabilities of questions and answers in each single round of dialogue, and in the process of automatic response, the answer with the maximum probability of the questions to be asked in real time is selected as the answer of the question to be asked in real time, so that higher precision can be obtained, the problems that the existing interactive automatic question-answering technology is not wide in applicability, the interaction rule needs manual specification, time and labor are consumed are effectively solved, the manual configuration rule is not needed, and the method can be widely applied to various interaction scenes.
Drawings
Fig. 1 is a flow chart of an automatic response method provided by an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to the following embodiments and the accompanying drawings.
Fig. 1 is a flow chart of an automatic response method provided by an embodiment of the present invention, which includes a flow chart of a multi-round interaction probability model training method. The multi-round interactive probability model training method provided by the embodiment of the invention is based on a multi-round dialogue corpus, the multi-round dialogue corpus comprises a plurality of multi-round dialogues, each multi-round dialogue comprises a plurality of single-round dialogues, each single-round dialogue comprises a question provided by a questioner and an answer made by a responder, and the question and the answer both comprise at least one vocabulary. It should be noted that the questions may be presented in the form of question sentences or non-question sentences, the questions only representing the words of the questioning party, and the answers may also be presented in the form of question sentences or non-question sentences, the answers only representing the words of the answering party. The questioner generally refers to a user (e.g., a patient), and the responder refers to a computer that automatically answers questions posed by the questioner using the method provided by the present invention.
The multi-round interaction probability model training method comprises the following steps:
Step A: performing information compression on each single-round dialog;
Step B: performing information generalization on the questions and answers in each single-round dialog after information compression;
Step C: performing word vectorization on the questions and answers in each single-round dialog after information generalization;
Step D: calculating the binary through N-ary interaction probabilities of the questions and answers in each single-round dialog after word vectorization, to obtain an N-ary multi-round interaction probability model based on the multi-round dialog corpus.
The step A comprises the following steps:
Step A1: performing dependency syntactic analysis on the question and the answer in each single-round dialog to extract the words serving as core syntactic components, the core syntactic components comprising predicates, subjects, and objects. For example, after dependency parsing of the sentence "May I ask whether there is a flight to Beijing tomorrow", words such as "tomorrow", "there is", and "flight" are extracted as core syntactic components.
Step A2: calculating, for each word in each single-round dialog, its tf-idf value with respect to the single-round dialog to which it belongs. Denote the i-th multi-round dialog in the multi-round dialog corpus as $mrd_i$ and the j-th single-round dialog within it as $srd_j^i$, with $mrd_i = \langle srd_1^i, srd_2^i, \ldots, srd_m^i \rangle$, where $m$ is the number of single-round dialogs in the i-th multi-round dialog; $srd_j^i = \langle q_j^i, a_j^i \rangle$, where $q_j^i$ is the question asked by the questioner in $srd_j^i$ and $a_j^i$ is the answer given by the responder; and let $w_k$ be the k-th word in $q_j^i$ or $a_j^i$. The tf-idf value of $w_k$ with respect to $srd_j^i$ is then:

$$\mathrm{tfidf}(w_k, srd_j^i) = \frac{N_{k,i}}{N_i} \times \log \frac{|MRDC|}{|\{\, l : w_k \in mrd_l \,\}|}$$

where $N_{k,i}$ is the frequency of $w_k$ in the multi-round dialog $mrd_i$, $N_i$ is the total number of words in $mrd_i$, $|MRDC|$ is the number of multi-round dialogs in the multi-round dialog corpus, and $|\{\, l : w_k \in mrd_l \,\}|$ is the number of multi-round dialogs in the corpus that contain $w_k$.
Step A3: deleting from the questions and answers in each single-round dialog the words whose tf-idf value is smaller than a preset threshold and that do not serve as core syntactic components.
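As a rough sketch, steps A2 and A3 could be implemented as follows. The function names, the representation of a multi-round dialog as a flat word list, and the count-based scoring are illustrative assumptions; the patent leaves the concrete implementation open:

```python
import math

def tf_idf(word, i, corpus):
    """tf-idf of `word` with respect to multi-round dialog i, following the
    patent's definitions: term frequency counted over the multi-round dialog
    mrd_i, document frequency counted over the whole corpus MRDC."""
    dialog = corpus[i]                        # mrd_i as a flat list of words
    n_ki = dialog.count(word)                 # N_{k,i}: frequency of w_k in mrd_i
    n_i = len(dialog)                         # N_i: total word count of mrd_i
    df = sum(1 for d in corpus if word in d)  # dialogs containing w_k
    return (n_ki / n_i) * math.log(len(corpus) / df)

def compress(words, core_words, scores, threshold=0.1):
    """Step A3: delete words whose tf-idf score is below the threshold,
    unless they serve as core syntactic components."""
    return [w for w in words if w in core_words or scores[w] >= threshold]
```

With the worked example later in the description, `compress` keeps "yesterday" (score 0.11) and drops "very" (score 0.05, not a core component).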
The step B comprises the following steps:
performing named entity recognition on all words in the questions and answers of each single-round dialog after information compression, and generalizing each word that belongs to a named entity into its corresponding named entity type. The named entities include time, person names, place names, and organization names. For example, after this step the question "whether there is a flight to Beijing tomorrow" becomes "whether there is a flight to [place] on [date]". More named entity categories can be added in this step, such as flight number, airport name, drug name, and disease name; for example, after more named entities are extracted and generalized, the question "can diabetes patients eat watermelon?" becomes "can [disease] patients eat [fruit]?". This only requires extending the named entity recognition technology; the named entity recognition adopts techniques similar to the Stanford toolkit and is not described again here.
The method for word vectorization of $q_j^i$ and $a_j^i$ is:
performing word vectorization on every word in $q_j^i$ and in $a_j^i$ respectively, to obtain $vec(q_j^i)$ and $vec(a_j^i)$;
$srd_j^{i\,\prime} = \langle vec(q_j^i), vec(a_j^i) \rangle$ is the new single-round dialog obtained after word vectorization, where $vec(q_j^i)$ is $q_j^i$ after word vectorization and $vec(a_j^i)$ is $a_j^i$ after word vectorization. All question and answer texts in the multi-round dialog corpus that have undergone information compression and information generalization are word-vectorized (word2vec) using a recurrent neural network (RNN). The RNN-based word2vec method is a publicly published technique and is not described again here.
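The patent trains its word vectors with an RNN-based word2vec. As a self-contained placeholder, the sketch below simply assigns each word a deterministic fixed-length vector derived from a hash; unlike trained embeddings it does not place similar words near each other, and all names here are illustrative assumptions:

```python
import hashlib

DIM = 8  # fixed vector length for the toy embedding

def word_vector(word):
    """Deterministic toy embedding: hash bytes scaled into [0, 1].
    A real system would substitute vectors trained on the corpus."""
    digest = hashlib.md5(word.encode("utf-8")).digest()
    return [b / 255.0 for b in digest[:DIM]]

def vectorize_dialog(question_words, answer_words):
    """Step C: srd' = <vec(q), vec(a)>, each side a list of word vectors."""
    return ([word_vector(w) for w in question_words],
            [word_vector(w) for w in answer_words])
```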
In step D, over the whole multi-round dialog corpus, the binary (bigram) interaction probability is the conditional probability $p(s_t \mid s_{t-1})$ of the current utterance $s_t$ given the immediately preceding utterance $s_{t-1}$.
That is, given the statistic that the previous round's utterance was $s_{t-1}$, one calculates the likelihood that the interaction continues with $s_t$ on the basis of that one preceding utterance. For example, one calculates the probability that the sentence "I want to book for tomorrow" appears after "Which day do you want to book?", or that "I do not have a headache" appears after "Do you have a headache?". By analogy, considering two preceding sentences means calculating the ternary probability $p(s_t \mid s_{t-2}, s_{t-1})$, such as the probability of "Tomorrow" after "I want to book a plane ticket" and "Which day do you want to book?". Considering N-1 preceding questions or answers means calculating the N-ary probability $p(s_t \mid s_{t-N+1}, \ldots, s_{t-1})$. The N-ary interaction probabilities of all questions and answers over the whole multi-round dialog corpus are recorded as the N-ary multi-round interaction probability model. The N-ary multi-round interaction probability model is trained on the multi-round dialog corpus using a Long Short-Term Memory (LSTM) network. The typical LSTM network unit is publicly published material and is not described in detail here.
The invention also provides an automatic response method based on the multi-round interaction probability model, comprising the following steps:
reading the question posed by the questioner in real time;
performing information compression, information generalization, and word vectorization in turn on the question posed in real time;
selecting, as the answer to the question posed in real time, the answer with the highest probability for that question, using the N-ary multi-round interaction probability model trained as described above.
Using the above method, 14.6 million dialogs were collected; after removing dialogs with only one round, 10.3 million remained, each multi-round dialog containing 6.8 single-round dialogs on average. After training an N-ary multi-round interaction probability model with a long short-term memory network, multi-round automatic interaction was carried out starting from the first question of 100 randomly selected multi-round dialogs. Testing showed that the precision of the multi-round interaction is 76.4%, and the number of supported rounds is unbounded. The detailed accuracy for each round is as follows:
| Number of rounds | Accuracy |
|---|---|
| 1 | 84.5% |
| 2 | 83.1% |
| 3 | 79.5% |
| 4 | 77.2% |
| 5 | 71.3% |
| 7 | 67.5% |
These results show that the method achieves high precision while overcoming the defects of traditional rule-based methods, namely that their scenes are limited and that their rules require manual configuration.
The following multi-round dialog, "diagnosis of common red eye", illustrates the implementation steps:
| Round | Question | Answer |
|---|---|---|
| 1 | Hello, doctor | Hello, how can I help you? |
| 2 | My eyes got very red yesterday and I don't know why | Have you had any trauma to the eye? |
| 3 | No | Is there any discharge from your eyes? |
| 4 | There is discharge | Is it watery or sticky? |
| 5 | Sticky | Are your eyes itchy? |
| 6 | Yes | You may have allergic conjunctivitis |
The multi-turn dialog includes 6 single-turn dialogs.
First, information compression is performed on the 6 single-round dialogs; take the second single-round dialog as an example.
Dependency syntactic analysis of the question "My eyes got very red yesterday and I don't know why" and the answer "Have you had any trauma to the eye?" extracts the words "yesterday", "eyes", "red", "know", "reason", "you", "have", and "trauma" as core syntactic components.
The tf-idf values of the words in the question and the answer are as follows:
| Word | tf-idf value |
|---|---|
| Yesterday | 0.11 |
| Eyes | 0.23 |
| Very | 0.05 |
| Red | 0.32 |
| Not | 0.05 |
| Know | 0.09 |
| What | 0.08 |
| Reason | 0.26 |
| Ask | 0.04 |
| You | 0.01 |
| Have | 0.15 |
| Eyes | 0.82 |
| Of | 0.00 |
| Trauma | 0.67 |
| Ma (question particle) | 0.21 |
If the threshold is set to 0.1, then after deleting the words whose tf-idf value is below the threshold and that are not core syntactic components, the single-round dialog is compressed to "yesterday eye red, without knowing the cause" and "do you have eye trauma".
Then, the questions and answers in the 6 compressed single-round dialogs are generalized. Taking the compressed single-round dialog "yesterday eye red, without knowing the cause" / "do you have eye trauma" as an example, named entity recognition is performed on both sentences; the named entity "yesterday" is recognized in "yesterday eye red, without knowing the cause", and its type is date, so the word "yesterday" is generalized to [date], giving: "[date] eye red, unknown cause" and "do you have eye trauma".
Next, word vectorization is performed on the questions and answers in the six single-round dialogs after information generalization; each word becomes a fixed-length vector. The advantage of word vectorization is that it further generalizes the questions and answers, so that words or sentences with the same or similar meanings have similar vectors and the data are not sparse during subsequent training. For example, the vectors for "you" and "your" will be very similar, as will the vectors for "eye" and "eyes", and those for "do you have trauma" and "have you had any trauma?".
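The similarity claim above can be made concrete with cosine similarity, the usual measure of closeness between word vectors (a generic sketch, not taken from the patent):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors: 1.0 for
    identical directions, near 0 for unrelated ones."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)
```

Under a trained embedding, `cosine(vec("eye"), vec("eyes"))` would be close to 1.0 while unrelated word pairs score near 0.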
The binary through N-ary interaction probabilities of the question and answer in each single-round dialog after word vectorization are then calculated. For example, the binary interaction probability p("do you have eye trauma" | "[date] eye red, unknown cause") = 0.15 means that when one party says "[date] eye red, unknown cause", the probability of answering with "do you have eye trauma" is 15%. Likewise, the ternary, quaternary, and up to N-ary interaction probabilities are obtained. The N-ary interaction probabilities of all questions and answers over the multi-round dialog corpus are recorded as the N-ary multi-round interaction probability model.
When a questioner asks "My eyes got a little red yesterday, what could be the reason?", information compression, information generalization, and word vectorization are first performed in turn on the question; then, using the N-ary multi-round interaction probability model obtained by the above method, "do you have trauma" is automatically calculated to have the highest probability and is output as the answer.
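The final selection step reduces to an argmax over the candidate replies stored for the preprocessed question. The sketch below assumes the model is a mapping from context tuples to reply-probability dictionaries; the `preprocess` hook and the `None` fallback for unseen contexts are illustrative assumptions:

```python
def auto_answer(question, model, preprocess):
    """Select the reply with the highest interaction probability for the
    live question; `preprocess` stands in for the compression,
    generalization, and vectorization pipeline."""
    context = (preprocess(question),)
    candidates = model.get(context)
    if candidates is None:
        return None  # unseen context; a real system would need a fallback
    return max(candidates, key=candidates.get)
```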
The above-described embodiments are merely preferred embodiments, which are not intended to limit the scope of the present invention, and any modifications, equivalents, improvements, etc. made within the spirit and principle of the present invention should be included in the scope of the present invention.
Claims (6)
1. A multi-round interaction probability model training method based on a multi-round dialog corpus, the multi-round dialog corpus comprising a plurality of multi-round dialogs, each multi-round dialog comprising a plurality of single-round dialogs, each single-round dialog comprising a question posed by a questioner and an answer made by a responder, the question and the answer each comprising at least one word, the method comprising:
Step A: performing information compression on each single-round dialog;
Step B: performing information generalization on the questions and answers in each single-round dialog after information compression;
Step C: performing word vectorization on the questions and answers in each single-round dialog after information generalization;
Step D: calculating the binary through N-ary interaction probabilities of the questions and answers in each single-round dialog after word vectorization, to obtain an N-ary multi-round interaction probability model based on the multi-round dialog corpus.
2. The multi-round interaction probability model training method as claimed in claim 1, wherein said step A comprises:
Step A1: performing dependency syntactic analysis on the question and the answer in each single-round dialog to extract the words serving as core syntactic components, the core syntactic components comprising predicates, subjects, and objects;
Step A2: calculating, for each word in each single-round dialog, its tf-idf value with respect to the single-round dialog to which it belongs;
Step A3: deleting from the questions and answers in each single-round dialog the words whose tf-idf value is smaller than a preset threshold and that do not serve as core syntactic components.
3. The multi-round interaction probability model training method as claimed in claim 2, wherein said step B comprises:
performing named entity recognition on all words in the questions and answers of each single-round dialog after information compression, and generalizing each word that belongs to a named entity into its corresponding named entity type.
4. The multi-round interaction probability model training method as claimed in claim 1, wherein the i-th multi-round dialog in the multi-round dialog corpus is denoted as $mrd_i$, the j-th single-round dialog within it as $srd_j^i$, $mrd_i = \langle srd_1^i, srd_2^i, \ldots, srd_m^i \rangle$, $m$ is the number of single-round dialogs in the i-th multi-round dialog, $srd_j^i = \langle q_j^i, a_j^i \rangle$, $q_j^i$ is the question asked by the questioner in $srd_j^i$, $a_j^i$ is the answer given by the responder in $srd_j^i$, and $w_k$ is the k-th word in $q_j^i$ or $a_j^i$; the tf-idf value of $w_k$ with respect to $srd_j^i$ is calculated as:

$$\mathrm{tfidf}(w_k, srd_j^i) = \frac{N_{k,i}}{N_i} \times \log \frac{|MRDC|}{|\{\, l : w_k \in mrd_l \,\}|}$$

where $N_{k,i}$ is the frequency of $w_k$ in the multi-round dialog $mrd_i$, $N_i$ is the total number of words in $mrd_i$, $|MRDC|$ is the number of multi-round dialogs in the multi-round dialog corpus, and $|\{\, l : w_k \in mrd_l \,\}|$ is the number of multi-round dialogs in the corpus containing $w_k$.
5. The multi-round interaction probability model training method as claimed in claim 4, wherein the method for word vectorization of $q_j^i$ and $a_j^i$ comprises:
performing word vectorization on every word in $q_j^i$ and in $a_j^i$ respectively, to obtain $vec(q_j^i)$ and $vec(a_j^i)$;
$srd_j^{i\,\prime} = \langle vec(q_j^i), vec(a_j^i) \rangle$ is the new single-round dialog obtained after word vectorization, where $vec(q_j^i)$ is $q_j^i$ after word vectorization and $vec(a_j^i)$ is $a_j^i$ after word vectorization.
6. An automatic response method based on a multi-round interaction probability model is characterized by comprising the following steps:
reading the question posed by the questioner in real time;
performing information compression, information generalization, and word vectorization in turn on the question posed in real time;
selecting, as the answer to the question posed in real time, the answer with the highest probability for that question, using an N-ary multi-round interaction probability model as claimed in claim 1.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201710816017.8A CN107562911A (en) | 2017-09-12 | 2017-09-12 | More wheel interaction probabilistic model training methods and auto-answer method |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN107562911A true CN107562911A (en) | 2018-01-09 |
Family
ID=60979611
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201710816017.8A Pending CN107562911A (en) | 2017-09-12 | 2017-09-12 | Multi-turn interaction probability model training method and automatic response method |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN107562911A (en) |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN103164616A (en) * | 2013-02-02 | 2013-06-19 | 杭州卓健信息科技有限公司 | Intelligent hospital guide system and intelligent hospital guide method |
| CN104166644A (en) * | 2014-07-09 | 2014-11-26 | 苏州市职业大学 | Term translation mining method based on cloud computing |
| CN105989040A (en) * | 2015-02-03 | 2016-10-05 | 阿里巴巴集团控股有限公司 | Intelligent question-answer method, device and system |
| US20170187709A1 (en) * | 2014-07-29 | 2017-06-29 | Lexisnexis Risk Solutions Inc. | Systems and methods for combined otp and kba identity authentication utilizing academic publication data |
Cited By (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110232920A (en) * | 2019-06-21 | 2019-09-13 | 百度在线网络技术(北京)有限公司 | Method of speech processing and device |
| CN110232920B (en) * | 2019-06-21 | 2021-11-19 | 阿波罗智联(北京)科技有限公司 | Voice processing method and device |
| CN110442690A (en) * | 2019-06-26 | 2019-11-12 | 重庆兆光科技股份有限公司 | Query optimization method, system and medium based on probabilistic inference |
| CN110442690B (en) * | 2019-06-26 | 2021-08-17 | 重庆兆光科技股份有限公司 | A query optimization method, system and medium based on probabilistic reasoning |
| CN111813961A (en) * | 2020-08-25 | 2020-10-23 | 腾讯科技(深圳)有限公司 | Data processing method and device based on artificial intelligence and electronic equipment |
| CN111984778A (en) * | 2020-09-08 | 2020-11-24 | 四川长虹电器股份有限公司 | Dependency syntax analysis and Chinese grammar-based multi-round semantic analysis method |
| CN111984778B (en) * | 2020-09-08 | 2022-06-03 | 四川长虹电器股份有限公司 | Dependency syntax analysis and Chinese grammar-based multi-round semantic analysis method |
| CN112365892A (en) * | 2020-11-10 | 2021-02-12 | 杭州大搜车汽车服务有限公司 | Man-machine interaction method, device, electronic device and storage medium |
| CN113887232A (en) * | 2021-12-07 | 2022-01-04 | 北京云迹科技有限公司 | Named entity identification method and device of dialogue information and electronic equipment |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN107562911A (en) | Multi-turn interaction probability model training method and automatic response method | |
| US20230394247A1 (en) | Human-machine collaborative conversation interaction system and method | |
| CN107247868B (en) | Artificial intelligence auxiliary inquiry system | |
| CN110580516B (en) | Interaction method and device based on intelligent robot | |
| CN106295792B (en) | Dialogue data interaction processing method and device based on multi-model output | |
| CN110674639A (en) | Natural language understanding method based on pre-training model | |
| CN108984655B (en) | Intelligent customer service guiding method for customer service robot | |
| CN112328742A (en) | Training method and device based on artificial intelligence, computer equipment and storage medium | |
| WO2020000779A1 (en) | Method and apparatus for obtaining quality evaluation model, and computer device and storage medium | |
| US20220092441A1 (en) | Training method and apparatus, dialogue processing method and system, and medium | |
| CN109325780A (en) | Interaction method of an intelligent customer service system for the e-government field | |
| CN110266900A (en) | Recognition methods, device and the customer service system that client is intended to | |
| CN117252260B (en) | Interview skill training method, equipment and medium based on large language model | |
| CN114186048A (en) | Question answering method, device, computer equipment and medium based on artificial intelligence | |
| CN108920603B (en) | Customer service guiding method based on customer service machine model | |
| CN108632137A (en) | Answer model training method, intelligent chat method, device, equipment and medium | |
| CN110322959A (en) | Knowledge-based deep medical question routing method and system | |
| CN112287082A (en) | Data processing method, device, device and storage medium combining RPA and AI | |
| CN120407717A (en) | Multi-round dialogue question-answering method, system, electronic device, and storage medium | |
| CN111680501B (en) | Query information identification method and device based on deep learning and storage medium | |
| CN118838998A (en) | Man-machine interaction method and device and computer readable storage medium | |
| CN116453674A (en) | Intelligent medical system | |
| CN110969005B (en) | Method and device for determining similarity between entity corpora | |
| CN110427470A (en) | Question and answer processing method, device and electronic equipment | |
| CN115438158A (en) | Intelligent dialogue method, device, equipment and storage medium |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| AD01 | Patent right deemed abandoned | Effective date of abandoning: 20220318 ||