
WO2018157700A1 - Dialog generation method and device, and storage medium - Google Patents

Dialog generation method and device, and storage medium

Info

Publication number
WO2018157700A1
WO2018157700A1 (application PCT/CN2018/075222 / CN2018075222W)
Authority
WO
WIPO (PCT)
Prior art keywords
training
model
parameter
question and answer message
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2018/075222
Other languages
English (en)
Chinese (zh)
Inventor
陈立
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Publication of WO2018157700A1 publication Critical patent/WO2018157700A1/fr
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G06N3/0442 Recurrent networks, e.g. Hopfield networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/903 Querying
    • G06F16/9032 Query formulation
    • G06F16/90332 Natural language query formulation or dialogue systems
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/09 Supervised learning
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/091 Active learning
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/092 Reinforcement learning

Definitions

  • the present application relates to the field of Internet technologies, and in particular, to a dialog generation method, apparatus, and storage medium.
  • the popular intelligent dialogue system is based on the above ideas.
  • the intelligent dialog system can automatically answer a question after receiving it from the user terminal, and a human-machine dialog is formed over the course of this question-and-answer process.
  • the embodiment of the present application provides a dialog generating method, which is applied to a computing device, and the method includes:
  • acquiring a training dialog corpus and a first entity labeling result of the training dialog corpus; training a first parameter model according to the training dialog corpus and the first entity labeling result; performing a dialog corpus reorganization expansion process on the training dialog corpus based on the first parameter model to obtain a reorganized extended dialog corpus; training a second parameter model according to the training dialog corpus, the first entity labeling result, the reorganized extended dialog corpus, and a second entity labeling result of the reorganized extended dialog corpus, where the second parameter model is used to identify a user's questioning intention; and generating a dialog based on the first parameter model and the second parameter model.
  • An embodiment of the present application provides a dialog generating apparatus, where the apparatus includes a processor and a memory connected to the processor, where the memory stores a machine readable instruction module executable by the processor,
  • the machine readable instruction module includes:
  • a first obtaining module configured to acquire a training dialog corpus and a first entity labeling result of the training dialog corpus
  • a first training module configured to train the first parameter model according to the training dialog corpus and the first entity labeling result
  • a processing module configured to perform a dialog corpus reorganization expansion process on the training dialog corpus based on the first parameter model, to obtain a recombination extended dialog corpus;
  • a second training module configured to train the second parameter model according to the training dialog corpus, the first entity labeling result, the reorganized extended dialog corpus, and the second entity labeling result of the reorganized extended dialog corpus, where the second parameter model is used to identify a user's questioning intention;
  • a generating module configured to generate a dialog based on the first parameter model and the second parameter model.
  • the embodiment of the present application further provides a non-transitory computer readable storage medium storing machine readable instructions, the machine readable instructions being executable by a processor to perform the following operations:
  • acquiring a training dialog corpus and a first entity labeling result of the training dialog corpus; training a first parameter model according to the training dialog corpus and the first entity labeling result; performing a dialog corpus reorganization expansion process on the training dialog corpus based on the first parameter model to obtain a reorganized extended dialog corpus; training a second parameter model according to the training dialog corpus, the first entity labeling result, the reorganized extended dialog corpus, and a second entity labeling result of the reorganized extended dialog corpus; and generating a dialog based on the first parameter model and the second parameter model.
  • FIG. 1A is a schematic diagram of an implementation environment of a dialog generating apparatus provided by an embodiment of the present application
  • 1B is a schematic structural diagram of a dialog generation platform provided by an embodiment of the present application.
  • 2A is a flowchart of a dialog generation method provided by an embodiment of the present application.
  • 2B is a flowchart of a training process of a parameter dependency model and a state jump model provided by an embodiment of the present application;
  • FIG. 3 is a schematic diagram of a process for generating a dialog provided by an embodiment of the present application.
  • FIG. 5 is a flowchart of a training process of an intent parameter identification model provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a process of dialog generation and active learning provided by an embodiment of the present application.
  • FIG. 7A is a schematic structural diagram of a dialog generating apparatus according to an embodiment of the present application.
  • FIG. 7B is a schematic structural diagram of a dialog generating apparatus according to an embodiment of the present application.
  • FIG. 7C is a schematic structural diagram of a dialog generating apparatus according to an embodiment of the present disclosure.
  • FIG. 8 is a schematic structural diagram of a server device according to an embodiment of the present application.
  • human-machine dialog is mainly generated through two types of intelligent dialog systems: intelligent customer service systems and intelligent platforms.
  • an intelligent customer service system usually collects a large amount of dialog corpus for model training, and then responds to the user's questions based on the trained linear model and simple context features, thereby generating a dialog.
  • an intelligent platform likewise needs to collect a large amount of dialog corpus for model training, and then responds to the user's questions based on the trained model and simple context features, thereby generating a dialog.
  • to reduce this collection burden, the embodiment of the present application provides a dialog generation method and apparatus, which can train the first parameter model based on the training dialog corpus and the entity labeling result of the training dialog corpus.
  • the first parameter model is then used to reorganize and expand the training dialog corpus, and the reorganized extended dialog corpus is used to complete the training of the second parameter model, so that human-machine dialog can be realized based on the first parameter model and the second parameter model.
  • because the training dialog corpus is reorganized and expanded during model training, the number of training dialogs and entity annotations that must be collected in the initial training phase is greatly reduced, which effectively saves manpower and time; the cost is low and the efficiency of dialog generation is improved. At the same time, storing less training dialog corpus saves storage resources of the computing device and improves its processing performance.
  • Task orientation: derived from "task-driven" design, which emphasizes guiding and regulating behavior through tasks.
  • in games, task orientation refers to using tasks that run through the entire game process to guide users as they progress.
  • the human-machine dialog scenario involved in the embodiments of the present application is likewise task-oriented.
  • task orientation in a human-machine dialog scenario also uses tasks that run through the entire dialog process to guide users.
  • State jump: generally, the jump relationship between states is represented by a state jump diagram.
  • a state jump diagram focuses on describing the state changes of an object during its life cycle, including the jumps of the object between different states and the external events that trigger those jumps.
  • in this application, a state jump means that after the user submits a user question-and-answer message, the dialog generation platform should return a system question-and-answer message matching that user question-and-answer message.
  • the transition from a user question-and-answer message submitted by the user to the system question-and-answer message given by the dialog generation platform can be called a state jump.
  • FIG. 1A is a schematic diagram of an implementation environment of a dialog generating apparatus provided by an embodiment of the present application.
  • the computing device 1 integrates the dialog generating platform 11 (also referred to as a dialog generating device or a dialog generating system) provided by any embodiment of the present application.
  • the computing device 1 and the user terminal 2 are connected by a network 3, which may be a wired network or a wireless network.
  • the dialog generation method provided by the embodiment of the present application is applied to a dialog generation platform that provides a task-oriented dialog generation service. The platform can be applied to multiple scenarios, such as taxi hailing, food ordering, and online shop customer service, and is not limited to a single scenario. That is, the parameter model constructed by the embodiment of the present application for implementing the human-machine dialog function is a general model; based on this general model, the dialog generation platform can be widely applied in many scenarios, thereby effectively freeing up manpower and improving productivity.
  • a dialog generation platform and a corresponding dialog generation API are provided.
  • the dialog generation API is provided to the user for accessing a specific service to generate a dialog.
  • the dialog generation platform establishes a task-oriented parameter model based on convolutional neural networks and reinforcement learning technology, trains the parameter model with the dialog corpus, and then generates dialogs based on the trained parameter model and techniques such as CRF.
  • CRF: Conditional Random Field
  • NLU: Natural Language Understanding
  • DQN: Deep Q-Network
  • the dialog generation platform 11 mainly includes the following parts:
  • a dialog corpus parameter parser 111, configured to establish a parameter dependency model and a state jump model according to the training dialog corpus and the entity labeling result of the training dialog corpus;
  • an intent parameter identifier 112, configured to train the intent parameter identification model according to the training dialog corpus and the parameter models generated by the dialog corpus parameter parser 111;
  • a dialog generation system 113 for generating a dialog based on a model generated by the dialog corpus parameter parser 111 and the intent parameter recognizer 112, and managing the dialog using a session (session control) manager;
  • an active learning system 114, configured to actively explore and learn from the dialogs generated during online human-machine dialog, so as to improve the accuracy of the parameter models and increase dialog expandability.
  • for a detailed explanation of the various parts of the dialog generation platform, refer to the following embodiments.
  • FIG. 2A is a flowchart of a dialog generation method provided by an embodiment of the present application.
  • the dialog generation method provided by the embodiment of the present application is applicable to the computing device 1 shown in FIG. 1A.
  • the method process provided by the embodiment of the present application includes the following steps.
  • Step 201: Acquire a training dialog corpus and a first entity labeling result of the training dialog corpus, and train the first parameter model according to the training dialog corpus and the first entity labeling result.
  • the training dialog corpus includes a plurality of dialogs, each dialog consisting of at least one user question-and-answer message and at least one system question-and-answer message; the corpus may be derived from natural conversations collected on the network.
  • each dialog in the training dialog corpus is annotated with entities, that is, the entities in the dialog are marked in key-value form.
  • "at least one" means one or more in quantity.
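  • as an illustration, a single annotated training dialog might be represented as follows (the structure and field names are assumptions for illustration, not a format defined by this application):

```python
# Hypothetical key-value entity annotation for one training dialog.
# The field names ("turns", "entities") are illustrative assumptions.
annotated_dialog = {
    "turns": [
        {"speaker": "user", "text": "I want to take a taxi to A Plaza"},
        {"speaker": "system", "text": "May I ask where you are"},
        {"speaker": "user", "text": "B-cell"},
    ],
    # Entities marked in key-value form:
    "entities": {"dst": "A Plaza", "ori": "B-cell"},
}
```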
  • the first parameter model includes a parameter dependency model and a state jump model.
  • the parameter dependency model is used to identify the necessary parameters in a dialog and the dependencies between those parameters;
  • the state jump model is used to determine the state jump relationship of the dialog.
  • the necessary parameters in a dialog are the parameters the dialog cannot be completed without.
  • for example, a user wants to take a taxi to A Plaza, so he holds a human-machine dialog with the dialog generation platform.
  • the user's destination parameter (whose value is "A Plaza") is a necessary parameter in this dialog.
  • the user's departure parameter is an optional parameter, because current intelligent terminals generally have a positioning function and can automatically report the user's current location.
  • the dialog generation platform can establish a parameter dependency relationship between the user's departure parameter and destination parameter, and thereby use the departure parameter to lock in the route from the user's current location to A Plaza.
  • the state jump relationship of a dialog essentially specifies that, after the user submits a user question-and-answer message, the dialog generation platform should return a system question-and-answer message that matches it.
  • this system question-and-answer message is a response to the question raised by the user question-and-answer message. For example, when the user says "I want to take a taxi to A Plaza", then according to the state jump relationship of the dialog, the dialog generation platform should return a system question-and-answer message such as "May I ask where you are", rather than an irrelevant one such as "The weather is fine today".
  • training the parameter dependency model and the state jump model can be implemented in the following manner, as shown in FIG. 2B:
  • Step 201a: Train the CRF model according to the training dialog corpus and the first entity labeling result.
  • the CRF model is an undirected graphical model that can be used for sequence labeling tasks such as word segmentation, part-of-speech tagging, named entity recognition, and data segmentation.
  • each parameter in the CRF model needs to be initialized; in the process of training the CRF model, the parameters can be optimized using stochastic gradient descent and the forward-backward algorithm, so as to minimize the error of the CRF model.
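  • as a minimal sketch of step 201a, the entity labeler could be trained with an off-the-shelf CRF package such as sklearn-crfsuite (the library, the feature template, and the BIO label scheme are assumptions; the application itself only specifies a CRF optimized with gradient descent and forward-backward computation):

```python
# Hedged sketch of step 201a: training a CRF entity labeler.
import sklearn_crfsuite

def token_features(tokens, i):
    """Simple per-token features; real systems use richer feature templates."""
    return {
        "word": tokens[i],
        "prev_word": tokens[i - 1] if i > 0 else "<BOS>",
        "next_word": tokens[i + 1] if i < len(tokens) - 1 else "<EOS>",
    }

# One annotated sentence: "take a taxi to A Plaza" with dst = "A Plaza".
sentences = [["take", "a", "taxi", "to", "A", "Plaza"]]
X_train = [[token_features(s, i) for i in range(len(s))] for s in sentences]
y_train = [["O", "O", "O", "O", "B-dst", "I-dst"]]  # BIO entity labels

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
crf.fit(X_train, y_train)
print(crf.predict(X_train))  # e.g. [['O', 'O', 'O', 'O', 'B-dst', 'I-dst']]
```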
  • Step 201b: Split the training dialog corpus into at least one matching pair of a system question-and-answer message and a user question-and-answer message.
  • the dialog corpus parameter parser 111 first splits the training dialog corpus into matching pairs of system question-and-answer messages and user question-and-answer messages.
  • one dialog in the training dialog corpus can be split into at least one such matching pair.
  • a matching pair includes a question and an answer. For example, referring to FIG. 3, "Where are you in the location" and "B-cell" form a matching pair of a system question-and-answer message and a user question-and-answer message; "Help me book a car to the airport at 8 o'clock tomorrow morning" and "Already booked for you" also form a matching pair of a user question-and-answer message and a system question-and-answer message.
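  • a minimal sketch of step 201b, assuming each dialog is stored as a chronological list of (speaker, text) turns; any two adjacent turns by different parties form a matching pair:

```python
# Hedged sketch of step 201b: splitting a dialog into matching pairs.
def split_into_matching_pairs(dialog):
    """dialog: list of (speaker, text) tuples in chronological order."""
    pairs = []
    for (spk_a, msg_a), (spk_b, msg_b) in zip(dialog, dialog[1:]):
        if {spk_a, spk_b} == {"system", "user"}:  # question followed by answer
            pairs.append((msg_a, msg_b))
    return pairs

dialog = [
    ("system", "Where are you in the location"),
    ("user", "B-cell"),
    ("user", "Help me book a car to the airport at 8 o'clock tomorrow morning"),
    ("system", "Already booked for you"),
]
print(split_into_matching_pairs(dialog))  # two matching pairs, as in FIG. 3
```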
  • Step 201c: Analyze, based on the CRF model, the system question-and-answer message and the user question-and-answer message included in each obtained matching pair, and obtain a target parameter sequence of the training dialog corpus.
  • because the collected training dialog corpus has only undergone simple entity annotation, there are still unlabeled parts in it.
  • analyzing the messages in the matching pairs based on the CRF model therefore, on the one hand, completes the labeling of previously unlabeled entities in the training dialog corpus and, on the other hand, extracts from the matching pairs the target parameter sequence necessary for model training.
  • the target parameter sequence includes at least one entity parameter and a value of at least one entity parameter.
  • for example, one target parameter sequence includes two entity parameters, an ori (origin) parameter and a dst (destination) parameter, whose values are "B-cell" and "A Plaza" respectively.
  • another target parameter sequence includes two entity parameters, a time parameter and a dst parameter, whose values are "8 o'clock in the morning" and "airport" respectively.
  • Step 201d: Train the initial parameter dependency model based on the target parameter sequence to obtain the trained parameter dependency model.
  • the initial parameter dependency model may be a combination of at least two of a CNN (Convolutional Neural Network), an LSTM (Long Short-Term Memory) network, and an LR (Logistic Regression) network. To ensure the performance of the trained model, a combination of all three networks can be adopted.
  • in that case, the initial parameter dependency model is a hybrid model.
  • for example, the initial parameter dependency model includes the CNN layer of the CNN network, the LSTM layer of the LSTM network, and the LR layer of the LR network.
  • LSTM is a variant of the RNN (Recurrent Neural Network), which belongs to the class of feedback neural networks in the field of artificial neural networks and can learn long-term dependencies.
  • an RNN is used to process sequence data.
  • an ordinary feedforward neural network cannot be applied to scenarios such as predicting the next word of a sentence, where the preceding words are usually needed, because the words in a sentence are not independent of each other.
  • an RNN is called a recurrent neural network because the current output for a sequence is also related to the previous outputs.
  • concretely, the network memorizes the previous output and applies it to the calculation of the current output; that is, the nodes of the hidden layer are no longer unconnected across time steps but connected, and the input of the hidden layer includes not only the output of the input layer but also the output of the hidden layer at the previous moment.
  • the target parameter sequence is used as the training sample of the initial parameter dependency model.
  • each parameter in the initial parameter dependency model corresponds to an initialization value.
  • the initial parameter dependency model extracts feature parameters of the target parameter sequence for training, so as to obtain the optimal value of each parameter in the parameter dependency model and complete the training of the parameter dependency model.
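  • a hedged PyTorch sketch of such a CNN + LSTM + LR hybrid is given below; the framework, the layer sizes, and the reading of "LR" as a sigmoid output layer over candidate entity parameters are all assumptions, since the application names only the three network types:

```python
# Sketch of the hybrid parameter dependency model (all dimensions assumed).
import torch
import torch.nn as nn

class ParamDependencyModel(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=64, hidden=128, n_params=16):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.cnn = nn.Conv1d(emb_dim, hidden, kernel_size=3, padding=1)  # CNN layer
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)            # LSTM layer
        self.lr = nn.Linear(hidden, n_params)                            # "LR" layer

    def forward(self, token_ids):              # (batch, seq_len)
        x = self.emb(token_ids)                # (batch, seq_len, emb_dim)
        x = self.cnn(x.transpose(1, 2))        # (batch, hidden, seq_len)
        x, _ = self.lstm(x.transpose(1, 2))    # (batch, seq_len, hidden)
        # Sigmoid "logistic regression" head: probability that each candidate
        # entity parameter is a necessary parameter of the dialog.
        return torch.sigmoid(self.lr(x[:, -1]))

model = ParamDependencyModel()
probs = model(torch.randint(0, 10000, (2, 12)))  # a batch of 2 sequences
print(probs.shape)  # torch.Size([2, 16])
```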
  • Step 201e: Perform feature extraction on the target parameter sequence based on the parameter dependency model to obtain feature information of the target parameters, and train the initial state jump model based on the feature information to obtain the trained state jump model.
  • the initial state jump model is a model using an LSTM network.
  • training the initial state jump model based on the feature information means using the feature information as the input of the initial state jump model and continuously optimizing the value of each parameter in the model, thereby obtaining the optimal value of each parameter and completing the training of the state jump model.
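  • a corresponding sketch of the LSTM-based state jump model, which maps the feature sequence extracted by the parameter dependency model to the next system action, i.e. which system question-and-answer message to return (the shapes and the classification head are assumptions):

```python
# Sketch of the LSTM state jump model (all dimensions assumed).
import torch
import torch.nn as nn

class StateJumpModel(nn.Module):
    def __init__(self, feat_dim=128, hidden=128, n_actions=50):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_actions)  # one logit per system action

    def forward(self, feats):          # (batch, turns, feat_dim)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])   # logits over next system actions

jump = StateJumpModel()
logits = jump(torch.randn(1, 5, 128))  # features of a 5-turn dialog
print(logits.argmax(dim=-1))           # index of the predicted response
```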
  • the generation process of the parameter dependent dependency model can be described by the left branch shown in FIG.
  • the generation process of the state jump model can be described by the right branch shown in FIG.
  • Step 202: Perform a dialog corpus reorganization expansion process on the training dialog corpus based on the first parameter model to obtain a reorganized extended dialog corpus, and train the second parameter model according to the training dialog corpus, the first entity labeling result, the reorganized extended dialog corpus, and the second entity labeling result of the reorganized extended dialog corpus, where the second parameter model is used to identify the user's questioning intention.
  • that is, a dialog corpus reorganization expansion process is performed on the training dialog corpus based on the first parameter model, and the reorganized extended dialog corpus is obtained.
  • the dialog corpus reorganization expansion process may be implemented by first splitting the training dialog corpus into at least one matching pair of a system question-and-answer message and a user question-and-answer message; then, for each obtained matching pair, based on the first parameter model and the matching pairs other than that pair, automatically expanding the set of system question-and-answer messages that match the user question-and-answer message included in the pair, thereby obtaining the reorganized extended dialog corpus.
  • for example, suppose the training dialog corpus splits into 1000 matching pairs of system question-and-answer messages and user question-and-answer messages.
  • when the dialog corpus reorganization expansion is performed, for the user question-and-answer message of one matching pair, the first parameter model is used to detect whether, among the remaining 999 matching pairs, there is a system question-and-answer message with which a new matching pair can be formed; if such a system question-and-answer message exists, a new matching pair of that system question-and-answer message and the user question-and-answer message is generated.
  • for example, for the user question-and-answer message "I want to take a taxi to A Plaza", besides the system question-and-answer message "Please ask where you are", the system question-and-answer message "When do you want to leave" also matches it to some extent. Therefore, a new matching pair of the user question-and-answer message "I want to take a taxi to A Plaza" and the system question-and-answer message "When do you want to leave" can be generated; a sketch of this expansion follows below.
  • in addition, the system question-and-answer messages for a user question-and-answer message may also be expanded based on the scenario involved in the user question-and-answer message rather than according to other matching pairs, which is not specifically limited in this embodiment of the present application.
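  • the reorganization expansion can be sketched as follows, where match_score stands in for the first parameter model's judgment of whether a system message can form a new matching pair with a user message (the function, the threshold, and the pairwise scan are assumptions):

```python
# Hedged sketch of the dialog corpus reorganization expansion in step 202.
def expand_corpus(pairs, match_score, threshold=0.8):
    """pairs: list of (system_msg, user_msg) matching pairs."""
    expanded = list(pairs)
    for i, (_, user_msg) in enumerate(pairs):
        for j, (sys_msg, _) in enumerate(pairs):
            if i == j:
                continue  # only consider system messages from *other* pairs
            if match_score(sys_msg, user_msg) >= threshold:
                expanded.append((sys_msg, user_msg))  # newly formed pair
    return expanded
```

  • under this sketch, the corpus grows whenever a system message from one pair also fits the user message of another, which is exactly how "I want to take a taxi to A Plaza" acquires the additional pairing with "When do you want to leave".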
  • for the process of performing entity labeling on the reorganized extended dialog corpus, refer to the foregoing step 201; details are not described herein again.
  • then, the initial second parameter model is trained according to the training dialog corpus, the first entity labeling result, the reorganized extended dialog corpus, and the second entity labeling result of the reorganized extended dialog corpus, and the trained second parameter model, that is, the intent parameter identification model, is obtained.
  • the intent parameter identification model is used to identify the intent of each user question-and-answer message and the parameters implied in that intent. For example, for the user question-and-answer message "I want to take a taxi to A Plaza", the intent parameter identification model needs to recognize that the user is specifying the destination parameter dst, whose value is "A Plaza".
  • the initial second parameter model is a combination of at least two of CNN, RNN, and DNN (Deep Neural Network).
  • the initial second parameter model is also a hybrid model; for example, it includes the CNN layer of the CNN, the RNN layer of the RNN, and the DNN layer of the DNN.
  • a DNN differs from a CNN and an RNN in that it refers to a fully connected neuron structure and includes neither convolution units nor temporal associations.
  • model training is performed with the training dialog corpus, the first entity labeling result, the reorganized extended dialog corpus, and the second entity labeling result; that is, features are extracted from the labeled training data and the initial second parameter model is optimized based on the extracted features.
  • in this way, the value of each parameter in the initial second parameter model is obtained, that is, the training of the intent parameter identification model is completed.
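  • a hedged sketch of the second parameter model as a CNN + RNN + DNN hybrid with an intent head and a per-token slot head (the framework, the use of a GRU as the RNN layer, and all sizes and label sets are assumptions):

```python
# Sketch of the intent parameter identification model (dimensions assumed).
import torch
import torch.nn as nn

class IntentParamModel(nn.Module):
    def __init__(self, vocab=10000, emb=64, hid=128, n_intents=8, n_slots=10):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.cnn = nn.Conv1d(emb, hid, kernel_size=3, padding=1)   # CNN layer
        self.rnn = nn.GRU(hid, hid, batch_first=True)              # RNN layer
        self.dnn = nn.Sequential(nn.Linear(hid, hid), nn.ReLU())   # DNN layer
        self.intent_head = nn.Linear(hid, n_intents)  # questioning intent
        self.slot_head = nn.Linear(hid, n_slots)      # entity parameter per token

    def forward(self, ids):                                        # (batch, seq)
        x = self.cnn(self.emb(ids).transpose(1, 2)).transpose(1, 2)
        x, _ = self.rnn(x)
        x = self.dnn(x)
        # Intent from the final hidden state; a slot label for every token.
        return self.intent_head(x[:, -1]), self.slot_head(x)

model = IntentParamModel()
intent_logits, slot_logits = model(torch.randint(0, 10000, (1, 8)))
print(intent_logits.shape, slot_logits.shape)  # (1, 8) and (1, 8, 10)
```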
  • after the training, the dialog generation platform can interact with users online based on the obtained parameter models. For details, see step 203 below.
  • Step 203: Generate a dialog based on the first parameter model and the second parameter model.
  • generating a dialog based on the first parameter model and the second parameter model means that, after receiving a user question-and-answer message sent by the user terminal, the platform acquires, based on the first parameter model and the second parameter model, a first system question-and-answer message matching the user question-and-answer message, and sends the first system question-and-answer message to the user terminal.
  • the user question-and-answer message and the system question-and-answer message form a dialog.
  • that is, after receiving a user question-and-answer message sent by the user terminal, the dialog generation system obtains, based on the first parameter model and the second parameter model obtained in step 201 and step 202, a first system question-and-answer message that matches the received user question-and-answer message, and returns the first system question-and-answer message to the user terminal. For example, after receiving a user question-and-answer message such as "I am going to A Plaza", it returns the system question-and-answer message "Where are you?".
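  • step 203 can be sketched as a single turn handler; every model stand-in below is a toy assumption used only to make the control flow runnable:

```python
# Hedged sketch of one dialog turn in step 203.
def handle_turn(user_msg, predict_intent, next_action, session, responses, logbook):
    """Identify intent and slots, jump state, return the system message."""
    intent, slots = predict_intent(user_msg)        # second parameter model
    session.setdefault("slots", {}).update(slots)   # session-managed state
    session["intent"] = intent
    action = next_action(session)                   # state jump (first model)
    system_msg = responses[action]
    logbook.append((user_msg, system_msg))          # log collection (see below)
    return system_msg

# Toy stand-ins (assumptions) for the two trained parameter models:
predict_intent = lambda m: ("book_taxi", {"dst": "A Plaza"} if "A Plaza" in m else {})
next_action = lambda s: "ask_origin" if "ori" not in s["slots"] else "confirm"
responses = {"ask_origin": "Where are you?", "confirm": "Already booked for you"}

logbook = []
print(handle_turn("I want to take a taxi to A Plaza",
                  predict_intent, next_action, {}, responses, logbook))
# -> "Where are you?"
```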
  • a human-machine dialog is thus formed over the course of this question-and-answer process.
  • the embodiment of the present application adopts a session manager for dialog management.
  • the embodiment of the present application also performs log collection processing.
  • the data collected as logs includes the user question-and-answer messages proposed by users and the system question-and-answer messages matching them. That is, the dialog generation system collects each received user question-and-answer message and each system question-and-answer message that matches it, and stores these messages as logs.
  • the log collection is mainly based on two considerations.
  • first, the active learning system further learns from the collected logs to continuously optimize the parameter models obtained in the above steps 201 and 202, improving the accuracy of the models.
  • second, DQN technology can be used to influence dialog generation based on the collected logs. That is, the active learning system also has an active exploration learning mechanism that can dynamically expand the dialog. The detailed process is as follows:
  • after receiving a user question-and-answer message, the active learning system may obtain from the stored logs a second system question-and-answer message matching the user question-and-answer message, send the second system question-and-answer message to the user terminal, and wait for user feedback.
  • the first system question and answer message defaults to being the most relevant to the user question and answer message.
  • the degree of association between the second system question and answer message and the user question and answer message is less than the degree of association between the first system question and answer message and the user question and answer message.
  • the system question-and-answer messages matching a user question-and-answer message are generally organized as a list, sorted by their degree of association with the user question-and-answer message; for example, the first system question-and-answer message, which has the highest degree of association, is ranked first, and so on.
  • the active exploration learning mechanism means that, instead of always returning the first system question-and-answer message, the platform sometimes returns other system question-and-answer messages to the user terminal to try to expand the dialog.
  • the active learning system then obtains the feedback message sent by the user terminal in response to the second system question-and-answer message; if it is determined from the feedback message that the second system question-and-answer message conforms to the user's questioning intention, both the first system question-and-answer message and the second system question-and-answer message may be used as system question-and-answer messages matching the user question-and-answer message. For example, continuing with "I want to take a taxi to A Plaza": in addition to the first system question-and-answer message "Where are you", the second system question-and-answer message "When do you want to leave?" also conforms to the user's questioning intention.
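  • the exploration step can be sketched as an epsilon-greedy choice over the ranked candidate list, with user feedback folded back into the match table; the epsilon-greedy simplification stands in for the DQN mentioned above and is an assumption:

```python
# Hedged sketch of the active exploration learning mechanism.
import random

def choose_response(ranked_candidates, epsilon=0.1):
    """ranked_candidates: system messages sorted by degree of association."""
    if len(ranked_candidates) > 1 and random.random() < epsilon:
        return random.choice(ranked_candidates[1:])  # explore a lower-ranked one
    return ranked_candidates[0]                      # exploit the best match

def record_feedback(user_msg, system_msg, feedback_ok, matches):
    """If the explored message fits the user's intention, keep it as an
    additional valid match for this user message (dialog expansion)."""
    if feedback_ok:
        matches.setdefault(user_msg, set()).add(system_msg)

candidates = ["Where are you", "When do you want to leave?"]
matches = {}
reply = choose_response(candidates)
record_feedback("I want to take a taxi to A Plaza", reply, True, matches)
print(reply, matches)
```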
  • the method provided by the embodiment of the present application can automatically train the first parameter model based on the training dialog corpus and its entity labeling result, reorganize and expand the training dialog corpus based on the obtained first parameter model to obtain the reorganized extended dialog corpus, and then complete the training of the second parameter model with the reorganized extended dialog corpus, thereby implementing human-machine dialog based on the first parameter model and the second parameter model.
  • because the training dialog corpus is reorganized and expanded during model training, the number of training dialogs and entity annotations that must be collected in the initial training phase is greatly reduced, which effectively saves manpower and time; the cost is low and the efficiency of dialog generation is improved. At the same time, storing less training dialog corpus saves storage resources of the computing device and improves its processing performance.
  • in addition, the training of the first parameter model and the second parameter model is completed by combining at least two network models, so that both models have good performance; this guarantees the state jump function of the dialog generation platform, enables multiple rounds of question and answer, and yields better intelligence.
  • further, online automatic learning can be carried out actively, which improves the accuracy of the trained parameter models and the expandability of the dialog, and further reduces the amount of training dialog corpus required, thereby saving storage resources of the computing device and improving its processing performance.
  • FIG. 7A is a schematic structural diagram of a dialog generating apparatus according to an embodiment of the present application.
  • the apparatus includes:
  • a first obtaining module 701 configured to acquire a training dialog corpus and a first entity labeling result of the training dialog corpus
  • the first training module 702 is configured to train the first parameter model according to the training dialog corpus and the first entity labeling result;
  • the processing module 703 is configured to perform a dialog corpus recombination expansion process on the training dialog corpus based on the first parameter model to obtain a recombination extended dialog corpus;
  • the second training module 704 is configured to train the second parameter model according to the training dialog corpus, the first entity labeling result, the reorganized extended dialog corpus, and the second entity labeling result of the reorganized extended dialog corpus, where the second parameter model is used to identify the user's questioning intention;
  • the generating module 705 is configured to generate a dialog based on the first parameter model and the second parameter model.
  • in some embodiments, the first parameter model includes a parameter dependency model and a state jump model;
  • the parameter dependency model is used to identify the necessary parameters in a dialog and the dependencies between those parameters;
  • the state jump model is used to determine the state jump relationship of the dialog;
  • the first training module 702 is configured to train the parameter dependency model according to the training dialog corpus and the first entity labeling result, and to train the state jump model according to the training dialog corpus and the first entity labeling result.
  • the training dialog corpus includes a plurality of dialogs, each dialog consisting of at least one user question-and-answer message and at least one system question-and-answer message;
  • the first training module 702 is configured to: train the CRF model according to the training dialog corpus and the first entity labeling result; split the training dialog corpus into at least one matching pair of a system question-and-answer message and a user question-and-answer message; analyze, based on the CRF model, the system question-and-answer message and the user question-and-answer message included in each obtained matching pair to obtain a target parameter sequence of the training dialog corpus, the target parameter sequence including at least one entity parameter and the value of the at least one entity parameter; and train the initial parameter dependency model based on the target parameter sequence to obtain the trained parameter dependency model.
  • the first training module 702 is further configured to perform feature extraction on the target parameter sequence based on the parameter dependency model to obtain feature information of the target parameters, and to train the initial state jump model based on the feature information to obtain the trained state jump model.
  • the processing module 703 is configured to split the training dialog corpus into at least one matching pair of a system question-and-answer message and a user question-and-answer message, and, for each obtained matching pair, based on the first parameter model and the matching pairs other than that pair, expand the system question-and-answer messages matching the user question-and-answer message included in the pair, obtaining the reorganized extended dialog corpus;
  • the second training module 704 is configured to train the initial second parameter model according to the training dialog corpus, the first entity labeling result, the reorganized extended dialog corpus, and the second entity labeling result, to obtain the trained second parameter model.
  • the device further includes:
  • the collecting module 706 is configured to collect the received user question and answer message and a system question and answer message that matches the collected user question and answer message;
  • the storage module 707 is configured to store the collected user question and answer message and the system question and answer message that matches the collected user question and answer message as a log.
  • the generating module 705 is further configured to: after a user question-and-answer message sent by the user terminal is received, acquire, based on the first parameter model and the second parameter model, a first system question-and-answer message matching the user question-and-answer message, and send the first system question-and-answer message to the user terminal.
  • the device further includes:
  • the second obtaining module 708 is configured to: after a user question-and-answer message sent by the user terminal is received, obtain from the stored logs a second system question-and-answer message matching the user question-and-answer message, where the degree of association between the second system question-and-answer message and the user question-and-answer message is less than the degree of association between the first system question-and-answer message and the user question-and-answer message;
  • a sending module 709 configured to send a second system question and answer message to the user terminal
  • the second obtaining module 708 is further configured to obtain the feedback message sent by the user terminal in response to the second system question-and-answer message; if it is determined from the feedback message that the second system question-and-answer message conforms to the user's questioning intention, both the first system question-and-answer message and the second system question-and-answer message are used as system question-and-answer messages matching the user question-and-answer message.
  • the apparatus provided by the embodiment of the present application can automatically train the first parameter model based on the training dialog corpus and its entity labeling result, reorganize and expand the training dialog corpus based on the obtained first parameter model to obtain the reorganized extended dialog corpus, and then complete the training of the second parameter model with the reorganized extended dialog corpus, thereby implementing human-machine dialog based on the first parameter model and the second parameter model.
  • because the training dialog corpus is reorganized and expanded during model training, the number of training dialogs and entity annotations that must be collected in the initial training phase is greatly reduced, which effectively saves manpower and time; the cost is low and the efficiency of dialog generation is improved. At the same time, storing less training dialog corpus saves storage resources of the computing device and improves its processing performance.
  • in addition, because the training of the first parameter model and the second parameter model combines at least two network models, both models have good performance, which guarantees the state jump function of the dialog generation platform, enables multiple rounds of question and answer, and yields better intelligence.
  • further, online automatic learning can be carried out actively, which improves the accuracy of the trained parameter models and the expandability of the dialog, and further reduces the amount of training dialog corpus required, thereby saving storage resources of the computing device and improving its processing performance.
  • FIG. 8 shows a server device according to an embodiment of the present application, which may be used to implement the dialog generation method shown in any of the above embodiments.
  • the server 800 may vary considerably in configuration or performance, and may include one or more Central Processing Units (CPUs) 822 (e.g., one or more processors), a memory 832, and one or more storage media 830 (e.g., one or more mass storage devices) storing an application 842 or data 844.
  • the memory 832 and the storage medium 830 may be short-term storage or persistent storage.
  • the programs stored in the memory 832 or the storage medium 830 may include one or more machine readable instruction modules (not shown), each of which may be executed by the processor 822 to implement the dialog generation method described in the embodiments above.
  • the server 800 may also include one or more power sources 828, one or more wired or wireless network interfaces 850, one or more input/output interfaces 858, and/or one or more operating systems 841, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, and so on.
  • machine readable instructions of one or more programs are stored in the memory 832 or the storage medium 830 and are configured to be executed by the one or more processors 822, so as to perform the dialog generation method described in any one or more of the above embodiments of the present application.
  • it should be noted that the dialog generating apparatus provided in the foregoing embodiments is illustrated only in terms of the division of the functional modules described above when generating a dialog.
  • in practical applications, the functions may be allocated to different functional modules as needed;
  • that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions of the dialog generating apparatus described above.
  • the dialog generating apparatus provided in the foregoing embodiments belongs to the same concept as the embodiments of the dialog generation method; for the specific implementation process, refer to the method embodiments, and details are not described herein again.
  • all or some of the steps of the above embodiments may be implemented by hardware, or by a program instructing related hardware; the program may be stored in a non-volatile computer readable storage medium, which may be a read-only memory, a magnetic disk, an optical disc, or the like.
  • in essence, the technical solution of the embodiments of the present application may be embodied in the form of a software product stored in a storage medium and including a number of instructions, such that a terminal device (which may be a mobile phone, a personal computer, a server, a network device, or the like) executes the dialog generation method described in the various embodiments of the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Machine Translation (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

A dialog generation method and device, and a storage medium. The method comprises the steps of: acquiring a training dialog corpus and a first entity annotation result of the training dialog corpus (201); training a first parameter model on the basis of the training dialog corpus and the first entity annotation result; performing dialog corpus reorganization and expansion processing on the training dialog corpus according to the first parameter model to produce a reorganized and expanded dialog corpus; training a second parameter model on the basis of the training dialog corpus, the first entity annotation result, the reorganized and expanded dialog corpus, and a second entity annotation result of the reorganized and expanded dialog corpus, the second parameter model being used to identify a user's questioning intention (202); and generating a dialog according to the first parameter model and the second parameter model (203).
PCT/CN2018/075222 2017-03-02 2018-02-05 Dialog generation method and device, and storage medium Ceased WO2018157700A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710120561.9A CN106951468B (zh) 2017-03-02 2017-03-02 Dialog generation method and apparatus
CN201710120561.9 2017-03-02

Publications (1)

Publication Number Publication Date
WO2018157700A1 true WO2018157700A1 (fr) 2018-09-07

Family

ID=59468108

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/075222 Ceased WO2018157700A1 (fr) 2018-02-05 Dialog generation method and device, and storage medium

Country Status (3)

Country Link
CN (1) CN106951468B (fr)
TW (1) TW201833903A (fr)
WO (1) WO2018157700A1 (fr)

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110852109A (zh) * 2019-11-11 2020-02-28 腾讯科技(深圳)有限公司 Corpus generation method, corpus generation apparatus, and storage medium
CN111061853A (zh) * 2019-12-26 2020-04-24 竹间智能科技(上海)有限公司 Method for quickly obtaining FAQ model training corpus
CN111488444A (zh) * 2020-04-13 2020-08-04 深圳追一科技有限公司 Scene-switching-based dialog method and apparatus, electronic device, and storage medium
CN111832291A (zh) * 2020-06-02 2020-10-27 北京百度网讯科技有限公司 Entity recognition model generation method and apparatus, electronic device, and storage medium
CN112417127A (zh) * 2020-12-02 2021-02-26 网易(杭州)网络有限公司 Dialog model training and dialog generation methods, apparatus, device, and medium
CN112560507A (zh) * 2020-12-17 2021-03-26 中国平安人寿保险股份有限公司 User simulator construction method and apparatus, electronic device, and storage medium
CN112559718A (zh) * 2020-12-24 2021-03-26 北京百度网讯科技有限公司 Dialog processing method and apparatus, electronic device, and storage medium
CN112667796A (zh) * 2021-01-05 2021-04-16 网易(杭州)网络有限公司 Dialog reply method and apparatus, electronic device, and readable storage medium
CN113033664A (zh) * 2021-03-26 2021-06-25 网易(杭州)网络有限公司 Question-answering model training method, question-answering method, apparatus, device, and storage medium
CN113495943A (zh) * 2020-04-02 2021-10-12 山东大学 Human-machine dialog method based on knowledge tracing and transfer
CN113539245A (zh) * 2021-07-05 2021-10-22 思必驰科技股份有限公司 Automatic language model training method and system
CN113836278A (zh) * 2021-08-13 2021-12-24 北京百度网讯科技有限公司 Training and dialog generation method and apparatus for a general dialog model
CN113869064A (zh) * 2021-10-13 2021-12-31 平安科技(深圳)有限公司 Machine-learning-based intent corpus generation method, device, and readable storage medium
WO2022105119A1 (fr) * 2020-11-17 2022-05-27 平安科技(深圳)有限公司 Training corpus generation method for an intent recognition model, and related device
CN115422335A (zh) * 2022-09-01 2022-12-02 美的集团(上海)有限公司 Interaction method for a dialog system and training method for a dialog system
CN115905496A (zh) * 2022-12-23 2023-04-04 北京百度网讯科技有限公司 Dialog data generation method, model training method, apparatus, device, and medium

Families Citing this family (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106951468B (zh) * 2017-03-02 2018-12-28 腾讯科技(深圳)有限公司 Dialog generation method and apparatus
CN107506823B (zh) * 2017-08-22 2020-11-13 南京大学 Construction method of a hybrid neural network model for dialog generation
CN107679557B (zh) * 2017-09-19 2020-11-27 平安科技(深圳)有限公司 Driving model training method, driver identification method, apparatus, device, and medium
CN108364066B (zh) * 2017-11-30 2019-11-08 中国科学院计算技术研究所 Artificial neural network chip based on n-gram and WFST models and application method thereof
CN109949800B (zh) * 2017-12-20 2021-08-10 北京京东尚科信息技术有限公司 Voice taxi-hailing method and system
CN108268616B (zh) * 2018-01-04 2020-09-01 中国科学院自动化研究所 Controllable dialog management extension method incorporating rule information
CN108282587B (zh) * 2018-01-19 2020-05-26 重庆邮电大学 Mobile customer service dialog management method based on state tracking and policy guidance
CN108415939B (zh) * 2018-01-25 2021-04-16 北京百度网讯科技有限公司 Artificial-intelligence-based dialog processing method, apparatus, device, and computer readable storage medium
CN108363690A (zh) * 2018-02-08 2018-08-03 北京十三科技有限公司 Neural-network-based dialog semantic intent prediction method and learning and training method
CN108829719B (zh) * 2018-05-07 2022-03-01 中国科学院合肥物质科学研究院 Answer selection method and system for non-factoid question answering
CN108763568A (zh) * 2018-06-05 2018-11-06 北京玄科技有限公司 Management method for intelligent robot interaction flows, multi-turn dialog method, and apparatus
CN110648657B (zh) * 2018-06-27 2024-02-02 北京搜狗科技发展有限公司 Language model training method, construction method, and apparatus
CN109002500B (zh) * 2018-06-29 2024-08-27 北京百度网讯科技有限公司 Dialog generation method, apparatus, device, and computer readable medium
CN109933659A (zh) * 2019-03-22 2019-06-25 重庆邮电大学 In-vehicle multi-turn dialog method for the travel domain
CN110188331B (zh) * 2019-06-03 2023-05-26 腾讯科技(深圳)有限公司 Model training method, dialog system evaluation method, apparatus, device, and storage medium
CN110334186B (zh) * 2019-07-08 2021-09-28 北京三快在线科技有限公司 Data query method and apparatus, computer device, and computer readable storage medium
CN110390928B (zh) * 2019-08-07 2022-01-11 广州多益网络股份有限公司 Speech synthesis model training method and system with automatic corpus expansion
JP7287333B2 (ja) * 2020-04-06 2023-06-06 トヨタ自動車株式会社 Control device, program, and information processing method
CN111563152A (zh) * 2020-06-19 2020-08-21 平安科技(深圳)有限公司 Intelligent question-answering corpus analysis method and apparatus, electronic device, and readable storage medium
CN114547258A (zh) * 2020-11-24 2022-05-27 深圳前海微众银行股份有限公司 Dialog processing method, apparatus, device, and storage medium
CN113641807B (zh) * 2021-07-28 2024-05-24 北京百度网讯科技有限公司 Training method, apparatus, device, and storage medium for a dialog recommendation model
CN114169332B (zh) * 2021-11-30 2025-05-30 科讯嘉联信息技术有限公司 Tuning method for address named entity recognition based on deep learning models

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7742924B2 (en) * 2004-05-11 2010-06-22 Fujitsu Limited System and method for updating information for various dialog modalities in a dialog scenario according to a semantic context
CN104836720A (zh) * 2014-02-12 2015-08-12 北京三星通信技术研究有限公司 Method and apparatus for information recommendation in interactive communication
CN104951433A (zh) * 2015-06-24 2015-09-30 北京京东尚科信息技术有限公司 Method and system for context-based intent recognition
CN105487663A (zh) * 2015-11-30 2016-04-13 北京光年无限科技有限公司 Intent recognition method and system for an intelligent robot
CN106951468A (zh) * 2017-03-02 2017-07-14 腾讯科技(深圳)有限公司 Dialog generation method and apparatus

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8543565B2 (en) * 2007-09-07 2013-09-24 At&T Intellectual Property Ii, L.P. System and method using a discriminative learning approach for question answering
CN103871402B (zh) * 2012-12-11 2017-10-10 北京百度网讯科技有限公司 Language model training system, speech recognition system, and corresponding methods
CN104598445B (zh) * 2013-11-01 2019-05-10 腾讯科技(深圳)有限公司 Automatic question-answering system and method
CN104572998B (zh) * 2015-01-07 2017-09-01 北京云知声信息技术有限公司 Question-answer ranking model updating method and apparatus for an automatic question-answering system
CN104679826B (zh) * 2015-01-09 2019-04-30 北京京东尚科信息技术有限公司 Method and system for context recognition based on a classification model
CN105224623B (zh) * 2015-09-22 2019-06-18 北京百度网讯科技有限公司 Data model training method and apparatus
CN106407333B (zh) * 2016-09-05 2020-03-03 北京百度网讯科技有限公司 Artificial-intelligence-based spoken query recognition method and apparatus


Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110852109B (zh) * 2019-11-11 2024-11-22 腾讯科技(深圳)有限公司 Corpus generation method, corpus generation apparatus, and storage medium
CN110852109A (zh) * 2019-11-11 2020-02-28 腾讯科技(深圳)有限公司 Corpus generation method, corpus generation apparatus, and storage medium
CN111061853A (zh) * 2019-12-26 2020-04-24 竹间智能科技(上海)有限公司 Method for quickly obtaining FAQ model training corpus
CN111061853B (zh) * 2019-12-26 2024-01-12 竹间智能科技(上海)有限公司 Method for quickly obtaining FAQ model training corpus
CN113495943B (zh) * 2020-04-02 2023-07-14 山东大学 Human-machine dialog method based on knowledge tracing and transfer
CN113495943A (zh) * 2020-04-02 2021-10-12 山东大学 Human-machine dialog method based on knowledge tracing and transfer
CN111488444A (zh) * 2020-04-13 2020-08-04 深圳追一科技有限公司 Scene-switching-based dialog method and apparatus, electronic device, and storage medium
CN111832291A (zh) * 2020-06-02 2020-10-27 北京百度网讯科技有限公司 Entity recognition model generation method and apparatus, electronic device, and storage medium
CN111832291B (zh) * 2020-06-02 2024-01-09 北京百度网讯科技有限公司 Entity recognition model generation method and apparatus, electronic device, and storage medium
WO2022105119A1 (fr) * 2020-11-17 2022-05-27 平安科技(深圳)有限公司 Training corpus generation method for an intent recognition model, and related device
CN112417127B (zh) * 2020-12-02 2023-08-22 网易(杭州)网络有限公司 Dialog model training and dialog generation methods, apparatus, device, and medium
CN112417127A (zh) * 2020-12-02 2021-02-26 网易(杭州)网络有限公司 Dialog model training and dialog generation methods, apparatus, device, and medium
CN112560507A (zh) * 2020-12-17 2021-03-26 中国平安人寿保险股份有限公司 User simulator construction method and apparatus, electronic device, and storage medium
CN112559718B (zh) * 2020-12-24 2024-04-12 北京百度网讯科技有限公司 Dialog processing method and apparatus, electronic device, and storage medium
CN112559718A (zh) * 2020-12-24 2021-03-26 北京百度网讯科技有限公司 Dialog processing method and apparatus, electronic device, and storage medium
CN112667796B (zh) * 2021-01-05 2023-08-11 网易(杭州)网络有限公司 Dialog reply method and apparatus, electronic device, and readable storage medium
CN112667796A (zh) * 2021-01-05 2021-04-16 网易(杭州)网络有限公司 Dialog reply method and apparatus, electronic device, and readable storage medium
CN113033664A (zh) * 2021-03-26 2021-06-25 网易(杭州)网络有限公司 Question-answering model training method, question-answering method, apparatus, device, and storage medium
CN113539245A (zh) * 2021-07-05 2021-10-22 思必驰科技股份有限公司 Automatic language model training method and system
CN113539245B (zh) * 2021-07-05 2024-03-15 思必驰科技股份有限公司 Automatic language model training method and system
CN113836278B (zh) * 2021-08-13 2023-08-11 北京百度网讯科技有限公司 Training and dialog generation method and apparatus for a general dialog model
CN113836278A (zh) * 2021-08-13 2021-12-24 北京百度网讯科技有限公司 Training and dialog generation method and apparatus for a general dialog model
CN113869064A (zh) * 2021-10-13 2021-12-31 平安科技(深圳)有限公司 Machine-learning-based intent corpus generation method, device, and readable storage medium
CN115422335A (zh) * 2022-09-01 2022-12-02 美的集团(上海)有限公司 Interaction method for a dialog system and training method for a dialog system
CN115422335B (zh) * 2022-09-01 2024-05-03 美的集团(上海)有限公司 Interaction method for a dialog system and training method for a dialog system
CN115905496A (zh) * 2022-12-23 2023-04-04 北京百度网讯科技有限公司 Dialog data generation method, model training method, apparatus, device, and medium
CN115905496B (zh) * 2022-12-23 2023-09-22 北京百度网讯科技有限公司 Dialog data generation method, model training method, apparatus, device, and medium

Also Published As

Publication number Publication date
CN106951468B (zh) 2018-12-28
TW201833903A (zh) 2018-09-16
CN106951468A (zh) 2017-07-14

Similar Documents

Publication Publication Date Title
WO2018157700A1 Dialog generation method and device, and storage medium
JP7300435B2 Method, apparatus, electronic device, and computer readable storage medium for voice interaction
WO2019076286A1 User intent recognition method and device for a statement
JP2021067939A Method, apparatus, device, and medium for voice interaction control
CN109145104B Method and apparatus for dialog interaction
US20200012720A1 Hierarchical annotation of dialog acts
CN110334347A Information processing method based on natural language recognition, related device, and storage medium
CN114550705B Dialog recommendation method, model training method, apparatus, device, and medium
JP7436077B2 Skill voice wake-up method and apparatus
CN111737432A Automatic dialog method and system based on a jointly trained model
CN108304561B Semantic understanding method, device, and robot based on limited data
CN112100339A User intent recognition method and apparatus for an intelligent voice robot, and electronic device
CN112035630A Dialog interaction method, apparatus, device, and storage medium combining RPA and AI
CN112069830A Intelligent conversation method and apparatus
CN108363478A Deep learning application model offloading system and method for wearable devices
CN118485147A Method and apparatus for accelerating function calls output by large AI models
CN114064943A Conference management method and apparatus, storage medium, and electronic device
CN118820395A Intelligent dialog system, method, device, and medium
CN109299231A Dialog state tracking method and system, electronic device, and storage medium
CN113836932B Interaction method, apparatus, and system, and intelligent device
CN117573096B Intelligent code completion method incorporating abstract syntax tree structure information
CN119719354A Artificial-intelligence-based document generation method and apparatus
CN119415632A Reply information acquisition method based on a civil aviation large language model
CN115017285B Session response method, medium, apparatus, and computing device
CN118689978A Question-answering method and apparatus based on knowledge graph fusion enhancement

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18761740

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18761740

Country of ref document: EP

Kind code of ref document: A1