WO2018000267A1 - Method and system for generating robot interaction content, and robot - Google Patents
Method and system for generating robot interaction content, and robot
- Publication number
- WO2018000267A1 (PCT/CN2016/087752)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- robot
- signal
- generating
- user
- parameters
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
Definitions
- the invention relates to the field of robot interaction technology, and in particular to a method, a system and a robot for generating robot interactive content.
- in human interaction, an expression is usually made after the eyes have seen or the ears have heard something and the brain has analyzed it, yielding reasonable expression feedback.
- a person also lives through scenes on a daily time axis, such as eating, sleeping and exercising, and changes in the values of these scenes affect the feedback of human expressions.
- at present, expression feedback from robots is obtained mainly through pre-designed methods and corpora used for deep-learning training.
- feedback produced in this way, through pre-designed programs and corpus training, has the following disadvantages:
- the output of expressions depends on human text input; like a question-and-answer machine, different utterances from the user trigger different expressions.
- the robot thus still outputs expressions according to a human pre-designed interaction pattern, which prevents the robot from being more anthropomorphic: unlike a human, it cannot give expression feedback based on the number of interactions, the interactive behavior, intimacy and so on, so generating expressions requires a great deal of human-computer interaction and the robot's intelligence is poor.
- the object of the present invention is to provide a method, a system and a robot for generating robot interactive content which, based on multi-modal input and actively interactive variable parameters, can improve the anthropomorphism of the robot's interactive content generation, enhance the human-computer interaction experience, and improve intelligence.
- a method for generating robot interactive content comprising: acquiring a multi-modal signal; determining a user intent according to the multi-modal signal; and generating robot interaction content according to the multi-modal signal and the user intent, in combination with current robot variable parameters.
- the method for generating the robot variable parameters includes:
- fitting the robot's self-cognition parameters to the scene parameters in the variable parameters to generate the robot variable parameters.
- the variable parameters include at least the user's original behavior and the behavior after a change, as well as parameter values representing the user's original behavior and the behavior after the change.
- the step of generating the robot interaction content according to the multi-modal signal and the user intent, in combination with the current robot variable parameters, further comprises: generating the robot interaction content according to the user intent and the multi-modal signal, in combination with the current robot variable parameters and a fitted curve of parameter change probability.
- the method for generating the fitted curve of parameter change probability comprises: using a probability algorithm to make a network-based probability estimate of the robot's parameters, and calculating, after the scene parameters of the robot on the life time axis have changed, the probability of each parameter change, thereby forming the fitted curve of parameter change probability.
- the multi-modal signal includes at least an image signal
- the step of generating the robot interaction content according to the multi-modal signal and the user intention, in combination with the current robot variable parameter specifically includes:
- the robot interaction content is generated in conjunction with the current robot variable parameters based on the image signal and the user intent.
- the multi-modal signal includes at least a voice signal
- the step of generating the robot interaction content according to the multi-modal signal and the user intention, in combination with the current robot variable parameter specifically includes:
- the robot interaction content is generated in conjunction with the current robot variable parameters based on the speech signal and the user intent.
- the multi-modal signal includes at least a gesture signal
- the step of generating the robot interaction content according to the multi-modal signal and the user intention, in combination with the current robot variable parameter specifically includes:
- the robot interaction content is generated in accordance with the current robot variable parameter based on the gesture signal and the user intent.
- the invention also relates to a system for generating robot interactive content, which comprises:
- an acquisition module configured to acquire a multi-modal signal;
- an intent identification module configured to determine a user intent according to the multi-modal signal;
- a content generating module configured to generate the robot interaction content according to the multi-modal signal and the user intent, in combination with the current robot variable parameters.
- the system comprises a time-axis-based artificial intelligence cloud processing module configured to:
- fit the robot's self-cognition parameters to the scene parameters in the variable parameters to generate the robot variable parameters.
- the variable parameters include at least the user's original behavior and the behavior after a change, as well as parameter values representing the user's original behavior and the behavior after the change.
- the time-axis-based artificial intelligence cloud processing module is further configured to: generate the robot interaction content according to the user intent and the multi-modal signal, in combination with the current robot variable parameters and the fitted curve of parameter change probability.
- the time-axis-based artificial intelligence cloud processing module is further configured to: use a probability algorithm to calculate the probability of each parameter change of the robot on the life time axis after the time-axis scene parameters have changed, so as to form the fitted curve.
- the multi-modal signal includes at least an image signal
- the content generating module is specifically configured to: generate the robot interaction content according to the image signal and the user intention, in combination with the current robot variable parameter.
- the multi-modal signal includes at least a voice signal
- the content generating module is specifically configured to: generate the robot interaction content according to the voice signal and the user intention, in combination with the current robot variable parameter.
- the multi-modal signal includes at least a gesture signal
- the content generating module is specifically configured to: generate the robot interaction content according to the gesture signal and the user intention, in combination with the current robot variable parameter.
- the invention discloses a robot comprising a system for generating interactive content of a robot as described above.
- a method for generating robot interactive content includes: acquiring a multi-modal signal; determining a user intent according to the multi-modal signal; and generating robot interaction content according to the multi-modal signal and the user intent, in combination with the current robot variable parameters.
- in this way, multi-modal signals such as image signals and voice signals can be combined with the robot variable parameters to generate robot interaction content more accurately, so that the robot interacts and communicates with people more accurately and anthropomorphically.
- the variable parameters are parameters that the user actively controls during human-computer interaction, for example controlling the robot to exercise or controlling the robot to communicate.
- the invention adds robot variable parameters to the robot's interactive content generation, so that the robot can generate interactive content on the basis of earlier variable parameters; for example, when the variable parameter records that the robot has already been exercising for an hour and a command such as cleaning is sent to the robot again, the robot will say that it is tired and refuse to clean. In this way the robot is more anthropomorphic when interacting with humans and has a human lifestyle on the life time axis. This method can enhance the anthropomorphism of robot interactive content generation, enhance the human-computer interaction experience, and improve intelligence.
- FIG. 1 is a flowchart of a method for generating robot interactive content according to Embodiment 1 of the present invention;
- FIG. 2 is a schematic diagram of a system for generating robot interactive content according to Embodiment 2 of the present invention.
- computer devices include user devices and network devices.
- the user device or client includes, but is not limited to, a computer, a smartphone, a PDA, and the like;
- the network device includes, but is not limited to, a single network server, a server group composed of multiple network servers, or a cloud composed of a large number of computers or network servers based on cloud computing.
- a computer device can operate alone to carry out the invention, or it can access a network and carry out the invention through interoperation with other computer devices in the network.
- the network in which the computer device is located includes, but is not limited to, the Internet, a wide area network, a metropolitan area network, a local area network, a VPN network, and the like.
- the terms "first", "second" and the like may be used herein to describe various units, but the units should not be limited by these terms; the terms are used only to distinguish one unit from another.
- the term "and/or" used herein includes any and all combinations of one or more of the associated listed items. When a unit is referred to as being "connected" or "coupled" to another unit, it can be directly connected or coupled to the other unit, or an intermediate unit can be present.
- a method for generating robot interactive content includes: acquiring a multi-modal signal; determining a user intent according to the multi-modal signal; and generating robot interaction content according to the multi-modal signal and the user intent, in combination with the current robot variable parameters.
- multi-modal signals such as image signals and voice signals can thus be combined with the robot variable parameters to generate robot interaction content more accurately, so that the robot interacts and communicates with people more accurately and anthropomorphically.
- the variable parameters are parameters that the user actively controls during human-computer interaction, for example controlling the robot to exercise or controlling the robot to communicate.
- the invention adds robot variable parameters to the robot's interactive content generation, so that the robot can generate interactive content on the basis of earlier variable parameters; for example, when the variable parameter records that the robot has already been exercising for an hour and a command such as cleaning is sent to the robot again, the robot will say that it is tired and refuse to clean.
- in this way the robot is more anthropomorphic when interacting with humans and has a human lifestyle on the life time axis.
- this method can enhance the anthropomorphism of robot interactive content generation, enhance the human-computer interaction experience, and improve intelligence.
- the multi-modal signal is generally a combination of a plurality of signals, such as a picture signal plus a voice signal, or a picture signal plus a voice signal plus a gesture signal.
- the robot variable parameters 300 are fitted and set in advance; specifically, the robot variable parameters 300 are a collection of parameters that is passed to the system for generating the interaction content.
- the variable parameters are specifically sudden changes occurring between the person and the machine, for example where a day's life on the time axis consists of eating, sleeping, interacting, running, eating and sleeping; if the robot's scene is suddenly changed, for instance the robot is taken to the seaside during the running time slot, the parameters that humans actively impose on the robot serve as variable parameters, and such changes alter the robot's self-cognition.
- the life time axis and the variable parameters can change attributes in the self-cognition, such as the mood value and the fatigue value, and can also automatically add new self-cognition information; for example, if there was previously no anger value, scenes based on the life time axis and the variable factors will automatically be added to the robot's self-cognition according to scenes that previously simulated human self-cognition.
- for example, according to the life time axis, 12 noon should be meal time; if this scene changes, for instance the robot goes out shopping at 12 noon, the robot will write this in as one of its variable parameters.
- when the user interacts with the robot during this time period, the robot will generate the interaction content by combining "going out shopping at 12 noon", instead of combining the previous "eating at 12 noon".
- when specifically generating the interaction content, the robot combines the acquired multi-modal signals, such as a combination of voice information and picture information, or a combination with a video signal, with the variable parameters.
- the multi-modal signal is generally a combination of a plurality of signals, such as a picture signal plus a voice signal, or a picture signal plus a voice signal plus a gesture signal.
- the multi-modal signals include expressions and text sentiment acquired by the robot, which can be input by voice, by video, by gesture, or by a combination of these; for example, the expression input to the robot is happy, the text analysis is unhappy, and the user has controlled the robot to exercise many times.
- the robot will then refuse to accept the instruction and interact as: "I am very tired and need to rest now."
- the method for generating the robot variable parameters includes:
- fitting the robot's self-cognition parameters to the scene parameters in the variable parameters to generate the robot variable parameters.
- in this way the robot's own self-cognition is extended through scenes combined with the variable parameters, and the parameters in the self-cognition are fitted to the parameters of the scenes used in the variable parameter axis, producing an anthropomorphic influence.
- the variable parameters include at least the user's original behavior and the behavior after a change, as well as parameter values representing the user's original behavior and the behavior after the change.
- under the original plan, the user is in one state; a sudden change puts the user in another state.
- the variable parameter represents this change of behavior or state, as well as the user's state or behavior after the change; for example, the user was originally running at 5 pm, but something else suddenly came up, such as going to play ball, so the change from running to playing ball is a variable parameter, and the probability of such a change is also studied.
- the step of generating the robot interaction content according to the multi-modal signal and the user intent, in combination with the current robot variable parameters, further comprises: generating the robot interaction content according to the user intent and the multi-modal signal, in combination with the current robot variable parameters and the fitted curve of parameter change probability.
- in this way the fitted curve can be generated by probability training on the variable parameters, and the robot interaction content is then generated from it.
- the method for generating the fitted curve of parameter change probability includes: using a probability algorithm to make a network-based probability estimate of the robot's parameters, and calculating, after the scene parameters of the robot on the life time axis have changed, the probability of each parameter change, thereby forming the fitted curve of parameter change probability.
- the probability algorithm can be a Bayesian probability algorithm.
- the parameters in the self-cognition are fitted to the parameters of the scenes used in the variable parameter axis, producing an anthropomorphic influence.
- with recognition of the location scene added, the robot knows its geographical position and changes the way the interactive content is generated according to the geographical environment in which it is located.
- a Bayesian probability algorithm is used to estimate the robot's parameters with a Bayesian network, calculating the probability of each parameter change after the robot's own time-axis scene parameters on the life time axis have changed.
- the resulting fitted curve dynamically influences the robot's own self-cognition.
- this innovative module gives the robot itself a human lifestyle; as for expressions, they can be changed according to the location scene.
- the multi-modal signal includes at least an image signal
- the step of generating the robot interaction content according to the multi-modal signal and the user intention, in combination with the current robot variable parameter specifically includes:
- the robot interaction content is generated in conjunction with the current robot variable parameters based on the image signal and the user intent.
- the multi-modal signal includes at least an image signal, so that the robot can grasp the user's intent; to understand the user's intent better, other signals such as a voice signal or a gesture signal are generally added, so that the robot can determine more accurately whether the user is expressing a genuine meaning or joking.
- the multi-modal signal includes at least a voice signal
- the step of generating the robot interaction content according to the multi-modal signal and the user intention, in combination with the current robot variable parameter specifically includes:
- the robot interaction content is generated in conjunction with the current robot variable parameters based on the speech signal and the user intent.
- the multimodal signal includes at least a gesture signal
- the step of generating the robot interaction content according to the multi-modal signal and the user intent, in combination with the current robot variable parameters, specifically includes:
- the robot interaction content is generated in accordance with the current robot variable parameter based on the gesture signal and the user intent.
- for example, the robot has been singing continuously for a while, and the user then tells the robot by voice to continue singing.
- if the picture signal shows that the user looks serious, the robot will reply, "I'm too tired, let me take a break", with a tired face.
- if the picture signal shows that the user looks happy, the robot will reply, "Master, let me take a break first and then sing for you", with a happy face.
- the voice signal and the picture signal together allow the user's meaning to be understood fairly accurately, so that the reply to the user is more accurate.
- adding other signals, such as gesture signals and video signals, makes this even more accurate.
- a system for generating robot interactive content according to the present invention is disclosed, which comprises:
- the obtaining module 201 is configured to acquire a multi-modal signal
- the intent identification module 202 is configured to determine a user intent according to the multimodal signal
- the content generating module 203 is configured to generate the robot interaction content according to the multi-modal signal and the user intention, in combination with the current robot variable parameter.
- the variable parameters are parameters that the user actively controls during human-computer interaction, for example controlling the robot to exercise or controlling the robot to communicate.
- the invention adds robot variable parameters to the robot's interactive content generation, so that the robot can generate interactive content on the basis of earlier variable parameters; for example, when the variable parameter records that the robot has already been exercising for an hour and a command such as cleaning is sent to the robot again, the robot will say that it is tired and refuse to clean.
- the multi-modal signal is generally a combination of a plurality of signals, such as a picture signal plus a voice signal, or a picture signal plus a voice signal plus a gesture signal.
- the multi-modal information in this embodiment may be one or more of user expressions, voice information, gesture information, scene information, image information, video information, face information, pupil and iris information, light-sensing information, fingerprint information, and the like.
- in this embodiment a picture signal plus a voice signal plus a gesture signal is preferred, since this makes recognition accurate and efficient.
- the variable parameter can be something the robot has done in a preset period of time; for example, if the robot interacted and talked with the user for an hour in the last time period, then when the user expresses through the multi-modal signal the intent to continue the conversation, the robot can say "I am tired and need to take a break", accompanied by tired state content, such as expressions.
- if the multi-modal signal shows that the user is joking, the robot can say "don't tease me" with a happy expression.
- the multi-modal signal is generally a combination of a plurality of signals, such as a picture signal plus a voice signal, or a picture signal plus a voice signal plus a gesture signal.
- the system includes a time-axis-based artificial intelligence cloud processing module configured to:
- fit the robot's self-cognition parameters to the scene parameters in the variable parameters to generate the robot variable parameters.
- the variable parameters include at least the user's original behavior and the behavior after a change, as well as parameter values representing the user's original behavior and the behavior after the change.
- under the original plan, the user is in one state; a sudden change puts the user in another state.
- the variable parameter represents this change of behavior or state, as well as the user's state or behavior after the change; for example, the user was originally running at 5 pm, but something else suddenly came up, such as going to play ball, so the change from running to playing ball is a variable parameter, and the probability of such a change is also studied.
- the time-axis-based artificial intelligence cloud processing module is further configured to: generate the robot interaction content according to the user intent and the multi-modal signal, in combination with the current robot variable parameters and the fitted curve of parameter change probability.
- in this way the fitted curve can be generated by probability training on the variable parameters, and the robot interaction content is then generated from it.
- the time-axis-based artificial intelligence cloud processing module is further configured to: use a probability algorithm to calculate the probability of each parameter change of the robot on the life time axis after the time-axis scene parameters have changed, so as to form the fitted curve.
- by combining scenes of the robot with the variable parameters, the robot's own self-cognition is extended, and the parameters in the self-cognition are fitted to the parameters of the scenes used in the variable parameter axis, producing an anthropomorphic influence.
- with recognition of the location scene added, the robot knows its geographical position and changes the way the interactive content is generated according to the geographical environment in which it is located.
- a Bayesian probability algorithm is used to estimate the robot's parameters with a Bayesian network, calculating the probability of each parameter change after the robot's own time-axis scene parameters on the life time axis have changed.
- the resulting fitted curve dynamically influences the robot's own self-cognition.
- this innovative module gives the robot itself a human lifestyle; as for expressions, they can be changed according to the location scene.
- the multi-modal signal includes at least an image signal
- the expression generating module is specifically configured to: generate the robot interaction content according to the image signal and the user intention, in combination with the current robot variable parameter.
- the multi-modal signal includes at least an image signal, so that the robot can grasp the user's intent; to understand the user's intent better, other signals such as a voice signal or a gesture signal are generally added, so that the robot can determine more accurately whether the user is expressing a genuine meaning or joking.
- the multi-modal signal includes at least a voice signal
- the expression generating module is specifically configured to: generate the robot interaction content according to the voice signal and the user intention, in combination with the current robot variable parameter.
- the multi-modal signal includes at least a gesture signal
- the expression generation module is specifically configured to: generate the robot interaction content according to the gesture signal and the user intention, in combination with the current robot variable parameter.
- for example, the robot has been singing continuously for a while, and the user then tells the robot by voice to continue singing.
- if the picture signal shows that the user looks serious, the robot will reply, "I'm too tired, let me take a break", with a tired face.
- if the picture signal shows that the user looks happy, the robot will reply, "Master, let me take a break first and then sing for you", with a happy face.
- the voice signal and the picture signal together allow the user's meaning to be understood fairly accurately, so that the reply to the user is more accurate.
- the invention discloses a robot comprising a system for generating interactive content of a robot as described above.
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Manipulator (AREA)
- Toys (AREA)
Abstract
A method for generating robot interaction content, comprising: acquiring a multi-modal signal (S101); determining a user intent according to the multi-modal signal (S102); and generating robot interaction content according to the multi-modal signal and the user intent, in combination with the current robot variable parameters (S103). The method adds robot variable parameters to the robot's interactive content generation, so that the robot can generate interactive content on the basis of earlier variable parameters. This makes the robot more anthropomorphic when interacting with people and gives it a human lifestyle on the life time axis; the method can enhance the anthropomorphism of robot interactive content generation, enhance the human-computer interaction experience, and improve intelligence.
Description
The present invention relates to the field of robot interaction technology, and in particular to a method and a system for generating robot interaction content, and to a robot.
In the course of interaction, a human usually makes an expression after the eyes have seen or the ears have heard something and the brain has analyzed it, giving reasonable expression feedback. A person also lives through scenes on the time axis of a given day, such as eating, sleeping and exercising, and changes in the values of these scenes affect the feedback of human expressions. For robots, expression feedback is currently obtained mainly through pre-designed methods and corpora used to train deep learning. Expression feedback obtained through pre-designed programs and corpus training has the following disadvantages: the output of expressions depends on human text input, so that, much like a question-and-answer machine, different utterances from the user trigger different expressions. In this case the robot in fact still outputs expressions according to an interaction pattern designed in advance by humans, which prevents the robot from being more anthropomorphic: unlike a human, it cannot give expression feedback based on the number of interactions between people, the interactive behavior, intimacy and so on. Generating expressions therefore requires a great deal of human-computer interaction, and the robot's intelligence is poor.
Therefore, how to propose an expression generation method based on multi-modal input and actively interactive variable parameters, capable of improving the anthropomorphism of robot interactive content generation, is a technical problem urgently awaiting a solution in this technical field.
Summary of the Invention
The object of the present invention is to provide a method and a system for generating robot interaction content, and a robot, which, based on multi-modal input and actively interactive variable parameters, can improve the anthropomorphism of robot interactive content generation, enhance the human-computer interaction experience, and improve intelligence.
The object of the present invention is achieved through the following technical solutions:
A method for generating robot interaction content, comprising:
acquiring a multi-modal signal;
determining a user intent according to the multi-modal signal;
generating robot interaction content according to the multi-modal signal and the user intent, in combination with the current robot variable parameters.
Preferably, the method for generating the robot variable parameters comprises:
fitting the robot's self-cognition parameters to the scene parameters in the variable parameters to generate the robot variable parameters.
Preferably, the variable parameters include at least the user's original behavior and the behavior after a change, as well as parameter values representing the user's original behavior and the behavior after the change.
Preferably, the step of generating robot interaction content according to the multi-modal signal and the user intent, in combination with the current robot variable parameters, further comprises: generating the robot interaction content according to the user intent and the multi-modal signal, in combination with the current robot variable parameters and a fitted curve of parameter change probability.
Preferably, the method for generating the fitted curve of parameter change probability comprises: using a probability algorithm to make a network-based probability estimate of the robot's parameters, and calculating, after the scene parameters of the robot on the life time axis have changed, the probability of each parameter change, so as to form the fitted curve of parameter change probability.
Preferably, the multi-modal signal includes at least an image signal, and the step of generating robot interaction content according to the multi-modal signal and the user intent, in combination with the current robot variable parameters, specifically comprises:
generating the robot interaction content according to the image signal and the user intent, in combination with the current robot variable parameters.
Preferably, the multi-modal signal includes at least a voice signal, and the step of generating robot interaction content according to the multi-modal signal and the user intent, in combination with the current robot variable parameters, specifically comprises:
generating the robot interaction content according to the voice signal and the user intent, in combination with the current robot variable parameters.
Preferably, the multi-modal signal includes at least a gesture signal, and the step of generating robot interaction content according to the multi-modal signal and the user intent, in combination with the current robot variable parameters, specifically comprises:
generating the robot interaction content according to the gesture signal and the user intent, in combination with the current robot variable parameters.
The present invention further provides a system for generating robot interaction content, comprising:
an acquisition module, configured to acquire a multi-modal signal;
an intent identification module, configured to determine a user intent according to the multi-modal signal;
a content generation module, configured to generate robot interaction content according to the multi-modal signal and the user intent, in combination with the current robot variable parameters.
Preferably, the system includes a time-axis-based artificial intelligence cloud processing module, configured to:
fit the robot's self-cognition parameters to the scene parameters in the variable parameters to generate the robot variable parameters.
Preferably, the variable parameters include at least the user's original behavior and the behavior after a change, as well as parameter values representing the user's original behavior and the behavior after the change.
Preferably, the time-axis-based artificial intelligence cloud processing module is further configured to: generate the robot interaction content according to the user intent and the multi-modal signal, in combination with the current robot variable parameters and the fitted curve of parameter change probability.
Preferably, the time-axis-based artificial intelligence cloud processing module is further configured to: use a probability algorithm to calculate the probability of each parameter change of the robot on the life time axis after the time-axis scene parameters have changed, so as to form the fitted curve.
Preferably, the multi-modal signal includes at least an image signal, and the content generation module is specifically configured to: generate the robot interaction content according to the image signal and the user intent, in combination with the current robot variable parameters.
Preferably, the multi-modal signal includes at least a voice signal, and the content generation module is specifically configured to: generate the robot interaction content according to the voice signal and the user intent, in combination with the current robot variable parameters.
Preferably, the multi-modal signal includes at least a gesture signal, and the content generation module is specifically configured to: generate the robot interaction content according to the gesture signal and the user intent, in combination with the current robot variable parameters.
The present invention discloses a robot, comprising a system for generating robot interaction content as described in any of the above.
Compared with the prior art, the present invention has the following advantages: in terms of application scenarios, existing robots generally generate interaction content through question-and-answer interaction in fixed scenes, and cannot generate the robot's expressions more accurately on the basis of the current scene. The method for generating robot interaction content of the present invention comprises: acquiring a multi-modal signal; determining a user intent according to the multi-modal signal; and generating robot interaction content according to the multi-modal signal and the user intent, in combination with the current robot variable parameters. In this way, multi-modal signals such as image signals and voice signals can be combined with the robot variable parameters to generate robot interaction content more accurately, so that the robot interacts and communicates with people more accurately and anthropomorphically. The variable parameters are parameters that the user actively controls during human-computer interaction, for example controlling the robot to exercise or controlling the robot to communicate. The present invention adds robot variable parameters to the robot's interactive content generation, so that the robot can generate interactive content on the basis of earlier variable parameters; for example, when the variable parameter records that the robot has already been exercising for an hour, and a command such as cleaning is sent to the robot again, the robot will say that it is tired and refuse to clean. This makes the robot more anthropomorphic when interacting with people and gives it a human lifestyle on the life time axis; the method can enhance the anthropomorphism of robot interactive content generation, enhance the human-computer interaction experience, and improve intelligence.
FIG. 1 is a flowchart of a method for generating robot interaction content according to Embodiment 1 of the present invention;
FIG. 2 is a schematic diagram of a system for generating robot interaction content according to Embodiment 2 of the present invention.
Although the flowcharts describe the operations as sequential processing, many of the operations can be performed in parallel, concurrently or simultaneously. The order of the operations can be rearranged. Processing can be terminated when its operations are completed, but there can also be additional steps not included in the drawings. Processing can correspond to methods, functions, procedures, subroutines, subprograms, and the like.
Computer devices include user devices and network devices. The user device or client includes, but is not limited to, a computer, a smartphone, a PDA and the like; the network device includes, but is not limited to, a single network server, a server group composed of multiple network servers, or a cloud composed of a large number of computers or network servers based on cloud computing. A computer device can operate alone to carry out the present invention, or it can access a network and carry out the present invention through interoperation with other computer devices in the network. The network in which the computer device is located includes, but is not limited to, the Internet, a wide area network, a metropolitan area network, a local area network, a VPN network and the like.
The terms "first", "second" and the like may be used herein to describe various units, but the units should not be limited by these terms; these terms are used only to distinguish one unit from another. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items. When a unit is referred to as being "connected" or "coupled" to another unit, it can be directly connected or coupled to the other unit, or an intermediate unit can be present.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the exemplary embodiments. Unless the context clearly indicates otherwise, the singular forms "a" and "an" as used herein are intended to include the plural as well. It should also be understood that the terms "comprising" and/or "including" as used herein specify the presence of the stated features, integers, steps, operations, units and/or components, without excluding the presence or addition of one or more other features, integers, steps, operations, units, components and/or combinations thereof.
The present invention is further described below with reference to the accompanying drawings and preferred embodiments.
Embodiment 1
As shown in FIG. 1, this embodiment discloses a method for generating robot interaction content, comprising:
S101, acquiring a multi-modal signal;
S102, determining a user intent according to the multi-modal signal;
S103, generating robot interaction content according to the multi-modal signal and the user intent, in combination with the current robot variable parameters.
In terms of application scenarios, existing robots generally generate interaction content through question-and-answer interaction in fixed scenes, and cannot generate the robot's expressions more accurately on the basis of the current scene. The method for generating robot interaction content of the present invention comprises: acquiring a multi-modal signal; determining a user intent according to the multi-modal signal; and generating robot interaction content according to the multi-modal signal and the user intent, in combination with the current robot variable parameters. In this way, multi-modal signals such as image signals and voice signals can be combined with the robot variable parameters to generate robot interaction content more accurately, so that the robot interacts and communicates with people more accurately and anthropomorphically. The variable parameters are parameters that the user actively controls during human-computer interaction, for example controlling the robot to exercise or controlling the robot to communicate. The present invention adds robot variable parameters to the robot's interactive content generation, so that the robot can generate interactive content on the basis of earlier variable parameters; for example, when the variable parameter records that the robot has already been exercising for an hour, and a command such as cleaning is sent to the robot again, the robot will say that it is tired and refuse to clean. This makes the robot more anthropomorphic when interacting with people and gives it a human lifestyle on the life time axis; the method can enhance the anthropomorphism of robot interactive content generation, enhance the human-computer interaction experience, and improve intelligence. The multi-modal signal is generally a combination of several signals, for example a picture signal plus a voice signal, or a picture signal plus a voice signal plus a gesture signal. The robot variable parameters 300 are fitted and set in advance; specifically, the robot variable parameters 300 are a collection of parameters, and this collection is passed to the system to generate the interaction content.
In this embodiment, the variable parameters are specifically sudden changes that occur between the person and the machine; for example, a day's life on the time axis consists of eating, sleeping, interacting, running, eating and sleeping. In this case, if the robot's scene is suddenly changed, for example the robot is taken to the seaside during the running time slot, then these parameters that humans actively impose on the robot serve as variable parameters, and such changes alter the robot's self-cognition. The life time axis and the variable parameters can change attributes in the self-cognition, such as the mood value and the fatigue value, and can also automatically add new self-cognition information; for example, if there was previously no anger value, scenes based on the life time axis and the variable factors will automatically be added to the robot's self-cognition according to scenes that previously simulated human self-cognition.
For example, according to the life time axis, 12 noon should be meal time; if this scene changes, for example the robot goes out shopping at 12 noon, the robot will write this in as one of its variable parameters. When the user interacts with the robot during this time period, the robot will generate the interaction content by taking "going out shopping at 12 noon" into account, rather than the earlier "eating at 12 noon". When specifically generating the interaction content, the robot combines the acquired multi-modal signals, for example a combination of voice information and picture information, or a combination with a video signal, with the variable parameters. In this way some unexpected events from human life can be added to the robot's life axis, making the robot's interaction more anthropomorphic. The multi-modal signal is generally a combination of several signals, for example a picture signal plus a voice signal, or a picture signal plus a voice signal plus a gesture signal.
As another example, the multi-modal signals include expressions and text sentiment acquired by the robot, which can be input by voice, by video, by gesture, or by a combination of these. Suppose the expression input to the robot is happy, the text analysis is unhappy, and the user has also controlled the robot to exercise many times; the robot will then refuse to accept the instruction and interact as: "I am very tired and need to rest now."
According to one example, the method for generating the robot variable parameters comprises: fitting the robot's self-cognition parameters to the scene parameters in the variable parameters to generate the robot variable parameters. In this way, through scenes of the robot combined with the variable parameters, the robot's own self-cognition is extended, and the parameters in the self-cognition are fitted to the parameters of the scenes used in the variable parameter axis, producing an anthropomorphic influence.
According to one example, the variable parameters include at least the user's original behavior and the behavior after a change, as well as parameter values representing the user's original behavior and the behavior after the change. Under the original plan, the user is in one state; a sudden change puts the user in another state. The variable parameter represents this change of behavior or state, as well as the user's state or behavior after the change. For example, the user was originally running at 5 pm, but something else suddenly comes up, such as going to play ball; the change from running to playing ball is then a variable parameter, and the probability of such a change is also studied.
According to another example, the step of generating robot interaction content according to the multi-modal signal and the user intent, in combination with the current robot variable parameters, further comprises: generating the robot interaction content according to the user intent and the multi-modal signal, in combination with the current robot variable parameters and the fitted curve of parameter change probability. In this way the fitted curve can be generated by probability training on the variable parameters, and the robot interaction content is then generated from it.
According to another example, the method for generating the fitted curve of parameter change probability comprises: using a probability algorithm to make a network-based probability estimate of the robot's parameters, and calculating, after the scene parameters of the robot on the life time axis have changed, the probability of each parameter change, so as to form the fitted curve of parameter change probability. The probability algorithm can be a Bayesian probability algorithm.
Through scenes of the robot combined with the variable parameters, the robot's own self-cognition is extended, and the parameters in the self-cognition are fitted to the parameters of the scenes used in the variable parameter axis, producing an anthropomorphic influence. At the same time, with recognition of the location scene added, the robot knows its geographical position and changes the way the interaction content is generated according to the geographical environment in which it is located. In addition, we use a Bayesian probability algorithm, estimating the robot's parameters with a Bayesian network, and calculate the probability of each parameter change after the robot's own time-axis scene parameters on the life time axis have changed, forming a fitted curve that dynamically influences the robot's own self-cognition. This innovative module gives the robot itself a human lifestyle; as for expressions, they can be changed according to the location scene.
According to another example, the multi-modal signal includes at least an image signal, and the step of generating robot interaction content according to the multi-modal signal and the user intent, in combination with the current robot variable parameters, specifically comprises:
generating the robot interaction content according to the image signal and the user intent, in combination with the current robot variable parameters. The multi-modal signal includes at least an image signal, so that the robot can grasp the user's intent; to understand the user's intent better, other signals such as a voice signal or a gesture signal are generally added, so that the robot can determine more accurately whether the user is expressing a genuine meaning or joking and probing.
According to another example, the multi-modal signal includes at least a voice signal, and the step of generating robot interaction content according to the multi-modal signal and the user intent, in combination with the current robot variable parameters, specifically comprises:
generating the robot interaction content according to the voice signal and the user intent, in combination with the current robot variable parameters.
According to another example, the multi-modal signal includes at least a gesture signal, and the step of generating robot interaction content according to the multi-modal signal and the user intent, in combination with the current robot variable parameters, specifically comprises:
generating the robot interaction content according to the gesture signal and the user intent, in combination with the current robot variable parameters.
For example, the robot has been singing continuously for a while, and the user then tells the robot by voice to continue singing. If the picture signal shows that the user looks serious, the robot will reply, "I'm too tired, let me take a break", with a tired face. If the picture signal shows that the user looks happy, the robot will reply, "Master, let me take a break first and then sing for you", with a happy face. Different replies can thus be generated depending on the multi-modal signals. In general, the voice signal and the picture signal are enough to understand the user's meaning fairly accurately, and the reply to the user is correspondingly more accurate. Of course, adding other signals, such as gesture signals and video signals, makes this even more accurate.
Embodiment 2
As shown in FIG. 2, this embodiment discloses a system for generating robot interaction content of the present invention, comprising:
an acquisition module 201, configured to acquire a multi-modal signal;
an intent identification module 202, configured to determine a user intent according to the multi-modal signal;
a content generation module 203, configured to generate robot interaction content according to the multi-modal signal and the user intent, in combination with the current robot variable parameters.
In this way, multi-modal signals such as image signals and voice signals can be combined with the robot variable parameters to generate robot interaction content more accurately, so that the robot interacts and communicates with people more accurately and anthropomorphically. The variable parameters are parameters that the user actively controls during human-computer interaction, for example controlling the robot to exercise or controlling the robot to communicate. The present invention adds robot variable parameters to the robot's interactive content generation, so that the robot can generate interactive content on the basis of earlier variable parameters; for example, when the variable parameter records that the robot has already been exercising for an hour, and a command such as cleaning is sent to the robot again, the robot will say that it is tired and refuse to clean. This makes the robot more anthropomorphic when interacting with people and gives it a human lifestyle on the life time axis; the method can enhance the anthropomorphism of robot interactive content generation, enhance the human-computer interaction experience, and improve intelligence. The multi-modal signal is generally a combination of several signals, for example a picture signal plus a voice signal, or a picture signal plus a voice signal plus a gesture signal.
The multi-modal information in this embodiment may be one or more of user expressions, voice information, gesture information, scene information, image information, video information, face information, pupil and iris information, light-sensing information, fingerprint information, and the like. In this embodiment, a picture signal plus a voice signal plus a gesture signal is preferred, since this makes recognition accurate and efficient.
For example, the variable parameter can be something the robot has done in a preset period of time; for instance, if the robot has been interacting and talking with the user for an hour in the previous time period, then when the user expresses through the multi-modal signal the intent to continue the conversation, the robot can say "I am tired and need to take a break", accompanied by tired state content, such as expressions. If the multi-modal signal shows that the user is joking, the robot can say "don't tease me", with a happy expression. The multi-modal signal is generally a combination of several signals, for example a picture signal plus a voice signal, or a picture signal plus a voice signal plus a gesture signal.
According to one example, the system includes a time-axis-based artificial intelligence cloud processing module, configured to:
fit the robot's self-cognition parameters to the scene parameters in the variable parameters to generate the robot variable parameters.
In this way, through scenes of the robot combined with the variable parameters, the robot's own self-cognition is extended, and the parameters in the self-cognition are fitted to the parameters of the scenes used in the variable parameter axis, producing an anthropomorphic influence.
According to one example, the variable parameters include at least the user's original behavior and the behavior after a change, as well as parameter values representing the user's original behavior and the behavior after the change. Under the original plan, the user is in one state; a sudden change puts the user in another state. The variable parameter represents this change of behavior or state, as well as the user's state or behavior after the change. For example, the user was originally running at 5 pm, but something else suddenly comes up, such as going to play ball; the change from running to playing ball is then a variable parameter, and the probability of such a change is also studied.
According to another example, the time-axis-based artificial intelligence cloud processing module is further configured to: generate the robot interaction content according to the user intent and the multi-modal signal, in combination with the current robot variable parameters and the fitted curve of parameter change probability. In this way the fitted curve can be generated by probability training on the variable parameters, and the robot interaction content is then generated from it.
According to another example, the time-axis-based artificial intelligence cloud processing module is further configured to: use a probability algorithm to calculate the probability of each parameter change of the robot on the life time axis after the time-axis scene parameters have changed, so as to form the fitted curve.
Through scenes of the robot combined with the variable parameters, the robot's own self-cognition is extended, and the parameters in the self-cognition are fitted to the parameters of the scenes used in the variable parameter axis, producing an anthropomorphic influence. At the same time, with recognition of the location scene added, the robot knows its geographical position and changes the way the interaction content is generated according to the geographical environment in which it is located. In addition, we use a Bayesian probability algorithm, estimating the robot's parameters with a Bayesian network, and calculate the probability of each parameter change after the robot's own time-axis scene parameters on the life time axis have changed, forming a fitted curve that dynamically influences the robot's own self-cognition. This innovative module gives the robot itself a human lifestyle; as for expressions, they can be changed according to the location scene.
According to another example, the multi-modal signal includes at least an image signal, and the expression generation module is specifically configured to: generate the robot interaction content according to the image signal and the user intent, in combination with the current robot variable parameters.
The multi-modal signal includes at least an image signal, so that the robot can grasp the user's intent; to understand the user's intent better, other signals such as a voice signal or a gesture signal are generally added, so that the robot can determine more accurately whether the user is expressing a genuine meaning or joking and probing.
According to another example, the multi-modal signal includes at least a voice signal, and the expression generation module is specifically configured to: generate the robot interaction content according to the voice signal and the user intent, in combination with the current robot variable parameters.
According to another example, the multi-modal signal includes at least a gesture signal, and the expression generation module is specifically configured to: generate the robot interaction content according to the gesture signal and the user intent, in combination with the current robot variable parameters.
For example, the robot has been singing continuously for a while, and the user then tells the robot by voice to continue singing. If the picture signal shows that the user looks serious, the robot will reply, "I'm too tired, let me take a break", with a tired face. If the picture signal shows that the user looks happy, the robot will reply, "Master, let me take a break first and then sing for you", with a happy face. Different replies can thus be generated depending on the multi-modal signals. In general, the voice signal and the picture signal are enough to understand the user's meaning fairly accurately, and the reply to the user is correspondingly more accurate.
The present invention discloses a robot, comprising a system for generating robot interaction content as described in any of the above.
The foregoing is a further detailed description of the present invention in combination with specific preferred embodiments, and the specific implementation of the present invention cannot be considered to be limited to these descriptions. For a person of ordinary skill in the technical field to which the present invention belongs, several simple deductions or substitutions can also be made without departing from the concept of the present invention, and all of these should be regarded as falling within the protection scope of the present invention.
Claims (17)
- A method for generating robot interaction content, characterized by comprising: acquiring a multi-modal signal; determining a user intent according to the multi-modal signal; and generating robot interaction content according to the multi-modal signal and the user intent, in combination with the current robot variable parameters.
- The generation method according to claim 1, characterized in that the method for generating the robot variable parameters comprises: fitting the robot's self-cognition parameters to the scene parameters in the variable parameters to generate the robot variable parameters.
- The generation method according to claim 2, characterized in that the variable parameters include at least the user's original behavior and the behavior after a change, as well as parameter values representing the user's original behavior and the behavior after the change.
- The generation method according to claim 1, characterized in that the step of generating robot interaction content according to the multi-modal signal and the user intent, in combination with the current robot variable parameters, further comprises: generating the robot interaction content according to the user intent and the multi-modal signal, in combination with the current robot variable parameters and a fitted curve of parameter change probability.
- The generation method according to claim 4, characterized in that the method for generating the fitted curve of parameter change probability comprises: using a probability algorithm to make a network-based probability estimate of the robot's parameters, and calculating, after the scene parameters of the robot on the life time axis have changed, the probability of each parameter change, so as to form the fitted curve of parameter change probability.
- The generation method according to claim 1, characterized in that the multi-modal signal includes at least an image signal, and the step of generating robot interaction content according to the multi-modal signal and the user intent, in combination with the current robot variable parameters, specifically comprises: generating the robot interaction content according to the image signal and the user intent, in combination with the current robot variable parameters.
- The generation method according to claim 1, characterized in that the multi-modal signal includes at least a voice signal, and the step of generating robot interaction content according to the multi-modal signal and the user intent, in combination with the current robot variable parameters, specifically comprises: generating the robot interaction content according to the voice signal and the user intent, in combination with the current robot variable parameters.
- The generation method according to claim 1, characterized in that the multi-modal signal includes at least a gesture signal, and the step of generating robot interaction content according to the multi-modal signal and the user intent, in combination with the current robot variable parameters, specifically comprises: generating the robot interaction content according to the gesture signal and the user intent, in combination with the current robot variable parameters.
- A system for generating robot interaction content, characterized by comprising: an acquisition module, configured to acquire a multi-modal signal; an intent identification module, configured to determine a user intent according to the multi-modal signal; and a content generation module, configured to generate robot interaction content according to the multi-modal signal and the user intent, in combination with the current robot variable parameters.
- The generation system according to claim 9, characterized in that the system includes a time-axis-based artificial intelligence cloud processing module, configured to: fit the robot's self-cognition parameters to the scene parameters in the variable parameters to generate the robot variable parameters.
- The generation system according to claim 10, characterized in that the variable parameters include at least the user's original behavior and the behavior after a change, as well as parameter values representing the user's original behavior and the behavior after the change.
- The generation system according to claim 9, characterized in that the time-axis-based artificial intelligence cloud processing module is further configured to: generate the robot interaction content according to the user intent and the multi-modal signal, in combination with the current robot variable parameters and the fitted curve of parameter change probability.
- The generation system according to claim 12, characterized in that the time-axis-based artificial intelligence cloud processing module is further configured to: use a probability algorithm to calculate the probability of each parameter change of the robot on the life time axis after the time-axis scene parameters have changed, so as to form the fitted curve.
- The generation system according to claim 9, characterized in that the multi-modal signal includes at least an image signal, and the content generation module is specifically configured to: generate the robot interaction content according to the image signal and the user intent, in combination with the current robot variable parameters.
- The generation system according to claim 9, characterized in that the multi-modal signal includes at least a voice signal, and the content generation module is specifically configured to: generate the robot interaction content according to the voice signal and the user intent, in combination with the current robot variable parameters.
- The generation system according to claim 9, characterized in that the multi-modal signal includes at least a gesture signal, and the content generation module is specifically configured to: generate the robot interaction content according to the gesture signal and the user intent, in combination with the current robot variable parameters.
- A robot, characterized by comprising a system for generating robot interaction content according to any one of claims 9 to 16.
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201680001745.7A CN106462255A (zh) | 2016-06-29 | 2016-06-29 | Method and system for generating robot interaction content, and robot |
| PCT/CN2016/087752 WO2018000267A1 (zh) | 2016-06-29 | 2016-06-29 | Method and system for generating robot interaction content, and robot |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/CN2016/087752 WO2018000267A1 (zh) | 2016-06-29 | 2016-06-29 | Method and system for generating robot interaction content, and robot |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2018000267A1 true WO2018000267A1 (zh) | 2018-01-04 |
Family
ID=58215718
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2016/087752 Ceased WO2018000267A1 (zh) | 2016-06-29 | 2016-06-29 | 一种机器人交互内容的生成方法、系统及机器人 |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN106462255A (zh) |
| WO (1) | WO2018000267A1 (zh) |
Families Citing this family (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN107030691B (zh) * | 2017-03-24 | 2020-04-14 | 华为技术有限公司 | Data processing method and apparatus for a care robot |
| CN107564522A (zh) * | 2017-09-18 | 2018-01-09 | 郑州云海信息技术有限公司 | Intelligent control method and apparatus |
| CN108363492B (zh) * | 2018-03-09 | 2021-06-25 | 南京阿凡达机器人科技有限公司 | Human-computer interaction method and interactive robot |
| CN110154048B (zh) * | 2019-02-21 | 2020-12-18 | 北京格元智博科技有限公司 | Robot control method, control apparatus and robot |
| CN110228065A (zh) * | 2019-04-29 | 2019-09-13 | 北京云迹科技有限公司 | Robot motion control method and apparatus |
| CN112775991B (zh) * | 2021-02-10 | 2021-09-07 | 溪作智能(深圳)有限公司 | Head mechanism of a robot, robot, and robot control method |
| CN113450436B (zh) * | 2021-06-28 | 2022-04-15 | 武汉理工大学 | Face animation generation method and system based on multi-modal correlation |
| CN116756285B (zh) * | 2023-06-20 | 2025-11-04 | 北京花房科技有限公司 | Interaction method, device and storage medium for virtual robot |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN102103707A (zh) * | 2009-12-16 | 2011-06-22 | 群联电子股份有限公司 | Emotion engine, emotion engine system, and control method for an electronic device |
| CN103294725A (zh) * | 2012-03-03 | 2013-09-11 | 李辉 | Intelligent answering robot software |
| CN105511608A (zh) * | 2015-11-30 | 2016-04-20 | 北京光年无限科技有限公司 | Interaction method and apparatus based on an intelligent robot, and intelligent robot |
Family Cites Families (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP1956528B1 (en) * | 2007-02-08 | 2018-10-03 | Samsung Electronics Co., Ltd. | Apparatus and method for expressing behavior of software robot |
| CN104951077A (zh) * | 2015-06-24 | 2015-09-30 | 百度在线网络技术(北京)有限公司 | Artificial-intelligence-based human-computer interaction method, apparatus and terminal device |
- 2016
- 2016-06-29 CN CN201680001745.7A patent/CN106462255A/zh active Pending
- 2016-06-29 WO PCT/CN2016/087752 patent/WO2018000267A1/zh not_active Ceased
Patent Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN102103707A (zh) * | 2009-12-16 | 2011-06-22 | 群联电子股份有限公司 | Emotion engine, emotion engine system, and control method for an electronic device |
| CN103294725A (zh) * | 2012-03-03 | 2013-09-11 | 李辉 | Intelligent answering robot software |
| CN105511608A (zh) * | 2015-11-30 | 2016-04-20 | 北京光年无限科技有限公司 | Interaction method and apparatus based on an intelligent robot, and intelligent robot |
Also Published As
| Publication number | Publication date |
|---|---|
| CN106462255A (zh) | 2017-02-22 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2018000267A1 (zh) | 一种机器人交互内容的生成方法、系统及机器人 | |
| US11024294B2 (en) | System and method for dialogue management | |
| CN106956271B (zh) | 预测情感状态的方法和机器人 | |
| CN107030691B (zh) | 一种看护机器人的数据处理方法及装置 | |
| US20190206402A1 (en) | System and Method for Artificial Intelligence Driven Automated Companion | |
| US11003860B2 (en) | System and method for learning preferences in dialogue personalization | |
| WO2018000259A1 (zh) | 一种机器人交互内容的生成方法、系统及机器人 | |
| CN112204565B (zh) | 用于基于视觉背景无关语法模型推断场景的系统和方法 | |
| WO2018000268A1 (zh) | 一种机器人交互内容的生成方法、系统及机器人 | |
| WO2018006370A1 (zh) | 一种虚拟3d机器人的交互方法、系统及机器人 | |
| WO2018006374A1 (zh) | 一种基于主动唤醒的功能推荐方法、系统及机器人 | |
| CN106471572B (zh) | 一种同步语音及虚拟动作的方法、系统及机器人 | |
| CN112204563A (zh) | 用于基于用户通信的视觉场景构建的系统和方法 | |
| WO2018006371A1 (zh) | 一种同步语音及虚拟动作的方法、系统及机器人 | |
| Papaioannou et al. | Hybrid chat and task dialogue for more engaging hri using reinforcement learning | |
| CN114303151B (zh) | 经由使用组合神经网络的场景建模进行自适应对话的系统和方法 | |
| WO2018000266A1 (zh) | 一种机器人交互内容的生成方法、系统及机器人 | |
| De Simone et al. | Empowering human interaction: A socially assistive robot for support in trade shows | |
| WO2018000260A1 (zh) | 一种机器人交互内容的生成方法、系统及机器人 | |
| WO2018000258A1 (zh) | 一种机器人交互内容的生成方法、系统及机器人 | |
| WO2018000261A1 (zh) | 一种机器人交互内容的生成方法、系统及机器人 | |
| Zhen-Tao et al. | Communication atmosphere in humans and robots interaction based on the concept of fuzzy atmosfield generated by emotional states of humans and robots | |
| JP7548970B2 (ja) | ドッペルゲンガー遠隔ロボットシステム | |
| Babu et al. | Marve: a prototype virtual human interface framework for studying human-virtual human interaction | |
| Rossi et al. | A ros architecture for personalised hri with a bartender social robot |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 16906668; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 16906668; Country of ref document: EP; Kind code of ref document: A1 |