
WO2018000266A1 - Method and system for generating robot interaction content, and robot - Google Patents

Method and system for generating robot interaction content, and robot

Info

Publication number
WO2018000266A1
WO2018000266A1 PCT/CN2016/087751 CN2016087751W WO2018000266A1 WO 2018000266 A1 WO2018000266 A1 WO 2018000266A1 CN 2016087751 W CN2016087751 W CN 2016087751W WO 2018000266 A1 WO2018000266 A1 WO 2018000266A1
Authority
WO
WIPO (PCT)
Prior art keywords
robot
information
parameter
user
generating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2016/087751
Other languages
English (en)
French (fr)
Inventor
邱楠
杨新宇
王昊奋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Gowild Robotics Co Ltd
Original Assignee
Shenzhen Gowild Robotics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Gowild Robotics Co Ltd filed Critical Shenzhen Gowild Robotics Co Ltd
Priority to PCT/CN2016/087751 priority Critical patent/WO2018000266A1/zh
Priority to CN201680001751.2A priority patent/CN106462804A/zh
Publication of WO2018000266A1 publication Critical patent/WO2018000266A1/zh
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00 - Computing arrangements using knowledge-based models
    • G06N5/02 - Knowledge representation; Symbolic representation
    • G06N5/022 - Knowledge engineering; Knowledge acquisition
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 - Gesture based interaction, e.g. based on a set of recognized hand gestures

Definitions

  • the invention relates to the field of robot interaction technology, and in particular to a method, a system and a robot for generating robot interactive content.
  • for application scenarios, the robot generally relies on question-and-answer interaction within a fixed scene.
  • a person's life scenes over the timeline of a day, such as eating, sleeping and exercising, change in value, and these changes affect the feedback of human expressions.
  • the scene a person is in also affects that person's expression, for example: excited in a billiard room, happy at home.
  • for a robot, expression feedback is currently produced mainly through pre-designed methods and deep-learning training corpora, and this kind of pre-designed, corpus-trained feedback has the following shortcoming.
  • the output of an expression depends on the human's text input; that is, much like a question-and-answer machine, different user utterances trigger different expressions.
  • in this case the robot still outputs expressions according to interaction patterns pre-designed by humans.
  • as a result the robot cannot be more anthropomorphic: it cannot, like a human, show different expressions in different variable-parameter life scenes and location scenes.
  • the generation of robot interaction content is therefore entirely passive, so generating expressions requires a great deal of human-computer interaction, which makes the robot's intelligence poor.
  • the object of the present invention is to provide a method and system for generating robot interaction content, and a robot, so that the robot itself has a human lifestyle within actively interacted variable parameters, thereby enhancing the anthropomorphism of robot interaction content generation, improving the human-computer interaction experience, and improving intelligence.
  • a method for generating robot interaction content, comprising: acquiring user information and determining a user intent according to the user information; acquiring location scene information;
  • the robot interaction content is generated according to the user intent and the location scene information, in combination with the current robot variable parameters.
  • the method for generating the robot variable parameters includes:
  • fitting the robot's self-cognition parameters to the scene parameters in the variable parameters, to generate the robot variable parameters.
  • the variable parameters include at least the user's original behavior and the behavior after the change, as well as parameter values representing the user's original behavior and the behavior after the change.
  • the step of generating the robot interaction content according to the user intent and the location scene information in combination with the current robot variable parameters further comprises: generating the robot interaction content according to the user intent and the location scene information, in combination with the current robot variable parameters and a fitted curve of parameter change probability.
  • the method for generating the fitted curve of parameter change probability comprises: using a probability algorithm to make probability estimates of the parameters between robots with a network, and calculating the probability of each parameter change after the scene parameters of the robot on the life timeline have changed, so as to form the fitted curve of parameter change probability.
  • the step of acquiring location scene information specifically includes: acquiring location scene information by using video information.
  • the step of acquiring location scene information specifically includes: acquiring location scene information by using picture information.
  • the step of acquiring location scene information specifically includes: acquiring location scene information by using gesture information.
  • the invention discloses a system for generating robot interactive content, comprising:
  • An intention identification module configured to acquire user information, and determine a user intention according to the user information
  • a scene recognition module configured to acquire location scene information
  • the content generating module is configured to generate the robot interaction content according to the user intent and the location scenario information, in combination with the current robot variable parameter.
  • the system includes an artificial intelligence cloud processing module, configured to: fit a self-cognitive parameter of the robot with a parameter of the scene in the variable parameter to generate a robot variable parameter.
  • the variable parameters include at least the user's original behavior and the behavior after the change, as well as parameter values representing the user's original behavior and the behavior after the change.
  • the content generating module is further configured to: generate the robot interaction content according to the user intent and the location scene information, in combination with the current robot variable parameter and the fitting curve of the parameter change probability.
  • the system includes a fitted curve generation module, configured to: use a probability algorithm to make probability estimates of the parameters between robots with a network, and calculate the probability of each parameter change after the scene parameters of the robot on the life timeline have changed, so as to form the fitted curve of parameter change probability.
  • the scene recognition module is specifically configured to acquire video information.
  • the scene recognition module is specifically configured to acquire picture information.
  • the scene recognition module is specifically configured to acquire gesture information.
  • the invention discloses a robot comprising a system for generating interactive content of a robot as described above.
  • for application scenarios, existing robots generally generate interaction content through question-and-answer interaction in fixed scenes, and cannot generate the robot's expressions more accurately on the basis of the current scene.
  • the generation method of the present invention comprises: acquiring user information and determining a user intent according to the user information; acquiring location scene information; and generating robot interaction content according to the user intent and the location scene information, in combination with the current robot variable parameters. In this way, the robot interaction content can be generated more accurately on the basis of the current location scene information combined with the robot variable parameters, so that the robot interacts and communicates with people more accurately and more anthropomorphically.
  • variable parameters are: parameters that the user actively controls during the human-computer interaction process, for example, controlling the robot to perform motion, controlling the robot to perform communication, and the like.
  • the invention adds robot variable parameters to the generation of the robot's interaction content, so that the robot can generate interaction content on the basis of the preceding variable parameters; for example, when the variable parameter records that the robot has already been exercising for an hour and the user again sends a command such as cleaning, the robot will say that it is tired and refuse to clean.
  • in this way the robot is more anthropomorphic when interacting with humans, and the robot has a human lifestyle within its life timeline.
  • This method can enhance the anthropomorphicity of the robot interactive content generation, enhance the human-computer interaction experience, and improve the intelligence.
  • FIG. 1 is a flowchart of a method for generating interactive content of a robot according to Embodiment 1 of the present invention
  • FIG. 2 is a schematic diagram of a system for generating interactive content of a robot according to a second embodiment of the present invention.
  • Computer devices include user devices and network devices.
  • the user equipment or the client includes but is not limited to a computer, a smart phone, a PDA, etc.;
  • the network device includes but is not limited to a single network server, a server group composed of multiple network servers, or a cloud computing-based computer or network server. cloud.
  • the computer device can operate alone to carry out the invention, and can also access the network and implement the invention through interoperation with other computer devices in the network.
  • the network in which the computer device is located includes, but is not limited to, the Internet, a wide area network, a metropolitan area network, a local area network, a VPN network, and the like.
  • the terms "first", "second" and the like may be used herein to describe various units, but these units should not be limited by these terms; the terms are used only to distinguish one unit from another.
  • the term “and/or” used herein includes any and all combinations of one or more of the associated listed items. When a unit is referred to as being “connected” or “coupled” to another unit, it can be directly connected or coupled to the other unit, or an intermediate unit can be present.
  • a method for generating interactive content of a robot including:
  • for application scenarios, existing robots generally generate interaction content through question-and-answer interaction in fixed scenes, and cannot generate the robot's expressions more accurately on the basis of the current scene.
  • the generation method of the present invention comprises: acquiring user information and determining a user intent according to the user information; acquiring location scene information; and generating robot interaction content according to the user intent and the location scene information, in combination with the current robot variable parameters.
  • in this way the robot interaction content can be generated more accurately on the basis of the current location scene information combined with the robot variable parameters, so that the robot interacts and communicates with people more accurately and more anthropomorphically.
  • variable parameters are: parameters that the user actively controls during the human-computer interaction process, for example, controlling the robot to perform motion, controlling the robot to perform communication, and the like.
  • the invention adds robot variable parameters to the generation of the robot's interaction content, so that the robot can generate interaction content on the basis of the preceding variable parameters; for example, when the variable parameter records that the robot has already been exercising for an hour and the user again sends a command such as cleaning, the robot will say that it is tired and refuse to clean. In this way the robot is more anthropomorphic when interacting with humans, and the robot has a human lifestyle within its life timeline.
  • the robot variable parameters 300 are fitted and set in advance. Specifically, the robot variable parameters 300 are a collection of parameters, and this collection is passed to the system to generate the interaction content.
  • the user information in this embodiment may be one or more of user expression, voice information, gesture information, scene information, image information, video information, face information, pupil iris information, light sense information, and fingerprint information.
  • the user's expression is preferred, so that the recognition is accurate and the recognition efficiency is high.
  • the variable parameters specifically are sudden changes occurring between the person and the machine; for example, a day on the timeline consists of eating, sleeping, interacting, running, eating and sleeping. In this case, if the robot's scene is suddenly changed, for instance the robot is taken to the beach during the running period, these parameters actively imposed by humans on the robot serve as variable parameters, and such changes cause the robot's self-cognition to change.
  • the life timeline and the variable parameters can modify attributes within the self-cognition, such as the mood value and the fatigue value, and can also automatically add new self-cognition information; for example, if there was no anger value before, a scene based on the life timeline and the variable factor is automatically added to the robot's self-cognition according to scenes that previously simulated human self-cognition.
  • for example, according to the life timeline, 12 noon should be mealtime; if this scene is changed, for instance the robot goes out shopping at 12 noon, the robot writes this as one of its variable parameters.
  • when the user interacts with the robot during this time period, the robot generates interaction content on the basis of going out shopping at 12 noon, rather than on the basis of the earlier eating at 12 noon.
  • when specifically generating the interaction content, the robot combines the acquired user information, such as voice information, video information and picture information, with the variable parameters. In this way, unexpected events in human life can be added to the robot's life axis, making the robot's interaction more anthropomorphic.
  • the method for generating the variable parameter of the robot includes:
  • the robot's self-cognitive parameters are fitted to the parameters of the scene in the variable parameters to generate robotic variable parameters.
  • by placing the robot in scenes combined with the variable parameters, the robot's own self-cognition is extended, and the parameters in the self-cognition are fitted to the parameters of the scenes used on the variable parameter axis, producing an anthropomorphic influence.
  • the variable parameters include at least the user's original behavior and the behavior after the change, as well as parameter values representing the user's original behavior and the behavior after the change.
  • according to the original plan, the user is in one state; a sudden change puts the user into another state.
  • the variable parameter represents this change of behavior or state, as well as the user's state or behavior after the change. For example, the user was originally going running at 5 pm, but something else suddenly comes up, such as going to play ball; the change from running to playing ball is then a variable parameter, and the probability of such a change is also studied.
  • the step of generating the robot interaction content according to the user intent and the location scene information in combination with the current robot variable parameters further comprises: generating the robot interaction content according to the user intent and the location scene information, in combination with the current robot variable parameters and a fitted curve of parameter change probability.
  • in this way the fitted curve can be generated by probability training on the variable parameters, and the robot interaction content is then generated from it.
  • the method for generating the fitted curve of parameter change probability comprises: using a probability algorithm to make probability estimates of the parameters between robots with a network, and calculating the probability of each parameter change after the scene parameters of the robot on the life timeline have changed, so as to form the fitted curve of parameter change probability.
  • the probability algorithm may be a Bayesian probability algorithm.
  • by placing the robot in scenes combined with the variable parameters, the robot's own self-cognition is extended, and the parameters in the self-cognition are fitted to the parameters of the scenes used on the variable parameter axis, producing an anthropomorphic influence.
  • at the same time, with the recognition of the location scene, the robot knows its geographic location and changes the way interaction content is generated according to the geographic environment it is in.
  • in addition, a Bayesian probability algorithm is used to make probability estimates of the parameters between robots with a Bayesian network, and the probability of each parameter change after the robot's own timeline scene parameters on the life timeline have changed is calculated, forming a fitted curve.
  • the fitted curve dynamically influences the robot's own self-cognition.
  • this innovative module gives the robot itself a human lifestyle; as for expressions, the robot can change its expressions according to the location scene it is in.
  • the step of acquiring location scene information specifically includes: acquiring location scene information by using video information.
  • location scene information can be obtained through video, and the video acquisition is more accurate.
  • the step of acquiring location scene information specifically includes: acquiring location scene information by using picture information.
  • acquisition from pictures reduces the robot's computation load and makes the robot respond more quickly.
  • the step of acquiring location scene information specifically includes: acquiring location scene information by using gesture information.
  • acquisition from gestures broadens the range of situations in which the robot can be used; for example, disabled users, or an owner who sometimes does not want to talk, can convey information to the robot through gestures.
  • a system for generating interactive content of a robot includes:
  • the intent identification module 201 is configured to acquire user information, and determine a user intention according to the user information;
  • the scene recognition module 202 is configured to acquire location scene information.
  • the content generation module 203 is configured to generate the robot interaction content according to the user intent and the location scene information, in combination with the current robot variable parameters sent by the robot variable parameters 301.
  • in this way the robot interaction content can be generated more accurately on the basis of the current location scene information combined with the robot variable parameters, so that the robot interacts and communicates with people more accurately and more anthropomorphically.
  • the variable parameters are: parameters that the user actively controls during the human-computer interaction process, for example, controlling the robot to perform motion, controlling the robot to perform communication, and the like.
  • the invention adds robot variable parameters to the generation of the robot's interaction content, so that the robot can generate interaction content on the basis of the preceding variable parameters; for example, when the variable parameter records that the robot has already been exercising for an hour and the user again sends a command such as cleaning, the robot will say that it is tired and refuse to clean.
  • this method can enhance the anthropomorphicity of robot interactive content generation, enhance the human-computer interaction experience, and improve intelligence.
  • the variable parameter can be something that the robot has done within a preset time period, for instance the robot has been interacting and talking with the user for an hour during the previous time period. If the user then asks the robot to keep talking and the current location is a room, the robot can say that it is tired and needs a rest, accompanied by tired-state content such as a tired expression; if the location is outdoors, the robot can instead say that it wants to go out for a walk, accompanied by a happy expression.
  • the system includes an artificial intelligence cloud processing module for: fitting a self-cognitive parameter of the robot to a parameter of the scene in the variable parameter to generate a robot variable parameter.
  • the variable parameters include at least the user's original behavior and the behavior after the change, as well as parameter values representing the user's original behavior and the behavior after the change.
  • according to the original plan, the user is in one state; a sudden change puts the user into another state.
  • the variable parameter represents this change of behavior or state, as well as the user's state or behavior after the change. For example, the user was originally going running at 5 pm, but something else suddenly comes up, such as going to play ball; the change from running to playing ball is then a variable parameter, and the probability of such a change is also studied.
  • the content generation module is further configured to: generate the robot interaction content according to the current robot variable parameter and the fitting curve of the parameter change probability according to the user intention and the location scene information.
  • the fitting curve can be generated by the probability training of the variable parameters, thereby generating the robot interaction content.
  • the system includes a fitted curve generation module, configured to: use a probability algorithm to make probability estimates of the parameters between robots with a network, and calculate the probability of each parameter change after the scene parameters of the robot on the life timeline have changed, so as to form the fitted curve of parameter change probability.
  • the probability algorithm can adopt the Bayesian probability algorithm.
  • by placing the robot in scenes combined with the variable parameters, the robot's own self-cognition is extended, and the parameters in the self-cognition are fitted to the parameters of the scenes used on the variable parameter axis, producing an anthropomorphic influence.
  • at the same time, with the recognition of the location scene, the robot knows its geographic location and changes the way interaction content is generated according to the geographic environment it is in.
  • in addition, a Bayesian probability algorithm is used to make probability estimates of the parameters between robots with a Bayesian network, and the probability of each parameter change after the robot's own timeline scene parameters on the life timeline have changed is calculated, forming a fitted curve.
  • the fitted curve dynamically influences the robot's own self-cognition.
  • this innovative module gives the robot itself a human lifestyle; as for expressions, the robot can change its expressions according to the location scene it is in.
  • the scene recognition module is specifically configured to acquire video information.
  • Such location scene information can be obtained through video, and the video acquisition is more accurate.
  • the scene recognition module is specifically configured to acquire picture information.
  • acquisition from pictures reduces the robot's computation load and makes the robot respond more quickly.
  • the scene recognition module is specifically configured to acquire gesture information.
  • acquisition from gestures broadens the range of situations in which the robot can be used; for example, disabled users, or an owner who sometimes does not want to talk, can convey information to the robot through gestures.
  • the invention discloses a robot comprising a system for generating interactive content of a robot as described above.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Manipulator (AREA)

Abstract

A method for generating robot interaction content, comprising: acquiring user information and determining a user intent according to the user information (S101); acquiring location scene information (S102); and generating robot interaction content according to the user intent and the location scene information, in combination with current robot variable parameters (S103). Robot variable parameters are added to the generation of the robot's interaction content, so that the robot can generate interaction content on the basis of the preceding variable parameters. This makes the robot more anthropomorphic when interacting with people and gives the robot a human lifestyle within its life timeline. The method can enhance the anthropomorphism of robot interaction content generation, improve the human-computer interaction experience, and improve intelligence.

Description

Method and system for generating robot interaction content, and robot
Technical Field
The present invention relates to the field of robot interaction technology, and in particular to a method and system for generating robot interaction content, and to a robot.
Background Art
For application scenarios, a robot generally relies on question-and-answer interaction within a fixed scene. A person's life scenes over the timeline of a day, such as eating, sleeping and exercising, change in value, and these changes affect the feedback of human expressions; the scene a person is in further affects that person's expression, for example: excited in a billiard room, happy at home. For a robot, expression feedback is currently obtained mainly through pre-designed methods and deep-learning training corpora, and scene-related question-and-answer feedback is produced mainly through pre-designed programs and corpus-trained expressions. This kind of feedback has the following drawback: the output of expressions depends on the human's text input, that is, the robot resembles a question-and-answer machine in which different user utterances trigger different expressions. In this case the robot still outputs expressions according to interaction patterns pre-designed by humans, so the robot cannot be more anthropomorphic and cannot, like a human, show different expressions in different variable-parameter life scenes and location scenes; in other words, the generation of robot interaction content is entirely passive, generating expressions therefore requires a great deal of human-computer interaction, and the robot's intelligence is poor.
Therefore, how to give the robot itself a human lifestyle within actively interacted variable parameters and enhance the anthropomorphism of robot interaction content generation has become a technical problem that urgently needs to be solved.
Summary of the Invention
The object of the present invention is to provide a method and system for generating robot interaction content, and a robot, so that the robot itself has a human lifestyle within actively interacted variable parameters, thereby enhancing the anthropomorphism of robot interaction content generation, improving the human-computer interaction experience, and improving intelligence.
The object of the present invention is achieved through the following technical solutions:
A method for generating robot interaction content, comprising:
acquiring user information, and determining a user intent according to the user information;
acquiring location scene information;
generating robot interaction content according to the user intent and the location scene information, in combination with current robot variable parameters.
Preferably, the method for generating the robot variable parameters comprises:
fitting the robot's self-cognition parameters to the scene parameters in the variable parameters to generate the robot variable parameters.
Preferably, the variable parameters include at least the user's original behavior and the behavior after the change, as well as parameter values representing the user's original behavior and the behavior after the change.
Preferably, the step of generating robot interaction content according to the user intent and the location scene information in combination with the current robot variable parameters further comprises: generating the robot interaction content according to the user intent and the location scene information, in combination with the current robot variable parameters and a fitted curve of parameter change probability.
Preferably, the method for generating the fitted curve of parameter change probability comprises: using a probability algorithm to make probability estimates of the parameters between robots with a network, and calculating the probability of each parameter change after the scene parameters of the robot on the life timeline have changed, so as to form the fitted curve of parameter change probability.
Preferably, the step of acquiring location scene information specifically comprises: acquiring the location scene information through video information.
Preferably, the step of acquiring location scene information specifically comprises: acquiring the location scene information through picture information.
Preferably, the step of acquiring location scene information specifically comprises: acquiring the location scene information through gesture information.
The present invention discloses a system for generating robot interaction content, comprising:
an intent recognition module, configured to acquire user information and determine a user intent according to the user information;
a scene recognition module, configured to acquire location scene information;
a content generation module, configured to generate robot interaction content according to the user intent and the location scene information, in combination with current robot variable parameters.
Preferably, the system comprises an artificial intelligence cloud processing module, configured to fit the robot's self-cognition parameters to the scene parameters in the variable parameters to generate the robot variable parameters.
Preferably, the variable parameters include at least the user's original behavior and the behavior after the change, as well as parameter values representing the user's original behavior and the behavior after the change.
Preferably, the content generation module is further configured to: generate the robot interaction content according to the user intent and the location scene information, in combination with the current robot variable parameters and a fitted curve of parameter change probability.
Preferably, the system comprises a fitted curve generation module, configured to: use a probability algorithm to make probability estimates of the parameters between robots with a network, and calculate the probability of each parameter change after the scene parameters of the robot on the life timeline have changed, so as to form the fitted curve of parameter change probability.
Preferably, the scene recognition module is specifically configured to acquire video information.
Preferably, the scene recognition module is specifically configured to acquire picture information.
Preferably, the scene recognition module is specifically configured to acquire gesture information.
The present invention discloses a robot, comprising a system for generating robot interaction content as described in any one of the above.
Compared with the prior art, the present invention has the following advantages: for application scenarios, existing robots generally generate interaction content through question-and-answer interaction in fixed scenes, and cannot generate the robot's expressions more accurately on the basis of the current scene. The generation method of the present invention comprises: acquiring user information and determining a user intent according to the user information; acquiring location scene information; and generating robot interaction content according to the user intent and the location scene information, in combination with the current robot variable parameters. In this way, robot interaction content can be generated more accurately on the basis of the current location scene information combined with the robot variable parameters, so that the robot interacts and communicates with people more accurately and more anthropomorphically. The variable parameters are parameters that the user actively controls during human-computer interaction, for example controlling the robot to exercise or controlling the robot to communicate. The present invention adds robot variable parameters to the generation of the robot's interaction content, so that the robot can generate interaction content on the basis of the preceding variable parameters; for example, when the variable parameter records that the robot has already been exercising for an hour and the user again sends a command such as cleaning, the robot will say that it is tired and refuse to clean. This makes the robot more anthropomorphic when interacting with people and gives the robot a human lifestyle within its life timeline. The method can enhance the anthropomorphism of robot interaction content generation, improve the human-computer interaction experience, and improve intelligence.
Brief Description of the Drawings
FIG. 1 is a flowchart of a method for generating robot interaction content according to Embodiment 1 of the present invention;
FIG. 2 is a schematic diagram of a system for generating robot interaction content according to Embodiment 2 of the present invention.
Detailed Description of the Embodiments
Although the flowchart describes the operations as sequential processing, many of the operations can be performed in parallel, concurrently or simultaneously. The order of the operations may be rearranged. Processing may be terminated when its operations are completed, but may also include additional steps not shown in the drawings. Processing may correspond to a method, a function, a procedure, a subroutine, a subprogram, and so on.
Computer devices include user equipment and network devices. The user equipment or client includes, but is not limited to, computers, smartphones, PDAs and the like; the network device includes, but is not limited to, a single network server, a server group composed of multiple network servers, or a cloud based on cloud computing and composed of a large number of computers or network servers. A computer device may operate alone to implement the present invention, or may access a network and implement the present invention through interoperation with other computer devices in the network. The network in which the computer device is located includes, but is not limited to, the Internet, wide area networks, metropolitan area networks, local area networks, VPN networks and the like.
The terms "first", "second" and the like may be used herein to describe various units, but these units should not be limited by these terms; the terms are used only to distinguish one unit from another. The term "and/or" used herein includes any and all combinations of one or more of the associated listed items. When a unit is referred to as being "connected" or "coupled" to another unit, it may be directly connected or coupled to the other unit, or intermediate units may be present.
The terminology used herein is for the purpose of describing specific embodiments only and is not intended to limit the exemplary embodiments. Unless the context clearly indicates otherwise, the singular forms "a" and "an" used herein are also intended to include the plural. It should also be understood that the terms "comprising" and/or "including" used herein specify the presence of the stated features, integers, steps, operations, units and/or components, without excluding the presence or addition of one or more other features, integers, steps, operations, units, components and/or combinations thereof.
The present invention is further described below with reference to the drawings and preferred embodiments.
Embodiment 1
As shown in FIG. 1, this embodiment discloses a method for generating robot interaction content, comprising:
S101: acquiring user information, and determining a user intent according to the user information;
S102: acquiring location scene information;
S103: generating robot interaction content according to the user intent and the location scene information, in combination with the current robot variable parameters 300.
For application scenarios, existing robots generally generate interaction content through question-and-answer interaction in fixed scenes, and cannot generate the robot's expressions more accurately on the basis of the current scene. The generation method of the present invention comprises: acquiring user information and determining a user intent according to the user information; acquiring location scene information; and generating robot interaction content according to the user intent and the location scene information, in combination with the current robot variable parameters. In this way, the robot interaction content can be generated more accurately on the basis of the current location scene information combined with the robot variable parameters, so that the robot interacts and communicates with people more accurately and more anthropomorphically. For a person, daily life has a certain regularity; the variable parameters are introduced so that the robot communicates with people in a more anthropomorphic way. The variable parameters are parameters that the user actively controls during human-computer interaction, for example controlling the robot to exercise or controlling the robot to communicate. The present invention adds robot variable parameters to the generation of the robot's interaction content, so that the robot can generate interaction content on the basis of the preceding variable parameters; for example, when the variable parameter records that the robot has already been exercising for an hour and the user again sends a command such as cleaning, the robot will say that it is tired and refuse to clean. This makes the robot more anthropomorphic when interacting with people and gives the robot a human lifestyle within its life timeline; the method can enhance the anthropomorphism of robot interaction content generation, improve the human-computer interaction experience, and improve intelligence. The interaction content may be an expression, text, speech and the like. The robot variable parameters 300 are fitted and set in advance; specifically, the robot variable parameters 300 are a collection of parameters, and this collection is passed to the system to generate the interaction content.
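The sketch below is a minimal Python illustration of steps S101 to S103; every function and field name (determine_intent, get_location_scene, RobotState, and so on) is an assumption introduced for illustration and is not part of the patent.

```python
from dataclasses import dataclass, field

@dataclass
class RobotState:
    # current robot variable parameters, e.g. {"minutes_exercised": 60}
    variable_params: dict = field(default_factory=dict)

def determine_intent(user_info: str) -> str:
    """S101: map acquired user information (text, expression, gesture...) to an intent label."""
    return "request_cleaning" if "clean" in user_info.lower() else "chat"

def get_location_scene(sensor_frame: str) -> str:
    """S102: derive location scene information from video/picture/gesture input."""
    return "living_room" if "sofa" in sensor_frame else "outdoors"

def generate_interaction(intent: str, scene: str, state: RobotState) -> str:
    """S103: combine user intent, location scene and current variable parameters."""
    if intent == "request_cleaning" and state.variable_params.get("minutes_exercised", 0) >= 60:
        return "I am tired, I do not want to clean right now."  # the tired-refusal example
    return f"Okay, I will handle '{intent}' here in the {scene}."

state = RobotState(variable_params={"minutes_exercised": 60})
print(generate_interaction(determine_intent("please clean the floor"),
                           get_location_scene("sofa in view"), state))
```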
本实施例中的用户信息可以是用户表情、语音信息、手势信息、场景信息、图像信息、视频信息、人脸信息、瞳孔虹膜信息、光感信息和指纹信息等其中的其中一种或几种。本实施例中优选为用户表情,这样识别的准确并且识别的效率高。
本实施例中,可变参数具体是:人与机器发生的突发改变,比如时间轴上的一天生活是吃饭、睡觉、交互、跑步、吃饭、睡觉。那在这个情况下,假如突然改变机器人的场景,比如在跑步的时间段带去海边等等,这些人类主动对于机器人的参数,作为可变参数,这些改变会使得机器人的自我认知产生改变。生活时间轴与可变参数可以对自我认知中的属性,例如心情值,疲劳值等等的更改,也可以自动加入新的自我认知信息,比如之前没有愤怒值,基于生活时间轴和可变因素的场景就会自动根据之前模拟人类自我认知的场景,从而对机器人的自我认知进行添加。
例如,按照生活时间轴,在中午12点的时候应该是吃饭的时间,而如果改变了这个场景,比如在中午12点的时候出去逛街了,那么机器人就会将这个作为其中的一个可变参数进行写入,在这个时间段内用户与机器人 交互时,机器人就会结合到中午12点出去逛街进行生成交互内容,而不是以之前的中午12点在吃饭进行结合生成交互内容,在具体生成交互内容时,机器人就会结合获取的用户信息,例如语音信息、视屏信息、图片信息等和可变参数进行生成。这样就可以加入一些人类生活中的突发事件在机器人的生活轴中,让机器人的交互更加拟人化。
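As a rough illustration of the 12-noon example above, the snippet below shows a planned life timeline being overridden by a written-in variable parameter; the schedule contents and names are assumed for illustration only.

```python
# planned life timeline for one day (hour -> scene), plus a sudden override
default_timeline = {7: "eating", 9: "interacting", 12: "eating", 17: "running", 22: "sleeping"}
variable_overrides = {12: "shopping"}  # written in when the robot was taken out at noon

def current_scene(hour: int) -> str:
    # the override recorded as a variable parameter wins over the planned timeline
    return variable_overrides.get(hour, default_timeline.get(hour, "idle"))

print(current_scene(12))  # -> "shopping": replies now reference shopping, not eating
```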
According to one example, the method for generating the robot variable parameters comprises:
fitting the robot's self-cognition parameters to the scene parameters in the variable parameters to generate the robot variable parameters. In this way, by placing the robot in scenes combined with the variable parameters, the robot's own self-cognition is extended, and the parameters in the self-cognition are fitted to the parameters of the scenes used on the variable parameter axis, producing an anthropomorphic influence.
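The following is a hypothetical sketch of such a fitting step; the patent does not specify a fitting formula, so the weighted blend used here is purely an assumption.

```python
def fit_self_cognition(self_cognition: dict, scene_params: dict, weight: float = 0.3) -> dict:
    """Blend the robot's self-cognition attributes toward the scene parameters
    carried by a variable parameter (the weighted average is an assumed rule)."""
    fitted = dict(self_cognition)
    for key, scene_value in scene_params.items():
        if key in fitted:
            # existing attribute (e.g. mood, fatigue) drifts toward the scene value
            fitted[key] = (1 - weight) * fitted[key] + weight * scene_value
        else:
            # a new attribute (e.g. anger) is added to self-cognition automatically
            fitted[key] = scene_value
    return fitted

print(fit_self_cognition({"mood": 0.6, "fatigue": 0.2}, {"fatigue": 0.9, "anger": 0.1}))
```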
According to one example, the variable parameters include at least the user's original behavior and the behavior after the change, as well as parameter values representing the user's original behavior and the behavior after the change.
A variable parameter captures the situation in which, according to the original plan, the user is in one state, and a sudden change puts the user into another state; the variable parameter represents this change of behavior or state, as well as the user's state or behavior after the change. For example, the user was originally going running at 5 pm, but something else suddenly comes up, such as going to play ball; the change from running to playing ball is then a variable parameter, and the probability of such a change is also studied.
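One possible way to represent such a variable parameter as a data structure is sketched below; the field names are assumptions, not terms from the patent.

```python
from dataclasses import dataclass

@dataclass
class VariableParameter:
    original_behavior: str           # e.g. "running at 5 pm"
    changed_behavior: str            # e.g. "playing ball"
    original_value: float            # parameter value describing the planned behavior
    changed_value: float             # parameter value describing the behavior after the change
    change_probability: float = 0.0  # estimated probability that this change occurs

vp = VariableParameter("running", "playing ball", 1.0, 1.0, change_probability=0.15)
```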
According to another example, the step of generating robot interaction content according to the user intent and the location scene information in combination with the current robot variable parameters further comprises: generating the robot interaction content according to the user intent and the location scene information, in combination with the current robot variable parameters and a fitted curve of parameter change probability. In this way, the fitted curve can be generated by probability training on the variable parameters, and the robot interaction content is then generated from it.
According to another example, the method for generating the fitted curve of parameter change probability comprises: using a probability algorithm to make probability estimates of the parameters between robots with a network, and calculating the probability of each parameter change after the scene parameters of the robot on the life timeline have changed, so as to form the fitted curve of parameter change probability. The probability algorithm may be a Bayesian probability algorithm.
By placing the robot in scenes combined with the variable parameters, the robot's own self-cognition is extended, and the parameters in the self-cognition are fitted to the parameters of the scenes used on the variable parameter axis, producing an anthropomorphic influence. At the same time, with the recognition of the location scene, the robot knows its geographic location and changes the way interaction content is generated according to the geographic environment it is in. In addition, we use a Bayesian probability algorithm to make probability estimates of the parameters between robots with a Bayesian network, and calculate the probability of each parameter change after the robot's own timeline scene parameters on the life timeline have changed, forming a fitted curve that dynamically influences the robot's own self-cognition. This innovative module gives the robot itself a human lifestyle; as for expressions, the robot can change its expressions according to the location scene it is in.
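The snippet below sketches one way such a fitted curve of parameter change probability could be built; simple frequency counts and a polynomial fit stand in for the Bayesian-network estimation mentioned in the text, so this is an assumed simplification rather than the patent's algorithm.

```python
import numpy as np

# observed (hour_of_day, changed?) samples: did the scene parameter deviate from the plan?
observations = [(7, 0), (7, 1), (12, 1), (12, 1), (12, 0), (17, 1), (17, 0), (22, 0)]

hours = sorted({h for h, _ in observations})
probs = [np.mean([c for h, c in observations if h == hour]) for hour in hours]

# fit a low-order polynomial over the life timeline as the "fitting curve"
curve = np.poly1d(np.polyfit(hours, probs, deg=2))
print(round(float(curve(12)), 3))  # estimated change probability around noon
```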
According to another example, the step of acquiring location scene information specifically comprises: acquiring the location scene information through video information. In this way the location scene information can be obtained from video, which is more accurate.
According to another example, the step of acquiring location scene information specifically comprises: acquiring the location scene information through picture information. Acquisition from pictures reduces the robot's computation load and makes the robot respond more quickly.
According to another example, the step of acquiring location scene information specifically comprises: acquiring the location scene information through gesture information. Acquisition from gestures broadens the range of situations in which the robot can be used; for example, disabled users, or an owner who sometimes does not want to talk, can convey information to the robot through gestures.
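A minimal sketch of the three acquisition channels for location scene information is given below; the scene classifier is a placeholder assumption, not a real library API.

```python
def classify_scene(image) -> str:
    """Placeholder for any image-based scene classifier."""
    return "living_room"

def scene_from_video(frames: list) -> str:
    return classify_scene(frames[-1])      # more accurate: use the latest video frame

def scene_from_picture(image) -> str:
    return classify_scene(image)           # cheaper: a single still image

def scene_from_gesture(gesture: str) -> str:
    # a pre-agreed gesture vocabulary mapped to location scenes
    return {"point_down": "home", "wave_out": "outdoors"}.get(gesture, "unknown")

print(scene_from_gesture("wave_out"))
```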
Embodiment 2
As shown in FIG. 2, this embodiment discloses a system for generating robot interaction content, comprising:
an intent recognition module 201, configured to acquire user information and determine a user intent according to the user information;
a scene recognition module 202, configured to acquire location scene information;
a content generation module 203, configured to generate robot interaction content according to the user intent and the location scene information, in combination with the current robot variable parameters sent by the robot variable parameters 301.
In this way, the robot interaction content can be generated more accurately on the basis of the current location scene information combined with the robot variable parameters, so that the robot interacts and communicates with people more accurately and more anthropomorphically. For a person, daily life has a certain regularity; the variable parameters are introduced so that the robot communicates with people in a more anthropomorphic way. The variable parameters are parameters that the user actively controls during human-computer interaction, for example controlling the robot to exercise or controlling the robot to communicate. The present invention adds robot variable parameters to the generation of the robot's interaction content, so that the robot can generate interaction content on the basis of the preceding variable parameters; for example, when the variable parameter records that the robot has already been exercising for an hour and the user again sends a command such as cleaning, the robot will say that it is tired and refuse to clean. This makes the robot more anthropomorphic when interacting with people and gives the robot a human lifestyle within its life timeline; the method can enhance the anthropomorphism of robot interaction content generation, improve the human-computer interaction experience, and improve intelligence.
For example, a variable parameter may be something the robot has done within a preset time period, for instance the robot has been interacting and talking with the user for an hour during the previous time period. If the user then asks the robot to keep talking and the current location is a room, the robot can say that it is tired and needs a rest, accompanied by tired-state content such as a tired expression. If the location is outdoors, the robot can instead say that it wants to go out for a walk, accompanied by a happy expression.
According to one example, the system comprises an artificial intelligence cloud processing module, configured to fit the robot's self-cognition parameters to the scene parameters in the variable parameters to generate the robot variable parameters. In this way, by placing the robot in scenes combined with the variable parameters, the robot's own self-cognition is extended, and the parameters in the self-cognition are fitted to the parameters of the scenes used on the variable parameter axis, producing an anthropomorphic influence.
According to one example, the variable parameters include at least the user's original behavior and the behavior after the change, as well as parameter values representing the user's original behavior and the behavior after the change.
A variable parameter captures the situation in which, according to the original plan, the user is in one state, and a sudden change puts the user into another state; the variable parameter represents this change of behavior or state, as well as the user's state or behavior after the change. For example, the user was originally going running at 5 pm, but something else suddenly comes up, such as going to play ball; the change from running to playing ball is then a variable parameter, and the probability of such a change is also studied.
According to another example, the content generation module is further configured to generate the robot interaction content according to the user intent and the location scene information, in combination with the current robot variable parameters and the fitted curve of parameter change probability. In this way, the fitted curve can be generated by probability training on the variable parameters, and the robot interaction content is then generated from it.
According to another example, the system comprises a fitted curve generation module, configured to: use a probability algorithm to make probability estimates of the parameters between robots with a network, and calculate the probability of each parameter change after the scene parameters of the robot on the life timeline have changed, so as to form the fitted curve of parameter change probability. The probability algorithm may be a Bayesian probability algorithm.
By placing the robot in scenes combined with the variable parameters, the robot's own self-cognition is extended, and the parameters in the self-cognition are fitted to the parameters of the scenes used on the variable parameter axis, producing an anthropomorphic influence. At the same time, with the recognition of the location scene, the robot knows its geographic location and changes the way interaction content is generated according to the geographic environment it is in. In addition, we use a Bayesian probability algorithm to make probability estimates of the parameters between robots with a Bayesian network, and calculate the probability of each parameter change after the robot's own timeline scene parameters on the life timeline have changed, forming a fitted curve that dynamically influences the robot's own self-cognition. This innovative module gives the robot itself a human lifestyle; as for expressions, the robot can change its expressions according to the location scene it is in.
According to another example, the scene recognition module is specifically configured to acquire video information. In this way the location scene information can be obtained from video, which is more accurate.
According to another example, the scene recognition module is specifically configured to acquire picture information. Acquisition from pictures reduces the robot's computation load and makes the robot respond more quickly.
According to another example, the scene recognition module is specifically configured to acquire gesture information. Acquisition from gestures broadens the range of situations in which the robot can be used; for example, disabled users, or an owner who sometimes does not want to talk, can convey information to the robot through gestures.
The present invention discloses a robot, comprising a system for generating robot interaction content as described in any one of the above.
The above is a further detailed description of the present invention in combination with specific preferred embodiments, and the specific implementation of the present invention should not be regarded as limited to these descriptions. For a person of ordinary skill in the technical field to which the present invention belongs, several simple deductions or substitutions may be made without departing from the concept of the present invention, and all of them should be regarded as falling within the protection scope of the present invention.

Claims (17)

  1. A method for generating robot interaction content, characterized by comprising:
    acquiring user information, and determining a user intent according to the user information;
    acquiring location scene information;
    generating robot interaction content according to the user intent and the location scene information, in combination with current robot variable parameters.
  2. The generation method according to claim 1, characterized in that the method for generating the robot variable parameters comprises:
    fitting the robot's self-cognition parameters to the scene parameters in the variable parameters to generate the robot variable parameters.
  3. The generation method according to claim 2, characterized in that the variable parameters include at least the user's original behavior and the behavior after the change, as well as parameter values representing the user's original behavior and the behavior after the change.
  4. The generation method according to claim 1, characterized in that the step of generating robot interaction content according to the user intent and the location scene information in combination with the current robot variable parameters further comprises: generating the robot interaction content according to the user intent and the location scene information, in combination with the current robot variable parameters and a fitted curve of parameter change probability.
  5. The generation method according to claim 4, characterized in that the method for generating the fitted curve of parameter change probability comprises: using a probability algorithm to make probability estimates of the parameters between robots with a network, and calculating the probability of each parameter change after the scene parameters of the robot on the life timeline have changed, so as to form the fitted curve of parameter change probability.
  6. The generation method according to claim 1, characterized in that the step of acquiring location scene information specifically comprises: acquiring the location scene information through video information.
  7. The generation method according to claim 1, characterized in that the step of acquiring location scene information specifically comprises: acquiring the location scene information through picture information.
  8. The generation method according to claim 1, characterized in that the step of acquiring location scene information specifically comprises: acquiring the location scene information through gesture information.
  9. A system for generating robot interaction content, characterized by comprising:
    an intent recognition module, configured to acquire user information and determine a user intent according to the user information;
    a scene recognition module, configured to acquire location scene information;
    a content generation module, configured to generate robot interaction content according to the user intent and the location scene information, in combination with current robot variable parameters.
  10. The generation system according to claim 9, characterized in that the system comprises an artificial intelligence cloud processing module, configured to fit the robot's self-cognition parameters to the scene parameters in the variable parameters to generate the robot variable parameters.
  11. The generation system according to claim 10, characterized in that the variable parameters include at least the user's original behavior and the behavior after the change, as well as parameter values representing the user's original behavior and the behavior after the change.
  12. The generation system according to claim 9, characterized in that the content generation module is further configured to: generate the robot interaction content according to the user intent and the location scene information, in combination with the current robot variable parameters and a fitted curve of parameter change probability.
  13. The generation system according to claim 12, characterized in that the system comprises a fitted curve generation module, configured to: use a probability algorithm to make probability estimates of the parameters between robots with a network, and calculate the probability of each parameter change after the scene parameters of the robot on the life timeline have changed, so as to form the fitted curve of parameter change probability.
  14. The generation system according to claim 9, characterized in that the scene recognition module is specifically configured to acquire video information.
  15. The generation system according to claim 9, characterized in that the scene recognition module is specifically configured to acquire picture information.
  16. The generation system according to claim 9, characterized in that the scene recognition module is specifically configured to acquire gesture information.
  17. A robot, characterized by comprising a system for generating robot interaction content according to any one of claims 9 to 16.
PCT/CN2016/087751 2016-06-29 2016-06-29 Method and system for generating robot interaction content, and robot Ceased WO2018000266A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2016/087751 WO2018000266A1 (zh) 2016-06-29 2016-06-29 Method and system for generating robot interaction content, and robot
CN201680001751.2A CN106462804A (zh) 2016-06-29 2016-06-29 Method and system for generating robot interaction content, and robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/087751 WO2018000266A1 (zh) 2016-06-29 2016-06-29 Method and system for generating robot interaction content, and robot

Publications (1)

Publication Number Publication Date
WO2018000266A1 true WO2018000266A1 (zh) 2018-01-04

Family

ID=58215747

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/087751 Ceased WO2018000266A1 (zh) 2016-06-29 2016-06-29 Method and system for generating robot interaction content, and robot

Country Status (2)

Country Link
CN (1) CN106462804A (zh)
WO (1) WO2018000266A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI649057B (zh) * 2018-04-10 2019-02-01 Heran Co., Ltd. Cleaning system with real-time environment scanning

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107491511A (zh) * 2017-08-03 2017-12-19 Shenzhen Gowild Intelligent Technology Co., Ltd. Self-cognition method and device for a robot
CN107799126B (zh) * 2017-10-16 2020-10-16 Suzhou Gowild Intelligent Technology Co., Ltd. Voice endpoint detection method and device based on supervised machine learning
CN108320021A (zh) * 2018-01-23 2018-07-24 Shenzhen Gowild Intelligent Technology Co., Ltd. Method for determining robot actions and expressions, and display synthesis method and device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7685518B2 (en) * 1998-01-23 2010-03-23 Sony Corporation Information processing apparatus, method and medium using a virtual reality space
CN102103707A (zh) * 2009-12-16 2011-06-22 Phison Electronics Corp. Emotion engine, emotion engine system, and control method for an electronic device
CN104951077A (zh) * 2015-06-24 2015-09-30 Baidu Online Network Technology (Beijing) Co., Ltd. Artificial-intelligence-based human-computer interaction method, device and terminal equipment
CN105490918A (zh) * 2015-11-20 2016-04-13 Shenzhen Gowild Intelligent Technology Co., Ltd. System and method for a robot to proactively interact with its owner
CN105701211A (zh) * 2016-01-13 2016-06-22 Beijing Guangnian Wuxian Technology Co., Ltd. Active interaction data processing method and system for question answering systems

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001277166A (ja) * 2000-03-31 2001-10-09 Sony Corp Robot and robot behavior determination method
WO2002034478A1 (fr) * 2000-10-23 2002-05-02 Sony Corporation Legged robot, method for controlling the behavior of such a robot, and data carrier
JP3988121B2 (ja) * 2002-03-15 2007-10-10 Sony Corporation Learning device, storage method, and robot device
JP2003340759A (ja) * 2002-05-20 2003-12-02 Sony Corp Robot device, robot control method, recording medium, and program
CN101587329A (zh) * 2009-06-18 2009-11-25 Beijing Institute of Technology Robot prediction method and system
CN104901873A (zh) * 2015-06-29 2015-09-09 Zeng Jinbo Scene- and action-based online social networking system

Also Published As

Publication number Publication date
CN106462804A (zh) 2017-02-22

Similar Documents

Publication Publication Date Title
WO2018006370A1 (zh) Interaction method and system for a virtual 3D robot, and robot
WO2018000267A1 (zh) Method and system for generating robot interaction content, and robot
WO2018000259A1 (zh) Method and system for generating robot interaction content, and robot
WO2018006374A1 (zh) Function recommendation method and system based on active wake-up, and robot
KR102423712B1 (ko) Transfer of automated assistant routines between client devices during execution of a routine
WO2018000268A1 (zh) Method and system for generating robot interaction content, and robot
WO2018000277A1 (zh) Question answering method and system, and robot
WO2018006369A1 (zh) Method and system for synchronizing speech and virtual actions, and robot
WO2018006373A1 (zh) Method and system for controlling home appliances based on intent recognition, and robot
WO2018006372A1 (zh) Method and system for controlling home appliances based on intent recognition, and robot
KR20200024675A (ko) Apparatus and method for human behavior recognition
CA2520036A1 (en) A behavioural translator for an object
Koenig et al. Robot life-long task learning from human demonstrations: a Bayesian approach
JP6867971B2 (ja) Conference support device and conference support system
WO2018000266A1 (zh) Method and system for generating robot interaction content, and robot
WO2018006371A1 (zh) Method and system for synchronizing speech and virtual actions, and robot
CN105938484B (zh) Robot interaction method and system based on a user feedback knowledge base
CN118721204B (zh) Robot control method, device and system, and robot
JP2020009474A (ja) System and method for operating an artificial social network
WO2021084810A1 (ja) Information processing device, information processing method, and artificial intelligence model production method
CN114303151B (zh) System and method for adaptive dialogue via scene modeling using a combined neural network
Taniguchi et al. Semiotically adaptive cognition: toward the realization of remotely-operated service robots for the new normal symbiotic society
WO2018000258A1 (zh) Method and system for generating robot interaction content, and robot
WO2018000261A1 (zh) Method and system for generating robot interaction content, and robot
WO2018000260A1 (zh) Method and system for generating robot interaction content, and robot

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16906667

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16906667

Country of ref document: EP

Kind code of ref document: A1