
WO2018000261A1 - Method, system and robot for generating robot interaction content (一种机器人交互内容的生成方法、系统及机器人) - Google Patents

Info

Publication number
WO2018000261A1
WO2018000261A1 (PCT application PCT/CN2016/087740)
Authority
WO
WIPO (PCT)
Prior art keywords
robot
time axis
information
user
life time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2016/087740
Other languages
English (en)
French (fr)
Inventor
王昊奋
邱楠
杨新宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Gowild Robotics Co Ltd
Original Assignee
Shenzhen Gowild Robotics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Gowild Robotics Co Ltd filed Critical Shenzhen Gowild Robotics Co Ltd
Priority to CN201680001750.8A priority Critical patent/CN106537293A/zh
Priority to PCT/CN2016/087740 priority patent/WO2018000261A1/zh
Publication of WO2018000261A1 publication Critical patent/WO2018000261A1/zh
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models

Definitions

  • The invention relates to the field of robot interaction technology, and in particular to a method, a system and a robot for generating robot interaction content.
  • The object of the present invention is to provide a method, a system and a robot for generating robot interaction content, in which the robot is actively woken up and facial expressions are detected automatically, so as to improve the anthropomorphism of robot interaction content generation, enhance the human-computer interaction experience, and improve intelligence.
  • A method for generating robot interaction content, comprising: actively waking up a robot; acquiring user multimodal information; determining a user intent according to the user multimodal information; acquiring location scene information; and generating the robot interaction content according to the user multimodal information, the user intent and the location scene information, in combination with the current robot life time axis.
  • The step of actively waking up the robot comprises: acquiring user multimodal information; matching the user multimodal information against preset wake-up parameters; and actively waking up the robot if the user multimodal information reaches the preset wake-up parameters.
  • The method for generating the parameters of the robot life time axis includes: expanding the self-cognition of the robot; acquiring the parameters of the life time axis; and fitting the self-cognitive parameters of the robot to the parameters in the life time axis to generate the robot life time axis.
  • The step of expanding the self-cognition of the robot specifically comprises: combining life scenes with the robot's self-cognition to form a self-cognitive curve based on the life time axis.
  • The step of fitting the self-cognitive parameters of the robot to the parameters in the life time axis comprises: using a probability algorithm to calculate the probability of change of each parameter of the robot on the life time axis after a time-axis scene parameter changes, forming a fitted curve.
  • The life time axis refers to a time axis spanning the 24 hours of a day; the parameters in the life time axis include at least the daily life behaviors performed by the user on the life time axis and parameter values representing those behaviors.
  • The method further comprises: acquiring and analyzing a voice signal. The step of generating the robot interaction content according to the user multimodal information and the user intent, combined with the current robot life time axis, then further includes: generating the robot interaction content according to the user multimodal information, the voice signal and the user intent, in combination with the current robot life time axis.
  • the step of acquiring location scene information specifically includes: acquiring location scene information by using video information.
  • the step of acquiring location scene information specifically includes: acquiring location scene information by using picture information.
  • the step of acquiring location scene information specifically includes: acquiring location scene information by using gesture information.
  • the invention discloses a system for generating robot interactive content, comprising:
  • Light sensing automatic detection module for actively waking up the robot
  • An expression analysis cloud processing module for acquiring user multimodal information
  • An intent identification module configured to determine a user intent according to the user multimodal information
  • a scene recognition module configured to acquire location scene information
  • The content generation module is configured to generate the robot interaction content according to the user multimodal information and the user intent, in combination with the current robot life time axis.
  • The light-sensing automatic detection module is specifically configured to: acquire user multimodal information; match the user multimodal information against preset wake-up parameters; and actively wake up the robot if the user multimodal information reaches the preset wake-up parameters.
  • The system comprises a time-axis-based artificial intelligence cloud processing module configured to: expand the self-cognition of the robot; acquire the parameters of the life time axis; and fit the self-cognitive parameters of the robot to the parameters in the life time axis to generate the robot life time axis.
  • The time-axis-based artificial intelligence cloud processing module is further configured to combine life scenes with the robot's self-cognition to form a self-cognitive curve based on the life time axis.
  • the time axis-based and artificial intelligence cloud processing module is further configured to: use a probability algorithm to calculate a probability of each parameter change of the robot on the life time axis after the time axis scene parameter is changed, to form a fitting curve.
  • The life time axis refers to a time axis spanning the 24 hours of a day; the parameters in the life time axis include at least the daily life behaviors performed by the user on the life time axis and parameter values representing those behaviors.
  • the system further includes: a voice analysis cloud processing module, configured to acquire and analyze the voice signal;
  • The content generation module is further configured to generate the robot interaction content according to the user multimodal information, the voice signal and the user intent, in combination with the current robot life time axis.
  • the scene recognition module is specifically configured to acquire location scene information by using video information.
  • the scene recognition module is specifically configured to acquire location scene information by using picture information.
  • the scene recognition module is specifically configured to acquire location scene information by using gesture information.
  • the invention discloses a robot comprising a system for generating interactive content of a robot as described above.
  • For application scenarios, existing robots generally generate interactive content through question-and-answer interaction in a fixed scene, and cannot generate the robot's expression more accurately based on the current scene.
  • A method for generating robot interaction content comprises: actively waking up a robot; acquiring user multimodal information; determining a user intent according to the user multimodal information; acquiring location scene information; and generating robot interaction content according to the user multimodal information, the user intent and the location scene information, in combination with the current robot life time axis.
  • In this way, when the user is within a specific distance of the robot, the robot actively wakes up and generates interaction content more accurately according to the user's multimodal information and intent, combined with the location scene information and the robot's life time axis, so that it interacts and communicates with people more accurately and anthropomorphically.
  • For people, daily life has a certain regularity. To make the robot communicate with people more anthropomorphically, within the 24 hours of a day the robot is also given actions such as sleeping, exercising, eating, dancing, reading, putting on makeup, and so on.
  • Therefore, the present invention adds the robot's life time axis to the generation of the robot's interactive content, making the robot more anthropomorphic when interacting with people and giving it a human lifestyle within the life time axis; the method can improve the anthropomorphism of robot interaction content generation, enhance the human-computer interaction experience, and improve intelligence.
  • FIG. 1 is a flowchart of a method for generating interactive content of a robot according to Embodiment 1 of the present invention
  • FIG. 2 is a schematic diagram of a system for generating interactive content of a robot according to a second embodiment of the present invention.
  • Computer devices include user devices and network devices.
  • the user equipment or the client includes but is not limited to a computer, a smart phone, a PDA, etc.;
  • The network device includes, but is not limited to, a single network server, a server group composed of multiple network servers, or a cloud-computing-based cloud composed of a large number of computers or network servers.
  • The computer device can operate on its own to implement the invention, or it can access a network and implement the invention by interoperating with other computer devices in the network.
  • the network in which the computer device is located includes, but is not limited to, the Internet, a wide area network, a metropolitan area network, a local area network, a VPN network, and the like.
  • The terms "first," "second," and the like may be used herein to describe various elements, but the elements should not be limited by these terms; the terms are used only to distinguish one element from another.
  • the term “and/or” used herein includes any and all combinations of one or more of the associated listed items. When a unit is referred to as being “connected” or “coupled” to another unit, it can be directly connected or coupled to the other unit, or an intermediate unit can be present.
  • As shown in FIG. 1, this embodiment discloses a method for generating interactive content of a robot, including:
  • S101: actively waking up the robot;
  • S102: acquiring user multimodal information;
  • S103: determining a user intent according to the user multimodal information;
  • S104: acquiring location scene information;
  • S105: generating robot interaction content according to the user multimodal information, the user intent and the location scene information, in combination with the current robot life time axis 300.
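The flow S101–S105 can be sketched as a minimal pipeline. The class, function names, threshold value and life-time-axis entries below are illustrative assumptions for the sketch, not the patent's implementation:

```python
from dataclasses import dataclass

@dataclass
class RobotState:
    awake: bool = False
    wake_threshold: float = 0.5  # hypothetical activity-score threshold

def wake_if_triggered(state: RobotState, activity_score: float) -> bool:
    # S101: actively wake the robot once the preset wake-up parameter is reached
    if activity_score >= state.wake_threshold:
        state.awake = True
    return state.awake

def generate_interaction_content(expression: str, intent: str,
                                 location: str, life_timeline: dict,
                                 hour: int) -> str:
    # S105: combine multimodal info (S102), intent (S103) and location scene
    # (S104) with the current robot life time axis (a preset parameter set)
    scheduled = life_timeline.get(hour, "idle")
    if expression == "happy":
        return f"greet cheerfully while '{scheduled}' at the {location}"
    return f"play a comforting song while '{scheduled}' at the {location}"

# illustrative 24-hour life time axis: hour -> scheduled behavior
life_timeline = {12: "eating", 18: "welcoming", 23: "sleeping"}

state = RobotState()
wake_if_triggered(state, activity_score=0.8)  # user walks up to the robot
content = generate_interaction_content("happy", "greeting", "doorway",
                                       life_timeline, hour=18)
```

The point of the sketch is the combination in S105: the same user expression yields different content depending on the hour's scheduled behavior on the life time axis and the recognized location scene.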
  • For application scenarios, existing robots generally generate interactive content through question-and-answer interaction in a fixed scene, and cannot generate the robot's expression more accurately based on the current scene.
  • A method for generating robot interaction content comprises: actively waking up a robot; acquiring user multimodal information; determining a user intent according to the user multimodal information; acquiring location scene information; and generating robot interaction content according to the user multimodal information, the user intent and the location scene information, in combination with the current robot life time axis.
  • In this way, when the user is within a specific distance of the robot, the robot actively wakes up and generates interaction content more accurately according to the user's multimodal information and intent, combined with the location scene information and the robot's life time axis, so that it interacts and communicates with people more accurately and anthropomorphically.
  • For people, daily life has a certain regularity. To make the robot communicate with people more anthropomorphically, within the 24 hours of a day the robot is also given actions such as sleeping, exercising, eating, dancing, reading, putting on makeup, and so on.
  • Therefore, the present invention adds the robot's life time axis to the generation of the robot's interactive content, making the robot more anthropomorphic when interacting with people and giving it a human lifestyle within the life time axis; the method can improve the anthropomorphism of robot interaction content generation, enhance the human-computer interaction experience, and improve intelligence.
  • The robot life time axis 300 is fitted and set up in advance. Specifically, the robot life time axis 300 is a collection of parameters, and these parameters are transmitted to the system to generate the interaction content.
  • The multimodal information in this embodiment may be one or more of user expression, voice information, gesture information, scene information, image information, video information, face information, pupil/iris information, light-sensing information, fingerprint information, and so on.
  • In this embodiment, user expression is preferred, since it is recognized accurately and efficiently.
  • the life time axis is specifically: according to the time axis of human daily life, the robot is fitted with the time axis of human daily life, and the behavior of the robot follows the fitting action, that is, the robot of the day is obtained.
  • Behavior which allows the robot to perform its own behavior based on the life time axis, such as generating interactive content and communicating with humans. If the robot is always awake, it will act according to the behavior on this timeline, and the robot's self-awareness will be changed according to this timeline.
  • the life timeline and variable parameters can be used to change the attributes of self-cognition, such as mood values, fatigue values, etc., and can also automatically add new self-awareness information, such as no previous anger value, based on the life time axis and The scene of the variable factor will automatically add to the self-cognition of the robot based on the scene that previously simulated the human self-cognition.
  • For example, when the user is not in front of the robot, the robot's light-sensing automatic detection module is not triggered, so the robot is in a sleep state.
  • When the user walks up to the robot, the light-sensing automatic detection module detects the user's approach, so the robot actively wakes up, recognizes the user's expression, and combines it with the location scene information and the robot's life time axis. For example, if the current time is 6 pm, the location scene is the doorway, it is the user's after-work time, and the user has just returned home, then when the robot recognizes that the user's expression is happy it actively wakes up and greets the user with a happy expression; when the user is unhappy, it actively plays a song with a sympathetic expression. If the current time is 9 am and the location scene is a room, then when the robot recognizes a happy expression it greets the user with a good-morning expression; when the user is unhappy, it actively plays a song with a pitying expression.
  • the interactive content can be an expression or text or voice.
  • According to one example, the step of actively waking up the robot includes: acquiring user multimodal information; matching the user multimodal information against preset wake-up parameters; and actively waking up the robot if the user multimodal information reaches the preset wake-up parameters.
  • In this way, user multimodal information, for example the user's motion and expression, is collected and compared against the preset wake-up parameters; if the preset wake-up parameters are reached, the robot is actively woken up, and otherwise it is not.
  • For example, after a human approaches the robot, the robot's detection module detects the human's proximity and actively wakes the robot up to interact with the human.
  • The robot can also be woken up by expressions, actions, or other dynamic behaviors made by humans. If a human stands still without making expressions or movements, or lies still in a static state, the preset wake-up parameters may not be reached; this is not regarded as a request to wake the robot, and the robot does not actively wake itself when it detects such behaviors.
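The matching described above can be sketched as comparing collected multimodal readings against preset wake-up parameters. The parameter names and threshold values are illustrative assumptions, not values from the patent:

```python
# Preset wake-up parameters (illustrative names and thresholds)
WAKE_PARAMS = {"motion": 0.3, "expression_change": 0.2}

def should_wake(multimodal: dict) -> bool:
    """Return True if any observed modality reaches its preset wake-up
    parameter; static behavior (all readings below threshold) never wakes."""
    return any(multimodal.get(key, 0.0) >= threshold
               for key, threshold in WAKE_PARAMS.items())

approaching_user = {"motion": 0.6, "expression_change": 0.1}  # dynamic behavior
lying_still_user = {"motion": 0.0, "expression_change": 0.0}  # static state
```

With this shape, a person walking toward the robot exceeds the motion parameter and triggers wake-up, while a person lying still produces no dynamic behavior and leaves the robot asleep, matching the distinction drawn in the text.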
  • According to one example, the method for generating the parameters of the robot life time axis includes: expanding the self-cognition of the robot; acquiring the parameters of the life time axis; and fitting the self-cognitive parameters of the robot to the parameters in the life time axis to generate the robot life time axis.
  • In this way, the life time axis is added to the robot's own self-cognition, giving the robot an anthropomorphic life; for example, the cognition of eating lunch at noon is added to the robot.
  • the step of expanding the self-cognition of the robot specifically includes: combining the life scene with the self-awareness of the robot to form a self-cognitive curve based on the life time axis.
  • the life time axis can be specifically added to the parameters of the robot itself.
  • According to another example, the step of fitting the self-cognitive parameters of the robot to the parameters in the life time axis comprises: using a probability algorithm to calculate the probability of change of each parameter of the robot on the life time axis after a time-axis scene parameter changes, forming a fitted curve. In this way, the robot's self-cognitive parameters can be concretely fitted to the parameters in the life time axis.
  • For example, within the 24 hours of a day, the robot is made to sleep, exercise, eat, dance, read, put on makeup, and so on. Each action affects the robot's own self-cognition; the parameters on the life time axis are combined with the robot's own self-cognition.
  • After fitting, the robot's self-cognition includes mood, fatigue value, intimacy, favorability, number of interactions, the robot's three-dimensional cognition, age, height, weight, game scene value, game object value, location scene value, location object value, and so on, so that the robot can itself identify the location scene it is in, such as a cafe or a bedroom.
  • The robot performs different actions within the time axis of a day, such as sleeping at night, eating at noon, and exercising during the day; all of these scenes in the life time axis have an impact on the self-cognition. These numerical changes are modeled by dynamic fitting with a probability model, fitting the probability that each of these actions occurs at each point on the time axis.
  • Scene recognition: this type of location scene recognition changes the geographic scene value in the self-cognition.
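One way to read the dynamic-fit idea above is as an empirical probability of each action at each hour of the life time axis. The frequency-counting scheme below is an illustrative sketch under that reading, not the patent's algorithm:

```python
from collections import Counter, defaultdict

def fit_action_probabilities(observations):
    """observations: iterable of (hour, action) pairs.
    Returns P(action | hour) as relative frequencies -- a simple
    stand-in for the fitted curve of actions over the time axis."""
    by_hour = defaultdict(Counter)
    for hour, action in observations:
        by_hour[hour][action] += 1
    return {hour: {action: count / sum(counts.values())
                   for action, count in counts.items()}
            for hour, counts in by_hour.items()}

obs = [(23, "sleep"), (23, "sleep"), (23, "read"), (12, "eat")]
curve = fit_action_probabilities(obs)
# sleeping dominates at 23:00, so a fitted life time axis would schedule
# "sleep" there and adjust self-cognition values (e.g. fatigue) accordingly
```

A fitted curve of this kind is what would let scene changes shift the probabilities: after a time-axis scene parameter changes, the per-hour distribution over actions (and hence over self-cognition parameter changes) is recomputed.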
  • According to another example, the method further includes: acquiring and analyzing a voice signal. The step of generating the robot interaction content according to the user multimodal information and the user intent, combined with the current robot life time axis, then further includes: generating the robot interaction content according to the user multimodal information, the voice signal and the user intent, in combination with the current robot life time axis. In this way, the interaction content can be generated in combination with the voice signal, which is more accurate.
  • the step of acquiring location scene information specifically includes: acquiring location scene information by using video information.
  • location scene information can be obtained through video, and the video acquisition is more accurate.
  • the step of acquiring location scene information specifically includes: acquiring location scene information by using picture information.
  • Acquiring pictures reduces the robot's computation and makes its reaction faster.
  • the step of acquiring location scene information specifically includes: acquiring location scene information by using gesture information.
  • Gestures make the robot applicable in more situations; for example, a disabled user, or an owner who sometimes does not want to talk, can use gestures to transmit information to the robot.
  • a system for generating interactive content of a robot includes:
  • the light sensing automatic detecting module 201 is configured to actively wake up the robot
  • the expression analysis cloud processing module 202 is configured to acquire user multimodal information
  • the intent identification module 203 is configured to determine a user intent according to the user multimodal information
  • a scene recognition module 204 configured to acquire location scene information
  • The content generation module 205 is configured to generate the robot interaction content according to the user multimodal information, the user intent and the location scene information, in combination with the current robot life time axis sent by the robot life time axis module 301.
  • The present invention adds the robot's life time axis to the generation of the robot's interactive content, making the robot more anthropomorphic when interacting with people and giving it a human lifestyle within the life time axis; the method can improve the anthropomorphism of robot interaction content generation, enhance the human-computer interaction experience, and improve intelligence.
  • For example, when the user is not in front of the robot, the robot's light-sensing automatic detection module is not triggered, so the robot is in a sleep state.
  • When the user walks up to the robot, the light-sensing automatic detection module detects the user's approach, so the robot actively wakes up, recognizes the user's expression, and combines it with the location scene information and the robot's life time axis. For example, if the current time is 6 pm, the location scene is the doorway, it is the user's after-work time, and the user has just returned home, then when the robot recognizes that the user's expression is happy it actively wakes up and greets the user with a happy expression; when the user is unhappy, it actively plays a song with a sympathetic expression.
  • The light-sensing automatic detection module is specifically configured to: acquire user multimodal information; match the user multimodal information against preset wake-up parameters; and actively wake up the robot if the user multimodal information reaches the preset wake-up parameters.
  • In this way, user multimodal information, for example the user's motion and expression, is collected and compared against the preset wake-up parameters; if the preset wake-up parameters are reached, the robot is actively woken up, and otherwise it is not.
  • the robot's detection module detects the proximity of humans and actively wakes itself up to interact with humans.
  • The robot can also be woken up by expressions, actions, or other dynamic behaviors made by humans. If a human stands still without making expressions or movements, or lies still in a static state, the preset wake-up parameters may not be reached; this is not regarded as a request to wake the robot, and the robot does not actively wake itself when it detects such behaviors.
  • According to one example, the system includes a time-axis-based artificial intelligence cloud processing module configured to: expand the self-cognition of the robot; acquire the parameters of the life time axis; and fit the self-cognitive parameters of the robot to the parameters in the life time axis to generate the robot life time axis.
  • In this way, the life time axis is added to the robot's own self-cognition, giving the robot an anthropomorphic life; for example, the cognition of eating lunch at noon is added to the robot.
  • The time-axis-based artificial intelligence cloud processing module is further configured to combine life scenes with the robot's self-cognition to form a self-cognitive curve based on the life time axis.
  • the life time axis can be specifically added to the parameters of the robot itself.
  • The time-axis-based artificial intelligence cloud processing module is further configured to: use a probability algorithm to calculate the probability of change of each parameter of the robot on the life time axis after a time-axis scene parameter changes, forming a fitted curve.
  • The probability algorithm can be a Bayesian probability algorithm.
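As a hedged illustration of how a Bayesian probability algorithm could relate a scene change to a change in the robot's parameters, the toy update below applies Bayes' rule to a prior over robot actions; the action names and probability values are invented for the sketch:

```python
def bayes_update(prior: dict, likelihood: dict) -> dict:
    """prior: P(action); likelihood: P(observed scene | action).
    Returns the posterior P(action | observed scene) via Bayes' rule."""
    unnormalized = {a: prior[a] * likelihood.get(a, 0.0) for a in prior}
    total = sum(unnormalized.values())
    return {a: value / total for a, value in unnormalized.items()}

# Prior over what the robot would be doing at this point on the life time axis
prior = {"sleep": 0.7, "exercise": 0.3}
# The scene recognition module reports bright daylight (invented likelihoods)
likelihood = {"sleep": 0.1, "exercise": 0.6}
posterior = bayes_update(prior, likelihood)
# the daylight evidence shifts probability mass toward "exercise"
```

The design point is that the time-axis prior and the recognized scene are combined rather than either one deciding alone: a scene observation reweights the probabilities that the fitted curve assigned to each action.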
  • For example, within the 24 hours of a day, the robot is made to sleep, exercise, eat, dance, read, put on makeup, and so on. Each action affects the robot's own self-cognition; the parameters on the life time axis are combined with the robot's own self-cognition.
  • After fitting, the robot's self-cognition includes mood, fatigue value, intimacy, favorability, number of interactions, the robot's three-dimensional cognition, age, height, weight, game scene value, game object value, location scene value, location object value, and so on, so that the robot can itself identify the location scene it is in, such as a cafe or a bedroom.
  • The robot performs different actions within the time axis of a day, such as sleeping at night, eating at noon, and exercising during the day; all of these scenes in the life time axis have an impact on the self-cognition. These numerical changes are modeled by dynamic fitting with a probability model, fitting the probability that each of these actions occurs at each point on the time axis.
  • Scene recognition: this type of location scene recognition changes the geographic scene value in the self-cognition.
  • the system further includes: a voice analysis cloud processing module, configured to acquire and analyze the voice signal;
  • The content generation module is further configured to generate the robot interaction content according to the user multimodal information, the voice signal and the user intent, in combination with the current robot life time axis. In this way, the interaction content can be generated in combination with the voice signal, which is more accurate.
  • the scene recognition module is specifically configured to acquire location scene information by using video information.
  • location scene information can be obtained through video, and the video acquisition is more accurate.
  • The scene recognition module is specifically configured to acquire location scene information by using picture information.
  • Acquiring pictures reduces the robot's computation and makes its reaction faster.
  • the scene recognition module is specifically configured to acquire location scene information by using gesture information.
  • Gestures make the robot applicable in more situations; for example, a disabled user, or an owner who sometimes does not want to talk, can use gestures to transmit information to the robot.
  • a robot including a robot interaction content generation system according to any of the above.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Manipulator (AREA)
  • Toys (AREA)

Abstract

A method for generating robot interaction content is provided, including: actively waking up a robot (S101); acquiring user multimodal information (S102); determining a user intent according to the user multimodal information (S103); acquiring location scene information (S104); and generating robot interaction content according to the user multimodal information, the user intent and the location scene information, in combination with the current robot life time axis (S105). The robot's life time axis is added to the generation of the robot's interactive content, making the robot more anthropomorphic when interacting with people and giving it a human lifestyle within the life time axis; the method can improve the anthropomorphism of robot interaction content generation, enhance the human-computer interaction experience, and improve intelligence.

Description

Method, system and robot for generating robot interaction content
Technical Field
The present invention relates to the field of robot interaction technology, and in particular to a method, a system and a robot for generating robot interaction content.
Background Art
Usually, when interacting with a computer, a human actively wakes the robot; the robot begins interacting after picking up sound and gives expression feedback during the process. Humans generally initiate a dialogue actively after meeting, and the listener gives reasonable expression feedback after the brain analyzes the speaker's words and expressions. For robots, however, the current interaction mode is generally started by sound pickup and responds with expression feedback. This makes the robot's interactivity and intelligence very low and leaves the following problem: a robot that is actively woken up mainly serves to greet, with language and expressions preset for the user; in this case the robot in fact still outputs expressions according to an interaction mode designed in advance by humans, so the robot is not anthropomorphic and cannot, like a human, analyze the other party's expression upon seeing them, then actively address them and feed back a corresponding expression.
Therefore, how to propose a machine expression generation method with active wake-up and automatic facial expression detection, capable of improving the anthropomorphism of robot interaction content generation, is a technical problem urgently to be solved in this technical field.
Summary of the Invention
The object of the present invention is to provide a method, a system and a robot for generating robot interaction content, in which the robot is actively woken up and facial expressions are detected automatically, capable of improving the anthropomorphism of robot interaction content generation, enhancing the human-computer interaction experience, and improving intelligence.
The object of the present invention is achieved through the following technical solutions:
A method for generating robot interaction content, comprising:
actively waking up a robot;
acquiring user multimodal information;
determining a user intent according to the user multimodal information;
acquiring location scene information;
generating robot interaction content according to the user multimodal information, the user intent and the location scene information, in combination with the current robot life time axis.
Preferably, the step of actively waking up the robot comprises:
acquiring user multimodal information;
matching the user multimodal information against preset wake-up parameters;
actively waking up the robot if the user multimodal information reaches the preset wake-up parameters.
Preferably, the method for generating the parameters of the robot life time axis comprises:
expanding the self-cognition of the robot;
acquiring the parameters of the life time axis;
fitting the self-cognitive parameters of the robot to the parameters in the life time axis to generate the robot life time axis.
Preferably, the step of expanding the self-cognition of the robot specifically comprises: combining life scenes with the robot's self-cognition to form a self-cognitive curve based on the life time axis.
Preferably, the step of fitting the self-cognitive parameters of the robot to the parameters in the life time axis specifically comprises: using a probability algorithm to calculate the probability of change of each parameter of the robot on the life time axis after a time-axis scene parameter changes, forming a fitted curve.
Preferably, the life time axis refers to a time axis spanning the 24 hours of a day, and the parameters in the life time axis include at least the daily life behaviors performed by the user on the life time axis and parameter values representing those behaviors.
Preferably, the method further comprises: acquiring and analyzing a voice signal;
the step of generating robot interaction content according to the user multimodal information and the user intent, in combination with the current robot life time axis, further comprises:
generating robot interaction content according to the user multimodal information, the voice signal and the user intent, in combination with the current robot life time axis.
Preferably, the step of acquiring location scene information specifically comprises: acquiring location scene information by using video information.
Preferably, the step of acquiring location scene information specifically comprises: acquiring location scene information by using picture information.
Preferably, the step of acquiring location scene information specifically comprises: acquiring location scene information by using gesture information.
The present invention discloses a system for generating robot interaction content, comprising:
a light-sensing automatic detection module for actively waking up the robot;
an expression analysis cloud processing module for acquiring user multimodal information;
an intent recognition module for determining a user intent according to the user multimodal information;
a scene recognition module for acquiring location scene information;
a content generation module for generating robot interaction content according to the user multimodal information and the user intent, in combination with the current robot life time axis.
Preferably, the light-sensing automatic detection module is specifically configured to:
acquire user multimodal information;
match the user multimodal information against preset wake-up parameters;
actively wake up the robot if the user multimodal information reaches the preset wake-up parameters.
Preferably, the system comprises a time-axis-based artificial intelligence cloud processing module configured to:
expand the self-cognition of the robot;
acquire the parameters of the life time axis;
fit the self-cognitive parameters of the robot to the parameters in the life time axis to generate the robot life time axis.
Preferably, the time-axis-based artificial intelligence cloud processing module is further configured to: combine life scenes with the robot's self-cognition to form a self-cognitive curve based on the life time axis.
Preferably, the time-axis-based artificial intelligence cloud processing module is further configured to: use a probability algorithm to calculate the probability of change of each parameter of the robot on the life time axis after a time-axis scene parameter changes, forming a fitted curve.
Preferably, the life time axis refers to a time axis spanning the 24 hours of a day, and the parameters in the life time axis include at least the daily life behaviors performed by the user on the life time axis and parameter values representing those behaviors.
Preferably, the system further comprises: a voice analysis cloud processing module for acquiring and analyzing a voice signal;
the content generation module is further configured to: generate robot interaction content according to the user multimodal information, the voice signal and the user intent, in combination with the current robot life time axis.
Preferably, the scene recognition module is specifically configured to acquire location scene information by using video information.
Preferably, the scene recognition module is specifically configured to acquire location scene information by using picture information.
Preferably, the scene recognition module is specifically configured to acquire location scene information by using gesture information.
The present invention discloses a robot comprising the system for generating robot interaction content according to any of the above.
Compared with the prior art, the present invention has the following advantages: for application scenarios, existing robots generally generate interactive content through question-and-answer interaction in a fixed scene and cannot generate the robot's expression more accurately based on the current scene. A method for generating robot interaction content comprises: actively waking up a robot; acquiring user multimodal information; determining a user intent according to the user multimodal information; acquiring location scene information; and generating robot interaction content according to the user multimodal information, the user intent and the location scene information, in combination with the current robot life time axis. In this way, when the user is within a specific distance of the robot, the robot actively wakes up and generates interaction content more accurately according to the user's multimodal information and intent, combined with the location scene information and the robot's life time axis, so that it interacts and communicates with people more accurately and anthropomorphically. For people, daily life has a certain regularity; to make the robot communicate with people more anthropomorphically, within the 24 hours of a day the robot is also given actions such as sleeping, exercising, eating, dancing, reading, putting on makeup, and so on. Therefore, the present invention adds the robot's life time axis to the generation of the robot's interactive content, making the robot more anthropomorphic when interacting with people and giving it a human lifestyle within the life time axis; the method can improve the anthropomorphism of robot interaction content generation, enhance the human-computer interaction experience, and improve intelligence.
Brief Description of the Drawings
FIG. 1 is a flowchart of a method for generating robot interaction content according to Embodiment 1 of the present invention;
FIG. 2 is a schematic diagram of a system for generating robot interaction content according to Embodiment 2 of the present invention.
Detailed Description
Although the flowcharts describe the operations as sequential processing, many of the operations can be performed in parallel, concurrently or simultaneously, and the order of the operations can be rearranged. Processing may be terminated when its operations are completed, but may also include additional steps not shown in the drawings. Processing may correspond to methods, functions, procedures, subroutines, subprograms, and so on.
Computer devices include user devices and network devices. The user device or client includes, but is not limited to, a computer, a smartphone, a PDA, etc.; the network device includes, but is not limited to, a single network server, a server group composed of multiple network servers, or a cloud-computing-based cloud composed of a large number of computers or network servers. A computer device can operate on its own to implement the present invention, or it can access a network and implement the present invention by interoperating with other computer devices in the network. The network in which the computer device is located includes, but is not limited to, the Internet, a wide area network, a metropolitan area network, a local area network, a VPN network, and so on.
The terms "first," "second," and the like may be used herein to describe various units, but these units should not be limited by these terms; the terms are used only to distinguish one unit from another. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items. When a unit is said to be "connected" or "coupled" to another unit, it can be directly connected or coupled to the other unit, or intermediate units may be present.
The terminology used herein is for the purpose of describing specific embodiments only and is not intended to limit the exemplary embodiments. Unless the context clearly indicates otherwise, the singular forms "a" and "an" as used herein are also intended to include the plural. It should also be understood that the terms "comprising" and/or "including" as used herein specify the presence of the stated features, integers, steps, operations, units and/or components, without excluding the presence or addition of one or more other features, integers, steps, operations, units, components and/or combinations thereof.
The present invention is further described below with reference to the drawings and preferred embodiments.
Embodiment 1
As shown in FIG. 1, this embodiment discloses a method for generating robot interaction content, comprising:
S101: actively waking up a robot;
S102: acquiring user multimodal information;
S103: determining a user intent according to the user multimodal information;
S104: acquiring location scene information;
S105: generating robot interaction content according to the user multimodal information, the user intent and the location scene information, in combination with the current robot life time axis 300.
For application scenarios, existing robots generally generate interactive content through question-and-answer interaction in a fixed scene and cannot generate the robot's expression more accurately based on the current scene. A method for generating robot interaction content comprises: actively waking up a robot; acquiring user multimodal information; determining a user intent according to the user multimodal information; acquiring location scene information; and generating robot interaction content according to the user multimodal information, the user intent and the location scene information, in combination with the current robot life time axis. In this way, when the user is within a specific distance of the robot, the robot actively wakes up and generates interaction content more accurately according to the user's multimodal information and intent, combined with the location scene information and the robot's life time axis, so that it interacts and communicates with people more accurately and anthropomorphically. For people, daily life has a certain regularity; to make the robot communicate with people more anthropomorphically, within the 24 hours of a day the robot is also given actions such as sleeping, exercising, eating, dancing, reading, putting on makeup, and so on. Therefore, the present invention adds the robot's life time axis to the generation of the robot's interactive content, making the robot more anthropomorphic when interacting with people and giving it a human lifestyle within the life time axis; the method can improve the anthropomorphism of robot interaction content generation, enhance the human-computer interaction experience, and improve intelligence. The robot life time axis 300 is fitted and set up in advance; specifically, the robot life time axis 300 is a collection of parameters, and these parameters are transmitted to the system to generate the interaction content.
The multimodal information in this embodiment may be one or more of user expression, voice information, gesture information, scene information, image information, video information, face information, pupil and iris information, light-sensing information, fingerprint information and the like. In this embodiment the user's expression is preferred, which gives accurate recognition with high efficiency.
In this embodiment, being based on the life time axis specifically means: the robot's behavior is fitted to the time axis of a human's daily life, and the robot acts according to this fitted pattern; that is, the robot obtains its own behavior for a day and carries out that behavior on the basis of the life time axis, for example generating interaction content to communicate with humans. If the robot remains awake, it acts according to the behaviors on this time axis, and the robot's self-cognition is modified accordingly. The life time axis and variable parameters can modify attributes within the self-cognition, such as the mood value and the fatigue value, and can also automatically add new self-cognition information. For example, if there was previously no anger value, scenarios based on the life time axis and variable factors will automatically add it to the robot's self-cognition, following the scenarios that previously simulated human self-cognition.
For example, when the user is not in front of the robot, the robot's automatic light-sensing detection module is not triggered, so the robot remains dormant. When the user walks up to the robot, the automatic light-sensing detection module detects the user's approach, so the robot wakes itself up and recognizes the user's expression, combining it with the location scene information and the robot's life time axis. For example, if the current time is 6 p.m., the location scene is the doorway, it is the end of the user's working day and the user has just come home, then when the robot recognizes that the user's expression is happy, it wakes up and greets the user with a happy expression; when the user is unhappy, it plays a song and shows a sympathetic expression. If instead the current time is 9 a.m. and the location scene is a room, then when the robot recognizes a happy expression it wakes up and greets the user with a good-morning expression, and when the user is unhappy it plays a song and shows a pitying expression. The interaction content may be an expression, text, speech or the like.
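The doorway/bedroom example above amounts to a small rule table keyed on time slot, location scene and user expression. The following sketch is a hypothetical illustration only; the time slots, keys and response strings are assumptions, not values taken from the disclosure.

```python
# Hypothetical rule table for the example above; all keys and
# responses are illustrative assumptions.
RULES = {
    ("evening", "doorway", "happy"):   ("greet the user", "happy expression"),
    ("evening", "doorway", "unhappy"): ("play a song", "sympathetic expression"),
    ("morning", "room", "happy"):      ("greet the user", "good-morning expression"),
    ("morning", "room", "unhappy"):    ("play a song", "pitying expression"),
}

def pick_response(time_slot, scene, expression):
    # Fall back to a neutral behavior for combinations not in the table
    return RULES.get((time_slot, scene, expression),
                     ("stay idle", "neutral expression"))
```

A real system would derive such mappings from the fitted life time axis rather than hard-coding them, but the lookup captures how time, scene and expression jointly select the content.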
According to one example, the step of actively waking the robot comprises:
acquiring multimodal information of the user;
matching the user's multimodal information against preset wake-up parameters;
actively waking the robot if the user's multimodal information reaches the preset wake-up parameters.
In this way the user's multimodal information, for example the user's movements and expressions, is collected and compared against the preset wake-up parameters; if the preset wake-up parameters are reached, the robot is actively woken, and if not, it is not woken. For example, after a human approaches the robot, the robot's detection module detects the approach and the robot wakes itself up to interact with the human. The robot may also be woken by a human's expressions, movements or other dynamic behaviors; whereas static states, such as standing still without making expressions or movements, or lying motionless, may fail to reach the preset wake-up parameters and are therefore not treated as waking the robot, so the robot does not wake itself up when it detects such behaviors.
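The matching step can be pictured as comparing sensed activity levels against preset thresholds. The following is a minimal sketch under assumed names and threshold values; the disclosure does not specify how the wake-up parameters are represented.

```python
# Hypothetical wake-up check: the robot wakes only when the sensed
# multimodal information reaches the preset wake-up parameters.
WAKE_PARAMS = {"min_motion": 0.3, "min_proximity": 0.5}  # assumed thresholds

def should_wake(multimodal_info):
    motion = multimodal_info.get("motion", 0.0)        # gestures, expressions
    proximity = multimodal_info.get("proximity", 0.0)  # light-sensing approach
    return (motion >= WAKE_PARAMS["min_motion"]
            and proximity >= WAKE_PARAMS["min_proximity"])
```

Under this sketch, a person walking up to the robot (high motion, high proximity) triggers a wake-up, while someone standing or lying still stays below the motion threshold and leaves the robot dormant.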
According to one example, the method for generating the parameters of the robot life time axis comprises:
expanding the robot's self-cognition;
acquiring the parameters of the life time axis;
fitting the parameters of the robot's self-cognition to the parameters of the life time axis to generate the robot life time axis. In this way the life time axis is incorporated into the robot's own self-cognition, giving the robot a human-like life, for example adding to the robot the awareness of eating lunch at noon.
According to another example, the step of expanding the robot's self-cognition specifically comprises: combining life scenarios with the robot's self-awareness to form a self-cognition curve based on the life time axis. In this way the life time axis can be concretely incorporated into the robot's own parameters.
According to another example, the step of fitting the parameters of the robot's self-cognition to the parameters of the life time axis specifically comprises: using a probability algorithm to calculate, for the robot on the life time axis, the probability of each parameter changing after a time-axis scene parameter changes, thereby forming a fitted curve. In this way the parameters of the robot's self-cognition can be concretely fitted to the parameters of the life time axis.
For example, over the 24 hours of a day, the robot is given actions such as sleeping, exercising, eating, dancing, reading and putting on makeup. Each action affects the robot's own self-cognition. After the parameters on the life time axis are combined and fitted with the robot's own self-cognition, the robot's self-cognition comes to include mood, fatigue value, intimacy, favorability, number of interactions, the robot's three-dimensional cognition, age, height, weight, game scene value, game object value, location scene value, location object value and so on, so that the robot can itself recognize the location scene it is in, such as a café or a bedroom.
Within the time axis of a day the machine performs different actions, such as sleeping at night, eating at noon and exercising during the day; every one of these scenarios on the life time axis affects the self-cognition. The changes in these values are dynamically fitted with a probability model, fitting out the probability of each of these actions occurring on the time axis. Scene recognition: this kind of location scene recognition changes the geographic scene value in the self-cognition.
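The dynamic probabilistic fitting described above can be sketched as follows. This is a hypothetical illustration only: the activities, parameter names, shift sizes and probabilities are assumed numbers, not values from the disclosure, and a real system would learn them from the fitted curves.

```python
# Hypothetical dynamic fit: each scheduled activity on the time axis
# shifts self-cognition values (mood, fatigue, ...) with some probability.
import random

TRANSITIONS = {  # activity -> [(parameter, shift, probability)]; assumed values
    "exercise": [("fatigue", +0.2, 0.9), ("mood", +0.1, 0.6)],
    "sleep":    [("fatigue", -0.5, 0.95)],
}

def step(self_cognition, activity, rng):
    # Apply each probabilistic shift, clamping values to the [0, 1] range
    updated = dict(self_cognition)
    for param, shift, prob in TRANSITIONS.get(activity, []):
        if rng.random() < prob:
            updated[param] = min(1.0, max(0.0, updated[param] + shift))
    return updated

rng = random.Random(0)  # seeded so the sketch is reproducible
state = step({"mood": 0.5, "fatigue": 0.4}, "sleep", rng)
```

Iterating `step` over the 24 scheduled activities of a day yields one sampled trajectory of the self-cognition parameters; averaging many such trajectories would approximate the fitted curve the text describes.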
According to another example, the method further comprises: acquiring and analyzing a voice signal;
the generating of robot interaction content according to the user's multimodal information and the user's intention in combination with the current robot life time axis further comprises:
generating robot interaction content according to the user's multimodal information, the voice signal and the user's intention, in combination with the current robot life time axis. In this way the voice signal is combined into the generation of the robot interaction content, which is more accurate.
According to another example, the step of acquiring the location scene information specifically comprises: acquiring the location scene information from video information. Obtaining the location scene information from video is more accurate.
According to another example, the step of acquiring the location scene information specifically comprises: acquiring the location scene information from picture information. Acquisition from pictures saves the robot computation, making its response faster.
According to another example, the step of acquiring the location scene information specifically comprises: acquiring the location scene information from gesture information. Acquisition from gestures broadens the robot's range of application; for example, disabled users, or an owner who sometimes does not want to speak, can convey information to the robot through gestures.
Embodiment 2
As shown in FIG. 2, this embodiment discloses a system for generating robot interaction content, comprising:
an automatic light-sensing detection module 201, configured to actively wake the robot;
an expression analysis cloud processing module 202, configured to acquire multimodal information of a user;
an intention recognition module 203, configured to determine the user's intention according to the user's multimodal information;
a scene recognition module 204, configured to acquire location scene information;
a content generation module 205, configured to generate robot interaction content according to the user's multimodal information, the user's intention and the location scene information, in combination with the current robot life time axis sent by a robot life time axis module 301.
In this way, when the user is at a specific distance from the robot, the robot wakes itself up and generates interaction content more accurately by combining the user's multimodal information and intention with the location scene information and the robot's life time axis, so as to interact and communicate with people more accurately and in a more human-like way. A person's daily life follows certain regular patterns; to make the robot more human-like when communicating with people, over the 24 hours of a day the robot is also given actions such as sleeping, exercising, eating, dancing, reading and putting on makeup. The present invention therefore incorporates the robot's life time axis into the generation of its interaction content, so that the robot is more human-like when interacting with people and follows a human way of life within its life time axis. The method improves the anthropomorphism of the generated interaction content, enhances the human-machine interaction experience and increases intelligence.
For example, when the user is not in front of the robot, the robot's automatic light-sensing detection module is not triggered, so the robot remains dormant. When the user walks up to the robot, the automatic light-sensing detection module detects the user's approach, so the robot wakes itself up and recognizes the user's expression, combining it with the location scene information and the robot's life time axis. For example, if the current time is 6 p.m., the location scene is the doorway, it is the end of the user's working day and the user has just come home, then when the robot recognizes that the user's expression is happy, it wakes up and greets the user with a happy expression; when the user is unhappy, it plays a song and shows a sympathetic expression.
According to one example, the automatic light-sensing detection module is specifically configured to:
acquire multimodal information of the user;
match the user's multimodal information against preset wake-up parameters;
actively wake the robot if the user's multimodal information reaches the preset wake-up parameters.
In this way the user's multimodal information, for example the user's movements and expressions, is collected and compared against the preset wake-up parameters; if the preset wake-up parameters are reached, the robot is actively woken. For example, after a human approaches the robot, the robot's detection module detects the approach and the robot wakes itself up to interact with the human. The robot may also be woken by a human's expressions, movements or other dynamic behaviors; whereas static states, such as standing still without making expressions or movements, or lying motionless, may fail to reach the preset wake-up parameters and are therefore not treated as waking the robot, so the robot does not wake itself up when it detects such behaviors.
According to one example, the system includes a time-axis-based artificial intelligence cloud processing module, configured to:
expand the robot's self-cognition;
acquire the parameters of the life time axis;
fit the parameters of the robot's self-cognition to the parameters of the life time axis to generate the robot life time axis.
In this way the life time axis is incorporated into the robot's own self-cognition, giving the robot a human-like life, for example adding to the robot the awareness of eating lunch at noon.
According to another example, the time-axis-based artificial intelligence cloud processing module is further configured to: combine life scenarios with the robot's self-awareness to form a self-cognition curve based on the life time axis. In this way the life time axis can be concretely incorporated into the robot's own parameters.
According to another example, the time-axis-based artificial intelligence cloud processing module is further configured to: use a probability algorithm to calculate, for the robot on the life time axis, the probability of each parameter changing after a time-axis scene parameter changes, thereby forming a fitted curve. In this way the parameters of the robot's self-cognition can be concretely fitted to the parameters of the life time axis. The probability algorithm may be a Bayesian probability algorithm.
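Purely as an illustration of a Bayesian probability update on a single self-cognition parameter, consider the sketch below. The prior, likelihood and evidence values are assumed numbers chosen for the example; the disclosure does not specify them.

```python
# Hypothetical Bayesian update: posterior probability that the robot's
# mood improves, given that a time-axis scene (e.g. "exercise") occurred.
def bayes_update(prior, likelihood, evidence):
    # Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)
    return likelihood * prior / evidence

# Assumed numbers: P(mood up) = 0.4, P(scene | mood up) = 0.6, P(scene) = 0.5
posterior = bayes_update(0.4, 0.6, 0.5)  # approximately 0.48
```

Repeating such updates for every parameter after every scene change on the time axis would produce the per-parameter change probabilities that the fitted curve records.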
For example, over the 24 hours of a day, the robot is given actions such as sleeping, exercising, eating, dancing, reading and putting on makeup. Each action affects the robot's own self-cognition. After the parameters on the life time axis are combined and fitted with the robot's own self-cognition, the robot's self-cognition comes to include mood, fatigue value, intimacy, favorability, number of interactions, the robot's three-dimensional cognition, age, height, weight, game scene value, game object value, location scene value, location object value and so on, so that the robot can itself recognize the location scene it is in, such as a café or a bedroom.
Within the time axis of a day the machine performs different actions, such as sleeping at night, eating at noon and exercising during the day; every one of these scenarios on the life time axis affects the self-cognition. The changes in these values are dynamically fitted with a probability model, fitting out the probability of each of these actions occurring on the time axis. Scene recognition: this kind of location scene recognition changes the geographic scene value in the self-cognition.
According to another example, the system further comprises: a voice analysis cloud processing module, configured to acquire and analyze a voice signal;
the content generation module is further configured to: generate robot interaction content according to the user's multimodal information, the voice signal and the user's intention, in combination with the current robot life time axis. In this way the voice signal is combined into the generation of the robot interaction content, which is more accurate.
According to another example, the scene recognition module is specifically configured to acquire the location scene information from video information. Obtaining the location scene information from video is more accurate.
According to another example, the scene recognition module is specifically configured to acquire the location scene information from picture information. Acquisition from pictures saves the robot computation, making its response faster.
According to another example, the scene recognition module is specifically configured to acquire the location scene information from gesture information. Acquisition from gestures broadens the robot's range of application; for example, disabled users, or an owner who sometimes does not want to speak, can convey information to the robot through gestures.
This embodiment also discloses a robot, comprising a system for generating robot interaction content as described in any of the above.
The foregoing is a further detailed description of the present invention in connection with specific preferred embodiments, and the specific implementation of the present invention should not be considered limited to these descriptions. For a person of ordinary skill in the art to which the present invention belongs, several simple deductions or substitutions may be made without departing from the concept of the present invention, and all of them should be regarded as falling within the scope of protection of the present invention.

Claims (21)

  1. A method for generating robot interaction content, characterized by comprising:
    actively waking a robot;
    acquiring multimodal information of a user;
    determining the user's intention according to the user's multimodal information;
    acquiring location scene information;
    generating robot interaction content according to the user's multimodal information, the user's intention and the location scene information, in combination with a current robot life time axis.
  2. The generation method according to claim 1, characterized in that the step of actively waking the robot comprises:
    acquiring multimodal information of the user;
    matching the user's multimodal information against preset wake-up parameters;
    actively waking the robot if the user's multimodal information reaches the preset wake-up parameters.
  3. The generation method according to claim 1, characterized in that the method further comprises: acquiring and analyzing a voice signal;
    the generating of robot interaction content according to the user's multimodal information and the user's intention in combination with the current robot life time axis further comprises:
    generating robot interaction content according to the user's multimodal information, the voice signal and the user's intention, in combination with the current robot life time axis.
  4. The generation method according to claim 1, characterized in that the method for generating the parameters of the robot life time axis comprises:
    expanding the robot's self-cognition;
    acquiring the parameters of the life time axis;
    fitting the parameters of the robot's self-cognition to the parameters of the life time axis to generate the robot life time axis.
  5. The generation method according to claim 4, characterized in that the step of expanding the robot's self-cognition specifically comprises: combining life scenarios with the robot's self-awareness to form a self-cognition curve based on the life time axis.
  6. The generation method according to claim 4, characterized in that the step of fitting the parameters of the robot's self-cognition to the parameters of the life time axis specifically comprises: using a probability algorithm to calculate, for the robot on the life time axis, the probability of each parameter changing after a time-axis scene parameter changes, thereby forming a fitted curve.
  7. The generation method according to claim 4, characterized in that the life time axis refers to a time axis covering the 24 hours of a day, and the parameters of the life time axis at least include the daily-life behaviors the user performs on the life time axis and parameter values representing those behaviors.
  8. The generation method according to claim 1, characterized in that the step of acquiring the location scene information specifically comprises: acquiring the location scene information from video information.
  9. The generation method according to claim 1, characterized in that the step of acquiring the location scene information specifically comprises: acquiring the location scene information from picture information.
  10. The generation method according to claim 1, characterized in that the step of acquiring the location scene information specifically comprises: acquiring the location scene information from gesture information.
  11. A system for generating robot interaction content, characterized by comprising:
    an automatic light-sensing detection module, configured to actively wake a robot;
    an expression analysis cloud processing module, configured to acquire multimodal information of a user;
    an intention recognition module, configured to determine the user's intention according to the user's multimodal information;
    a scene recognition module, configured to acquire location scene information;
    a content generation module, configured to generate robot interaction content according to the user's multimodal information, the user's intention and the location scene information, in combination with a current robot life time axis.
  12. The generation system according to claim 11, characterized in that the automatic light-sensing detection module is specifically configured to:
    acquire multimodal information of the user;
    match the user's multimodal information against preset wake-up parameters;
    actively wake the robot if the user's multimodal information reaches the preset wake-up parameters.
  13. The generation system according to claim 11, characterized in that the system further comprises: a voice analysis cloud processing module, configured to acquire and analyze a voice signal;
    the content generation module is further configured to: generate robot interaction content according to the user's multimodal information, the voice signal and the user's intention, in combination with the current robot life time axis.
  14. The generation system according to claim 11, characterized in that the system comprises a time-axis-based artificial intelligence cloud processing module, configured to:
    expand the robot's self-cognition;
    acquire the parameters of the life time axis;
    fit the parameters of the robot's self-cognition to the parameters of the life time axis to generate the robot life time axis.
  15. The generation system according to claim 14, characterized in that the time-axis-based artificial intelligence cloud processing module is further configured to: combine life scenarios with the robot's self-awareness to form a self-cognition curve based on the life time axis.
  16. The generation system according to claim 14, characterized in that the time-axis-based artificial intelligence cloud processing module is further configured to: use a probability algorithm to calculate, for the robot on the life time axis, the probability of each parameter changing after a time-axis scene parameter changes, thereby forming a fitted curve.
  17. The generation system according to claim 14, characterized in that the life time axis refers to a time axis covering the 24 hours of a day, and the parameters of the life time axis at least include the daily-life behaviors the user performs on the life time axis and parameter values representing those behaviors.
  18. The generation system according to claim 11, characterized in that the scene recognition module is specifically configured to acquire the location scene information from video information.
  19. The generation system according to claim 11, characterized in that the scene recognition module is specifically configured to acquire the location scene information from picture information.
  20. The generation system according to claim 11, characterized in that the scene recognition module is specifically configured to acquire the location scene information from gesture information.
  21. A robot, characterized by comprising a system for generating robot interaction content according to any one of claims 10 to 20.
PCT/CN2016/087740 2016-06-29 2016-06-29 一种机器人交互内容的生成方法、系统及机器人 Ceased WO2018000261A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201680001750.8A CN106537293A (zh) 2016-06-29 2016-06-29 一种机器人交互内容的生成方法、系统及机器人
PCT/CN2016/087740 WO2018000261A1 (zh) 2016-06-29 2016-06-29 一种机器人交互内容的生成方法、系统及机器人

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/087740 WO2018000261A1 (zh) 2016-06-29 2016-06-29 一种机器人交互内容的生成方法、系统及机器人

Publications (1)

Publication Number Publication Date
WO2018000261A1 true WO2018000261A1 (zh) 2018-01-04

Family

ID=58335931

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/087740 Ceased WO2018000261A1 (zh) 2016-06-29 2016-06-29 一种机器人交互内容的生成方法、系统及机器人

Country Status (2)

Country Link
CN (1) CN106537293A (zh)
WO (1) WO2018000261A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108363492B (zh) * 2018-03-09 2021-06-25 南京阿凡达机器人科技有限公司 一种人机交互方法及交互机器人
CN109176535B (zh) * 2018-07-16 2021-10-19 北京光年无限科技有限公司 基于智能机器人的交互方法及系统
CN112001248B (zh) 2020-07-20 2024-03-01 北京百度网讯科技有限公司 主动交互的方法、装置、电子设备和可读存储介质
CN112099630B (zh) * 2020-09-11 2024-04-05 济南大学 一种多模态意图逆向主动融合的人机交互方法

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102103707A (zh) * 2009-12-16 2011-06-22 群联电子股份有限公司 情感引擎、情感引擎系统及电子装置的控制方法
CN105345818A (zh) * 2015-11-04 2016-02-24 深圳好未来智能科技有限公司 带有情绪及表情模块的3d视频互动机器人
CN105409197A (zh) * 2013-03-15 2016-03-16 趣普科技公司 用于提供持久伙伴装置的设备和方法
CN105490918A (zh) * 2015-11-20 2016-04-13 深圳狗尾草智能科技有限公司 一种机器人主动与主人交互的系统及方法
CN105511608A (zh) * 2015-11-30 2016-04-20 北京光年无限科技有限公司 基于智能机器人的交互方法及装置、智能机器人

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1956528B1 (en) * 2007-02-08 2018-10-03 Samsung Electronics Co., Ltd. Apparatus and method for expressing behavior of software robot


Also Published As

Publication number Publication date
CN106537293A (zh) 2017-03-22

Similar Documents

Publication Publication Date Title
CN107894833B (zh) 基于虚拟人的多模态交互处理方法及系统
CN106956271B (zh) 预测情感状态的方法和机器人
CN108227932B (zh) 交互意图确定方法及装置、计算机设备及存储介质
WO2018000268A1 (zh) 一种机器人交互内容的生成方法、系统及机器人
WO2018000259A1 (zh) 一种机器人交互内容的生成方法、系统及机器人
US11221669B2 (en) Non-verbal engagement of a virtual assistant
CN108000526B (zh) 用于智能机器人的对话交互方法及系统
US8321221B2 (en) Speech communication system and method, and robot apparatus
WO2018006374A1 (zh) 一种基于主动唤醒的功能推荐方法、系统及机器人
CN108334583A (zh) 情感交互方法及装置、计算机可读存储介质、计算机设备
CN110110169A (zh) 人机交互方法及人机交互装置
KR20200024675A (ko) 휴먼 행동 인식 장치 및 방법
CN107797663A (zh) 基于虚拟人的多模态交互处理方法及系统
WO2018006372A1 (zh) 一种基于意图识别控制家电的方法、系统及机器人
WO2018000267A1 (zh) 一种机器人交互内容的生成方法、系统及机器人
WO2018006370A1 (zh) 一种虚拟3d机器人的交互方法、系统及机器人
WO2018000261A1 (zh) 一种机器人交互内容的生成方法、系统及机器人
WO2018000260A1 (zh) 一种机器人交互内容的生成方法、系统及机器人
WO2018006371A1 (zh) 一种同步语音及虚拟动作的方法、系统及机器人
Thakur et al. A complex activity based emotion recognition algorithm for affect aware systems
CN117668763B (zh) 基于多模态的数字人一体机及其多模态感知识别方法
WO2018000258A1 (zh) 一种机器人交互内容的生成方法、系统及机器人
WO2018000266A1 (zh) 一种机器人交互内容的生成方法、系统及机器人
CN114047901B (zh) 人机交互方法及智能设备
Hao et al. Proposal of initiative service model for service robot

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16906662

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16906662

Country of ref document: EP

Kind code of ref document: A1