WO2018000261A1 - Method and system for generating robot interaction content, and robot - Google Patents
Method and system for generating robot interaction content, and robot
- Publication number
- WO2018000261A1 (PCT/CN2016/087740)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- robot
- time axis
- information
- user
- life time
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
Definitions
- the invention relates to the field of robot interaction technology, and in particular to a method, a system, and a robot for generating robot interaction content.
- the object of the present invention is to provide a method, a system, and a robot for generating robot interaction content, in which the robot is actively woken up and the user's facial expression is automatically detected, so as to improve the anthropomorphism of robot interaction content generation, enhance the human-computer interaction experience, and improve intelligence.
- a method for generating robot interactive content comprising:
- the robot interaction content is generated according to the current multi-modality information, the user intention and the location scene information, in combination with the current robot life time axis.
- the step of actively waking up the robot comprises: when the acquired user multimodal information reaches a preset wake-up parameter, the robot wakes up actively.
- the method for generating the parameters of the robot life time axis includes: expanding the self-cognition of the robot;
- the self-cognitive parameters of the robot are fitted to the parameters in the life time axis to generate the robot life time axis.
- the step of expanding the self-cognition of the robot specifically comprises: combining the life scene with the self-cognition of the robot to form a self-cognitive curve based on the life time axis.
- the step of fitting the self-cognitive parameters of the robot to the parameters in the life time axis comprises: using a probability algorithm to calculate the probability of each parameter of the robot changing on the life time axis after a time axis scene parameter changes, thereby forming a fitted curve.
- the life time axis refers to a time axis spanning the 24 hours of a day;
- the parameters in the life time axis include at least the daily life behaviors performed by the user on the life time axis and the parameter values representing those behaviors.
- the method further comprises: acquiring and analyzing a voice signal;
- the step of generating the robot interaction content according to the user multimodal information and the user intention, in combination with the current robot life time axis, further includes:
- the robot interaction content is generated according to the user multimodal information, the voice signal, and the user intention, in combination with the current robot life time axis.
- the step of acquiring location scene information specifically includes: acquiring location scene information by using video information.
- the step of acquiring location scene information specifically includes: acquiring location scene information by using picture information.
- the step of acquiring location scene information specifically includes: acquiring location scene information by using gesture information.
- the invention discloses a system for generating robot interactive content, comprising:
- a light sensing automatic detection module, configured to actively wake up the robot;
- an expression analysis cloud processing module, configured to acquire user multimodal information;
- an intent identification module, configured to determine a user intention according to the user multimodal information;
- a scene recognition module, configured to acquire location scene information;
- a content generating module, configured to generate the robot interaction content according to the user multimodal information, the user intention, and the location scene information, in combination with the current robot life time axis.
- the light sensing automatic detection module is specifically configured to: wake up the robot actively when the acquired user multimodal information reaches a preset wake-up parameter.
- the system comprises a time axis-based artificial intelligence cloud processing module, configured to:
- fit the self-cognitive parameters of the robot to the parameters in the life time axis to generate the robot life time axis.
- the time axis-based artificial intelligence cloud processing module is further configured to combine a life scene with the self-cognition of the robot to form a self-cognitive curve based on the life time axis.
- the time axis-based artificial intelligence cloud processing module is further configured to: use a probability algorithm to calculate the probability of each parameter of the robot changing on the life time axis after a time axis scene parameter changes, to form a fitted curve.
- the life time axis refers to a time axis spanning the 24 hours of a day;
- the parameters in the life time axis include at least the daily life behaviors performed by the user on the life time axis and the parameter values representing those behaviors.
- the system further includes: a voice analysis cloud processing module, configured to acquire and analyze the voice signal;
- the content generating module is further configured to: generate the robot interaction content according to the user multimodal information, the voice signal, and the user intention, in combination with the current robot life time axis.
- the scene recognition module is specifically configured to acquire location scene information by using video information.
- the scene recognition module is specifically configured to acquire location scene information by using picture information.
- the scene recognition module is specifically configured to acquire location scene information by using gesture information.
- the invention discloses a robot comprising the system for generating robot interaction content as described above.
- existing robots generally generate interaction content by question-and-answer interaction in a fixed scene, and cannot generate robot expressions more accurately based on the current scene.
- a method for generating robot interaction content, comprising: actively waking up the robot; acquiring user multimodal information; determining a user intention according to the user multimodal information; acquiring location scene information; and generating the robot interaction content according to the user multimodal information, the user intention, and the location scene information, in combination with the current robot life time axis.
- when the user comes within a specific distance of the robot, the robot actively wakes up and recognizes the user, and generates robot interaction content according to the user's multimodal information and intention, combined with the location scene information and the robot's life time axis, thereby interacting and communicating with people more accurately and anthropomorphically.
- for people, everyday life has a certain regularity. To make the robot's communication with people more anthropomorphic, the robot is made to sleep, exercise, eat, dance, read, put on makeup, and so on over the 24 hours of a day.
- the present invention adds the life time axis in which the robot is located to the generation of the robot's interaction content, making the robot more humanized when interacting with humans, so that the robot has a human lifestyle along the life time axis; the method can thus enhance the anthropomorphism of robot interaction content generation.
- FIG. 1 is a flowchart of a method for generating robot interaction content according to Embodiment 1 of the present invention;
- FIG. 2 is a schematic diagram of a system for generating robot interaction content according to Embodiment 2 of the present invention.
- Computer devices include user devices and network devices.
- the user equipment or the client includes but is not limited to a computer, a smart phone, a PDA, etc.;
- the network device includes but is not limited to a single network server, a server group composed of multiple network servers, or a cloud composed of a large number of computers or network servers based on cloud computing.
- the computer device can operate separately to implement the invention, and can also access a network and implement the invention by interworking with other computer devices in the network.
- the network in which the computer device is located includes, but is not limited to, the Internet, a wide area network, a metropolitan area network, a local area network, a VPN network, and the like.
- terms such as "first" and "second" may be used herein to describe various elements, but the elements should not be limited by these terms; the terms are used only to distinguish one element from another.
- the term “and/or” used herein includes any and all combinations of one or more of the associated listed items. When a unit is referred to as being “connected” or “coupled” to another unit, it can be directly connected or coupled to the other unit, or an intermediate unit can be present.
- a method for generating robot interaction content, as shown in FIG. 1, includes:
- S101, actively waking up the robot;
- S102, acquiring user multimodal information;
- S103, determining a user intention according to the user multimodal information;
- S104, acquiring location scene information;
- S105, generating the robot interaction content according to the user multimodal information, the user intention, and the location scene information, in combination with the current robot life time axis 300.
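- the flow from S101 to S105 can be illustrated with a minimal runnable sketch; every function name and value below is a hypothetical stand-in, not the disclosed implementation:

```python
# Minimal sketch of the S101-S105 flow; every function is an illustrative stub.

def wake_up(light_level: float, threshold: float = 0.5) -> bool:
    """S101: the light-sensing module wakes the robot when triggered."""
    return light_level >= threshold

def get_multimodal_info() -> dict:
    """S102: stand-in for expression/voice/gesture acquisition."""
    return {"expression": "happy"}

def identify_intent(info: dict) -> str:
    """S103: trivial intent mapping, for illustration only."""
    return "greet" if info["expression"] == "happy" else "comfort"

def get_location_scene() -> str:
    """S104: stand-in for video/picture/gesture-based scene recognition."""
    return "doorway"

def compose_content(info: dict, intent: str, scene: str, hour: int) -> str:
    """S105: combine all signals with the life-time-axis state (here, the hour)."""
    return f"{intent} the user at the {scene} at {hour}:00 ({info['expression']})"

if wake_up(light_level=0.8):
    info = get_multimodal_info()
    print(compose_content(info, identify_intent(info), get_location_scene(), hour=18))
```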
- the robot life time axis 300 is fitted and set in advance. Specifically, the robot life time axis 300 is a series of parameter collections, and these parameters are transmitted to the system to generate the interaction content.
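- one plausible representation of such a parameter collection is an hour-indexed mapping; the patent does not specify a data format, so the structure and values below are assumptions:

```python
# Hypothetical hour-indexed parameter collection for the life time axis 300.
# The behaviors are examples from the description (sleep, exercise, eat, ...).
life_time_axis = {
    7:  {"behavior": "exercise", "fatigue": 0.3, "mood": 0.7},
    12: {"behavior": "eat",      "fatigue": 0.4, "mood": 0.8},
    18: {"behavior": "greet",    "fatigue": 0.5, "mood": 0.6},
    23: {"behavior": "sleep",    "fatigue": 0.9, "mood": 0.5},
}

def state_at(hour: int) -> dict:
    """Return the most recent entry at or before the hour (wrapping overnight)."""
    earlier = [h for h in life_time_axis if h <= hour]
    return life_time_axis[max(earlier) if earlier else max(life_time_axis)]

print(state_at(19))  # -> the 18:00 entry
```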
- the multimodal information in this embodiment may be one of user expression, voice information, gesture information, scene information, image information, video information, face information, pupil iris information, light sense information, and fingerprint information.
- among these, the user's expression is preferred, as expression recognition is both accurate and efficient.
- the life time axis is specifically: the robot is fitted to the time axis of human daily life, and the robot's behavior follows this fitting, that is, the robot's behavior over a day is obtained,
- which allows the robot to perform its own behavior based on the life time axis, such as generating interaction content and communicating with humans. If the robot is awake all day, it acts according to the behavior on this time axis, and the robot's self-cognition is also changed according to this time axis.
- the life time axis and variable parameters can change the attributes of self-cognition, such as the mood value and the fatigue value, and can also automatically add new self-cognition information, such as an anger value that did not exist before: based on the life time axis and a scene involving the variable factors, new values are automatically added to the robot's self-cognition, drawing on scenes that previously simulated human self-cognition. A minimal sketch of such an update follows.
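- the sketch assumes a simple keyed store of self-cognition values; the class and attribute names are hypothetical:

```python
# Hypothetical self-cognition store that updates existing attributes (mood,
# fatigue) and adds new ones on first use (e.g. an anger value), as described.
class SelfCognition:
    def __init__(self):
        self.values = {"mood": 0.5, "fatigue": 0.2}

    def apply_scene(self, attribute: str, delta: float) -> None:
        # Unknown attributes are created automatically, mirroring the
        # description of a previously absent anger value being added.
        current = self.values.get(attribute, 0.0)
        self.values[attribute] = min(1.0, max(0.0, current + delta))

cog = SelfCognition()
cog.apply_scene("fatigue", 0.3)  # a tiring scene raises the fatigue value
cog.apply_scene("anger", 0.4)    # a new attribute is added on first use
print(cog.values)
```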
- when the user is not in front of the robot, the robot's light-sensing automatic detection module is not triggered, so the robot stays in a sleep state.
- when the user approaches, the robot's light-sensing automatic detection module detects the user's proximity, so the robot actively wakes up, recognizes the user's expression, and combines it with the location scene information and the robot's life time axis. For example, the current time is 6 pm and the location scene is the doorway; it is the user's off-duty time and the user has just returned home. If the robot recognizes that the user's expression is happy, it actively greets the user with a happy expression; if the user is unhappy, it actively plays a song with a sympathetic expression.
- the interaction content can be an expression, text, or voice.
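- the 6 pm doorway example above can be sketched as a simple rule; the mapping is illustrative, not an exhaustive policy:

```python
# Sketch of the worked example: 6 pm, doorway, user just home from work.
def respond(expression: str, hour: int, scene: str) -> tuple[str, str]:
    if hour == 18 and scene == "doorway":  # off-duty homecoming context
        if expression == "happy":
            return ("greeting", "happy expression")
        return ("play a song", "sympathetic expression")
    return ("default reply", "neutral expression")

print(respond("happy", 18, "doorway"))    # ('greeting', 'happy expression')
print(respond("unhappy", 18, "doorway"))  # ('play a song', 'sympathetic expression')
```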
- the step of actively waking up the robot includes: when the acquired user multimodal information reaches a preset wake-up parameter, the robot wakes up actively.
- the user multimodal information, for example the user's motion and the user's expression, is compared against the preset wake-up parameters: if they are reached, the robot is actively woken up; if they are not reached, the robot will not wake up.
- for example, when a human approaches, the robot's detection module detects the proximity and the robot actively wakes itself up to interact with the human.
- expressions, actions, or other dynamic behaviors made by humans can also wake the robot. If a human is standing still, makes no expressions or movements, or is in a static state such as lying still, the preset wake-up parameters may not be reached; such behaviors are not treated as wake-up triggers, and the robot does not actively wake itself when it detects them. A sketch of this threshold check follows.
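- a minimal sketch of the wake-up check, assuming numeric activity scores and thresholds (both hypothetical):

```python
# Hypothetical wake-up check: dynamic behaviors (motion, expression changes)
# must reach preset parameters; a user standing or lying still does not wake
# the robot, matching the description above.
WAKE_PARAMS = {"motion": 0.2, "expression_change": 0.1}

def should_wake(observed: dict) -> bool:
    return any(observed.get(k, 0.0) >= v for k, v in WAKE_PARAMS.items())

print(should_wake({"motion": 0.5}))                            # True: user approaching
print(should_wake({"motion": 0.0, "expression_change": 0.0}))  # False: static user
```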
- the method for generating parameters of the robot life time axis includes:
- the self-cognitive parameters of the robot are fitted to the parameters in the life time axis to generate a robot life time axis.
- the life time axis is added to the self-cognition of the robot itself, so that the robot has an anthropomorphic life. For example, add the cognition of lunch to the robot.
- the step of expanding the self-cognition of the robot specifically includes: combining the life scene with the self-awareness of the robot to form a self-cognitive curve based on the life time axis.
- the life time axis can be specifically added to the parameters of the robot itself.
- the step of fitting the self-cognitive parameters of the robot to the parameters in the life time axis comprises: using a probability algorithm to calculate the probability of each parameter of the robot changing on the life time axis after a time axis scene parameter changes, thereby forming a fitted curve. In this way, the robot's self-cognitive parameters can be fitted to the parameters in the life time axis.
- the robot performs actions such as sleeping, exercising, eating, dancing, reading, and putting on makeup over the course of a day. Each action affects the robot's self-cognition, so the parameters on the life time axis are combined with the robot's own self-cognition.
- the robot's self-cognition includes the mood value, fatigue value, intimacy, goodness, number of interactions, the robot's three-dimensional cognition, age, height, weight, game scene value, game object value, location scene value, location object value, and the like. The location scene value allows the robot to identify the scene where it is located, such as a cafe or a bedroom.
- the machine performs different actions at different points on the day's time axis, such as sleeping at night, eating at noon, and exercising during the day. All the scenes on the life time axis affect self-cognition. These numerical changes are modeled by dynamically fitting a probability model to the probability that each of these actions occurs at each point on the time axis; a sketch of such a fit follows.
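- since the disclosure does not detail the probability model, the sketch below uses a plain frequency estimate over logged behavior as a stand-in:

```python
# Fit per-hour action probabilities from logged daily behavior; a simple
# frequency estimate standing in for the unspecified probability model.
from collections import Counter, defaultdict

log = [(23, "sleep"), (23, "sleep"), (23, "read"), (12, "eat"), (12, "eat")]

counts = defaultdict(Counter)
for hour, action in log:
    counts[hour][action] += 1

fitted_curve = {
    hour: {action: n / sum(c.values()) for action, n in c.items()}
    for hour, c in counts.items()
}
print(fitted_curve[23])  # {'sleep': 0.67, 'read': 0.33} approximately
```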
- scene recognition: this type of scene recognition changes the value of the geographic scene in self-cognition.
- the method further comprises: acquiring and analyzing a voice signal;
- the step of generating the robot interaction content according to the user multimodal information and the user intention, in combination with the current robot life time axis, further includes:
- the robot interaction content is generated according to the user multimodal information, the voice signal, and the user intention, in combination with the current robot life time axis. In this way, the robot interaction content can be generated in combination with the voice signal, which is more accurate.
- the step of acquiring location scene information specifically includes: acquiring location scene information by using video information.
- location scene information can be obtained through video, and the video acquisition is more accurate.
- the step of acquiring location scene information specifically includes: acquiring location scene information by using picture information.
- acquiring a picture instead of video saves computation and makes the robot's reaction faster.
- the step of acquiring location scene information specifically includes: acquiring location scene information by using gesture information.
- gestures make the robot applicable to more users: for example, a disabled user, or an owner who sometimes does not want to talk, can use gestures to convey information to the robot. A sketch of selecting among these sources follows.
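- an illustrative selection over the three disclosed scene-information sources, preferring video (more accurate), then a picture (cheaper), then a gesture; the recognizers are hypothetical stubs:

```python
# Illustrative source selection for location scene information; the three
# recognizers are stand-ins for real models.
from typing import Callable, Optional

def recognize_scene(video=None, picture=None, gesture=None) -> Optional[str]:
    sources: list[tuple[object, Callable]] = [
        (video,   lambda v: f"scene-from-video({v})"),
        (picture, lambda p: f"scene-from-picture({p})"),
        (gesture, lambda g: f"scene-from-gesture({g})"),
    ]
    for payload, recognize in sources:
        if payload is not None:
            return recognize(payload)
    return None

print(recognize_scene(picture="frame-001"))  # scene-from-picture(frame-001)
```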
- a system for generating interactive content of a robot includes:
- the light sensing automatic detecting module 201 is configured to actively wake up the robot
- the expression analysis cloud processing module 202 is configured to acquire user multimodal information
- the intent identification module 203 is configured to determine a user intent according to the user multimodal information
- a scene recognition module 204 configured to acquire location scene information
- the content generation module 205 is configured to generate the robot interaction content according to the user multimodal information, the user intention, and the location scene information, in combination with the current robot life time axis sent by the robot life time axis module 301.
- the light sensing automatic detection module is specifically configured to: wake up the robot actively when the acquired user multimodal information reaches a preset wake-up parameter.
- the system includes a time axis-based artificial intelligence cloud processing module, configured to: fit the self-cognitive parameters of the robot to the parameters in the life time axis to generate the robot life time axis.
- the time axis-based artificial intelligence cloud processing module is further configured to combine a life scene with the self-cognition of the robot to form a self-cognitive curve based on the life time axis.
- the time axis-based artificial intelligence cloud processing module is further configured to: use a probability algorithm to calculate the probability of each parameter of the robot changing on the life time axis after a time axis scene parameter changes, to form a fitted curve.
- the probability algorithm can be a Bayesian probability algorithm, sketched below.
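- a minimal Bayesian update sketch, since the description names the algorithm without detailing it; the prior and likelihoods are invented for illustration:

```python
# Update the belief that the user is asleep at a given hour from a noisy
# "no motion" sensor reading, via Bayes' rule.
def bayes_update(prior: float, likelihood_true: float, likelihood_false: float) -> float:
    evidence = prior * likelihood_true + (1 - prior) * likelihood_false
    return prior * likelihood_true / evidence

# Prior that the user sleeps at 23:00 is 0.7; "no motion" is observed 90% of
# the time when asleep and 20% of the time when awake.
print(bayes_update(0.7, 0.9, 0.2))  # ~0.913
```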
- the system further includes: a voice analysis cloud processing module, configured to acquire and analyze the voice signal;
- the content generating module is further configured to: generate the robot interaction content according to the user multimodal information, the voice signal, and the user intention, in combination with the current robot life time axis. In this way, the robot interaction content can be generated in combination with the voice signal, which is more accurate.
- the scene recognition module is specifically configured to acquire location scene information by using video information.
- the scene recognition module is specifically configured to acquire location scene information by using picture information.
- the scene recognition module is specifically configured to acquire location scene information by using gesture information.
- a robot including a robot interaction content generation system according to any of the above.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- General Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Biophysics (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Computational Linguistics (AREA)
- Artificial Intelligence (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Life Sciences & Earth Sciences (AREA)
- Health & Medical Sciences (AREA)
- Manipulator (AREA)
- Toys (AREA)
Abstract
A method for generating robot interaction content, comprising: actively waking up a robot (S101); acquiring user multimodal information (S102); determining a user intention according to the user multimodal information (S103); acquiring location scene information (S104); and generating robot interaction content by combining the user multimodal information, the user intention, and the location scene information with the current robot life time axis (S105). The robot's life time axis is added to the generation of robot interaction content, so that the robot is more humanized when interacting with a person and has a human lifestyle along the life time axis. With this method, the anthropomorphism of robot interaction content generation, the human-robot interaction experience, and intelligence can be improved.
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201680001750.8A CN106537293A (zh) | 2016-06-29 | 2016-06-29 | Method, system and robot for generating robot interaction content |
| PCT/CN2016/087740 WO2018000261A1 (fr) | 2016-06-29 | 2016-06-29 | Method and system for generating robot interaction content, and robot |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/CN2016/087740 WO2018000261A1 (fr) | 2016-06-29 | 2016-06-29 | Method and system for generating robot interaction content, and robot |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2018000261A1 true WO2018000261A1 (fr) | 2018-01-04 |
Family
ID=58335931
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2016/087740 Ceased WO2018000261A1 (fr) | Method and system for generating robot interaction content, and robot | 2016-06-29 | 2016-06-29 |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN106537293A (fr) |
| WO (1) | WO2018000261A1 (fr) |
Families Citing this family (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108363492B (zh) * | 2018-03-09 | 2021-06-25 | Nanjing AvatarMind Robot Technology Co., Ltd. | Human-computer interaction method and interactive robot |
| CN109176535B (zh) * | 2018-07-16 | 2021-10-19 | Beijing Guangnian Wuxian Technology Co., Ltd. | Interaction method and system based on intelligent robot |
| CN112001248B (zh) | 2020-07-20 | 2024-03-01 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Active interaction method and apparatus, electronic device, and readable storage medium |
| CN112099630B (zh) * | 2020-09-11 | 2024-04-05 | University of Jinan | Human-computer interaction method based on multimodal intent reverse active fusion |
Family Cites Families (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP1956528B1 (fr) * | 2007-02-08 | 2018-10-03 | Samsung Electronics Co., Ltd. | Apparatus and method for expressing the behavior of a software robot |
-
2016
- 2016-06-29 CN CN201680001750.8A patent/CN106537293A/zh active Pending
- 2016-06-29 WO PCT/CN2016/087740 patent/WO2018000261A1/fr not_active Ceased
Patent Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN102103707A (zh) * | 2009-12-16 | 2011-06-22 | Phison Electronics Corp. | Emotion engine, emotion engine system, and control method for an electronic device |
| CN105409197A (zh) * | 2013-03-15 | 2016-03-16 | Jibo, Inc. | Apparatus and methods for providing a persistent companion device |
| CN105345818A (zh) * | 2015-11-04 | 2016-02-24 | Shenzhen Haoweilai Intelligent Technology Co., Ltd. | 3D video interactive robot with emotion and expression modules |
| CN105490918A (zh) * | 2015-11-20 | 2016-04-13 | Shenzhen Gowild Robotics Co., Ltd. | System and method for a robot to actively interact with its owner |
| CN105511608A (zh) * | 2015-11-30 | 2016-04-20 | Beijing Guangnian Wuxian Technology Co., Ltd. | Interaction method and apparatus based on intelligent robot, and intelligent robot |
Also Published As
| Publication number | Publication date |
|---|---|
| CN106537293A (zh) | 2017-03-22 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN107894833B (zh) | | Multimodal interaction processing method and system based on a virtual human |
| CN106956271B (zh) | | Method and robot for predicting emotional state |
| CN108227932B (zh) | | Interaction intention determination method and apparatus, computer device, and storage medium |
| WO2018000268A1 (fr) | | Method and system for generating robot interaction content, and robot |
| WO2018000259A1 (fr) | | Method and system for generating robot interaction content, and robot |
| US11221669B2 (en) | | Non-verbal engagement of a virtual assistant |
| CN108000526B (zh) | | Dialogue interaction method and system for an intelligent robot |
| US8321221B2 (en) | | Speech communication system and method, and robot apparatus |
| WO2018006374A1 (fr) | | Function recommendation method, system, and robot based on automatic wake-up |
| CN108334583A (zh) | | Emotion interaction method and apparatus, computer-readable storage medium, and computer device |
| CN110110169A (zh) | | Human-computer interaction method and human-computer interaction apparatus |
| KR20200024675A (ko) | | Apparatus and method for recognizing human behavior |
| CN107797663A (zh) | | Multimodal interaction processing method and system based on a virtual human |
| WO2018006372A1 (fr) | | Method and system for controlling a household appliance based on intention recognition, and robot |
| WO2018000267A1 (fr) | | Robot interaction content generation method, system, and robot |
| WO2018006370A1 (fr) | | Interaction method and system for a virtual 3D robot, and robot |
| WO2018000261A1 (fr) | | Method and system for generating robot interaction content, and robot |
| WO2018000260A1 (fr) | | Method for generating robot interaction content, system, and robot |
| WO2018006371A1 (fr) | | Method and system for synchronizing speech and virtual actions, and robot |
| Thakur et al. | | A complex activity based emotion recognition algorithm for affect aware systems |
| CN117668763B (zh) | | Multimodal digital human all-in-one machine and multimodal perception and recognition method therefor |
| WO2018000258A1 (fr) | | Method and system for generating robot interaction content, and robot |
| WO2018000266A1 (fr) | | Method and system for generating robot interaction content, and robot |
| CN114047901B (zh) | | Human-computer interaction method and intelligent device |
| Hao et al. | | Proposal of initiative service model for service robot |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 16906662; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 16906662; Country of ref document: EP; Kind code of ref document: A1 |