
WO2019148491A1 - Human-computer interaction method and device, robot, and computer readable storage medium - Google Patents

Human-computer interaction method and device, robot, and computer readable storage medium

Info

Publication number
WO2019148491A1
WO2019148491A1 · PCT/CN2018/075263 · CN2018075263W
Authority
WO
WIPO (PCT)
Prior art keywords
robot
information
human
interacted
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2018/075263
Other languages
French (fr)
Chinese (zh)
Inventor
张含波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cloudminds Shenzhen Robotics Systems Co Ltd
Original Assignee
Cloudminds Shenzhen Robotics Systems Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cloudminds Shenzhen Robotics Systems Co Ltd filed Critical Cloudminds Shenzhen Robotics Systems Co Ltd
Priority to CN201880001295.0A priority Critical patent/CN108780361A/en
Priority to PCT/CN2018/075263 priority patent/WO2019148491A1/en
Publication of WO2019148491A1 publication Critical patent/WO2019148491A1/en
Anticipated expiration — legal status: Critical
Current legal status: Ceased (Critical)

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00Manipulators not otherwise provided for
    • B25J11/0005Manipulators having means for high-level communication with users, e.g. speech generator, face recognition means
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00Controls for manipulators
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1602Programme controls characterised by the control system, structure, architecture
    • B25J9/161Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/80Recognising image objects characterised by unique random patterns
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification

Definitions

  • the present application relates to the field of robot technology, and in particular, to a human-computer interaction method, device, robot, and computer readable storage medium.
  • Human-computer interaction or human-machine interaction (HCI or HMI) is the study of the interaction between systems and users; the system can be a wide variety of machines, or computerized systems and software. Taking interactive robots placed in public places such as bank business halls, large shopping malls, and airports as an example, the robot can respond through its computer system and provide services for users, such as actively initiating greetings, answering user questions, and guiding users through business transactions.
  • One technical problem to be solved by some embodiments of the present application is to provide a human-computer interaction method, apparatus, robot, and computer readable storage medium that address the problem above.
  • An embodiment of the present application provides a human-computer interaction method applied to a robot, comprising: extracting biometric information of at least one identified object, wherein the biometric information includes physiological characteristic information and/or behavior characteristic information; determining, according to the biometric information, a target interaction object that needs interaction from the at least one object; and controlling the robot to make a response that matches the target interaction object.
  • An embodiment of the present application provides a human-machine interaction device applied to a robot, including an extraction module, a determination module, and a control module. The extraction module is configured to extract biometric information of at least one identified object, wherein the biometric information includes physiological characteristic information and/or behavior characteristic information; the determination module is configured to determine, from the at least one object according to the biometric information, a target interaction object that needs interaction; and the control module is configured to control the robot to make a response that matches the target interaction object.
  • An embodiment of the present application provides a robot including at least one processor and a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to perform the human-computer interaction method involved in any method embodiment of the present application.
  • An embodiment of the present application provides a computer readable storage medium storing computer instructions for causing a computer to execute the human-computer interaction method involved in any of the method embodiments of the present application.
  • Compared with the prior art, the robot in the embodiments of the present application, upon recognizing objects, extracts their biometric information, uses that information to determine which object really needs interaction, and only then makes a response matching that object. This mode of human-computer interaction enables the robot to respond only to objects that need interaction, effectively avoiding false responses and greatly improving the user experience.
  • FIG. 1 is a flowchart of the human-computer interaction method in the first embodiment of the present application.
  • FIG. 2 is a schematic diagram of the robot determining a target interaction object in the first embodiment of the present application.
  • FIG. 3 is a flowchart of the human-computer interaction method in the second embodiment of the present application.
  • FIG. 4 is a schematic diagram of the robot determining a target interaction object in the second embodiment of the present application.
  • FIG. 5 is a flowchart of the human-computer interaction method in the third embodiment of the present application.
  • FIG. 6 is a block diagram of the human-machine interaction device in the fourth embodiment of the present application.
  • FIG. 7 is a block diagram of the robot in the fifth embodiment of the present application.
  • The first embodiment of the present application relates to a human-computer interaction method applied to a robot; the specific flow is shown in FIG. 1.
  • The term "robot" in this embodiment is used in its common sense of an automatically controlled machine, covering all machines that simulate human behavior or thought as well as machines that simulate other creatures (such as robot dogs and robot cats).
  • In step 101, biometric information of the identified at least one object is extracted.
  • Specifically, in this embodiment, the extraction of biometric information may be triggered when at least one approaching object is detected within a preset range (e.g., 5 meters) centered on the robot's position; this detection method lets the robot perceive objects through the full 360 degrees around it.
  • The robot may recognize objects through a proximity sensor installed on it. For example, after the robot is placed in a public place and started, the proximity sensor can sense whether an object approaches within 5 meters of the robot; when movement or presence information is sensed, it is converted into an electrical signal, and the robot's processor controls the robot's biometric acquisition device to extract the biometric information of the identified object(s), as sketched below.
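  • The following minimal Python sketch illustrates this trigger logic; the sensor and collector interfaces are hypothetical, since the application does not specify any API.

```python
PRESET_RANGE_M = 5.0  # preset detection radius centered on the robot

class ProximityTrigger:
    """Hypothetical wrapper around a proximity sensor: triggers biometric
    extraction whenever an object is sensed within the preset range."""

    def __init__(self, sensor, biometric_collector):
        self.sensor = sensor                  # assumed to yield (distance, bearing) readings
        self.collector = biometric_collector  # assumed to expose extract(bearing)

    def poll(self):
        # Sensed movement/presence information is converted into a trigger:
        # the processor asks the collector to extract biometric information.
        for distance, bearing in self.sensor.read():
            if distance <= PRESET_RANGE_M:
                return self.collector.extract(bearing)
        return None
```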
  • Method 1: control the robot to perform image acquisition and extract the biometric features of at least one object from the captured images, obtaining the biometric information of the at least one object.
  • Method 2: control the robot to perform voice acquisition and extract the biometric features of at least one object from the collected speech, obtaining the biometric information of the at least one object.
  • Method 3: control the robot to perform both image and voice acquisition, extracting the biometric features of at least one object from the captured images and from the collected speech, obtaining the biometric information of the at least one object from both channels.
  • When Method 3 is used, the biometric information obtained from the images and from the speech can be further analyzed to determine which information belongs to the same object, so that when the target interaction object is later determined, the image-derived and speech-derived biometric information of one object can be analyzed together, improving the accuracy of the determination; a sketch of such an association step follows.
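  • As a sketch of such an association step (the matching predicate is an assumption; the application only states that the analysis is performed):

```python
def merge_biometrics(image_features, voice_features, same_object):
    """Associate image-derived and voice-derived biometric records that belong
    to the same object, so later decisions can use both channels together.

    image_features / voice_features: lists of per-object feature dicts.
    same_object: assumed predicate deciding whether two records describe the
    same person (e.g. agreement between visual bearing and sound direction).
    """
    merged = []
    unpaired_voice = list(voice_features)
    for img in image_features:
        record = dict(img)  # facial/eye/displacement information from the image
        for voice in unpaired_voice:
            if same_object(img, voice):
                record.update(voice)  # add voiceprint/speech-content information
                unpaired_voice.remove(voice)
                break
        merged.append(record)
    merged.extend(unpaired_voice)  # objects heard but not seen are kept too
    return merged
```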
  • the extracted biometric information specifically includes physiological characteristic information and/or behavior characteristic information.
  • The physiological characteristic information may specifically be any one or any combination of the recognized object's facial information, eye information, and voiceprint information (information from which the source of a voice can be identified); the behavior characteristic information may specifically be any one or any combination of the recognized object's displacement information and the speech content information of its utterances (information from which what was said can be identified).
  • For example, when biometric features are extracted from captured images, physiological characteristic information such as the object's facial and/or eye information and behavior characteristic information such as displacement information can usually be obtained.
  • Likewise, when biometric features are extracted from collected speech, physiological characteristic information such as the object's voiceprint information and behavior characteristic information such as speech content information can usually be obtained.
  • Image acquisition may be performed by the robot's own image acquisition device, such as a camera, by an external image acquisition device communicatively connected to the robot, such as a monitoring device installed in a shopping mall, or by the two working together.
  • Similarly, voice acquisition may be performed by the robot's own voice acquisition device and/or an external voice acquisition device communicatively connected to it.
  • After an object is recognized and before image and/or voice acquisition, the robot can be controlled to turn toward the recognized object according to the sensed direction information and then perform the acquisition; this ensures that the captured images and speech contain the recognized object, makes the subsequently extracted biometric information more complete, and thus makes the finally determined target interaction object more accurate.
  • The captured images in this embodiment are not limited to still images such as photos; they may also be image information from video, which is not limited here.
  • The above are merely examples; in practice, those skilled in the art can make reasonable arrangements as long as the target interaction object can be determined from the identified object(s) according to the extracted biometric information.
  • In step 102, a target interaction object that needs interaction is determined from the at least one object according to the biometric information.
  • This determination may be implemented as follows:
  • First, according to the biometric information, at least one object is determined to be an object to be interacted with. For ease of description, this embodiment takes the case where the objects to be interacted with are people.
  • Since the objects approaching the robot are not necessarily all objects that need interaction (what approaches may be a small animal or another terminal device rather than a person), non-human objects can be excluded by comparing the extracted biometric information with pre-stored human sample information, ensuring the accuracy of subsequent operations.
  • When multiple people are identified, each person's biometrics, such as displacement direction (whether they are moving toward the robot) and eye information (whether they are gazing at the robot), can be further analyzed to determine whether they are seeking help, and those who really are seeking help are taken as the objects to be interacted with, as sketched below.
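  • A sketch of such a filter; the field names and the gaze threshold are assumptions, since the application only names the cues:

```python
def filter_interactable(people, gaze_threshold=0.8):
    """people: list of dicts with 'is_human' (bool), 'moving_toward_robot'
    (bool) and 'gaze_score' (0..1, how steadily the robot is being watched).
    Returns the objects to be interacted with: people who appear to seek help."""
    return [
        p for p in people
        if p["is_human"]
        and (p["moving_toward_robot"] or p["gaze_score"] >= gaze_threshold)
    ]
```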
  • Then, from the determined objects to be interacted with, one that meets the requirements is selected as the target interaction object, that is, the object the robot finally chooses for human-computer interaction.
  • If the number of objects to be interacted with equals 1, that object is directly taken as the target interaction object; if it is greater than 1, a priority is set for each object to be interacted with according to preset priority-setting conditions, and the object with the highest priority is taken as the target interaction object.
  • For example, suppose three objects, A, B, and C, appear within the range the robot can recognize, and after judgment based on the biometric information all three meet the interaction conditions, i.e., all are objects to be interacted with. In this case, the target interaction object can be determined by priority, for example by setting priorities according to the location information of the objects to be interacted with.
  • Specifically, the obtained location information of object A to be interacted with is (x0, y0), that of object B is (x1, y1), and that of object C is (x2, y2). According to the distance formula d = √((x − xr)² + (y − yr)²), where (xr, yr) is the robot's position, the distances of objects A, B, and C from the robot can be calculated as d0, d1, and d2, respectively. If d2 < d0 < d1, then according to the preset priority-setting condition (the closer to the robot, the higher the priority; the farther, the lower), the priorities set for objects A, B, and C are: object C highest, object B lowest, and object A between C and B. Object C can then be determined to be the target interaction object, as sketched below.
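  • A minimal sketch of this nearest-first selection; the coordinates are illustrative, and the robot is assumed to stand at (xr, yr):

```python
import math

def pick_target_by_distance(candidates, robot_pos):
    """candidates: dict mapping object name -> (x, y) position.
    Returns the object to be interacted with that is nearest to the robot,
    i.e. the one given the highest priority by the preset condition."""
    xr, yr = robot_pos
    return min(
        candidates,
        key=lambda name: math.hypot(candidates[name][0] - xr,
                                    candidates[name][1] - yr),
    )

# Illustrative numbers reproducing FIG. 2's ordering d2 < d0 < d1:
target = pick_target_by_distance(
    {"A": (2.0, 3.0), "B": (4.0, 4.0), "C": (1.0, 1.0)},
    robot_pos=(0.0, 0.0),
)
assert target == "C"  # the closest object is chosen
```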
  • In step 103, the location information of the target interaction object is obtained.
  • In step 104, the robot is controlled to move toward the target interaction object according to the location information.
  • After the target interaction object is determined, the robot can be controlled to move toward it according to the acquired location information, so that the robot actively initiates the interaction, improving the user experience.
  • Compared with the prior art, the human-computer interaction method provided in this embodiment enables the robot to respond only to objects that need interaction, effectively avoiding false responses and greatly improving the user experience.
  • a second embodiment of the present application relates to a human-computer interaction method.
  • This embodiment is a further improvement on the first embodiment. The specific improvement is that, in the course of controlling the robot to make a response matching the target interaction object, the identity information of the target interaction object is also acquired, and after the robot moves to the area where the target interaction object is located, a response matching the target interaction object is made according to that identity information.
  • Specifically, this embodiment includes steps 301 to 305, where steps 301, 302, and 304 are substantially the same as steps 101, 102, and 104 in the first embodiment and are not repeated here; the differences are described below. For technical details not covered in this embodiment, refer to the human-computer interaction method provided in the first embodiment.
  • In step 303, the location information and identity information of the target interaction object are obtained.
  • Taking a person as the target interaction object, the identity information obtained in this embodiment may include any one or any combination of name, gender, age, and whether the person is a VIP client.
  • The above identity information can be obtained by using face recognition technology to match the target interaction object's information against the face data stored in a face database of users who have previously handled business at the place where the robot is located (such as a bank business hall); if the match succeeds, the recorded identity information of that user can be obtained directly. If there is no successful match, gender and an approximate age range are first determined by face recognition, and the identity information of the target interaction object is then further completed through an Internet search, as sketched below.
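  • A sketch of this two-stage lookup; the database interface, the fallback estimator, and the web lookup are all assumptions, since the application names the steps but no concrete API:

```python
def resolve_identity(face_features, face_db, estimate_from_face, web_lookup):
    """face_db.match: assumed to return the recorded identity info (name,
    gender, age, VIP flag) of a user who has handled business here, or None.
    estimate_from_face: assumed fallback returning (gender, age_range).
    web_lookup: assumed Internet search used to complete the profile."""
    record = face_db.match(face_features)
    if record is not None:
        return record  # recorded identity of a returning customer
    gender, age_range = estimate_from_face(face_features)
    identity = {"gender": gender, "age_range": age_range}
    identity.update(web_lookup(face_features) or {})
    return identity
```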
  • In practice, when the target interaction object is determined, the identity information of the objects to be interacted with can also be taken into account; for example, priorities can be set according to a VIP parameter carried in the identity information and combined with factors such as distance, as described with reference to FIG. 4.
  • Suppose objects A, B, and C are within the robot's recognition range, with the location and identity information of each marked in FIG. 4 and distances d2 < d0 < d1. The target interaction object may then be determined by prioritizing the distance factor and selecting object C, by prioritizing the VIP factor and selecting object A, or by prioritizing the age factor and preferring the oldest object to be interacted with; one way to combine such factors is sketched below.
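  • One way to combine these factors is a weighted priority score; the weights and field names below are purely illustrative, since the application leaves the priority-setting conditions open:

```python
def priority_score(obj, w_dist=1.0, w_vip=2.0, w_age=0.5):
    """obj: dict with 'distance' (meters), 'vip' (bool), 'age' (years).
    Higher score means higher interaction priority; weights are assumptions."""
    score = -w_dist * obj["distance"]      # nearer -> higher priority
    score += w_vip if obj["vip"] else 0.0  # VIP clients preferred
    score += w_age * (obj["age"] / 100.0)  # mildly prefer older users
    return score

def pick_target(objects):
    """objects: dict mapping object name -> attribute dict."""
    return max(objects, key=lambda name: priority_score(objects[name]))
```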
  • In step 305, after the robot moves to the area where the target interaction object is located, a response matching the target interaction object is made according to the identity information.
  • For example, if the target interaction object is C in FIG. 4, the robot can actively make a service inquiry or offer business guidance, such as "Hello, Mr. Zhang Yi, what business do you need to handle?", and can also give the remaining objects A and B a voice prompt such as "Please wait patiently!".
  • The human-computer interaction method provided in this embodiment also acquires the identity information of the target interaction object when acquiring its location information, so that after the robot moves to where the target interaction object is located according to the location information, it can respond to that object according to the identity information, further improving the user experience.
  • a third embodiment of the present application relates to a human-computer interaction method.
  • This embodiment is a further improvement on the first or second embodiment. The specific improvement is that, after the robot is controlled to make a response matching the target interaction object and before the target interaction object needing interaction is re-determined, it is first determined whether a new object is currently approaching the robot. The specific flow is shown in FIG. 5.
  • This embodiment includes steps 501 to 508, where steps 501 to 504 are substantially the same as steps 101 to 104 in the first embodiment and are not repeated here.
  • In step 505, it is determined whether a new object is approaching the robot. If so, go to step 506; otherwise, go directly to step 507 and reselect a target interaction object from the objects to be interacted with remaining from the last human-computer interaction.
  • Whether a new object is approaching can be judged as in the first embodiment: if a new object is detected within the preset range (e.g., 5 meters) centered on the robot's current position, it is determined that a new object is approaching; the specific judgment operation is not repeated here.
  • The number of new objects approaching the robot may be one or more than one, which is not limited here.
  • In step 506, biometric information of the new object is extracted.
  • In step 507, the target interaction object that needs interaction is re-determined.
  • The re-determined target interaction object is specifically selected from the new objects and the objects other than the target interaction object of the last interaction operation.
  • In practice, the robot can respond to only one object to be interacted with at a time (the selected target interaction object), so after it completes one interaction, other objects to be interacted with may still be waiting for a response, and new objects to be interacted with may have appeared. In this case, a new target interaction object must be reselected from the newly confirmed objects to be interacted with and those remaining from the last human-computer interaction.
  • The manner of re-determining the target interaction object is substantially the same as in the first embodiment: the objects to be interacted with are determined from the identified objects according to the biometric information, and the target interaction object is then selected from them; the implementation details are not repeated here.
  • The selection can still be made according to the priority of each object to be interacted with, or the new target interaction object can be determined by other selection methods, which is not limited here.
  • In step 508, the robot is controlled to make a response matching the re-determined target interaction object.
  • The response process may be: moving toward the target interaction object and, after reaching the area where it is located, actively conducting a service consultation or offering business guidance; the specific response can be set according to the information about the re-determined target interaction object, which is not limited here.
  • By monitoring whether a new object approaches the robot after each human-computer interaction, and extracting the new object's biometric information when one does, the method provided in this embodiment enables the robot to dynamically update the state of the objects around it during operation, accurately make responses that fit the current scene, reduce misoperation, and further improve the user experience. The overall flow can be summarized as in the loop sketched below.
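  • The following sketch summarizes steps 501 to 508; every helper method is a placeholder for an operation described above, not an existing API:

```python
def interaction_loop(robot):
    # Steps 501-504: initial detection, extraction, target selection, response.
    waiting = robot.detect_and_filter()  # objects to be interacted with
    while waiting:
        target = robot.pick_highest_priority(waiting)  # steps 502 / 507
        waiting.remove(target)
        robot.move_to_and_respond(target)              # steps 503-504 / 508
        if robot.new_object_approaching():             # step 505
            # Step 506: extract the new objects' biometric information and
            # merge them with the objects still waiting from the last round.
            waiting.extend(robot.detect_and_filter())
```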
  • a fourth embodiment of the present application relates to a human-machine interaction device, which is applied to a robot, and the specific structure is as shown in FIG. 6.
  • the human-machine interaction device includes an extraction module 601, a determination module 602, and a control module 603.
  • the extraction module 601 is configured to extract biometric information of the identified at least one object.
  • the determining module 602 is configured to determine, from the at least one object, the target interactive object that needs to interact according to the biometric information.
  • the control module 603 is configured to control the robot to make a response that matches the target interaction object.
  • the biometric information of the identified at least one object extracted by the extraction module 601 may be any one of physiological characteristic information and behavior characteristic information or a combination of the two.
  • the physiological feature information extracted by the extraction module 601 in this embodiment may be any one or any combination of facial information, eye information, voiceprint information, and the like of the object.
  • The behavior characteristic information extracted by the extraction module 601 may specifically be any one of the object's displacement information and voice content information, or a combination of the two.
  • When determining the target interaction object from the at least one object according to the above biometric information, the determining module 602 may first determine, according to that information, which identified objects are objects to be interacted with (objects that need interaction), for example by analyzing the recognized object's eye gaze and its displacement information to judge whether it is currently seeking help; after the objects to be interacted with are determined, an object meeting the requirements is selected from them as the target interaction object (the object that will finally be interacted with).
  • The control module 603 controls the robot to make a response matching the target interaction object, which may specifically include controlling the robot to move toward it.
  • After the robot reaches the area where the target interaction object is located, it can be controlled to make a response matching the object's identity information, such as an active service inquiry or business guidance, for example: "Hello, what business would you like to handle?"
  • Compared with the prior art, the human-machine interaction device uses the extraction module to extract the biometric information of the identified object(s), the determining module to determine from them, according to that information, the target interaction object that needs interaction, and the control module to control the robot to make a matching response. The cooperation of these modules enables a robot equipped with the device to respond only to objects that need interaction, effectively avoiding false responses and greatly improving the user experience; the module structure is sketched below.
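  • The three-module structure of FIG. 6 might be organized as in this sketch; the class and method names are assumptions:

```python
class ExtractionModule:            # module 601
    def extract(self, objects):
        """Return physiological and/or behavioral biometric information
        for each identified object."""
        raise NotImplementedError

class DeterminationModule:         # module 602
    def determine_target(self, biometrics):
        """Filter the objects to be interacted with and pick the target
        interaction object by priority."""
        raise NotImplementedError

class ControlModule:               # module 603
    def respond(self, target):
        """Move toward the target and make a matching response."""
        raise NotImplementedError

class HumanMachineInteractionDevice:
    def __init__(self, extraction, determination, control):
        self.extraction = extraction
        self.determination = determination
        self.control = control

    def run(self, objects):
        biometrics = self.extraction.extract(objects)
        target = self.determination.determine_target(biometrics)
        self.control.respond(target)
```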
  • A fifth embodiment of the present application relates to a robot, the specific structure of which is shown in FIG. 7.
  • the robot can be a smart machine located in a public place such as a bank business hall, a large shopping mall, an airport, or the like.
  • Internally, the robot specifically includes one or more processors 701 and a memory 702; one processor 701 is taken as an example in FIG. 7. The functional modules of the human-machine interaction device of the foregoing embodiment are deployed on the processor 701, and the processor 701 and the memory 702 may be connected by a bus or in other ways; a bus connection is taken as the example in FIG. 7.
  • the memory 702 is a computer readable storage medium, and can be used to store a software program, a computer executable program, and a module, such as a program instruction/module corresponding to the human-computer interaction method involved in any method embodiment of the present application.
  • the processor 701 executes various functional applications and data processing of the server by executing software programs, instructions, and modules stored in the memory 702, that is, implementing the human-computer interaction method involved in any method embodiment of the present application.
  • the memory 702 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application required for at least one function; the storage data area may establish a history database, store priority setting conditions, and the like.
  • The memory 702 may include high-speed random access memory and may also include other readable and writable memory.
  • memory 702 can optionally include memory remotely located relative to processor 701 that can be connected to the terminal device over a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • The memory 702 can store instructions executable by the at least one processor 701; when the instructions are executed by the at least one processor 701, the at least one processor 701 can perform the human-computer interaction method involved in any method embodiment of the present application. For implementation details, refer to the human-computer interaction method provided in the embodiments above.
  • a sixth embodiment of the present application is directed to a computer readable storage medium having stored therein computer instructions that enable a computer to perform the human-computer interaction method involved in any of the method embodiments of the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Automation & Control Theory (AREA)
  • Fuzzy Systems (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Manipulator (AREA)

Abstract

The present application relates to the technical field of robots and discloses a human-computer interaction method and device, a robot, and a computer readable storage medium. In the present application, the human-computer interaction method is applied to a robot, and comprises: extracting biometric information of at least one recognized object, the biometric information comprising physiological feature information and/or behavior feature information; determining, from the at least one object and according to the biometric information, a target interaction object requiring interaction; and controlling a robot to execute a response matched to the target interaction object. The human-computer interaction method enables a robot to respond only to an object requiring interaction, thereby avoiding false response operations, and greatly improving the user experience.

Description

Human-computer interaction method, device, robot and computer readable storage medium

Technical Field

The present application relates to the field of robot technology, and in particular, to a human-computer interaction method, device, robot, and computer readable storage medium.

Background Art

Human-computer interaction or human-machine interaction (HCI or HMI) is the study of the interaction between systems and users. The system can be a wide variety of machines, or computerized systems and software. Taking interactive robots placed in public places such as bank business halls, large shopping malls, and airports as an example, the robot can respond through its computer system and provide services for users, for example by actively initiating greetings, answering user questions, and guiding users through business transactions.

However, the inventors have found that the prior art has at least the following problems: the flow of people in public places is large and there are many kinds of sound interference, such as broadcast announcements and music, which an existing robot cannot screen out at all; instead it responds to them continuously. This not only seriously occupies the robot's processing resources, but also prevents the robot from providing effective service to users who really need help, seriously harming the user experience.

Summary of the Invention

One technical problem to be solved by some embodiments of the present application is to provide a human-computer interaction method, apparatus, robot, and computer readable storage medium that address the problems above.

An embodiment of the present application provides a human-computer interaction method applied to a robot, comprising: extracting biometric information of at least one identified object, wherein the biometric information includes physiological characteristic information and/or behavior characteristic information; determining, according to the biometric information, a target interaction object that needs interaction from the at least one object; and controlling the robot to make a response that matches the target interaction object.

An embodiment of the present application provides a human-machine interaction device applied to a robot, including an extraction module, a determination module, and a control module. The extraction module is configured to extract biometric information of at least one identified object, wherein the biometric information includes physiological characteristic information and/or behavior characteristic information; the determination module is configured to determine, from the at least one object according to the biometric information, a target interaction object that needs interaction; and the control module is configured to control the robot to make a response that matches the target interaction object.

An embodiment of the present application provides a robot including at least one processor and a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to perform the human-computer interaction method involved in any method embodiment of the present application.

An embodiment of the present application provides a computer readable storage medium storing computer instructions for causing a computer to execute the human-computer interaction method involved in any method embodiment of the present application.

Compared with the prior art, in the embodiments of the present application the robot, upon recognizing objects, extracts their biometric information, uses that information to determine which object really needs interaction, and only then makes a response matching that object. This mode of human-computer interaction enables the robot to respond only to objects that need interaction, effectively avoiding false responses and greatly improving the user experience.

Brief Description of the Drawings

One or more embodiments are illustrated by the figures in the corresponding accompanying drawings; these illustrations do not constitute a limitation on the embodiments. Elements with the same reference numerals in the drawings denote similar elements, and unless otherwise stated, the figures are not drawn to scale.

FIG. 1 is a flowchart of the human-computer interaction method in the first embodiment of the present application;

FIG. 2 is a schematic diagram of the robot determining a target interaction object in the first embodiment of the present application;

FIG. 3 is a flowchart of the human-computer interaction method in the second embodiment of the present application;

FIG. 4 is a schematic diagram of the robot determining a target interaction object in the second embodiment of the present application;

FIG. 5 is a flowchart of the human-computer interaction method in the third embodiment of the present application;

FIG. 6 is a block diagram of the human-machine interaction device in the fourth embodiment of the present application;

FIG. 7 is a block diagram of the robot in the fifth embodiment of the present application.

Detailed Description

To make the objectives, technical solutions, and advantages of the present application clearer, some embodiments of the present application are described in further detail below with reference to the accompanying drawings. It should be understood that the specific embodiments described here merely explain the application and do not limit it.

The first embodiment of the present application relates to a human-computer interaction method applied to a robot; the specific flow is shown in FIG. 1.

It should be noted that "robot" in this embodiment is used in its common sense of an automatically controlled machine, covering all machines that simulate human behavior or thought as well as machines that simulate other creatures (such as robot dogs and robot cats).

In step 101, biometric information of the identified at least one object is extracted.

Specifically, in this embodiment, the extraction of biometric information may be triggered when at least one approaching object is detected within a preset range (e.g., 5 meters) centered on the robot's position; this detection method lets the robot perceive objects through the full 360 degrees around it.

It is worth mentioning that the robot may recognize objects through a proximity sensor installed on it. For example, after the robot is placed in a public place and started, the proximity sensor can sense whether an object approaches within 5 meters of the robot; when movement or presence information is sensed, it is converted into an electrical signal, and the robot's processor controls the robot's biometric acquisition device to extract the biometric information of the identified object(s).

To facilitate understanding, several specific ways of extracting biometric information are listed below:

Method 1: control the robot to perform image acquisition and extract the biometric features of at least one object from the captured images, obtaining the biometric information of the at least one object.

Method 2: control the robot to perform voice acquisition and extract the biometric features of at least one object from the collected speech, obtaining the biometric information of the at least one object.

Method 3: control the robot to perform both image and voice acquisition, extracting the biometric features of at least one object from the captured images and from the collected speech, obtaining the biometric information of the at least one object from both channels.

In addition, when Method 3 is used, the biometric information obtained from the images and from the speech can be further analyzed to determine which information belongs to the same object, so that when the target interaction object is later determined, the image-derived and speech-derived biometric information of one object can be analyzed together, improving the accuracy of the determination.

It should be noted that in this embodiment the extracted biometric information specifically includes physiological characteristic information and/or behavior characteristic information.

The physiological characteristic information may be any one or any combination of the recognized object's facial information, eye information, and voiceprint information (information from which the source of a voice can be identified); the behavior characteristic information may be any one or any combination of the recognized object's displacement information and the speech content information of its utterances (information from which what was said can be identified).

For example, when biometric features are extracted from captured images, physiological characteristic information such as the object's facial and/or eye information and behavior characteristic information such as displacement information can usually be obtained.

Likewise, when biometric features are extracted from collected speech, physiological characteristic information such as the object's voiceprint information and behavior characteristic information such as speech content information can usually be obtained.

In addition, image acquisition may be performed by the robot's own image acquisition device, such as a camera, by an external image acquisition device communicatively connected to the robot, such as a monitoring device installed in a shopping mall, or by the two working together.

Similarly, voice acquisition may be performed by the robot's own voice acquisition device and/or an external voice acquisition device communicatively connected to it.

It is also worth mentioning that after an object is recognized and before image and/or voice acquisition, the robot can be controlled to turn toward the recognized object according to the sensed direction information and then perform the acquisition. This ensures that the captured images and speech contain the recognized object, makes the subsequently extracted biometric information more complete, and thus makes the finally determined target interaction object more accurate.

In addition, the captured images in this embodiment are not limited to still images such as photos; they may also be image information from video, which is not limited here.

It should be noted that the above are merely examples; in practice, those skilled in the art can make reasonable arrangements with the technical means they master, as long as the target interaction object can be determined from the identified object(s) according to the extracted biometric information.

In step 102, a target interaction object that needs interaction is determined from the at least one object according to the biometric information.

In this embodiment, this determination may be implemented as follows:

First, according to the biometric information, at least one object is determined to be an object to be interacted with. For ease of description, this embodiment takes the case where the objects to be interacted with are people.

Specifically, in practice the objects approaching the robot are not necessarily all objects that need interaction; what approaches may be a small animal or another terminal device rather than a person. Therefore, non-human objects can be excluded by comparing the extracted biometric information with pre-stored human sample information, ensuring the accuracy of subsequent operations.

In addition, when multiple people are identified, each person's biometrics, such as displacement direction (whether they are moving toward the robot) and eye information (whether they are gazing at the robot), can be further analyzed to determine whether they are seeking help, and those who really are seeking help are taken as the objects to be interacted with.

Then, from the determined objects to be interacted with, one that meets the requirements is selected as the target interaction object, i.e., the object the robot finally chooses for human-computer interaction.

Specifically, if the number of objects to be interacted with equals 1, that object is directly taken as the target interaction object; if it is greater than 1, a priority is set for each object according to preset priority-setting conditions, and the object with the highest priority is taken as the target interaction object.

For ease of understanding, a specific description follows with reference to FIG. 2.

As shown in FIG. 2, three objects, A, B, and C, appear within the range the robot can recognize, and after judgment based on the biometric information all three meet the interaction conditions, i.e., all are objects to be interacted with. In this case, the target interaction object can be determined by priority, for example by setting priorities according to the location information of the objects to be interacted with.

Specifically, as shown in FIG. 2, the obtained location information of object A to be interacted with is (x0, y0), that of object B is (x1, y1), and that of object C is (x2, y2). According to the distance formula d = √((x − xr)² + (y − yr)²), where (xr, yr) is the robot's position, the distances of objects A, B, and C from the robot can be calculated as d0, d1, and d2, respectively. If d2 < d0 < d1, then according to the preset priority-setting condition (the closer to the robot, the higher the priority; the farther, the lower), the priorities set for objects A, B, and C are: object C highest, object B lowest, and object A between C and B. Object C can then be determined to be the target interaction object.
Figure PCTCN2018075263-appb-000001
It can be calculated that the distances of the objects A, B, and C to be interacted with each other are d0, d1, and d2, respectively. If d2<d0<d1, the conditions are set according to the preset priority (the closer to the robot, the higher the priority, the farther from the robot, the lower the priority), and the priority is set for the objects A, B, and C to be interacted with. The priority of the setting is: the object to be interacted with C (the highest priority), the object to be interacted with B (the lowest priority), and the object to be interacted with A (the priority is between the object to be interacted and the object to be interacted with B). It can be determined that the object C to be interacted with is the target interactive object.

另外,值得一提的是,在实际应用中,可能存在多个待交互对象距离机器人的位置相同的情况,这种情况下,可以通过机器人移动到哪一个待交互对象时需要转转动的角度最小的原则作出优先级判断。In addition, it is worth mentioning that in practical applications, there may be multiple situations where the object to be interacted is the same as the position of the robot. In this case, the angle at which the robot needs to be rotated when moving to which object to be interacted The principle of minimum makes priority judgment.

需要说明的是,以上仅为举例说明,并不对本申请的技术方案及要保护的范围构成限定,在实际应用中,本领域的技术人员可以根据实际需要,合理设置,此处不做限制。It should be noted that the above is only an example, and does not limit the technical solution of the present application and the scope to be protected. In practical applications, those skilled in the art can appropriately set according to actual needs, and no limitation is made herein.

在步骤103中,获取目标交互对象的位置信息。In step 103, location information of the target interactive object is obtained.

在步骤104中,根据位置信息,控制机器人朝着目标交互对象移动。In step 104, the robot is controlled to move toward the target interactive object based on the location information.

具体的说,在确定目标交互对象后,可以根据获取到的目标交互对象的位置信息,控制机器人朝着目标交互对象移动,使得机器人能够主动进行交互操作,提升用户体验。Specifically, after the target interaction object is determined, the robot can be moved toward the target interaction object according to the acquired location information of the target interaction object, so that the robot can actively perform the interaction operation and improve the user experience.

与现有技术相比,本实施例中提供的人机交互方法,能够使机器人仅针对需要进行交互的对象作出响应,从而有效避免了误响应操作,大大提升了用户体验。Compared with the prior art, the human-computer interaction method provided in the embodiment can enable the robot to respond only to the object that needs to interact, thereby effectively avoiding the error response operation and greatly improving the user experience.

本申请的第二实施例涉及一种人机交互方法。本实施例在第一实施例的基础上做了进一步改进,具体改进之处为:在控制机器人作出与目标交互对象匹配的响应的过程中,还会获取目标交互对象的身份信息,并在机器人移动到目标交互对象所在的区域后,根据身份信息作出与目标交互对象匹配的响应,为了便于说明,以下结合图3和图4进行具体说明。A second embodiment of the present application relates to a human-computer interaction method. The embodiment is further improved on the basis of the first embodiment, and the specific improvement is: in the process of controlling the response of the robot to match the target interaction object, the identity information of the target interaction object is also acquired, and the robot is After moving to the area where the target interaction object is located, a response matching the target interaction object is made according to the identity information. For convenience of description, the following specifically describes FIG. 3 and FIG.

Specifically, this embodiment includes steps 301 to 305, where steps 301, 302, and 304 are substantially the same as steps 101, 102, and 104 in the first embodiment, respectively, and are not repeated here. The following mainly introduces the differences; for technical details not described in detail in this embodiment, refer to the human-computer interaction method provided in the first embodiment.

In step 303, the location information and identity information of the target interaction object are acquired.

Taking a person as the target interaction object as an example, the identity information acquired in this embodiment may include any one or any combination of name, gender, age, whether the person is a VIP customer, and other related information.

It should be noted that the above identity information can be obtained by using face recognition technology to match the information of the target interaction object against the face data stored in a face database of users who have previously handled business at the venue where the robot is located (such as a bank business hall). After a successful match, the recorded identity information of that user can be obtained directly. If no match succeeds, the gender and approximate age range are first determined by face recognition, and the identity information of the target interaction object is then further completed through an Internet search.
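The following is a hedged sketch of such a database match, assuming faces are compared as fixed-length embeddings; the names `FACE_DB` and `identify`, the embedding size, and the threshold are illustrative assumptions rather than details from the application.

```python
import numpy as np

FACE_DB = {  # user_id -> (stored embedding, identity record); contents are illustrative
    "u001": (np.random.rand(128), {"name": "Zhang Yi", "vip": True, "age": 52}),
}
MATCH_THRESHOLD = 0.6  # illustrative distance threshold; tune for the actual face model

def identify(face_embedding):
    """Match a face embedding against the venue's customer face database.

    Returns the stored identity record on success, or None so the caller can
    fall back to estimating gender/age and completing the profile elsewhere.
    """
    best_id, best_dist = None, float("inf")
    for user_id, (stored, _record) in FACE_DB.items():
        dist = float(np.linalg.norm(face_embedding - stored))
        if dist < best_dist:
            best_id, best_dist = user_id, dist
    if best_id is not None and best_dist < MATCH_THRESHOLD:
        return FACE_DB[best_id][1]
    return None
```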

In addition, it is worth mentioning that, in practical applications, the target interaction object can also be determined in combination with the identity information of the objects to be interacted with. For example, the priority of an object to be interacted with can be set according to the VIP parameter carried in its identity information, and the target interaction object can be determined by comprehensively considering factors such as distance. For ease of understanding, a detailed description follows with reference to FIG. 4.

Specifically, there are three objects to be interacted with, A, B, and C, within the range that the robot can recognize, and the location information and identity information of each object are marked as shown in FIG. 4, where objects A, B, and C are at distances d0, d1, and d2 from the robot, respectively, and d2 < d0 < d1.

In this case, the target interaction object may be determined by prioritizing the distance factor and selecting object C as the target interaction object; by prioritizing the VIP factor and selecting object A as the target interaction object; or by prioritizing the age factor and preferentially determining the oldest object to be interacted with as the target interaction object.
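One plausible way to combine these factors, sketched below, is a weighted score; the weights and the scoring form are illustrative assumptions, since the application leaves the trade-off between distance, VIP status, and age to the implementer.

```python
def priority_score(distance_m, is_vip, age, w_dist=1.0, w_vip=2.0, w_age=0.02):
    """Higher score = higher priority. Weights are illustrative only."""
    score = -w_dist * distance_m          # closer objects score higher
    score += w_vip if is_vip else 0.0     # bonus for VIP customers
    score += w_age * age                  # mild preference for older customers
    return score

# FIG. 4 scenario: C is closest, A is a VIP.
candidates = {
    "A": dict(distance_m=2.5, is_vip=True,  age=35),
    "B": dict(distance_m=4.0, is_vip=False, age=28),
    "C": dict(distance_m=1.2, is_vip=False, age=60),
}
# With these weights the VIP bonus lets A (score 0.2) edge out C (score 0.0).
target = max(candidates, key=lambda k: priority_score(**candidates[k]))
print(target)  # -> "A"
```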

It should be noted that the above is only an example and does not limit the technical solution of the present application or the scope to be protected. In practical applications, those skilled in the art can make reasonable settings according to actual needs; no limitation is made here.

In step 305, after the robot moves to the area where the target interaction object is located, a response matching the target interaction object is made according to the identity information.

For example, suppose the target interaction object is C in FIG. 4. After moving to the area where target interaction object C is located (for example, a position one meter away from the object), the robot can actively make a service inquiry or provide business guidance, such as: "Hello, Mr. Zhang Yi, what business do you need to handle?"

Further, in order to improve the user experience, after making the inquiry to target interaction object C and while waiting for C to answer, the robot may also give objects A and B a voice prompt such as: "There are many customers at the moment; please wait patiently!"

It should be noted that the above is only an example and does not limit the technical solution of the present application or the scope to be protected. In practical applications, those skilled in the art can make reasonable settings according to actual needs; no limitation is made here.

Compared with the prior art, the human-computer interaction method provided in this embodiment further acquires the identity information of the target interaction object when acquiring its location information, so that after the robot moves to the area where the target interaction object is located according to the location information, it can make a response matching the target interaction object according to the identity information, further improving the user experience.

A third embodiment of the present application relates to a human-computer interaction method. This embodiment is a further improvement on the first or second embodiment. The specific improvement is that, after the robot is controlled to make a response matching the target interaction object, when the target interaction object that needs to interact is re-determined, it is first necessary to judge whether a new object is currently approaching the robot. The specific flow is shown in FIG. 5.

Specifically, this embodiment includes steps 501 to 508, where steps 501 to 504 are substantially the same as steps 101 to 104 in the first embodiment, respectively, and are not repeated here. The following mainly introduces the differences; for technical details not described in detail in this embodiment, refer to the human-computer interaction method provided in the first or second embodiment.

In step 505, it is judged whether a new object is approaching the robot. If it is determined that a new object is approaching the robot, go to step 506; otherwise, go directly to step 507 and reselect one object to be interacted with, from those remaining from the previous human-computer interaction, as the target interaction object.

Specifically, in this embodiment, whether a new object is approaching the robot can be judged as described in the first embodiment: if a new object is detected approaching within a preset range (for example, 5 meters) centered on the robot's current position, it is determined that a new object is approaching the robot. The specific judgment operation is not repeated here.
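A minimal sketch of this proximity check follows, assuming a perception system supplies tracked object positions; the function and parameter names are hypothetical.

```python
import math

APPROACH_RADIUS_M = 5.0  # the preset range from this embodiment; configurable in practice

def new_objects_approaching(robot_xy, tracked_positions, known_ids):
    """Return IDs of objects inside the preset radius that were not seen before.

    tracked_positions: dict of object_id -> (x, y) from the perception system.
    known_ids: IDs already handled in the previous interaction round.
    """
    rx, ry = robot_xy
    return [
        oid for oid, (x, y) in tracked_positions.items()
        if oid not in known_ids and math.hypot(x - rx, y - ry) <= APPROACH_RADIUS_M
    ]
```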

In addition, it should be noted that, in this embodiment, the number of new objects approaching the robot may be one or more than one; no limitation is made here.

In step 506, the biometric information of the new object is extracted.

In step 507, the target interaction object that needs to interact is re-determined.

Specifically, the target interaction object re-determined in this embodiment is selected from among the new objects and the objects other than the target interaction object of the previous interaction.

For ease of understanding, a specific explanation follows:

In practical applications, especially in public places with heavy foot traffic, there may be multiple objects that need to interact with the robot at the same time (that is, according to the biometric information of the recognized objects, more than one object is determined to be an object to be interacted with). However, during human-computer interaction, the robot can only respond to one object to be interacted with at a time (that is, a target interaction object needs to be selected for interaction), and only after completing one interaction can it interact with other objects. After completing an interaction, besides the previously determined objects still waiting for the robot to respond, new objects that need to interact may also appear around the robot. In this case, re-determining the target interaction object requires reselecting one object as the target interaction object from among the newly confirmed objects to be interacted with and those remaining from the previous human-computer interaction.

In addition, it should be noted that the manner of re-determining the target interaction object in this embodiment is substantially the same as the determination manner in the first embodiment: the recognized objects are determined to be objects to be interacted with according to their biometric information, and the target interaction object that ultimately needs to interact is then selected from among them. The specific implementation details are not repeated here; a sketch of the re-selection follows.
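The sketch below shows this re-selection under the assumption that the biometric screening and priority scoring are supplied as callables; all names are illustrative rather than from the application.

```python
def redetermine_target(previous_waiting, new_arrivals, last_target, is_to_interact, priority):
    """Rebuild the candidate pool and pick the next target interaction object.

    previous_waiting: objects left over from the last interaction round.
    new_arrivals: newly detected objects (may be empty).
    is_to_interact: predicate applying the biometric screening of the first embodiment.
    priority: scoring function; the highest-priority candidate wins.
    """
    pool = [o for o in previous_waiting if o != last_target]   # drop the already-served object
    pool += [o for o in new_arrivals if is_to_interact(o)]     # add screened newcomers
    return max(pool, key=priority) if pool else None
```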

In addition, regarding the selection of the target interaction object, in this embodiment the selection can still be made according to the priority of each object to be interacted with; of course, a new target interaction object can also be determined in other ways, and no limitation is made here.

In step 508, the robot is controlled to make a response matching the re-determined target interaction object.

Specifically, the robot is controlled to make a response matching the re-determined target interaction object. The response process may be: moving toward the target interaction object and, after moving to the area where it is located, actively providing service consultation or business guidance. The specific response manner can be set according to the relevant information of the re-determined target interaction object; no limitation is made here.

It should be noted that the above is only an example and does not limit the technical solution of the present application or the scope to be protected. In practical applications, those skilled in the art can make reasonable settings according to actual needs; no limitation is made here.

Compared with the prior art, in the human-computer interaction method provided in this embodiment, after one human-computer interaction is completed, the robot monitors whether a new object is approaching. When it is determined that a new object is approaching the robot, the biometric information of the newly appearing object is extracted and it is determined whether the newly appearing object is an object to be interacted with. If it is, one object is reselected as the target interaction object from among the newly confirmed objects to be interacted with and those remaining from the previous human-computer interaction, and human-computer interaction is then performed; if the newly appearing object is not an object to be interacted with, one object is reselected as the target interaction object directly from those remaining from the previous human-computer interaction, and human-computer interaction is then performed.

From the above description, it is not difficult to see that the human-computer interaction method provided in this embodiment enables the robot to dynamically update its perception of object states during operation, so that it can accurately make responses matching the current scene, reduce erroneous operations, and further improve the user experience.

A fourth embodiment of the present application relates to a human-computer interaction device applied to a robot; the specific structure is shown in FIG. 6.

As shown in FIG. 6, the human-computer interaction device includes an extraction module 601, a determination module 602, and a control module 603.

The extraction module 601 is configured to extract biometric information of at least one recognized object.

The determination module 602 is configured to determine, from the at least one object according to the biometric information, the target interaction object that needs to interact.

The control module 603 is configured to control the robot to make a response matching the target interaction object.

Specifically, in this embodiment, the biometric information of the at least one recognized object extracted by the extraction module 601 may be either physiological characteristic information or behavioral characteristic information, or a combination of the two.

In addition, it is worth mentioning that the physiological characteristic information extracted by the extraction module 601 in this embodiment may be any one or any combination of the object's facial information, eye information, voiceprint information, and the like. The behavioral characteristic information extracted by the extraction module 601 may be any one or a combination of the object's displacement information, speech content information, and the like.

When determining, from the at least one object according to the above various kinds of biometric information, the target interaction object that needs to interact, the determination module 602 may specifically proceed as follows: first, determine according to the biometric information whether a recognized object is an object to be interacted with (an object that needs to interact), for example by analyzing the object's gaze from its eye information, together with its displacement information, to determine whether it is currently seeking help; then, after the objects to be interacted with are determined, select one object meeting the requirements from among them as the target interaction object (the object that ultimately needs to interact).
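The following is a rough sketch of such screening, assuming gaze duration and approach speed have already been estimated from the eye and displacement information; the thresholds are illustrative assumptions, not values from the application.

```python
def wants_interaction(gaze_on_robot_s, approach_speed_mps,
                      gaze_threshold_s=1.5, speed_threshold_mps=0.2):
    """Heuristic screening: treat an object as 'to be interacted with' if it has
    looked at the robot long enough or is walking toward it. Thresholds are
    illustrative assumptions only."""
    return gaze_on_robot_s >= gaze_threshold_s or approach_speed_mps >= speed_threshold_mps

# e.g. someone who gazed at the robot for 2 s while standing still is screened in:
print(wants_interaction(gaze_on_robot_s=2.0, approach_speed_mps=0.0))  # -> True
```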

In addition, in this embodiment, the control module 603 controls the robot to make a response matching the target interaction object, which may specifically be controlling the robot to move toward the target interaction object.

Further, after the robot moves to the area where the target interaction object is located, the robot can be controlled to make a matching response according to the object's identity information, such as actively making a service inquiry or providing business guidance, for example: "Hello, what business would you like to handle?"

It should be noted that the above is only an example and does not limit the technical solution of the present application or the scope to be protected. In practical applications, those skilled in the art can make reasonable settings according to actual needs; no limitation is made here.

In addition, for technical details not described in detail in this embodiment, refer to the human-computer interaction method provided in any embodiment of the present application; they are not repeated here.

From the above description, it is not difficult to see that, in the human-computer interaction device provided in this embodiment, the extraction module extracts the biometric information of at least one recognized object, the determination module determines, from the at least one object according to the biometric information, the target interaction object that needs to interact, and the control module then controls the robot to make a response matching the target interaction object. Through the direct cooperation of these modules, a robot equipped with this human-computer interaction device can respond only to objects that need to interact, thereby effectively avoiding erroneous responses and greatly improving the user experience.

The device embodiment described above is merely illustrative and does not limit the protection scope of the present application. In practical applications, those skilled in the art may select some or all of the modules according to actual needs to achieve the purpose of this embodiment; no limitation is made here.

A fifth embodiment of the present application relates to a robot; the specific structure is shown in FIG. 7.

The robot may be an intelligent machine device located in a public place such as a bank business hall, a large shopping mall, or an airport. Internally, it specifically includes one or more processors 701 and a memory 702; one processor 701 is taken as an example in FIG. 7.

In this embodiment, the functional modules of the human-computer interaction device involved in the above embodiments are all deployed on the processor 701. The processor 701 and the memory 702 may be connected by a bus or in other ways; connection by a bus is taken as an example in FIG. 7.

As a computer-readable storage medium, the memory 702 can be used to store software programs, computer-executable programs, and modules, such as the program instructions/modules corresponding to the human-computer interaction method involved in any method embodiment of the present application. The processor 701 executes the various functional applications and data processing of the server by running the software programs, instructions, and modules stored in the memory 702, that is, implements the human-computer interaction method involved in any method embodiment of the present application.

The memory 702 may include a program storage area and a data storage area, where the program storage area may store the operating system and the application programs required by at least one function, and the data storage area may hold a history database for storing priority setting conditions and the like. In addition, the memory 702 may include high-speed random access memory, and may also include readable and writable memory (Random Access Memory, RAM) and the like. In some embodiments, the memory 702 may optionally include memory remotely located relative to the processor 701, and such remote memory may be connected to the terminal device through a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.

In practical applications, the memory 702 may store instructions executable by the at least one processor 701; the instructions are executed by the at least one processor 701 to enable the at least one processor 701 to perform the human-computer interaction method involved in any method embodiment of the present application and to control the functional modules in the human-computer interaction device to complete the corresponding operations of that method. For technical details not described in detail in this embodiment, refer to the human-computer interaction method provided in any embodiment of the present application.

A sixth embodiment of the present application relates to a computer-readable storage medium storing computer instructions that enable a computer to perform the human-computer interaction method involved in any method embodiment of the present application.

A person of ordinary skill in the art can understand that the above embodiments are specific embodiments for implementing the present application, and that in practical applications various changes may be made to them in form and detail without departing from the spirit and scope of the present application.

Claims (10)

1. A human-computer interaction method, applied to a robot, the human-computer interaction method comprising:
extracting biometric information of at least one recognized object, wherein the biometric information comprises physiological characteristic information and/or behavioral characteristic information;
determining, from the at least one object according to the biometric information, a target interaction object that needs to interact; and
controlling the robot to make a response matching the target interaction object.

2. The human-computer interaction method according to claim 1, wherein the extracting biometric information of at least one recognized object specifically comprises:
detecting, within a preset range centered on the position of the robot, that at least one object approaches the robot, and extracting the biometric information of the at least one object.

3. The human-computer interaction method according to claim 1 or 2, wherein the extracting biometric information of at least one recognized object specifically comprises:
controlling the robot to perform image acquisition, and extracting biometric features of the at least one object from the acquired images to obtain physiological characteristic information and/or behavioral characteristic information of the at least one object, wherein the physiological characteristic information comprises facial information and/or eye information, and the behavioral characteristic information comprises displacement information;
and/or, controlling the robot to perform voice acquisition, and extracting biometric features of the at least one object from the acquired voice to obtain physiological characteristic information and/or behavioral characteristic information of the at least one object, wherein the physiological characteristic information comprises voiceprint information, and the behavioral characteristic information comprises speech content information.

4. The human-computer interaction method according to any one of claims 1 to 3, wherein the determining, from the at least one object according to the biometric information, a target interaction object that needs to interact specifically comprises:
determining, according to the biometric information, that the at least one object is an object to be interacted with;
if the number of objects to be interacted with is equal to 1, determining that the object to be interacted with is the target interaction object;
if the number of objects to be interacted with is greater than 1, setting a priority for each object to be interacted with according to a preset priority setting condition, and determining that the object to be interacted with having the highest priority is the target interaction object.
5. The human-computer interaction method according to any one of claims 1 to 4, wherein the controlling the robot to make a response matching the target interaction object specifically comprises:
acquiring location information of the target interaction object; and
controlling the robot to move toward the target interaction object according to the location information.

6. The human-computer interaction method according to claim 5, wherein the controlling the robot to make a response matching the target interaction object specifically comprises:
acquiring identity information of the target interaction object; and
after the robot moves to the area where the target interaction object is located, making a response matching the target interaction object according to the identity information.

7. The human-computer interaction method according to claim 6, wherein, after controlling the robot to make a response matching the target interaction object, the human-computer interaction method further comprises:
determining that a new object approaches the robot;
extracting biometric information of the new object, and re-determining, from among the new object and the objects among the at least one object other than the target interaction object, a target interaction object that needs to interact; and
controlling the robot to make a response matching the re-determined target interaction object.

8. A human-computer interaction device, applied to a robot, the human-computer interaction device comprising: an extraction module, a determination module, and a control module;
the extraction module being configured to extract biometric information of at least one recognized object, wherein the biometric information comprises physiological characteristic information and/or behavioral characteristic information;
the determination module being configured to determine, from the at least one object according to the biometric information, a target interaction object that needs to interact; and
the control module being configured to control the robot to make a response matching the target interaction object.

9. A robot, comprising:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to perform the human-computer interaction method according to any one of claims 1 to 7.

10. A computer-readable storage medium storing computer instructions, the computer instructions being used to cause a computer to perform the human-computer interaction method according to any one of claims 1 to 7.
PCT/CN2018/075263 2018-02-05 2018-02-05 Human-computer interaction method and device, robot, and computer readable storage medium Ceased WO2019148491A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201880001295.0A CN108780361A (en) 2018-02-05 2018-02-05 Human-computer interaction method and device, robot and computer readable storage medium
PCT/CN2018/075263 WO2019148491A1 (en) 2018-02-05 2018-02-05 Human-computer interaction method and device, robot, and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/075263 WO2019148491A1 (en) 2018-02-05 2018-02-05 Human-computer interaction method and device, robot, and computer readable storage medium

Publications (1)

Publication Number Publication Date
WO2019148491A1 true WO2019148491A1 (en) 2019-08-08

Family

ID=64029123

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/075263 Ceased WO2019148491A1 (en) 2018-02-05 2018-02-05 Human-computer interaction method and device, robot, and computer readable storage medium

Country Status (2)

Country Link
CN (1) CN108780361A (en)
WO (1) WO2019148491A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110716634A (en) * 2019-08-28 2020-01-21 北京市商汤科技开发有限公司 Interaction method, device, equipment and display equipment
CN113724454A (en) * 2021-08-25 2021-11-30 上海擎朗智能科技有限公司 Interaction method of mobile equipment, device and storage medium
CN114633267A (en) * 2022-03-17 2022-06-17 上海擎朗智能科技有限公司 Interactive content determination method, mobile equipment, device and storage medium

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109062482A (en) * 2018-07-26 2018-12-21 百度在线网络技术(北京)有限公司 Man-machine interaction control method, device, service equipment and storage medium
CN110085225B (en) * 2019-04-24 2024-01-02 北京百度网讯科技有限公司 Voice interaction method and device, intelligent robot and computer readable storage medium
CN110228073A (en) * 2019-06-26 2019-09-13 郑州中业科技股份有限公司 Active response formula intelligent robot
CN110465947B (en) * 2019-08-20 2021-07-02 苏州博众机器人有限公司 Multi-mode fusion man-machine interaction method, device, storage medium, terminal and system
CN110689889B (en) * 2019-10-11 2021-08-17 深圳追一科技有限公司 Man-machine interaction method and device, electronic equipment and storage medium
CN112764950B (en) * 2021-01-27 2023-05-26 上海淇玥信息技术有限公司 Event interaction method and device based on combined behaviors and electronic equipment
CN115476366B (en) * 2021-06-15 2024-01-09 北京小米移动软件有限公司 Control method, device, control equipment and storage medium for foot robot
CN113486765B (en) * 2021-06-30 2023-06-16 上海商汤临港智能科技有限公司 Gesture interaction method and device, electronic equipment and storage medium
CN114715175B (en) * 2022-05-06 2025-04-25 Oppo广东移动通信有限公司 Method, device, electronic device and storage medium for determining target object
CN115240669A (en) * 2022-07-15 2022-10-25 中国建设银行股份有限公司 Voice interaction method, device, electronic device and storage medium
CN117251048A (en) * 2022-12-06 2023-12-19 北京小米移动软件有限公司 Control method and device of terminal equipment, terminal equipment and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011143523A2 (en) * 2010-05-13 2011-11-17 Alexander Poltorak Electronic personal interactive device
CN105701447A (en) * 2015-12-30 2016-06-22 上海智臻智能网络科技股份有限公司 Guest-greeting robot
CN105843118A (en) * 2016-03-25 2016-08-10 北京光年无限科技有限公司 Robot interacting method and robot system
CN106113038A (en) * 2016-07-08 2016-11-16 纳恩博(北京)科技有限公司 Mode switching method based on robot and device
CN106203050A (en) * 2016-07-22 2016-12-07 北京百度网讯科技有限公司 The exchange method of intelligent robot and device
CN106873773A (en) * 2017-01-09 2017-06-20 北京奇虎科技有限公司 Robot interactive control method, server and robot
CN107450729A (en) * 2017-08-10 2017-12-08 上海木爷机器人技术有限公司 Robot interactive method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104936091B (en) * 2015-05-14 2018-06-15 讯飞智元信息科技有限公司 Intelligent interactive method and system based on circular microphone array


Also Published As

Publication number Publication date
CN108780361A (en) 2018-11-09

Similar Documents

Publication Publication Date Title
WO2019148491A1 (en) Human-computer interaction method and device, robot, and computer readable storage medium
KR102803155B1 (en) Multi-user authentication on a device
US10913463B2 (en) Gesture based control of autonomous vehicles
EP4044146A1 (en) Method and apparatus for detecting parking space and direction and angle thereof, device and medium
US11145299B2 (en) Managing voice interface devices
KR20210048272A (en) Apparatus and method for automatically focusing the audio and the video
US20200005051A1 (en) Visual Perception Method, Apparatus, Device, and Medium Based on an Autonomous Vehicle
CN204990444U (en) Intelligent security controlgear
WO2020043040A1 (en) Speech recognition method and device
US20230136553A1 (en) Context-aided identification
CN112036345A (en) Method for detecting number of people in target place, recommendation method, detection system and medium
CN104933791A (en) Intelligent security control method and equipment
CN109036392A (en) Robot interactive system
US10917721B1 (en) Device and method of performing automatic audio focusing on multiple objects
US10964188B2 (en) Missing child prevention support system
CN109241721A (en) Method and apparatus for pushed information
KR101933822B1 (en) Intelligent speaker based on face reconition, method for providing active communication using the speaker, and computer readable medium for performing the method
CN114187642A (en) Target area person searching method and device based on leading robot and electronic equipment
JP5844375B2 (en) Object search system and object search method
CN115240669A (en) Voice interaction method, device, electronic device and storage medium
CN114407024A (en) Position leading method, device, robot and storage medium
CN109665387B (en) Intelligent elevator boarding method and device, computer equipment and storage medium
JP2019003494A (en) Robot management system
US20210168293A1 (en) Information processing method and apparatus therefor
CN106650656A (en) User identification device and robot

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18903913

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 03/12/2020)

122 Ep: pct application non-entry in european phase

Ref document number: 18903913

Country of ref document: EP

Kind code of ref document: A1