
CN117798914A - Bionic expression robot communication method, device, medium and computer equipment - Google Patents

Bionic expression robot communication method, device, medium and computer equipment

Info

Publication number
CN117798914A
CN117798914A
Authority
CN
China
Prior art keywords
local
robot
signal
information
expression
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311856777.3A
Other languages
Chinese (zh)
Other versions
CN117798914B (en)
Inventor
王全胜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Xiaoquan Technology And Culture Co ltd
Original Assignee
Shenzhen Xiaoquan Technology And Culture Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Xiaoquan Technology And Culture Co ltd filed Critical Shenzhen Xiaoquan Technology And Culture Co ltd
Priority to CN202311856777.3A priority Critical patent/CN117798914B/en
Publication of CN117798914A publication Critical patent/CN117798914A/en
Application granted granted Critical
Publication of CN117798914B publication Critical patent/CN117798914B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00 Programme-controlled manipulators
    • B25J 9/16 Programme controls
    • B25J 9/0009 Constructional details, e.g. manipulator supports, bases

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)

Abstract

The application provides a communication method, device, medium, and computer equipment for a bionic expression robot. The communication method searches for device information of robots of the same type within communication range and sends a connection request to each robot whose device information is found. If the connection request is accepted, a first communication connection is established between the local robot and a first target robot. Local input information is acquired in real time during a first time period, processed into a first transmission signal, and sent to the first target robot, where the first transmission signal is used to control the first target robot to execute an action during a second time period. The first target robot can execute the effective execution information in the local input information at an accelerated, decelerated, or constant speed, so that the owner's information acquired by the local robot is expressed more accurately.

Description

Bionic expression robot communication method, device, medium and computer equipment
Technical Field
The invention relates to a communication method, device, medium, and computer equipment for a bionic expression robot, and belongs to the technical field of social bionic robot control.
Background
Robots in a social group can already interconnect and hold voice interactions within the same space through existing communication methods, but interaction in the prior art is limited to human-machine interaction, i.e., interaction between a person and a bionic expression robot. For example, a person can give a voice command to the bionic expression robot and obtain voice feedback, expression feedback, action feedback, and the like from it.
When a user wants to communicate with another user through bionic expression robots of the same type within the same space, for example an indoor place such as a room, communication interconnection between the robots is needed. Conventional communication interconnection requires complex steps such as authentication and verification, and in particular cannot realize interconnection at non-face-to-face distances, nor communication between users after interconnection. This limits the application of bionic expression robots in assisting and promoting communication between people.
In addition, an existing robot can simply repeat speech after acquiring the user's voice, but it cannot accurately analyze the deeper information in that voice, such as mood and emotion, nor reprocess the acquired speech according to such mood and emotion so as to express the emotional characteristics of the speaker more accurately.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a communication method, device, medium, and computer equipment for a bionic expression robot, which automatically establish a machine-to-machine communication connection and realize indirect communication between people through that connection.
According to an embodiment of the present invention, in a first aspect there is provided a communication method for a bionic expression robot, comprising the following steps:
searching for device information of robots of the same type within communication range, and sending a connection request to each same-type robot whose device information is found;
if the connection request is accepted, establishing a first communication connection between the local robot and a first target robot;
acquiring local input information in real time during a first time period, processing the local input information into a first transmission signal, and transmitting the first transmission signal to the first target robot, wherein the first transmission signal is used for controlling the first target robot to execute an action during a second time period;
if the connection request is not accepted, marking the device information of the failed same-type robot as a device to be connected.
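The search / connect / mark steps above can be sketched in a few lines. This is a minimal illustration only: the names (`LocalRobot`, `PeerInfo`, `discover_peers`, `request_connection`), the sample peers, and the acceptance rule are all assumptions, not taken from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class PeerInfo:
    device_id: str
    address: str

@dataclass
class LocalRobot:
    connected: list = field(default_factory=list)  # established first connections
    pending: list = field(default_factory=list)    # devices marked "to be connected"

    def discover_peers(self):
        # Placeholder: a real robot would scan its radio communication range here.
        return [PeerInfo("robot-A", "192.168.0.2"), PeerInfo("robot-B", "192.168.0.3")]

    def request_connection(self, peer):
        # Placeholder acceptance policy for the connection request.
        return peer.device_id != "robot-B"

    def connect_all(self):
        for peer in self.discover_peers():
            if self.request_connection(peer):
                self.connected.append(peer)  # first communication connection
            else:
                self.pending.append(peer)    # marked as device to be connected
```

A robot that finds two peers and is refused by one would end up with one connection and one pending entry.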
Further, the local input information comprises environment sound information and robot touch information;
the environment sound information is processed into a local voice signal, and the robot touch information is processed into a local voice adjustment parameter;
the local voice signal and the local voice adjustment parameter are sent to the first target robot as the first transmission signal;
the first target robot adjusts the voice playing mode of the local voice signal and/or the duration of the second time period according to the local voice adjustment parameter, obtains a first adjusted voice signal, and plays the first adjusted voice signal through first sound equipment.
Further, the step of the first target robot adjusting the voice playing mode of the local voice signal according to the local voice adjustment parameter and obtaining the first adjusted voice signal comprises:
acquiring the local voice adjustment parameter according to the robot touch information, wherein the robot touch information comprises touch position information, touch pressure information, and touch speed information;
inputting the local voice adjustment parameter into an emotion classification model and obtaining an emotion result parameter, wherein the emotion result parameter comprises: a positive emotion result, a neutral emotion result, or a negative emotion result;
calling a voice playing mode from a voice timbre library according to the emotion result parameter, and obtaining the first adjusted voice signal according to the voice playing mode and the local voice signal.
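The patent does not disclose the emotion classification model itself, so as a toy stand-in, a hand-written rule can map touch pressure and speed to one of the three emotion results. The weighting, thresholds, and the reading "hard/fast touch is negative, gentle touch is positive" are all invented for illustration.

```python
def classify_emotion(pressure: float, speed: float) -> str:
    """Toy emotion classifier over normalized touch features in [0, 1]."""
    score = 0.6 * pressure + 0.4 * speed  # invented weighting of touch features
    if score > 0.7:
        return "negative"   # hard, fast touch read as negative (assumption)
    if score > 0.3:
        return "neutral"
    return "positive"       # gentle, slow touch read as positive (assumption)
```

A real system would replace this rule with a trained classifier that also uses the touch position information.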
Further, the step of the first target robot adjusting the duration of the second time period of the local voice signal according to the local voice adjustment parameter and obtaining the first adjusted voice signal comprises:
acquiring the local voice adjustment parameter according to the robot touch information, wherein the robot touch information comprises touch position information, touch pressure information, and touch speed information;
inputting the robot touch information into an emotion classification model and obtaining an emotion result parameter, wherein the emotion result parameter comprises: a positive emotion result, a neutral emotion result, or a negative emotion result;
and acquiring a voice acceleration parameter according to the emotion result parameter, wherein the voice acceleration parameter can be greater than 1, equal to 1, or less than 1; if the voice acceleration parameter is greater than 1, the playing speed of the local voice signal is increased; if it is equal to 1, the playing speed is unchanged; and if it is less than 1, the playing speed is reduced.
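The speed rule above (acceleration parameter greater than 1 speeds playback up, equal to 1 leaves it unchanged, less than 1 slows it down) can be sketched directly. The numeric mapping from emotion result to acceleration parameter is an assumption for illustration only.

```python
# Assumed mapping from emotion result parameter to voice acceleration parameter.
ACCEL = {"positive": 1.25, "neutral": 1.0, "negative": 0.8}

def playback_rate(base_rate: float, emotion: str) -> float:
    """Scale the playback rate by the voice acceleration parameter."""
    return base_rate * ACCEL[emotion]
```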
Further, the local input information comprises environment sound information and environment video information;
a local voice signal is acquired from the environment sound information, and a local expression signal and/or a local motion signal is acquired from the environment video information;
the local voice signal together with the local expression signal and/or the local motion signal is sent to the first target robot as the first transmission signal;
the first target robot adjusts the duration of the second time period of the local voice signal according to the local expression signal and/or the local motion signal, obtains a first adjusted voice signal, and plays the first adjusted voice signal through the first sound equipment.
Further, the step of the first target robot adjusting the duration of the second time period of the local voice signal according to the local motion signal, obtaining the first adjusted voice signal, and playing the first adjusted voice signal through the first sound equipment comprises:
acquiring owner motion features according to the local motion signal, wherein the owner motion features comprise head motion features, limb motion features, and body motion features; inputting the owner motion features into an emotion classification model and obtaining an emotion result parameter, wherein the emotion result parameter comprises: a positive emotion result, a neutral emotion result, or a negative emotion result;
and acquiring a voice acceleration parameter according to the emotion result parameter, wherein the voice acceleration parameter can be greater than 1, equal to 1, or less than 1; if the voice acceleration parameter is greater than 1, the duration of the second time period is greater than the duration of the first time period; if it is equal to 1, the durations are equal; and if it is less than 1, the duration of the second time period is less than the duration of the first time period. The duration-adjusted local voice signal is the first adjusted voice signal, which is played through the first sound equipment.
Further, the step of the first target robot adjusting the duration of the second time period of the local voice signal according to the local expression signal, obtaining a first adjusted voice signal, and playing the first adjusted voice signal through the first sound equipment comprises:
acquiring owner expression features according to the local expression signal, wherein the owner expression features comprise eye action features, eyebrow action features, mouth action features, face action features, and head action features; inputting the owner expression features into an emotion classification model and obtaining an emotion result parameter, wherein the emotion result parameter comprises: a positive emotion result, a neutral emotion result, or a negative emotion result;
and acquiring a voice acceleration parameter according to the emotion result parameter, wherein the voice acceleration parameter can be greater than 1, equal to 1, or less than 1; if the voice acceleration parameter is greater than 1, the duration of the second time period is greater than the duration of the first time period; if it is equal to 1, the durations are equal; and if it is less than 1, the duration of the second time period is less than the duration of the first time period. The duration-adjusted local voice signal is the first adjusted voice signal, which is played through the first sound equipment.
Still further, the method comprises the steps of,
the local input information comprises environment sound information and environment video information;
a local voice signal is acquired from the environment sound information, and a local expression signal and/or a local motion signal is acquired from the environment video information;
the local voice signal together with the local expression signal and/or the local motion signal is sent to the first target robot as the first transmission signal;
the first target robot adjusts the voice playing mode of the local voice signal according to the local expression signal and/or the local motion signal, obtains a first adjusted voice signal, and plays it through the first sound equipment.
Further, the step of the first target robot adjusting the voice playing mode of the local voice signal according to the local expression signal, obtaining a first adjusted voice signal, and playing the first adjusted voice signal through the first sound equipment comprises:
acquiring owner expression features according to the local expression signal, wherein the owner expression features comprise eye action features, eyebrow action features, mouth action features, face action features, and head action features; inputting the owner expression features into an emotion classification model and obtaining an emotion result parameter, wherein the emotion result parameter comprises: a positive emotion result, a neutral emotion result, or a negative emotion result;
calling a voice playing mode from a voice timbre library according to the emotion result parameter, and obtaining the first adjusted voice signal according to the voice playing mode and the local voice signal;
preferably, a cheerful timbre mode is called for a positive emotion result, and the first adjusted voice signal plays the local voice signal in the cheerful timbre mode; a soothing timbre mode is called for a neutral emotion result, and the first adjusted voice signal plays the local voice signal in the soothing timbre mode; and a subdued timbre mode is called for a negative emotion result, and the first adjusted voice signal plays the local voice signal in the subdued timbre mode.
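The voice timbre library lookup can be sketched as a simple table keyed by the emotion result. The mode parameters (`pitch_shift`, `rate`) are invented placeholders; a real implementation would apply actual pitch-shifting and resampling to the audio.

```python
# Assumed voice timbre library: each emotion result selects a playback mode.
TIMBRE_LIBRARY = {
    "positive": {"mode": "cheerful", "pitch_shift": 2, "rate": 1.1},
    "neutral":  {"mode": "soothing", "pitch_shift": 0, "rate": 1.0},
    "negative": {"mode": "subdued",  "pitch_shift": -2, "rate": 0.9},
}

def first_adjusted_signal(local_voice: bytes, emotion: str) -> dict:
    """Attach the selected playing mode to the local voice signal."""
    mode = dict(TIMBRE_LIBRARY[emotion])  # copy so the library is untouched
    mode["audio"] = local_voice           # real DSP (pitch/rate change) elided
    return mode
```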
Further, the step of the first target robot adjusting the voice playing mode of the local voice signal according to the local motion signal, obtaining the first adjusted voice signal, and playing the first adjusted voice signal through the first sound equipment comprises:
acquiring owner motion features according to the local motion signal, wherein the owner motion features comprise head motion features, limb motion features, and body motion features; inputting the owner motion features into an emotion classification model and obtaining an emotion result parameter, wherein the emotion result parameter comprises: a positive emotion result, a neutral emotion result, or a negative emotion result;
calling a voice playing mode from a voice timbre library according to the emotion result parameter, and obtaining the first adjusted voice signal according to the voice playing mode and the local voice signal;
preferably, a cheerful timbre mode is called for a positive emotion result, a soothing timbre mode for a neutral emotion result, and a subdued timbre mode for a negative emotion result, and the first adjusted voice signal plays the local voice signal in the corresponding timbre mode.
Further, the first target robot obtains a first expression control signal according to the local expression signal, and the first expression control signal is used for controlling the first expression executing mechanism to act.
Still further, the method comprises the steps of,
owner expression features are acquired according to the local expression signal, wherein the owner expression features comprise eye action features, eyebrow action features, mouth action features, face action features, and head action features; a first expression control signal is acquired according to the owner expression features, and the first expression control signal is used for controlling a first expression executing mechanism of the first target robot to reproduce the owner expression features.
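The mapping from recognized owner expression features to expression-actuator commands can be sketched as a lookup table. The feature states (`"smile"`, `"raised"`, `"wide"`) and actuator names (`mouth_servo`, etc.) are illustrative assumptions; the patent does not name specific actuators.

```python
def expression_control_signal(features: dict) -> dict:
    """Translate recognized (body part, state) pairs into servo commands."""
    actuator_map = {
        ("mouth", "smile"): ("mouth_servo", 0.8),     # assumed servo targets
        ("eyebrow", "raised"): ("brow_servo", 0.6),
        ("eye", "wide"): ("eyelid_servo", 1.0),
    }
    commands = {}
    for part, state in features.items():
        if (part, state) in actuator_map:
            servo, position = actuator_map[(part, state)]
            commands[servo] = position
    return commands
```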
Further, the step of the first expression control signal controlling the first expression executing mechanism of the first target robot to execute the owner expression features comprises:
the first expression control signal executes the owner expression features within the duration of the second time period, and the duration of the second time period is acquired according to the local voice adjustment parameter.
Still further, the method comprises the steps of,
the local voice adjustment parameter is acquired according to the robot touch information, wherein the robot touch information comprises touch position information, touch pressure information, and touch speed information;
the robot touch information is input into an emotion classification model to obtain an emotion result parameter, wherein the emotion result parameter comprises: a positive emotion result, a neutral emotion result, or a negative emotion result;
and a voice acceleration parameter is acquired according to the emotion result parameter; the voice acceleration parameter can be greater than 1, equal to 1, or less than 1; if it is greater than 1, the playing speed of the local voice signal is increased; if it is equal to 1, the playing speed is unchanged; and if it is less than 1, the playing speed is reduced.
Further, the step of the first expression control signal controlling the first expression executing mechanism of the first target robot to execute the owner expression features comprises:
the first expression control signal executes the owner expression features within the second time period, and the second time period is acquired according to the local expression signal and/or the local motion signal.
Further, the step of acquiring the duration of the second time period according to the local expression signal comprises:
acquiring owner expression features according to the local expression signal, wherein the owner expression features comprise eye action features, eyebrow action features, mouth action features, face action features, and head action features; inputting the owner expression features into an emotion classification model and obtaining an emotion result parameter, wherein the emotion result parameter comprises: a positive emotion result, a neutral emotion result, or a negative emotion result;
and acquiring a voice acceleration parameter according to the emotion result parameter; the voice acceleration parameter can be greater than 1, equal to 1, or less than 1; if it is greater than 1, the duration of the second time period is greater than that of the first time period; if it is equal to 1, the durations are equal; and if it is less than 1, the duration of the second time period is less than that of the first time period.
Further, the step of acquiring the second time period according to the local motion signal comprises:
acquiring owner motion features according to the local motion signal, wherein the owner motion features comprise head motion features, limb motion features, and body motion features; inputting the owner motion features into an emotion classification model and obtaining an emotion result parameter, wherein the emotion result parameter comprises: a positive emotion result, a neutral emotion result, or a negative emotion result;
and acquiring a voice acceleration parameter according to the emotion result parameter; the voice acceleration parameter can be greater than 1, equal to 1, or less than 1; if it is greater than 1, the duration of the second time period is greater than that of the first time period; if it is equal to 1, the durations are equal; and if it is less than 1, the duration of the second time period is less than that of the first time period.
Further, the first target robot obtains a first motion control signal according to the local motion signal, and the first motion control signal is used for controlling a first motion executing mechanism to act.
Still further, the method comprises the steps of,
owner motion features are acquired according to the local motion signal, wherein the owner motion features comprise head motion features, limb motion features, and body motion features; a first motion control signal is acquired according to the owner motion features, and the first motion control signal is used for controlling a first motion executing mechanism of the first target robot to reproduce the owner motion features.
Further, the step of the first motion control signal controlling the first motion executing mechanism of the first target robot to execute the owner motion features comprises:
the first motion control signal executes the owner motion features within the duration of the second time period, and the duration of the second time period is acquired according to the local voice adjustment parameter.
Still further, the method comprises the steps of,
the local voice adjustment parameter is acquired according to the robot touch information, wherein the robot touch information comprises touch position information, touch pressure information, and touch speed information;
the robot touch information is input into an emotion classification model to obtain an emotion result parameter, wherein the emotion result parameter comprises: a positive emotion result, a neutral emotion result, or a negative emotion result;
and a voice acceleration parameter is acquired according to the emotion result parameter; the voice acceleration parameter can be greater than 1, equal to 1, or less than 1; if it is greater than 1, the playing speed of the local voice signal is increased; if it is equal to 1, the playing speed is unchanged; and if it is less than 1, the playing speed is reduced.
Further, the step of the first motion control signal controlling the first motion executing mechanism of the first target robot to execute the owner motion features comprises:
the first motion control signal executes the owner motion features within the duration of the second time period, and the duration of the second time period is acquired according to the local expression signal and/or the local motion signal.
Further, the step of acquiring the duration of the second time period according to the local expression signal comprises:
acquiring owner expression features according to the local expression signal, wherein the owner expression features comprise eye action features, eyebrow action features, mouth action features, face action features, and head action features; inputting the owner expression features into an emotion classification model and obtaining an emotion result parameter, wherein the emotion result parameter comprises: a positive emotion result, a neutral emotion result, or a negative emotion result;
and acquiring a voice acceleration parameter according to the emotion result parameter; the voice acceleration parameter can be greater than 1, equal to 1, or less than 1; if it is greater than 1, the duration of the second time period is greater than that of the first time period; if it is equal to 1, the durations are equal; and if it is less than 1, the duration of the second time period is less than that of the first time period.
Further, the step of acquiring the second time period according to the local motion signal comprises:
acquiring owner motion features according to the local motion signal, wherein the owner motion features comprise head motion features, limb motion features, and body motion features; inputting the owner motion features into an emotion classification model and obtaining an emotion result parameter, wherein the emotion result parameter comprises: a positive emotion result, a neutral emotion result, or a negative emotion result;
and acquiring a voice acceleration parameter according to the emotion result parameter; the voice acceleration parameter can be greater than 1, equal to 1, or less than 1; if it is greater than 1, the duration of the second time period is greater than that of the first time period; if it is equal to 1, the durations are equal; and if it is less than 1, the duration of the second time period is less than that of the first time period.
Further, after the connection request is accepted, the method further comprises:
determining a first physical distance between the local robot and the first target robot according to the established first communication connection;
if the first physical distance is within a first range, classifying the first communication connection as a machine-machine interaction connection, and acquiring the local input information of the first time period in real time according to the machine-machine interaction connection mode;
and if the first physical distance is within a second range, classifying the first communication connection as a man-machine-man-machine interaction connection, and acquiring the local input information of the first time period in real time according to the man-machine-man-machine connection mode.
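The distance-based classification can be sketched as a threshold check. The patent does not give the range boundaries, so the values below are assumptions for illustration.

```python
FIRST_RANGE_MAX_M = 3.0    # assumed boundary of the first (close) range
SECOND_RANGE_MAX_M = 30.0  # assumed boundary of the second range

def classify_connection(distance_m: float) -> str:
    """Classify the first communication connection by physical distance."""
    if distance_m <= FIRST_RANGE_MAX_M:
        return "machine-machine"
    if distance_m <= SECOND_RANGE_MAX_M:
        return "man-machine-man-machine"
    return "out-of-range"
```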
Further, the method further comprises:
if the number of same-type robots whose connection requests are accepted exceeds one, establishing a second communication connection between the local robot and a second target robot, acquiring a communication level order of the first communication connection and the second communication connection according to the device information of the first target robot and the second target robot, and sending the first transmission signal to the first target robot and the second target robot in turn according to the communication level order, wherein the first transmission signal is also used for controlling the second target robot to execute an action during a third time period.
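Sending to multiple accepted targets in communication-level order reduces to a sort. The `level` field derived from device information is an assumption; the actual transmission call is elided.

```python
def send_in_level_order(peers: list, signal: bytes) -> list:
    """Send the first transmission signal to each peer, lowest level first."""
    sent = []
    for peer in sorted(peers, key=lambda p: p["level"]):
        # transport.send(peer["address"], signal)  # actual transmission elided
        sent.append(peer["device_id"])
    return sent
```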
According to an embodiment of the present invention, using the bionic expression robot communication method of the first aspect, in a second aspect there is provided:
a bionic expression robot communication device, comprising:
a searching module, used for searching for device information of same-type robots within communication range and sending a connection request to each same-type robot whose device information is found;
a connection module, used for establishing a first communication connection between the local robot and a first target robot if the connection request is accepted;
a sending module, used for acquiring local input information during a first time period in real time, processing the local input information into a first transmission signal, and sending the first transmission signal to the first target robot, wherein the first transmission signal is used for controlling the first target robot to execute an action during a second time period;
and a marking module, used for marking the device information of failed same-type robots as devices to be connected if the connection request fails.
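The four modules above map naturally onto one class per device; a skeleton might look as follows. The class and method names are illustrative, and the unimplemented bodies are placeholders.

```python
class BionicExpressionRobotCommunicationDevice:
    """Skeleton mirroring the searching, connection, sending, and marking modules."""

    def __init__(self):
        self.pending = []  # devices marked "to be connected"

    def search(self):
        """Search for device info of same-type robots in communication range."""
        raise NotImplementedError

    def connect(self, peer):
        """Establish the first communication connection if the request is accepted."""
        raise NotImplementedError

    def send(self, signal):
        """Send the first transmission signal to the first target robot."""
        raise NotImplementedError

    def mark(self, peer):
        """Mark a failed same-type robot as a device to be connected."""
        self.pending.append(peer)
```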
A computer device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of:
searching for device information of robots of the same type within communication range, and sending a connection request to each same-type robot whose device information is found;
if the connection request is accepted, establishing a first communication connection between the local robot and a first target robot;
acquiring local input information in real time during a first time period, processing the local input information into a first transmission signal, and transmitting the first transmission signal to the first target robot, wherein the first transmission signal is used for controlling the first target robot to execute an action during a second time period;
if the connection request is not accepted, marking the device information of the failed same-type robot as a device to be connected.
A computer readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of:
searching for device information of robots of the same type within communication range, and sending a connection request to each same-type robot whose device information is found;
if the connection request is accepted, establishing a first communication connection between the local robot and a first target robot;
acquiring local input information in real time during a first time period, processing the local input information into a first transmission signal, and transmitting the first transmission signal to the first target robot, wherein the first transmission signal is used for controlling the first target robot to execute an action during a second time period;
if the connection request is not accepted, marking the device information of the failed same-type robot as a device to be connected.
Compared with the prior art, the technical solution provided by the present application has the following beneficial effects: the communication method can automatically establish a communication connection between similar robots within a certain communication range, and automatically acquire the local input information of the local robot in a first time period, such as voice information and robot touch information. The acquired local input information is processed, and the first target robot then performs the execution action in a second time period, which may be equal to or different from the first time period, so that the first target robot can execute the effective execution information in the local input information at an accelerated, decelerated or constant speed, thereby expressing the owner information acquired by the local robot more accurately.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Wherein:
FIG. 1 is a flow chart of a communication method of a bionic expression robot in an embodiment;
FIG. 2 is a block diagram of a communication device of a bionic expression robot in one embodiment;
FIG. 3 is a block diagram of a computer device in one embodiment.
Detailed Description
In order to better understand the technical solutions in the present application, the following description will clearly and completely describe the technical solutions in the embodiments of the present application in conjunction with the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
Example 1
The technical problem solved by this embodiment is as follows: an existing robot can perform human-machine voice interaction, but its response process, usually based on a database or a language model, cannot analyze and express the mood and emotion of the user; that is, it cannot accurately express the emotional characteristics of the user, and cannot respond to the user with a voice that carries those emotional characteristics.
The embodiment provides a communication method of a bionic expression robot, as shown in fig. 1, comprising the following steps:
S101: searching for equipment information of similar robots within a communication range, and sending a connection request to the similar robots whose equipment information is found;
S102: if the connection request is passed, establishing a first communication connection between the local robot and a first target robot;
S103: acquiring local input information in a first time period in real time, processing the local input information into a first transmission signal, and transmitting the first transmission signal to the first target robot, wherein the first transmission signal is used for controlling the first target robot to execute an execution action in a second time period;
S104: if the connection request is not passed, marking the equipment information of the failed similar robots as equipment to be connected.
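The connection steps above (search, request, connect, or mark as to-be-connected) can be sketched in a minimal in-memory form. Everything here is simulated for illustration: the field names `device_id`, `type`, `distance_m` and `accepts` are assumptions, not part of any real robot protocol.

```python
# Hypothetical sketch of the connection flow; discovery, the connection
# request and the "to be connected" marking are all simulated in memory.
def discover_and_connect(local_type, comm_range_m, peers):
    """Return (connected_ids, to_be_connected_ids) for same-type robots in range."""
    connected, to_be_connected = [], []
    for peer in peers:
        # Only same-type robots within the communication range are considered
        if peer["type"] != local_type or peer["distance_m"] > comm_range_m:
            continue
        if peer["accepts"]:                     # connection request passed
            connected.append(peer["device_id"])
        else:                                   # request failed: mark the device
            to_be_connected.append(peer["device_id"])
    return connected, to_be_connected
```

A robot out of range or of a different type is simply ignored; only a same-type robot that rejects the request is recorded as equipment to be connected.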
According to the method, an execution action with an adjustable second time period is set. By increasing, keeping or reducing the duration of the second time period, the emotional characteristics of the local input information carried in the first transmission signal, especially of the local voice information, are expressed more accurately; for example, the playing speed of the local voice information is increased, or the local voice information is given a specific timbre. The owner voice information acquired by the local robot is thereby expressed more accurately, and communication between humans and robots becomes richer and more precise.
Example two
Based on example one, this example discloses a preferred embodiment:
a communication method of a bionic expression robot comprises the following steps:
searching for equipment information of similar robots within a communication range, and sending a connection request to the similar robots whose equipment information is found;
if the connection request passes, a first communication connection between the local robot and a first target robot is established;
acquiring local input information in a first time period in real time, wherein the local input information comprises environment sound information and robot touch information; processing the environment sound information into a local voice signal, and processing the robot touch information into a local voice adjustment parameter;
sending the local voice signal and the local voice adjustment parameter to the first target robot as a first transmission signal;
the first transmission signal is used for controlling the first target robot to execute an execution action in a second time period: the first target robot adjusts the voice playing mode of the local voice signal and/or the duration of the second time period according to the local voice adjustment parameter, obtains a first adjusted voice signal, and plays the first adjusted voice signal through a first sound device.
If the connection request is not passed, marking the equipment information of the failed similar robots as equipment to be connected.
This embodiment specifically discloses that, after the first communication connection is established, the local robot starts to acquire the local input information periodically, for example once every 10 s. Further, the duration of the first time period can be adjusted intelligently, for example according to pauses in the owner's voice information: when the owner stops speaking for 2 s, the interval between such pauses is taken as the first time period.
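The pause-based segmentation can be sketched as follows. Representing owner utterances as `(start_s, end_s)` intervals is an assumption made for illustration; the 2 s pause threshold follows the example in the text.

```python
# Sketch: merge owner utterances separated by short gaps into first time
# periods, starting a new period whenever the silence exceeds pause_s.
def first_time_periods(utterances, pause_s=2.0):
    """utterances: list of (start_s, end_s) tuples, in chronological order."""
    periods = []
    start, end = utterances[0]
    for s, e in utterances[1:]:
        if s - end > pause_s:          # owner stopped speaking for > pause_s
            periods.append((start, end))
            start = s
        end = e
    periods.append((start, end))
    return periods
```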
The local input information comprises environment sound information, from which the owner's local voice signal is mainly obtained; environment noise is filtered out by a voice filtering module to improve the extraction accuracy of the local voice signal, and the local voice signal is a voice audio signal or a voice text signal.
The robot touch information is an information acquisition source specific to the bionic expression robot. The bionic expression robot generally has a doll-like appearance in which the head occupies a relatively large proportion, and a user generally expresses emotion by patting or touching the head of the robot. The robot touch information is therefore acquired by arranging touch sensing devices on the head of the bionic expression robot, the touch sensing devices comprising pressure sensors and temperature sensors. Touch pressure information can be acquired through a pressure sensor; touch position information can be acquired through pressure sensors at a plurality of different positions; and continuous touch information of the user at a certain position can be acquired through pressure sensors arranged in a matrix, so as to obtain touch speed information. The temperature sensor can also assist in acquiring touch position information and touch duration information. By comprehensively analyzing the robot touch information, an emotion result parameter can be acquired through an emotion classification model; for example, the robot touch information can be used to analyze whether the user's emotion is positive, negative or neutral: a hitting action on the head may be classified as a negative emotion, and a gentle face-pinching action as a positive emotion.
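The touch-to-emotion mapping described above can be illustrated with a toy rule-based stand-in for the emotion classification model. The thresholds, position names and units below are assumptions for illustration only; the real scheme uses a trained model, not fixed rules.

```python
# Toy rule-based stand-in for the touch emotion classification model:
# a hard, fast touch on the head is treated as a hit (negative emotion),
# a gentle slow touch as a stroke or pinch (positive emotion).
def classify_touch(position, pressure_n, speed_mm_s):
    """Map touch position, pressure and speed to a coarse emotion result."""
    if position == "head" and pressure_n > 5.0 and speed_mm_s > 200.0:
        return "negative"    # e.g. a hit on the head
    if position in ("head", "face") and pressure_n < 2.0 and speed_mm_s < 50.0:
        return "positive"    # e.g. a gentle stroke or face pinch
    return "neutral"
```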
The method for establishing the emotion extraction and judgment module comprises the following steps:
Step one, data acquisition: physiological signals, physical characteristics, expression images and voice information of a subject are acquired through various image acquisition devices and physiological signal recording devices (such as a heart rate monitor, a skin conductance sensor, an electroencephalograph and the like), and the emotion states of the subject corresponding to the data under a specific situation are marked.
Step two, data preprocessing: because the acquired images, sounds and physiological signals may be affected by a number of factors, such as environmental noise, device errors, etc., pre-processing, including filtering, denoising, normalization, etc., is required to reduce the impact of noise on emotion recognition.
Step three, extracting features: characteristic parameters reflecting the emotional state are extracted from the preprocessed image, sound and physiological signals. For example, for heart rate signals, features such as average frequency, standard deviation and the like of the heart rate signals can be extracted; for the skin conductance signal, the characteristics of power spectral density, spectral center and the like can be extracted.
Step four, model training: the extracted features are used to train a model with a machine learning or deep learning algorithm, such as a random forest (RF), an artificial neural network (ANN) or a convolutional neural network (CNN). The model adopts an LSTM fusion network that takes the owner's voice and environment background sound information, the owner's expression image and body posture feature image information, and the robot touch signal information as network inputs; the multi-modal network model comprises a single-modal feature extraction layer, a dual-modal feature fusion layer and a tri-modal feature fusion layer, through which the multi-modal fusion and convolution operations are performed.
Step five, model test and evaluation: and (3) checking the performance of the model by using a test set which does not participate in training, and using the model for judging emotion after meeting the limit of the threshold of accuracy and recall rate, and outputting emotion characteristic information.
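Steps two and three (preprocessing and feature extraction) and the idea of multi-modal fusion can be sketched with plain statistics. This is a deliberately simplified stand-in: the real scheme uses an LSTM fusion network, and the min-max normalization, mean/standard-deviation features and concatenation-style fusion below are illustrative assumptions.

```python
import statistics

# Step two: min-max normalization to reduce device and scale effects.
def normalize(signal):
    lo, hi = min(signal), max(signal)
    if hi == lo:
        return [0.0] * len(signal)
    return [(x - lo) / (hi - lo) for x in signal]

# Step three: simple statistics (mean, standard deviation) as features,
# e.g. for a heart rate or skin conductance signal.
def extract_features(signal):
    return [statistics.fmean(signal), statistics.pstdev(signal)]

# Feature-level fusion: concatenate per-modality feature vectors before
# feeding them to a classifier (a stand-in for the fusion layers).
def fuse(*modal_features):
    return [f for feats in modal_features for f in feats]
```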
In order to reduce the operation complexity, the effective communication range of two communication machines is limited, and data acquisition is performed in a targeted manner aiming at different effective communication ranges.
How to utilize the emotion result obtained by the robot touch information to adjust the existing local voice signal is the core scheme of the invention.
Specifically, the playing tone of the local voice signal can be adjusted through the robot touch information:
the local voice adjustment parameters are acquired according to the robot touch information, wherein the robot touch information comprises touch position information, touch pressure information and touch speed information;
inputting the local voice adjustment parameters into an emotion classification model and acquiring emotion result parameters, wherein the emotion result parameters comprise: positive emotional outcome, neutral emotional outcome, negative emotional outcome;
calling a voice playing mode from a voice timbre library according to the emotion result parameter, and obtaining a first adjusted voice signal according to the called voice playing mode and the local voice signal;
Preferably, a cheerful timbre mode is called according to the positive emotion result, and the first adjusted voice signal plays the local voice signal in the cheerful timbre mode; a soothing timbre mode is called according to the neutral emotion result, and the first adjusted voice signal plays the local voice signal in the soothing timbre mode; and a depressed timbre mode is called according to the negative emotion result, and the first adjusted voice signal plays the local voice signal in the depressed timbre mode.
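The timbre library lookup above is a direct mapping from emotion result to playing mode and can be sketched as:

```python
# Minimal sketch of the voice timbre library; the mode names mirror the
# mapping in the text (positive -> cheerful, neutral -> soothing,
# negative -> depressed).
VOICE_TIMBRE_LIBRARY = {
    "positive": "cheerful",
    "neutral": "soothing",
    "negative": "depressed",
}

def select_timbre_mode(emotion_result):
    """Call the voice playing mode for an emotion result parameter."""
    return VOICE_TIMBRE_LIBRARY[emotion_result]
```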
According to this scheme, the playing timbre of the local voice signal is changed to cheerful, soothing or depressed according to the positive, neutral or negative emotion result, so that the acquired local voice signal, for example an audio signal converted from a voice text signal, is played with a specific timbre. The owner voice signal acquired by the local robot is thus expressed more accurately, and can even be expressed in an exaggerated way: for example, when the owner's voice signal is detected as positive, the owner's cheerfulness is exaggerated according to the degree of positivity of the emotion result. Since the bionic expression robot has a lovable appearance, moderately exaggerating its voice helps it better play its characteristics.
Specifically, the playing speed of the local voice signal can be adjusted through the touch information of the robot:
the local voice adjustment parameters are acquired according to the robot touch information, wherein the robot touch information comprises touch position information, touch pressure information and touch speed information;
inputting the robot touch information into an emotion classification model and acquiring emotion result parameters, wherein the emotion result parameters comprise: positive emotional outcome, neutral emotional outcome, negative emotional outcome;
A voice acceleration parameter is acquired according to the emotion result parameter, and the voice acceleration parameter can be greater than, equal to or less than 1: if it is greater than 1, the playing speed of the local voice signal is increased; if it is equal to 1, the playing speed of the local voice signal is unchanged; and if it is less than 1, the playing speed of the local voice signal is reduced.
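In this embodiment's convention, an acceleration parameter greater than 1 speeds playback up (shortening its duration), 1 leaves it unchanged, and less than 1 slows it down. Modeling the resulting playback duration as the original duration divided by the parameter is an assumption made for this sketch:

```python
# Sketch of applying the voice acceleration parameter: k > 1 speeds playback
# up, k == 1 leaves it unchanged, k < 1 slows it down.
def playback_duration(first_period_s, acceleration):
    """Playback duration of the local voice signal after speed adjustment."""
    if acceleration <= 0:
        raise ValueError("voice acceleration parameter must be positive")
    return first_period_s / acceleration
```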
According to this scheme, the playing speed of the local voice signal is adjusted according to the positive, neutral or negative emotion result: the playing speed is accelerated, kept unchanged or slowed down by the emotion result parameter, so that the owner voice signal acquired by the local robot is expressed at different playing speeds and can even be expressed in an exaggerated way. For example, when the owner's voice signal is detected as positive, the owner's cheerfulness is exaggerated according to the degree of positivity of the emotion result; since the bionic expression robot has a lovable appearance, moderately accelerating its voice helps it better play its characteristics.
Example III
Based on example one, this example discloses a preferred embodiment:
a communication method of a bionic expression robot comprises the following steps:
searching for equipment information of similar robots within a communication range, and sending a connection request to the similar robots whose equipment information is found;
if the connection request passes, a first communication connection between the local robot and a first target robot is established;
acquiring local input information of a first time period in real time, wherein the local input information comprises environment sound information and environment video information; acquiring a local voice signal from the environment sound information, and acquiring a local expression signal and/or a local motion signal from the environment video information;
the local voice signal, the local expression signal and/or the local motion signal are/is used as a first transmission signal to be transmitted to a first target robot;
the first target robot adjusts the duration of a second time period of the local voice signal according to the local expression signal and/or the local motion signal, acquires a first adjustment voice signal, and plays the first adjustment voice signal through first sound equipment.
If the connection request is not passed, marking the equipment information of the failed similar robots as equipment to be connected.
This embodiment specifically discloses that, after the first communication connection is established, the local robot starts to acquire the local input information periodically, for example once every 10 s. Further, the duration of the first time period can be adjusted intelligently, for example according to pauses in the owner's voice information: when the owner stops speaking for 2 s, the interval between such pauses is taken as the first time period.
The local input information comprises environment sound information, from which the owner's local voice signal is mainly obtained; environment noise is filtered out by a voice filtering module to improve the extraction accuracy of the local voice signal, and the local voice signal is a voice audio signal or a voice text signal.
The local robot acquires environment video information through video acquisition hardware such as a camera, obtaining video information and image information, and extracts the owner's action information and facial expression information by analyzing the environment video information. Further analysis of this information improves the expression of the local robot and the first target robot, including the expression of voice playing, the expression of actions and the expression of facial expressions.
Further analysis of the environmental video information may further improve the expression of the native robot and the first target robot, for example, analyzing the environmental video information to obtain the action features of the host or the expression features of the host, training the feature information and obtaining the emotion result parameters, and optimizing or exaggerating the expression of the first target robot by using the emotion result parameters.
How to use the emotion result parameters obtained by the environment video information to optimize or exaggerate the expression of the local voice information, the host action information and the host expression information is the core scheme of the invention.
Specifically, the playing speed of the local voice signal can be adjusted through the action characteristics of the host:
acquiring owner motion characteristics according to the local motion signal, wherein the owner motion characteristics comprise head motion characteristics, limb motion characteristics and body motion characteristics; inputting the owner motion characteristics into an emotion classification model and acquiring emotion result parameters, wherein the emotion result parameters comprise: positive emotional outcome, neutral emotional outcome, negative emotional outcome;
A voice acceleration parameter is acquired according to the emotion result parameter, and the voice acceleration parameter can be greater than, equal to or less than 1: if it is greater than 1, the duration of the second time period is greater than that of the first time period; if it is equal to 1, the two durations are equal; and if it is less than 1, the duration of the second time period is less than that of the first time period. The local voice signal after the duration adjustment is the first adjusted voice signal, which is played through a first sound device.
According to this scheme, the playing speed of the local voice signal is adjusted according to the positive, neutral or negative emotion result: the playing speed is accelerated, kept unchanged or slowed down by the emotion result parameter, so that the owner voice signal acquired by the local robot is expressed at different playing speeds and can even be expressed in an exaggerated way. For example, when the owner's voice signal is detected as positive, the owner's cheerfulness is exaggerated according to the degree of positivity of the emotion result; since the bionic expression robot has a lovable appearance, moderately accelerating its voice helps it better play its characteristics.
Specifically, the playing speed of the local voice signal can be adjusted through the expression characteristics of the host:
acquiring owner expression characteristics according to the local expression signal, wherein the owner expression characteristics comprise eye action characteristics, eyebrow action characteristics, mouth action characteristics, face action characteristics and head action characteristics; inputting the owner expression characteristics into an emotion classification model and acquiring emotion result parameters, wherein the emotion result parameters comprise: positive emotional outcome, neutral emotional outcome, negative emotional outcome;
A voice acceleration parameter is acquired according to the emotion result parameter, and the voice acceleration parameter can be greater than, equal to or less than 1: if it is greater than 1, the duration of the second time period is greater than that of the first time period; if it is equal to 1, the two durations are equal; and if it is less than 1, the duration of the second time period is less than that of the first time period. The local voice signal after the duration adjustment is the first adjusted voice signal, which is played through a first sound device.
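This embodiment states the convention directly in terms of durations: a parameter greater than 1 makes the second time period longer than the first, 1 keeps them equal, and less than 1 makes it shorter. Modeling this as simple scaling of the first period is an assumption for the sketch:

```python
# Sketch of the duration convention in this embodiment: the second time
# period is the first time period scaled by the acceleration parameter k
# (k > 1 lengthens it, k == 1 keeps it equal, k < 1 shortens it).
def second_period_duration(first_period_s, k):
    if k <= 0:
        raise ValueError("acceleration parameter must be positive")
    return first_period_s * k
```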
According to this scheme, the playing speed of the local voice signal is adjusted according to the positive, neutral or negative emotion result: the playing speed is accelerated, kept unchanged or slowed down by the emotion result parameter, so that the owner voice signal acquired by the local robot is expressed at different playing speeds and can even be expressed in an exaggerated way. For example, when the owner's voice signal is detected as positive, the owner's cheerfulness is exaggerated according to the degree of positivity of the emotion result; since the bionic expression robot has a lovable appearance, moderately accelerating its voice helps it better play its characteristics.
Specifically, the play tone of the local voice signal can be adjusted through the action characteristics of the host:
acquiring owner motion characteristics according to the local motion signal, wherein the owner motion characteristics comprise head motion characteristics, limb motion characteristics and body motion characteristics; inputting the owner motion characteristics into an emotion classification model and acquiring emotion result parameters, wherein the emotion result parameters comprise: positive emotional outcome, neutral emotional outcome, negative emotional outcome;
calling a voice playing mode from a voice timbre library according to the emotion result parameter, and obtaining a first adjusted voice signal according to the called voice playing mode and the local voice signal;
Preferably, a cheerful timbre mode is called according to the positive emotion result, and the first adjusted voice signal plays the local voice signal in the cheerful timbre mode; a soothing timbre mode is called according to the neutral emotion result, and the first adjusted voice signal plays the local voice signal in the soothing timbre mode; and a depressed timbre mode is called according to the negative emotion result, and the first adjusted voice signal plays the local voice signal in the depressed timbre mode.
According to this scheme, the playing timbre of the local voice signal is changed to cheerful, soothing or depressed according to the positive, neutral or negative emotion result, so that the acquired local voice signal, for example an audio signal converted from a voice text signal, is played with a specific timbre. The owner voice signal acquired by the local robot is thus expressed more accurately, and can even be expressed in an exaggerated way: for example, when the owner's voice signal is detected as positive, the owner's cheerfulness is exaggerated according to the degree of positivity of the emotion result. Since the bionic expression robot has a lovable appearance, moderately exaggerating its voice helps it better play its characteristics.
Specifically, the playing tone of the local voice signal can be adjusted through the expression characteristics of the host:
acquiring owner expression characteristics according to the local expression signal, wherein the owner expression characteristics comprise eye action characteristics, eyebrow action characteristics, mouth action characteristics, face action characteristics and head action characteristics; inputting the owner expression characteristics into an emotion classification model and acquiring emotion result parameters, wherein the emotion result parameters comprise: positive emotional outcome, neutral emotional outcome, negative emotional outcome;
calling a voice playing mode from a voice timbre library according to the emotion result parameter, and obtaining a first adjusted voice signal according to the called voice playing mode and the local voice signal;
Preferably, a cheerful timbre mode is called according to the positive emotion result, and the first adjusted voice signal plays the local voice signal in the cheerful timbre mode; a soothing timbre mode is called according to the neutral emotion result, and the first adjusted voice signal plays the local voice signal in the soothing timbre mode; and a depressed timbre mode is called according to the negative emotion result, and the first adjusted voice signal plays the local voice signal in the depressed timbre mode.
According to this scheme, the playing timbre of the local voice signal is changed to cheerful, soothing or depressed according to the positive, neutral or negative emotion result, so that the acquired local voice signal, for example an audio signal converted from a voice text signal, is played with a specific timbre. The owner voice signal acquired by the local robot is thus expressed more accurately, and can even be expressed in an exaggerated way: for example, when the owner's voice signal is detected as positive, the owner's cheerfulness is exaggerated according to the degree of positivity of the emotion result. Since the bionic expression robot has a lovable appearance, moderately exaggerating its voice helps it better play its characteristics.
Further, the execution speed of the limb actions and/or expression actions of the first target robot can be adjusted through the owner's action characteristics:
acquiring owner motion characteristics according to the local motion signal, wherein the owner motion characteristics comprise head motion characteristics, limb motion characteristics and body motion characteristics; inputting the owner motion characteristics into an emotion classification model and acquiring emotion result parameters, wherein the emotion result parameters comprise: positive emotional outcome, neutral emotional outcome, negative emotional outcome;
An action acceleration parameter is acquired according to the emotion result parameter, and the action acceleration parameter can be greater than, equal to or less than 1: if it is greater than 1, the duration of the second time period is greater than that of the first time period; if it is equal to 1, the two durations are equal; and if it is less than 1, the duration of the second time period is less than that of the first time period. The motor driving signal after the duration adjustment is a first adjusted action signal, which is executed through a first execution device.
According to this scheme, the execution speed of the owner's action signal is adjusted according to the positive, neutral or negative emotion result: the execution speed is accelerated, kept unchanged or slowed down by the emotion result parameter, so that the owner action signal acquired by the local robot is expressed at different execution speeds and can even be expressed in an exaggerated way. For example, when the owner's action signal is detected as positive, the owner's cheerfulness is exaggerated according to the degree of positivity of the emotion result; since the bionic expression robot has a lovable appearance, moderately accelerating its actions helps it better play its characteristics.
Specifically, the execution speed of the limb actions and/or expression actions of the first target robot can also be adjusted through the owner's expression characteristics:
acquiring owner expression characteristics according to the local expression signal, wherein the owner expression characteristics comprise eye action characteristics, eyebrow action characteristics, mouth action characteristics, face action characteristics and head action characteristics; inputting the owner expression characteristics into an emotion classification model and acquiring emotion result parameters, wherein the emotion result parameters comprise: positive emotional outcome, neutral emotional outcome, negative emotional outcome;
An action acceleration parameter is acquired according to the emotion result parameter, and the action acceleration parameter can be greater than, equal to or less than 1: if it is greater than 1, the duration of the second time period is greater than that of the first time period; if it is equal to 1, the two durations are equal; and if it is less than 1, the duration of the second time period is less than that of the first time period. The motor driving signal after the duration adjustment is a first adjusted action signal, which is executed through a first execution device.
According to this scheme, the execution speed of the owner's action signal is adjusted according to the positive, neutral or negative emotion result: the execution speed is accelerated, kept unchanged or slowed down by the emotion result parameter, so that the owner action signal acquired by the local robot is expressed at different execution speeds and can even be expressed in an exaggerated way. For example, when the owner's action signal is detected as positive, the owner's cheerfulness is exaggerated according to the degree of positivity of the emotion result; since the bionic expression robot has a lovable appearance, moderately accelerating its actions helps it better play its characteristics.
Example IV
Based on the third embodiment, the present embodiment continues to disclose an optimization scheme:
the step of passing the connection request further comprises:
determining a first physical distance between the local robot and the first target robot according to the established first communication connection;
if the first physical distance is within the first range, classifying the first communication connection as a machine-machine interaction connection, and acquiring the local input information of the first time period in real time according to the machine-machine interaction connection mode;
and if the first physical distance is within the second range, classifying the first communication connection as a human-machine-machine-human interaction connection, and acquiring the local input information of the first time period in real time in the human-machine-machine-human mode.
That is, this embodiment provides both a close-range, face-to-face machine-machine interaction manner and a long-distance, non-face-to-face human-machine-machine-human interaction manner.
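The distance-based classification of embodiment four can be sketched as below. The range boundaries are invented placeholders (the disclosure does not give concrete distances), and the function name is illustrative:

```python
# Hypothetical sketch: classify the first communication connection by the
# first physical distance. Range values are assumptions for illustration.

CLOSE_RANGE_M = (0.0, 3.0)    # "first range": face-to-face, machine-machine
FAR_RANGE_M   = (3.0, 100.0)  # "second range": remote, human-machine-machine-human

def classify_connection(distance_m: float) -> str:
    lo, hi = CLOSE_RANGE_M
    if lo <= distance_m < hi:
        return "machine-machine"
    lo, hi = FAR_RANGE_M
    if lo <= distance_m < hi:
        return "human-machine-machine-human"
    return "out-of-range"

print(classify_connection(1.5))   # machine-machine
print(classify_connection(20.0))  # human-machine-machine-human
```

The chosen class would then select how the local input information of the first time period is acquired.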
Example five
Based on the third embodiment, the present embodiment continues to disclose an optimization scheme:
if the number of similar robots passing the connection request exceeds 1, a second communication connection between the local robot and a second target robot is established; a communication level order of the first communication connection and the second communication connection is acquired according to the device information of the first target robot and the second target robot, and the first transmission signal is sent to the first target robot and the second target robot in turn according to the communication level order, wherein the first transmission signal is also used for controlling the second target robot to execute an execution action in a third time period.
In this scheme, a multi-party human-machine interaction mode can be established. Likewise, when three or more robots interact, the human role can be removed; that is, acquiring the local input information in the first time period in real time may include: the first target robot acquires the environment sound information and the robot touch information of the second target robot, converts them into a third transmission signal, and sends the third transmission signal to a third target robot, wherein the third transmission signal is used for controlling the third target robot to execute an execution action in a fourth time period.
To avoid the exaggerated expression of an existing signal being iteratively amplified at each interaction, an acceleration threshold and a deceleration threshold may be set.
Example six
This embodiment further discloses a bionic expression robot communication method: if the connection request fails, the device information of the failed similar robot is marked as a device to be connected.
Since the device information of similar robots is searched within the communication range, a connection request is sent once the device information confirms that a device is a similar robot device. If the request does not pass, the similar robot temporarily does not allow connection; the conditions under which connection is not allowed include: the requesting device's authority is lower than that of the device to be connected, the device to be connected has set a connection permission, the device to be connected has failed, and the like.
Under the above non-connection conditions, the device to be connected has not intentionally refused the local robot, so a connection requirement may be missed.
The embodiment discloses a method for maintaining to-be-connected equipment, which comprises the following steps:
searching equipment information of the similar robots in a communication range, and sending a connection request to the similar robots searching the equipment information;
if the connection request passes, a first communication connection between the local robot and a first target robot is established;
acquiring local input information in a first time period in real time, processing the local input information into a first transmission signal, and transmitting the first transmission signal to a first target robot, wherein the first transmission signal is used for controlling the first target robot to execute an execution action in a second time period;
if the connection request is not passed, marking the equipment information of the same type of failed robots as equipment to be connected;
and sending a second connection request to the device to be connected at intervals, the second connection request activating the connection request device of the similar robot, wherein the connection request device generates a to-be-connected indication in an acoustic mode, an optical mode or the like, so that the owner of the similar robot can notice the connection request and decide on it.
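The maintenance flow above (mark failed peers, then periodically re-send a second request) can be sketched minimally as follows; the class and field names are illustrative assumptions, and the retry stub does not model a real reply:

```python
# Hypothetical sketch: track devices to be connected and retry them.
from dataclasses import dataclass, field

@dataclass
class PendingDevice:
    device_id: str
    attempts: int = 0

@dataclass
class ConnectionManager:
    pending: dict = field(default_factory=dict)

    def mark_to_be_connected(self, device_id: str) -> None:
        """Record a failed similar robot as a device to be connected."""
        self.pending.setdefault(device_id, PendingDevice(device_id))

    def retry_all(self) -> list[str]:
        """Send a second connection request to every pending device and
        return their ids (real code would await the peer's reply)."""
        retried = []
        for dev in self.pending.values():
            dev.attempts += 1
            retried.append(dev.device_id)
        return retried

mgr = ConnectionManager()
mgr.mark_to_be_connected("robot-D")
print(mgr.retry_all())  # ['robot-D']
```

In practice `retry_all` would be driven by a timer ("at intervals"), and a device that finally accepts would be removed from `pending`.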
Example seven
As shown in fig. 2, a bionic expression robot communication device includes:
a search identification module 100, configured to search for similar robot device information within the communication range, the identification step comprising: identifying the identity information of the robot tag signal, and if the identity information passes, continuing to identify the communication level information;
a connection request module 200, configured to send a connection request to a similar robot that obtains device information;
the signal sending module 300 is configured to, if the connection request passes, establish a first communication connection between the local robot and the first target robot, acquire local input information in a first time period in real time, convert the local input information into a first transmission signal, and send the first transmission signal to the first target robot, where the first transmission signal is used to control the first target robot to execute an execution action in a second time period;
the communication control module 400 is configured to mark the device information of the failed similar robot as a device to be connected if the connection request fails.
Example eight
FIG. 3 illustrates an internal block diagram of a computer device in one embodiment. The computer device may specifically be a terminal or a server. As shown in fig. 3, the computer device includes a processor, a memory, and a network interface connected by a system bus. The memory includes a nonvolatile storage medium and an internal memory. The non-volatile storage medium of the computer device stores an operating system, and may also store a computer program that, when executed by the processor, causes the processor to implement a communication method of the bionic expression robot. The internal memory may also store a computer program that, when executed by the processor, causes the processor to perform a communication method of the bionic expression robot. It will be appreciated by those skilled in the art that the structure shown in fig. 3 is merely a block diagram of some of the structures associated with the present application and is not limiting of the computer device to which the present application may be applied, and that a particular computer device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is presented comprising a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the steps of:
searching the robot tag signals in real time within the effective communication range and queuing and identifying the found tag signals in time order, wherein the identifying step comprises: identifying the identity information of the robot tag signal, and if the identity information passes, continuing to identify the communication level information;
sending a communication connection request, within a unit time, to a robot conforming to a communication level rule;
if the communication connection request passes, processing an external instruction signal acquired in real time into a first transmission signal, and transmitting the first transmission signal to a first target robot establishing communication connection;
the external command signals comprise an environment sound signal and a robot touch signal, the first transmission signal comprises a first voice signal converted through the environment sound signal and the robot touch signal and a first expression control signal converted through the environment sound signal and the robot touch signal, the first voice signal controls the first target robot to play voice, and the first expression control signal controls an expression control mechanism of the first target robot to move.
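The search-and-identify step above can be sketched with a time-ordered queue. The communication level rule (peer level at least the local level) and all names here are assumptions; the disclosure does not specify the rule's exact form:

```python
# Hypothetical sketch: queue discovered tag signals by timestamp, verify
# identity first, then apply a communication level rule.
import heapq

def identify(queue, local_level, known_ids):
    """Pop tag signals in time order; return ids that pass both checks."""
    accepted = []
    while queue:
        ts, tag = heapq.heappop(queue)
        if tag["id"] not in known_ids:   # identity check fails: skip
            continue
        if tag["level"] >= local_level:  # assumed communication level rule
            accepted.append(tag["id"])
    return accepted

q = []
heapq.heappush(q, (2, {"id": "r2", "level": 1}))
heapq.heappush(q, (1, {"id": "r1", "level": 3}))
print(identify(q, local_level=2, known_ids={"r1", "r2"}))  # ['r1']
```

The heap realizes the "queuing in time sequence" requirement: signals are processed strictly by discovery timestamp regardless of arrival order in the list.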
In one embodiment, a computer-readable storage medium is provided, storing a computer program which, when executed by a processor, causes the processor to perform the steps of:
searching the robot tag signals in real time within the effective communication range and queuing and identifying the found tag signals in time order, wherein the identifying step comprises: identifying the identity information of the robot tag signal, and if the identity information passes, continuing to identify the communication level information;
sending a communication connection request, within a unit time, to a robot conforming to a communication level rule;
if the communication connection request passes, processing an external instruction signal acquired in real time into a first transmission signal, and transmitting the first transmission signal to a first target robot establishing communication connection;
the external command signals comprise an environment sound signal and a robot touch signal, the first transmission signal comprises a first voice signal converted through the environment sound signal and the robot touch signal and a first expression control signal converted through the environment sound signal and the robot touch signal, the first voice signal controls the first target robot to play voice, and the first expression control signal controls an expression control mechanism of the first target robot to move.
Those skilled in the art will appreciate that all or part of the processes in the methods of the above embodiments may be implemented by a computer program for instructing relevant hardware, where the program may be stored in a non-volatile computer readable storage medium, and where the program, when executed, may include processes in the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the various embodiments provided herein may include non-volatile and/or volatile memory. The nonvolatile memory can include Read Only Memory (ROM), programmable ROM (PROM), electrically Programmable ROM (EPROM), electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double Data Rate SDRAM (DDRSDRAM), enhanced SDRAM (ESDRAM), synchronous Link DRAM (SLDRAM), memory bus direct RAM (RDRAM), direct memory bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM), among others.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The above examples only represent a few embodiments of the present application, which are described in more detail and are not to be construed as limiting the scope of the present application. It should be noted that it would be apparent to those skilled in the art that various modifications and improvements could be made without departing from the spirit of the present application, which would be within the scope of the present application. Accordingly, the scope of protection of the present application is to be determined by the claims appended hereto.

Claims (10)

1. The communication method of the bionic expression robot is characterized by comprising the following steps of:
searching equipment information of the similar robots in a communication range, and sending a connection request to the similar robots searching the equipment information;
if the connection request passes, a first communication connection between the local robot and a first target robot is established;
acquiring local input information in a first time period in real time, processing the local input information into a first transmission signal, and transmitting the first transmission signal to a first target robot, wherein the first transmission signal is used for controlling the first target robot to execute an execution action in a second time period;
if the connection request is not passed, marking the equipment information of the failed similar robot as equipment to be connected.
2. The bionic expression robot communication method according to claim 1, wherein:
the local input information comprises environment sound information and robot touch information;
processing the environment sound information into a local voice signal, and processing the robot touch information into a local voice adjustment parameter;
the local voice signal and the local voice adjusting parameter are sent to the first target robot as the first transmission signal;
the first target robot adjusts the voice playing mode of the local voice signal and/or the duration of the second time period according to the local voice adjusting parameter, acquires the first adjusting voice signal, and plays the first adjusting voice signal through the first sound equipment.
3. The bionic expression robot communication method according to claim 1, wherein:
the local input information comprises environment sound information and environment video information;
acquiring a local voice signal from the environment sound information, and acquiring a local expression signal and/or a local motion signal from the environment video information;
the local voice signal, the local expression signal and/or the local motion signal are/is used as a first transmission signal to be transmitted to a first target robot;
The first target robot adjusts the duration of a second time period of the local voice signal according to the local expression signal and/or the local motion signal, acquires a first adjustment voice signal, and plays the first adjustment voice signal through first sound equipment.
4. The method of claim 3, further comprising:
the first target robot obtains a first expression control signal according to the local expression signal, and the first expression control signal is used for controlling the action of the first expression executing mechanism.
5. The method of claim 3, further comprising:
the first target robot obtains a first action control signal according to the local motion signal, and the first action control signal is used for controlling the first action execution structure to act.
6. The method of claim 1, wherein the step of passing the connection request further comprises:
judging a first physical distance between the local robot and a first target robot according to the established first communication connection;
if the first physical distance is within the first range, classifying the first communication connection as a machine-machine interaction connection, and acquiring the local input information of the first time period in real time according to the machine-machine interaction connection mode;
and if the first physical distance is within the second range, classifying the first communication connection as a human-machine-machine-human interaction connection, and acquiring the local input information of the first time period in real time in the human-machine-machine-human mode.
7. The method of claim 1, further comprising:
if the number of similar robots passing the connection request exceeds 1, establishing a second communication connection between the local robot and a second target robot, acquiring a communication level order of the first communication connection and the second communication connection according to the equipment information of the first target robot and the second target robot, and sending the first transmission signal to the first target robot and the second target robot in turn according to the communication level order, wherein the first transmission signal is also used for controlling the second target robot to execute an execution action in a third time period.
8. A biomimetic-expression robotic communication device, comprising:
the searching module is used for searching equipment information of the similar robots in the communication range and sending a connection request to the similar robots searching the equipment information;
the connection module is used for establishing a first communication connection between the local robot and the first target robot if the connection request passes;
the sending module is used for acquiring local input information in a first time period in real time, processing the local input information into a first transmission signal, and sending the first transmission signal to the first target robot, wherein the first transmission signal is used for controlling the first target robot to execute an execution action in a second time period;
and the marking module is used for marking the equipment information of the failed similar robots as equipment to be connected if the connection request fails.
9. A computer readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of the method of any one of claims 1 to 7.
10. A computer device comprising a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the steps of the method of any of claims 1 to 7.
CN202311856777.3A 2023-12-29 2023-12-29 Bionic expression robot communication method, device, medium and computer equipment Active CN117798914B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311856777.3A CN117798914B (en) 2023-12-29 2023-12-29 Bionic expression robot communication method, device, medium and computer equipment


Publications (2)

Publication Number Publication Date
CN117798914A 2024-04-02
CN117798914B 2024-08-02

Family

ID=90431450


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5832189A (en) * 1996-09-26 1998-11-03 Interval Research Corporation Affect-based robot communication methods and systems
US8996429B1 (en) * 2011-05-06 2015-03-31 Google Inc. Methods and systems for robot personality development
US9796095B1 (en) * 2012-08-15 2017-10-24 Hanson Robokind And Intelligent Bots, Llc System and method for controlling intelligent animated characters
CN107645523A (en) * 2016-07-21 2018-01-30 北京快乐智慧科技有限责任公司 A kind of method and system of mood interaction
CN110232433A (en) * 2014-07-24 2019-09-13 X开发有限责任公司 Method and system for generating instructions for a robotic system to perform a task




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant