
CN119077737A - A robotic arm control method and system based on deep learning and human-computer interaction - Google Patents

A robotic arm control method and system based on deep learning and human-computer interaction

Info

Publication number
CN119077737A
CN119077737A
Authority
CN
China
Prior art keywords
user
mechanical arm
joint
deep learning
pictures
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202411353138.XA
Other languages
Chinese (zh)
Other versions
CN119077737B (en)
Inventor
闫天翼
刘思宇
赵岩
明致远
刘梦真
刘紫玉
宋依凡
陈启明
吴景龙
刘田田
裴广盈
王丽
张健
叶初阳
李玮
索鼎杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT filed Critical Beijing Institute of Technology BIT
Priority to CN202411353138.XA priority Critical patent/CN119077737B/en
Publication of CN119077737A publication Critical patent/CN119077737A/en
Application granted granted Critical
Publication of CN119077737B publication Critical patent/CN119077737B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00 Programme-controlled manipulators
    • B25J 9/16 Programme controls
    • B25J 9/1656 Programme controls characterised by programming, planning systems for manipulators
    • B25J 9/1664 Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00 Programme-controlled manipulators
    • B25J 9/16 Programme controls
    • B25J 9/1602 Programme controls characterised by the control system, structure, architecture
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00 Programme-controlled manipulators
    • B25J 9/16 Programme controls
    • B25J 9/1628 Programme controls characterised by the control loop
    • B25J 9/163 Programme controls characterised by the control loop learning, adaptive, model based, rule based expert control
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Manipulator (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a mechanical arm control method and system based on deep learning and man-machine interaction, comprising: collecting pictures in an intelligent manufacturing scene, identifying the pose of each target object in the pictures, rendering the pictures, and generating an operation interface for the user; based on the user's control intention toward a target object, evoking brain waves by gazing at the target object in the scene and decoding the brain waves to obtain the user's operation intention; and receiving the user's operation intention through a mechanical arm controller, obtaining the running track of the mechanical arm based on a motion planning method, and driving each joint of the mechanical arm according to the running track with a dynamics model to execute grabbing and placing tasks. By fusing brain-computer interface technology with a deep learning algorithm, the invention provides a brand-new mechanical arm control solution.

Description

Mechanical arm control method and system based on deep learning and man-machine interaction
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a mechanical arm control method and system based on deep learning and man-machine interaction.
Background
Internet of things systems are widely applied in the field of intelligent manufacturing, where production efficiency and precision have been greatly improved by integrating various sensors, networks and devices. In these applications, robotic arm control based on human-machine interaction has become an important research direction, especially in scenes that require high precision and flexible operation. At present, control of the mechanical arm mostly depends on manual remote control, including key control, voice control or follow-up control. In this mode, the human plays the role of a "driver", maneuvering the robotic arm to accomplish a task much as one operates a vehicle or aircraft. However, due to the complexity of the robotic arm's operation in three dimensions, it is difficult for a human operator to complete a preset task in this "driver" mode.
Recent trends in brain-computer interface research have shown that, in an internet of things system, human intent can be converted into control instructions for robotic arms by interpreting neural signals. Based on brain-computer interface technology, by recognizing human intent, a user can operate the mechanical arm more naturally, as if controlling his or her own limb. The brain-computer interface is a novel man-machine interaction mode developed in recent years. It provides a brand-new control mode by exploiting the natural advantages of the human brain in cognition and response: brain electrical signals are collected and recorded and then converted into operable instructions.
However, in existing brain-computer-interface-based robotic arm control systems, the user operates a robotic arm located in a three-dimensional scene from a two-dimensional space. The user does not actually interact with the three-dimensional scene: the user's attention is always focused on the visual stimuli of a two-dimensional screen, which limits the user's ability to analyze and judge based on the environmental information of the three-dimensional scene. As a result, existing brain-controlled mechanical arm systems based on the brain-computer interface suffer from unnatural human-computer interaction and are difficult to apply in actual scenes. Furthermore, due to the complexity of manipulator operation in three-dimensional space, a user often needs to perform complicated conversion operations when using intention-based control instructions to drive the manipulator. This means that, in order to control the robotic arm to reach a predetermined target, the user needs to generate a large number of intention-based control instructions, which also increases the user's workload. Therefore, how to apply brain-computer interface technology to robotic arm manipulation in three-dimensional scenarios is a challenging problem.
Disclosure of Invention
The invention provides a mechanical arm control method and a mechanical arm control system based on deep learning and man-machine interaction, which are used for solving the problems of unnatural interaction and heavy user load in the traditional mechanical arm control system based on the brain-computer interface.
In order to achieve the above purpose, the invention provides a mechanical arm control method based on deep learning and man-machine interaction, comprising the following steps:
collecting pictures in an intelligent manufacturing scene, identifying the pose of each target object in the pictures, rendering the pictures, and generating an operation interface of a user;
Based on the control intention of a target object generated by a user, the brain waves are induced by looking at the target object in a scene, and the brain waves are decoded to obtain the operation intention of the user;
And receiving the operation intention of the user through a mechanical arm controller, obtaining the running track of the mechanical arm based on a motion planning method, and driving each joint of the mechanical arm to execute grabbing and placing tasks by using a dynamic model according to the running track.
Preferably, generating the operation interface of the user includes:
acquiring pictures of the intelligent manufacturing scene by using a camera at the first-person view angle of the user, identifying the pose of each target object in the pictures by a deep learning algorithm, superimposing a sine-wave-coded transparency effect on each target object in the scene, rendering the pictures, and displaying the pictures of the intelligent manufacturing scene with the superimposed sine-wave-coded transparency effect to the user in real time, so as to generate the operation interface of the user.
Preferably, the deep learning algorithm is a neural network model implemented based on a transfer learning method; the neural network model receives pictures of the intelligent manufacturing scene, extracts features through VGG16, performs pose estimation through a translation branch and a rotation branch, respectively, and recognizes the pose of the target object;
The translation branch is used for position estimation and outputs a three-dimensional vector representing the position of the object in three-dimensional space; the rotation branch is used for orientation estimation and outputs a four-dimensional vector representing the quaternion rotation of the object; the translation branch consists of three fully connected layers that map the feature vector to 256 dimensions and then 64 dimensions and finally output a 3-dimensional position vector; the rotation branch consists of three fully connected layers that output a 4-dimensional quaternion vector, and the quaternion is normalized through a custom normalization layer.
Preferably, the transparency of the sine wave code is:
alpha(t)=0.5·sin(2πft+Δφ)+0.5
where alpha (t) is the transparency of the target at time t, f is the frequency of the sine wave, and Δφ is the phase difference.
Preferably, decoding the brain waves based on an incremental autonomous learning method includes:
Step 1, preprocessing and windowing the brain wave;
preprocessing the brain waves, including baseline removal and band-stop filtering;
the baseline is removed by a high-pass filter, specifically:
Y(t) = HighpassFilter(X(t), f_cutoff)
where f_cutoff is the cut-off frequency of the high-pass filter, X(t) is the original brain wave, Y(t) is the high-pass filtered signal, and HighpassFilter(·) denotes high-pass filtering;
the 50 Hz power-line interference is removed from the brain wave by band-stop filtering:
Z(t) = BandstopFilter(Y(t), 50Hz)
where Z(t) is the band-stop filtered signal and BandstopFilter(·) denotes band-stop filtering;
slicing the preprocessed brain waves according to a preset time window, each window containing N sample points, specifically:
Z_k = [Z(t_k), Z(t_(k+1)), ..., Z(t_(k+N-1))]
where Z_k is the windowed signal;
Step 2, constructing an initial brain wave template;
Constructing sine and cosine reference signals corresponding to the stimulation frequency f_i according to the sine-wave-coded stimulation frequency f_i on the target object:
Y_i(k) = [sin(2π·f_i·k/F_s), cos(2π·f_i·k/F_s), ..., sin(2π·M·f_i·k/F_s), cos(2π·M·f_i·k/F_s)]^T, k = 1, 2, ..., N
where M is the number of harmonics, i is the ID of the target object, k is the discretized time point, F_s is the sampling rate, and Y_i(k) denotes the sine and cosine reference signals corresponding to the stimulation frequency f_i;
Step 3, calculating a correlation value;
decomposing the windowed signal Z_k into a number of sub-band components Z_k^(n), n = 1, 2, ..., N, using a zero-phase type I Chebyshev filter;
applying a standard canonical correlation analysis algorithm to each sub-band component respectively, obtaining a correlation value between each sub-band component and the predefined reference signal, specifically:
ρ_k = [ρ_k^(1), ρ_k^(2), ..., ρ_k^(N)]
where ρ_k is the vector of correlation values corresponding to the k-th template signal and ρ_k^(n) is the correlation value of the n-th sub-band;
the N sub-band correlation values in ρ_k are fused by a weighted sum of squares, namely:
ρ̃_k = Σ_(n=1..N) w(n)·(ρ_k^(n))²
where ρ̃_k is the fused correlation output of the weighted sum of squares, ρ_k^(n) is the correlation value of the n-th sub-band, and w(n) is a weighting function;
the weighting function w(n) is defined as:
w(n) = n^(-a) + b, n ∈ [1, N]
where a and b are both constants.
Preferably, obtaining the operation intention of the user includes:
Collecting brain wave data generated by the user; when the amount of data corresponding to the sine wave coding frequency on each target object exceeds M, a batch of user-specific templates is obtained, and new correlation values are generated using the user-specific templates, denoted ρ_k^new; each time a new user-specific template is collected, the previous user-specific template is discarded;
finally, S weighted correlation values corresponding to the sine wave coded stimulation frequencies on the S target objects are obtained, and the target object corresponding to the largest of them is the identified operation intention of the user;
The user-specific template is defined as:
X̄_i(c) = (1/M)·Σ_(m=1..M) x_i^(m,c), c = 1, 2, ..., C
where M is the amount of data collected for a batch of user-specific templates, C is the number of electroencephalogram signal channels, X̄_i is the user-specific template, and x_i^(m,c) is the electroencephalogram data of a single trial and a single channel;
The optimized correlation value is:
ρ_k^opt = α·ρ_k^new + (1-α)·ρ̃_k
where α is the update weight parameter and ρ_k^opt is the optimized correlation value;
The operation intention of the user is:
f_target = argmax_(k∈{1,...,S}) ρ_k^opt, provided that max(ρ_1^opt, ..., ρ_S^opt) > ρ_0
where ρ_0 is a preset threshold, f_target is the identified target ID, and ρ_S^opt is the optimized correlation value corresponding to the S-th target object.
Preferably, the method for obtaining the motion trajectory of the mechanical arm based on the motion planning method includes:
Generating the running track by using a quintic polynomial interpolation, correcting the deviation between the running track and an expected path through proportional-differential control, and continuously executing the path planned by the autonomous movement by the mechanical arm when the user continuously outputs the operation intention, otherwise, stopping the movement of the mechanical arm;
the method for generating the running track comprises the following steps:
q(t) = a_0 + a_1·t + a_2·t² + a_3·t³ + a_4·t⁴ + a_5·t⁵
where the polynomial coefficients a_0, a_1, ..., a_5 are solved by setting boundary conditions, and q(t) is the running track;
The deviation from the desired path is corrected by proportional-derivative control as follows:
τ = K_p·(q_desired − q_actual) + K_d·(q̇_desired − q̇_actual)
where τ is the control input, K_p and K_d are the proportional and derivative gain matrices, q_desired and q̇_desired are the desired joint position and velocity, and q_actual and q̇_actual are the actual joint position and velocity, respectively.
Preferably, the dynamic model is used to drive each joint of the mechanical arm to perform grabbing and placing tasks, including:
Obtaining an overall transformation matrix based on the transformation matrix of each joint of the mechanical arm, and describing forces and moments for generating required joint motions through the dynamic model, wherein the forces and moments comprise inertia, coriolis forces and gravity effects;
the transformation matrix T_i of each joint is:
T_i = [[cosθ_i, −sinθ_i·cosα_i, sinθ_i·sinα_i, a_i·cosθ_i],
       [sinθ_i, cosθ_i·cosα_i, −cosθ_i·sinα_i, a_i·sinθ_i],
       [0, sinα_i, cosα_i, d_i],
       [0, 0, 0, 1]]
where θ_i is the joint angle, d_i is the link displacement, a_i is the link length, and α_i is the link torsion angle;
The overall transformation matrix T from the base to the end effector is obtained by multiplying the transformation matrices of the individual joints:
T = T_1·T_2·T_3·T_4·T_5·T_6;
the dynamics model is described using Lagrange's method, specifically:
τ = M(θ)·θ̈ + C(θ, θ̇)·θ̇ + G(θ)
where τ is the joint moment vector, M(θ) is the joint space inertia matrix, C(θ, θ̇) is the Coriolis force and centrifugal force matrix, G(θ) is the gravity moment vector, θ̈ is the joint acceleration vector, and θ̇ is the joint velocity vector;
The joint space inertia matrix M(θ) represents the resistance of the robot's mass and configuration to acceleration, specifically:
M(θ) = [[m_11(θ), ..., m_16(θ)], ..., [m_61(θ), ..., m_66(θ)]]
where m_11(θ) is the inertial coupling effect between joint 1 and joint 1;
The Coriolis force and centrifugal force matrix C(θ, θ̇) reflects the velocity-dependent forces acting on the robot as it moves, specifically:
C(θ, θ̇) = [[c_11, ..., c_16], ..., [c_61, ..., c_66]]
where c_11 is the Coriolis and centrifugal coupling effect between joint 1 and joint 1;
the gravity moment vector G(θ) represents the gravity moments acting on the robot links, specifically:
G(θ) = [g_1(θ), g_2(θ), ..., g_6(θ)]^T
where g_1(θ) is the gravitational moment on joint 1.
On the other hand, in order to achieve the above object, the present invention further provides a mechanical arm control system based on deep learning and man-machine interaction, including:
the system comprises an operation interface generation module, a user operation intention decoding module and a mechanical arm motion control module;
The operation interface generation module is used for displaying pictures in the intelligent manufacturing scene with the transparency effect of the superposition sine wave codes to a user in real time to generate an operation interface;
the user operation intention decoding module is used for decoding brain waves by using an incremental self-learning algorithm to obtain the operation intention of a user;
the mechanical arm motion control module is used for controlling the mechanical arm to execute grabbing and placing tasks.
Compared with the prior art, the invention has the following advantages and technical effects:
(1) Interactive naturalness enhancement
The camera is used for collecting the picture of the first person view angle of the user, and the deep learning algorithm is used for identifying the pose of the target object, so that the system can seamlessly combine the view field of the user with the operating environment of the mechanical arm;
The transparency effect of sine wave coding is superimposed on the target objects, so that multiple target objects are effectively distinguished in the same scene; this reduces visual fatigue, improves the precision with which the user selects and operates on targets, and reduces the complexity of operation.
(2) Reducing user workload
The traditional mechanical arm control system based on the brain-computer interface often needs long-time training and calibration, and the invention adopts an incremental self-learning algorithm, so that the system can adapt to the operation habit and intention of a user in real time, and the learning burden of the user is reduced;
Target objects are automatically identified and marked through deep learning technology, and this information is presented to the user in real time, so that the user can trigger a control intention merely by gazing at a specific target; compared with the traditional mechanical arm control mode, this greatly reduces the user's active operation burden.
(3) Robustness and adaptability improvement of system
Through the transparency effect of sine wave coding, the system can handle the operation requirements of multiple target objects in a complex intelligent manufacturing scene; this multi-target operation capability makes the system more flexible and efficient when dealing with multiple tasks and scenes, thereby improving its robustness;
the target in the scene is identified and positioned in real time by using a deep learning algorithm, so that the system can accurately read the environmental information in various complex scenes, and an optimal mechanical arm operation decision is made, thereby further improving the adaptability of the system.
(4) Promote integration of brain-computer interface and internet of things
In the intelligent manufacturing scene of the Internet of things, the novel mechanical arm control solution is provided by fusing the brain-computer interface technology and the deep learning algorithm, and the fusion promotes the application of the brain-computer technology in the Internet of things, so that the intelligent and efficient manufacturing process can be realized.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application. In the drawings:
FIG. 1 is a flow chart of a control method of a mechanical arm based on deep learning and man-machine interaction according to an embodiment of the invention;
FIG. 2 is a schematic diagram of a neural network model according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart of an incremental self-learning algorithm according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a mechanical arm control device according to an embodiment of the present invention;
FIG. 5 is a graph of acquisition channels employed by participants in an embodiment of the present invention when participating in an online experiment;
FIG. 6 is an experimental scenario diagram of participants engaged in an online experiment according to an embodiment of the present invention;
Fig. 7 is a graph of the results of tasks performed by all volunteers using three different robotic arm control devices in accordance with an embodiment of the present invention.
Detailed Description
It should be noted that, without conflict, the embodiments of the present application and features of the embodiments may be combined with each other. The application will be described in detail below with reference to the drawings in connection with embodiments.
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system such as a set of computer executable instructions, and that although a logical order is illustrated in the flowcharts, in some cases the steps illustrated or described may be performed in an order other than that illustrated herein.
Embodiment 1
The invention provides a mechanical arm control method based on deep learning and man-machine interaction, as shown in fig. 1, comprising the following steps:
collecting pictures in an intelligent manufacturing scene, identifying the pose of each target object in the pictures, rendering the pictures, and generating an operation interface of a user;
Based on the control intention of the target object generated by the user, the brain wave is induced by looking at the target object in the scene, and the brain wave is decoded to obtain the operation intention of the user;
the operation intention of a user is received through the mechanical arm controller, the running track of the mechanical arm is obtained based on a motion planning method, and each joint of the mechanical arm is driven by a dynamic model to execute grabbing and placing tasks according to the running track.
In the intelligent manufacturing scene of the Internet of things, the invention provides a brand-new mechanical arm control solution by integrating a brain-computer interface technology and a deep learning algorithm. The integration promotes the application of the brain-computer technology in the Internet of things, and is beneficial to realizing more intelligent and efficient manufacturing processes.
Further, generating an operation interface of the user includes:
The method comprises the steps of collecting pictures of the intelligent manufacturing scene at the first-person view angle of the user, identifying the pose of each target object in the pictures through a deep learning algorithm, superimposing a sine-wave-coded transparency effect on each target object in the scene, rendering the pictures, displaying the pictures of the intelligent manufacturing scene with the superimposed sine-wave-coded transparency effect to the user in real time, and generating the operation interface of the user.
Specifically, a camera is used to collect pictures of the intelligent manufacturing scene from the first-person view angle of the user, a deep learning algorithm identifies the pose (including position and orientation) of each target object in the pictures, a sine-wave-coded transparency effect is superimposed on each target object in the scene, the pictures are rendered, and the pictures of the intelligent manufacturing scene with the superimposed sine-wave-coded transparency effect are displayed to the user in real time.
The camera collects the picture from the user's first-person view angle, and the deep learning algorithm identifies the pose of the target object, so that the user's field of view can be seamlessly combined with the operating environment of the mechanical arm. This first-person perspective enhances the user's immersion, making the operation of the mechanical arm more natural and intuitive. The sine-wave-coded transparency effect is superimposed on the target objects so that multiple target objects are effectively distinguished in the same scene. This not only reduces visual fatigue, but also improves the precision with which the user selects and operates on targets, and reduces the complexity of operation.
Further, the deep learning algorithm is a neural network model implemented based on a transfer learning method (as shown in fig. 2). The neural network model receives pictures of the intelligent manufacturing scene, extracts features through VGG16, and then performs pose estimation through a translation branch and a rotation branch, respectively, to identify the pose of the target object. The translation branch is used for position estimation and outputs a three-dimensional vector representing the position of the object in three-dimensional space; the rotation branch is used for orientation estimation and outputs a four-dimensional vector representing the quaternion rotation of the object. The translation branch consists of three fully connected layers that map the feature vector to 256 dimensions and then 64 dimensions, finally outputting a 3-dimensional position vector. The rotation branch also consists of three fully connected layers, outputs a 4-dimensional quaternion vector, and normalizes the quaternion through a custom normalization layer. In this way, the neural network model regresses the pose of the target object.
Specifically, transfer learning is used to improve the performance of the original model; the transfer-learning-based method effectively utilizes what the pre-trained model has learned on a large-scale dataset, thereby improving the accuracy and robustness of pose estimation.
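By way of illustration only, the following Python (PyTorch) sketch shows one possible realization of the network structure described above, with a VGG16 backbone and the 256- and 64-dimensional fully connected mappings mentioned in the text; the class name, activation choices and pooling are assumptions of this sketch rather than details disclosed by the invention.

```python
import torch
import torch.nn as nn
from torchvision import models

class PoseEstimationNet(nn.Module):
    """Hypothetical sketch of the VGG16-based pose network described in the text."""
    def __init__(self):
        super().__init__()
        # VGG16 backbone; in practice pre-trained ImageNet weights would be loaded
        # here (transfer learning), e.g. weights=models.VGG16_Weights.IMAGENET1K_V1.
        backbone = models.vgg16(weights=None)
        self.features = backbone.features
        self.pool = nn.AdaptiveAvgPool2d((7, 7))
        in_dim = 512 * 7 * 7
        # Translation branch: three fully connected layers -> 3-D position vector.
        self.translation = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, 3),
        )
        # Rotation branch: three fully connected layers -> 4-D quaternion.
        self.rotation = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, 4),
        )

    def forward(self, x):
        f = self.pool(self.features(x)).flatten(1)
        t = self.translation(f)                                   # (B, 3) position
        q = self.rotation(f)
        q = q / q.norm(dim=1, keepdim=True).clamp(min=1e-8)       # custom normalization layer
        return t, q

# Usage: one 224x224 RGB frame from the scene camera.
net = PoseEstimationNet().eval()
with torch.no_grad():
    pos, quat = net(torch.randn(1, 3, 224, 224))
print(pos.shape, quat.shape)   # torch.Size([1, 3]) torch.Size([1, 4])
```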
Further, the transparency of the sine wave code is:
alpha(t)=0.5·sin(2πft+Δφ)+0.5
where alpha (t) is the transparency of the target at time t, f is the frequency of the sine wave, and Δφ is the phase difference.
Sine wave encoding is used to modulate the transparency superimposed on the target object. Sine wave coding makes the evoked brain waves smoother and more stable, thereby improving their signal-to-noise ratio. This is very important for brain wave detection in practical applications, because brain waves contain a large noise component. Compared with traditional square-wave or pulse-wave modulation, sine wave modulation provides a softer visual stimulus and causes less visual fatigue, making the experiment or application more comfortable for the user.
Using transparency coding instead of brightness coding reduces the direct visual stimulation intensity, thereby reducing the stimulation of, and potential visual fatigue in, the user's eyes. Transparency modulation allows the visual effects of multiple target objects to be superimposed without significantly changing brightness. Thus, when the transparency of different objects in the field of view is encoded with different sine wave frequencies, the user's brain can still produce distinct frequency responses, which enables multi-target detection. In scenes where the ambient light intensity varies considerably, the transparency coding effect remains relatively stable and the system is less susceptible to ambient-light interference, giving the system higher robustness.
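As a minimal sketch of how the transparency modulation could be computed per display frame, the following Python snippet assigns an illustrative frequency and phase to each target object; the specific frequencies, phases and refresh rate are assumptions, not values specified by the invention.

```python
import numpy as np

def sine_alpha(t, f, dphi):
    """Transparency alpha(t) = 0.5*sin(2*pi*f*t + dphi) + 0.5, in [0, 1]."""
    return 0.5 * np.sin(2 * np.pi * f * t + dphi) + 0.5

# Illustrative assignment: one (frequency, phase) pair per target object.
targets = {"workpiece_1": (8.0, 0.0),
           "workpiece_2": (10.0, np.pi / 2),
           "workpiece_3": (12.0, np.pi)}

refresh_hz = 60.0                       # assumed display refresh rate
t = np.arange(0, 1, 1 / refresh_hz)     # one second of frame times
alphas = {name: sine_alpha(t, f, p) for name, (f, p) in targets.items()}
print({k: v[:3].round(3) for k, v in alphas.items()})
```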
Further, decoding the brain waves based on the incremental autonomous learning method, as shown in fig. 3, includes:
step 1, preprocessing and windowing an electroencephalogram signal;
in this embodiment, according to characteristics of baseline drift and power frequency interference of brain waves, preprocessing of brain waves is first required, including baseline removal and 50Hz band-pass filtering. Since baseline wander of brain waves affects signal accuracy, it is first necessary to remove the baseline wander. Baseline removal is achieved by a high pass filter. Let the original brain wave be X (t), the signal filtered by the high-pass filter be Y (t), then the baseline removal process can be expressed as:
Y(t)=HighpassFilter(X(t),fcutoff)
Where f cutoff is the cut-off frequency of the high-pass filter, typically set between 0.5Hz and 1Hz, HIGHPASSFILTER () is high-pass filtering.
In order to remove the 50Hz power frequency interference, 50Hz band-pass filtering is required for brain waves. The band-pass filtered signal is Z (t), and the filtering process can be expressed as:
Z(t)=BandstopFilter(Y(t),50Hz)
Wherein BandstopFilter () is band-stop filtering.
Slicing the preprocessed brain waves according to a certain time window, wherein each window comprises a certain number of sample points N for subsequent calculation.
Let the windowed signal be Z k, it can be expressed as:
Zk=[Z(tk),Z(tk+1),...,Z(tk+N-1)]。
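A minimal Python sketch of the preprocessing and windowing step is given below, assuming a 250 Hz sampling rate and fourth-order Butterworth/notch filters; these implementation details are assumptions of the sketch, since the text only fixes the 0.5-1 Hz cut-off and the 50 Hz interference frequency.

```python
import numpy as np
from scipy.signal import butter, iirnotch, filtfilt

fs = 250.0                      # assumed sampling rate (Hz)

def preprocess(eeg, f_cutoff=0.5, notch_hz=50.0):
    """High-pass (baseline removal) followed by a 50 Hz notch, both zero-phase."""
    b_hp, a_hp = butter(4, f_cutoff / (fs / 2), btype="highpass")
    y = filtfilt(b_hp, a_hp, eeg, axis=-1)          # Y(t) = HighpassFilter(X(t), f_cutoff)
    b_n, a_n = iirnotch(notch_hz, Q=30.0, fs=fs)
    return filtfilt(b_n, a_n, y, axis=-1)           # Z(t): 50 Hz interference removed

def window(z, win_sec=1.0, step_sec=0.25):
    """Slice the signal into overlapping windows of N samples each."""
    n, step = int(win_sec * fs), int(step_sec * fs)
    return np.stack([z[..., k:k + n] for k in range(0, z.shape[-1] - n + 1, step)])

x = np.random.randn(8, int(10 * fs))    # 8 channels, 10 s of raw EEG
windows = window(preprocess(x))
print(windows.shape)                     # (num_windows, 8, N)
```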
Step 2, constructing an initial brain wave template;
Corresponding sine and cosine reference signals are constructed according to the sine-wave-coded stimulation frequency f_i on the target object:
Y_i(k) = [sin(2π·f_i·k/F_s), cos(2π·f_i·k/F_s), ..., sin(2π·M·f_i·k/F_s), cos(2π·M·f_i·k/F_s)]^T, k = 1, 2, ..., N
where M is the number of harmonics, set to 5, i is the ID of the target object, k is the discretized time point, and F_s is the sampling rate.
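The reference-signal construction can be sketched as follows in Python; the sampling rate is an assumed parameter, while the number of harmonics follows the value M = 5 given above.

```python
import numpy as np

def reference_signals(f_i, n_samples, fs=250.0, n_harmonics=5):
    """Build the 2M-row sine/cosine reference Y_i for stimulation frequency f_i."""
    k = np.arange(n_samples) / fs
    rows = []
    for m in range(1, n_harmonics + 1):
        rows.append(np.sin(2 * np.pi * m * f_i * k))
        rows.append(np.cos(2 * np.pi * m * f_i * k))
    return np.vstack(rows)                 # shape (2*M, n_samples)

Y = reference_signals(f_i=10.0, n_samples=250)
print(Y.shape)                              # (10, 250) for M = 5 harmonics
```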
Step 3, calculating a correlation value;
A zero-phase type I Chebyshev filter is used to decompose the windowed signal Z_k into several sub-band components Z_k^(n), n = 1, 2, ..., N.
A standard canonical correlation analysis algorithm is applied to each sub-band component separately, yielding the correlation value between each sub-band component and the predefined reference signals (corresponding to all stimulation frequencies):
ρ_k = [ρ_k^(1), ρ_k^(2), ..., ρ_k^(N)]
The correlation value corresponding to the k-th template signal is represented by the vector ρ_k, which contains the correlation values of the N sub-bands.
The N sub-band correlation values in ρ_k are fused by a weighted sum of squares, namely:
ρ̃_k = Σ_(n=1..N) w(n)·(ρ_k^(n))²
where the weighting function w(n) is defined as:
w(n) = n^(-a) + b, n ∈ [1, N]
where a and b are constants whose values are chosen to maximize classifier performance.
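The sub-band decomposition, canonical correlation analysis and weighted sum-of-squares fusion can be sketched as follows; the sub-band edges, Chebyshev filter order and the constants a and b are illustrative assumptions (the text states only that a and b are tuned for best classifier performance).

```python
import numpy as np
from scipy.signal import cheby1, filtfilt
from sklearn.cross_decomposition import CCA

fs = 250.0  # assumed sampling rate

def refs(f, n_samples, n_harmonics=5):
    """Sine/cosine reference signals for stimulation frequency f (M = 5 harmonics)."""
    k = np.arange(n_samples) / fs
    return np.vstack([fn(2 * np.pi * m * f * k)
                      for m in range(1, n_harmonics + 1) for fn in (np.sin, np.cos)])

def subband(z, low_hz, high_hz=90.0, order=4, ripple_db=0.5):
    """Zero-phase type I Chebyshev band-pass for one sub-band (edges are assumptions)."""
    b, a = cheby1(order, ripple_db, [low_hz / (fs / 2), high_hz / (fs / 2)], btype="bandpass")
    return filtfilt(b, a, z, axis=-1)

def cca_corr(x, y):
    """Largest canonical correlation between x (channels x time) and y (refs x time)."""
    u, v = CCA(n_components=1).fit_transform(x.T, y.T)
    return abs(np.corrcoef(u[:, 0], v[:, 0])[0, 1])

def fbcca_score(z, ref, n_bands=5, a=1.25, b=0.25):
    """Weighted sum-of-squares fusion: sum_n w(n) * rho_n^2 with w(n) = n^(-a) + b."""
    return sum((n ** (-a) + b) * cca_corr(subband(z, low_hz=8.0 * n), ref) ** 2
               for n in range(1, n_bands + 1))

z = np.random.randn(8, 250)                    # one 1 s window, 8 occipital channels
scores = [fbcca_score(z, refs(f, 250)) for f in (8.0, 10.0, 12.0)]
print(int(np.argmax(scores)))                  # index of the most likely gazed target
```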
Further, obtaining the operation intention of the user includes:
Collecting brain wave data generated by the user; when the amount of data corresponding to the sine wave coding frequency on each target object exceeds M, a batch of user-specific templates is obtained, and new correlation values are generated using the user-specific templates, denoted ρ_k^new; each time a new user-specific template is collected, the previous user-specific template is discarded;
finally, S weighted correlation values corresponding to the sine wave coded stimulation frequencies on the S target objects are obtained, and the target object corresponding to the largest of them is the identified operation intention of the user;
The user-specific template is defined as:
X̄_i(c) = (1/M)·Σ_(m=1..M) x_i^(m,c), c = 1, 2, ..., C
where M is the amount of data collected for a batch of user-specific templates, C is the number of electroencephalogram signal channels, X̄_i is the user-specific template, and x_i^(m,c) is the electroencephalogram data of a single trial and a single channel;
The optimized correlation value is:
ρ_k^opt = α·ρ_k^new + (1-α)·ρ̃_k
where α is the update weight parameter and ρ_k^opt is the optimized correlation value;
The operation intention of the user is:
f_target = argmax_(k∈{1,...,S}) ρ_k^opt, provided that max(ρ_1^opt, ..., ρ_S^opt) > ρ_0
where ρ_0 is a preset threshold, f_target is the identified target ID, and ρ_S^opt is the optimized correlation value corresponding to the S-th target object.
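A minimal sketch of the incremental update and decision step is shown below; the weight α, the threshold value and the averaging form of the user-specific template are assumptions used for illustration.

```python
import numpy as np

def update_template(trials):
    """User-specific template: average of the last M collected trials (C x T each)."""
    return np.mean(np.stack(trials), axis=0)

def optimized_scores(rho_fused, rho_template, alpha=0.3):
    """Blend original fused correlations with template-based ones (alpha is assumed)."""
    return alpha * np.asarray(rho_template) + (1 - alpha) * np.asarray(rho_fused)

def decide(rho_opt, threshold=0.35):
    """Return the target ID with the largest optimized correlation, or None if below threshold."""
    k = int(np.argmax(rho_opt))
    return k if rho_opt[k] > threshold else None

rho_fused = [0.21, 0.48, 0.30]       # e.g. FBCCA scores for 3 targets
rho_template = [0.18, 0.55, 0.28]    # scores against user-specific templates
print(decide(optimized_scores(rho_fused, rho_template)))   # -> 1
```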
The traditional mechanical arm control system based on the brain-computer interface often requires long training and calibration; the invention adopts an incremental self-learning algorithm, so that the system can adapt to the operation habits and intentions of the user in real time, reducing the user's learning burden. Meanwhile, the algorithm continuously improves the decoding precision of brain waves as well as the response speed and accuracy of the system. Target objects are automatically identified and marked through deep learning technology and presented to the user in real time, so that the user can trigger a control intention merely by gazing at a specific target. Compared with the traditional mechanical arm control mode, this greatly reduces the user's active operation burden.
Further, obtaining the motion trail of the mechanical arm based on the motion planning method comprises the following steps:
The motion track is generated by using a quintic polynomial interpolation, deviation between the motion track and an expected path is corrected by proportional-differential control, and when a user continuously outputs an operation intention, the mechanical arm continuously executes a path of autonomous motion planning, otherwise, the mechanical arm stops moving, and a motion planning algorithm is used for generating a smooth and collision-free motion track to be followed by the mechanical arm.
A smooth trajectory is generated using fifth-order polynomial interpolation, expressed as:
q(t) = a_0 + a_1·t + a_2·t² + a_3·t³ + a_4·t⁴ + a_5·t⁵
where the polynomial coefficients a_0, a_1, ..., a_5 are solved by setting boundary conditions, and q(t) is the running track.
To ensure accurate execution of the planned trajectory, proportional-derivative control is used to correct deviations from the desired path:
τ = K_p·(q_desired − q_actual) + K_d·(q̇_desired − q̇_actual)
where τ is the control input, K_p and K_d are the proportional and derivative gain matrices, q_desired and q̇_desired are the desired joint position and velocity, and q_actual and q̇_actual are the actual joint position and velocity, respectively.
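The boundary-condition solve for the quintic coefficients and the proportional-derivative correction can be sketched as follows; the boundary values, gains and the 6-DOF dimensioning are illustrative assumptions.

```python
import numpy as np

def quintic_coeffs(q0, qf, T, v0=0.0, vf=0.0, a0=0.0, af=0.0):
    """Solve a_0..a_5 from position/velocity/acceleration boundary conditions."""
    A = np.array([
        [1, 0,   0,      0,       0,        0],        # q(0)
        [0, 1,   0,      0,       0,        0],        # q'(0)
        [0, 0,   2,      0,       0,        0],        # q''(0)
        [1, T,   T**2,   T**3,    T**4,     T**5],     # q(T)
        [0, 1,   2*T,    3*T**2,  4*T**3,   5*T**4],   # q'(T)
        [0, 0,   2,      6*T,     12*T**2,  20*T**3],  # q''(T)
    ])
    return np.linalg.solve(A, np.array([q0, v0, a0, qf, vf, af]))

def q_of_t(coeffs, t):
    return sum(c * t**i for i, c in enumerate(coeffs))

def pd_torque(q_des, qd_des, q_act, qd_act, Kp, Kd):
    """tau = Kp (q_des - q_act) + Kd (qd_des - qd_act)."""
    return Kp @ (q_des - q_act) + Kd @ (qd_des - qd_act)

coeffs = quintic_coeffs(q0=0.0, qf=1.2, T=2.0)     # one joint, 0 -> 1.2 rad in 2 s
print(round(q_of_t(coeffs, 1.0), 4))               # mid-trajectory position

Kp, Kd = np.diag([50.0] * 6), np.diag([5.0] * 6)   # illustrative gains for a 6-DOF arm
tau = pd_torque(np.ones(6), np.zeros(6), np.full(6, 0.9), np.zeros(6), Kp, Kd)
print(tau)
```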
Further, the use of the kinetic model to drive the joints of the robotic arm to perform the grasping and placing tasks includes:
obtaining an overall transformation matrix based on the transformation matrix of each joint of the mechanical arm, and describing the force and moment for generating the required joint movement through a dynamic model, wherein the force and moment comprise inertia, coriolis force and gravity effect;
the transformation matrix T_i of each joint is:
T_i = [[cosθ_i, −sinθ_i·cosα_i, sinθ_i·sinα_i, a_i·cosθ_i],
       [sinθ_i, cosθ_i·cosα_i, −cosθ_i·sinα_i, a_i·sinθ_i],
       [0, sinα_i, cosα_i, d_i],
       [0, 0, 0, 1]]
where θ_i is the joint angle, d_i is the link displacement, a_i is the link length, and α_i is the link torsion angle;
The overall transformation matrix T from the base to the end effector is obtained by multiplying the transformation matrices of the individual joints:
T = T_1·T_2·T_3·T_4·T_5·T_6;
the dynamics model is described using Lagrange's method, specifically:
τ = M(θ)·θ̈ + C(θ, θ̇)·θ̇ + G(θ)
where τ is the joint moment vector, M(θ) is the joint space inertia matrix, C(θ, θ̇) is the Coriolis force and centrifugal force matrix, G(θ) is the gravity moment vector, θ̈ is the joint acceleration vector, and θ̇ is the joint velocity vector;
The joint space inertia matrix M(θ) represents the resistance of the robot's mass and configuration to acceleration, specifically:
M(θ) = [[m_11(θ), ..., m_16(θ)], ..., [m_61(θ), ..., m_66(θ)]]
where m_11(θ) is the inertial coupling effect between joint 1 and joint 1; m_12(θ) represents the coupling between joint 1 and joint 2, and so on.
The Coriolis force and centrifugal force matrix C(θ, θ̇) reflects the velocity-dependent forces acting on the robot as it moves, specifically:
C(θ, θ̇) = [[c_11, ..., c_16], ..., [c_61, ..., c_66]]
where c_11 is the Coriolis and centrifugal coupling effect between joint 1 and joint 1; c_12 represents the coupling between joint 1 and joint 2, and so on.
The gravity moment vector G(θ) represents the gravity moments acting on the robot links, specifically:
G(θ) = [g_1(θ), g_2(θ), ..., g_6(θ)]^T
where g_1(θ) is the gravitational moment on joint 1.
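The composition of the per-joint transforms into the overall base-to-end-effector transform can be sketched as follows; the Denavit-Hartenberg parameter table used here is a placeholder for a generic six-joint arm and does not describe the arm of the invention.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg transform T_i for one joint."""
    ct, st, ca, sa = np.cos(theta), np.sin(theta), np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(joint_angles, dh_table):
    """Overall transform T = T1 T2 ... T6 from base to end effector."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_table):
        T = T @ dh_transform(theta, d, a, alpha)
    return T

# Placeholder DH table (d, a, alpha) for a generic 6-DOF arm, not the patent's arm.
dh_table = [(0.1, 0.0, np.pi / 2), (0.0, 0.4, 0.0), (0.0, 0.35, 0.0),
            (0.12, 0.0, np.pi / 2), (0.1, 0.0, -np.pi / 2), (0.06, 0.0, 0.0)]
T = forward_kinematics(np.zeros(6), dh_table)
print(np.round(T[:3, 3], 3))    # end-effector position at the zero configuration
```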
Embodiment 2
The embodiment provides a mechanical arm control system based on deep learning and man-machine interaction, as shown in fig. 4, which comprises an operation interface generating module, a user operation intention decoding module and a mechanical arm motion control module;
the operation interface generation module is used for displaying pictures in the intelligent manufacturing scene with the transparency effect of the superposition sine wave codes to a user in real time to generate an operation interface;
The user operation intention decoding module is used for decoding brain waves by using an incremental self-learning algorithm to obtain the operation intention of the user;
the mechanical arm motion control module is used for controlling the mechanical arm to execute grabbing and placing tasks.
The control system is further divided into a local client and a remote server. The local client and the remote server communicate through a data distribution service. The deep learning algorithm and the incremental self-learning algorithm which consume the computing resources are deployed at the remote server. The motion planning algorithm with high real-time requirement is deployed at the local client. The user operates at the local client. The proposed system supports multiple local clients communicating with a remote server.
Embodiment 3
In order to verify the effectiveness and implementation effect of the mechanical arm control method and system based on deep learning and man-machine interaction, the following scientific experiment is performed.
(1) Experimental volunteers
The experiment recruited 12 healthy adult volunteers, all right-handed, with no history of neurological disease and with normal or corrected-to-normal vision. All volunteers signed informed consent before the experiment and were informed in detail about the experimental procedure and its potential risks. The experiment was approved by the institutional ethics committee.
(2) Experimental setup
In order to be closer to the daily use environment, the data acquisition of the experiment was not carried out in a shielded room, no electromagnetic shielding measures were adopted, and the footstep sounds of surrounding people were not excluded. The experimenter introduced the goals and content of the experiment to the volunteers. Each volunteer sat on a comfortable chair with a display placed in front of it to present the user interface. The brain electrical signals were acquired with OpenBCI (https://openbci.com/) amplifier devices. To ensure signal quality, the skin resistance of all electrodes was kept below 10 kΩ. The reference electrode was located on the left ear and the ground electrode on top of the forehead. In addition to the reference and ground electrodes, 8 channels of brain electrical data were acquired from the occipital region. The acquisition channel diagram used in the experiment is shown in fig. 5.
(3) On-line experimental procedure
The online experiment evaluates the mechanical arm control method and system based on deep learning and man-machine interaction disclosed by the invention.
The online experiment involves three different mechanical arm control devices: the mechanical arm control system disclosed in this embodiment (device 1), a mechanical arm control device based on two-dimensional dynamic stimulation (device 2), and a mechanical arm control device based on two-dimensional fixed stimulation (device 3). Volunteers were required to complete the experimental task using each of these three devices. The experimental task was set as follows: a target object grabbing and placing task was designed in an unstructured environment. In each scene there are three workpieces in total. At the beginning of a task, the three workpieces appear at random positions on the table in the scene. Volunteers need to control the mechanical arm to place the three workpieces into a designated position; the order of operation of the three workpieces is not limited. Each volunteer was required to complete five grabbing tasks. Brain electrical data, task completion status, task completion time and mechanical arm state data were collected during the online experiment.
The experimental scenario of volunteers participating in the online experiment is shown in fig. 6.
(4) Experimental results
Fig. 7 shows the average task completion time, average output delay, and average intent recognition ratio for all volunteers to complete a task using three different robotic arm control devices.
The results showed that the average task completion times for all volunteers to complete tasks using three different robotic arm control devices were 89.04s, 96.65s and 104.39s, respectively. The average task completion time based on the mechanical arm control system disclosed by the invention is obviously lower than that of mechanical arm control equipment based on two-dimensional dynamic stimulation and mechanical arm control equipment based on two-dimensional fixed stimulation. Compared with the mechanical arm control equipment based on two-dimensional dynamic stimulation and the mechanical arm control equipment based on two-dimensional fixed stimulation, the average task completion time of the mechanical arm control system disclosed by the invention is reduced by 7.86% and 14.70%. The average output delays for all volunteers to complete the task using three different robotic arm control devices were 2.22s, 2.01s and 2.08s, respectively. Based on the mechanical arm control system disclosed by the invention, no significant difference exists between average output delays of tasks completed by volunteers by using mechanical arm control equipment based on two-dimensional dynamic stimulation and mechanical arm control equipment based on two-dimensional fixed stimulation. The average intent recognition ratio for all volunteers to complete the task using three different robotic arm control devices was 46.59%, 56.14% and 63.51%. The average intention recognition ratio of the mechanical arm control system disclosed by the invention is obviously lower than that of mechanical arm control equipment based on two-dimensional dynamic stimulation and mechanical arm control equipment based on two-dimensional fixed stimulation. Compared with the mechanical arm control equipment based on two-dimensional dynamic stimulation and the mechanical arm control equipment based on two-dimensional fixed stimulation, the average intention recognition ratio of the mechanical arm control system disclosed by the invention is reduced by 17.01% and 26.65%.
In summary, the mechanical arm control device disclosed by the invention shortens the task time, reduces the workload of a user (reduces the intention recognition ratio), and brings better operation experience to the user. These results demonstrate the potential advantages of the disclosed robotic arm control device in practical applications.
The present application is not limited to the above-mentioned embodiments, and any changes or substitutions that can be easily understood by those skilled in the art within the technical scope of the present application are intended to be included in the scope of the present application. Therefore, the protection scope of the present application should be subject to the protection scope of the claims.

Claims (9)

1. The mechanical arm control method based on deep learning and man-machine interaction is characterized by comprising the following steps of:
collecting pictures in an intelligent manufacturing scene, identifying the pose of each target object in the pictures, rendering the pictures, and generating an operation interface of a user;
Based on the control intention of a target object generated by a user, the brain waves are induced by looking at the target object in a scene, and the brain waves are decoded to obtain the operation intention of the user;
And receiving the operation intention of the user through a mechanical arm controller, obtaining the running track of the mechanical arm based on a motion planning method, and driving each joint of the mechanical arm to execute grabbing and placing tasks by using a dynamic model according to the running track.
2. The method for controlling a mechanical arm based on deep learning and man-machine interaction according to claim 1, wherein generating the operation interface of the user comprises:
acquiring pictures of the intelligent manufacturing scene by using a camera at the first-person view angle of the user, identifying the pose of each target object in the pictures by a deep learning algorithm, superimposing a sine-wave-coded transparency effect on each target object in the scene, rendering the pictures, and displaying the pictures of the intelligent manufacturing scene with the superimposed sine-wave-coded transparency effect to the user in real time to generate the operation interface of the user.
3. The mechanical arm control method based on deep learning and man-machine interaction according to claim 2, wherein the deep learning algorithm is a neural network model implemented based on a transfer learning method; the neural network model receives pictures of the intelligent manufacturing scene, extracts features through VGG16, and then performs pose estimation through a translation branch and a rotation branch, respectively, to identify the pose of the target object;
The translation branch is used for position estimation and outputs a three-dimensional vector representing the position of the object in three-dimensional space; the rotation branch is used for orientation estimation and outputs a four-dimensional vector representing the quaternion rotation of the object; the translation branch consists of three fully connected layers that map the feature vector to 256 dimensions and then 64 dimensions and finally output a 3-dimensional position vector; the rotation branch consists of three fully connected layers that output a 4-dimensional quaternion vector, and the quaternion is normalized through a custom normalization layer.
4. The mechanical arm control method based on deep learning and man-machine interaction according to claim 2, wherein the transparency of the sine wave code is:
alpha(t)=0.5·sin(2πft+Δφ)+0.5
where alpha (t) is the transparency of the target at time t, f is the frequency of the sine wave, and Δφ is the phase difference.
5. The mechanical arm control method based on deep learning and man-machine interaction according to claim 1, wherein decoding the brain waves based on an incremental autonomous learning method comprises:
Step 1, preprocessing and windowing the brain wave;
preprocessing the brain waves, including baseline removal and band-stop filtering;
the baseline is removed by a high-pass filter, specifically:
Y(t) = HighpassFilter(X(t), f_cutoff)
where f_cutoff is the cut-off frequency of the high-pass filter, X(t) is the original brain wave, Y(t) is the high-pass filtered signal, and HighpassFilter(·) denotes high-pass filtering;
the 50 Hz power-line interference is removed from the brain wave by band-stop filtering:
Z(t) = BandstopFilter(Y(t), 50Hz)
where Z(t) is the band-stop filtered signal and BandstopFilter(·) denotes band-stop filtering;
slicing the preprocessed brain waves according to a preset time window, each window containing N sample points, specifically:
Z_k = [Z(t_k), Z(t_(k+1)), ..., Z(t_(k+N-1))]
where Z_k is the windowed signal;
Step 2, constructing an initial brain wave template;
Constructing sine and cosine reference signals corresponding to the stimulation frequency f_i according to the sine-wave-coded stimulation frequency f_i on the target object:
Y_i(k) = [sin(2π·f_i·k/F_s), cos(2π·f_i·k/F_s), ..., sin(2π·M·f_i·k/F_s), cos(2π·M·f_i·k/F_s)]^T, k = 1, 2, ..., N
where M is the number of harmonics, i is the ID of the target object, k is the discretized time point, F_s is the sampling rate, and Y_i(k) denotes the sine and cosine reference signals corresponding to the stimulation frequency f_i;
Step 3, calculating a correlation value;
decomposing the windowed signal Z_k into a number of sub-band components Z_k^(n), n = 1, 2, ..., N, using a zero-phase type I Chebyshev filter;
applying a standard canonical correlation analysis algorithm to each sub-band component respectively, obtaining a correlation value between each sub-band component and the predefined reference signal, specifically:
ρ_k = [ρ_k^(1), ρ_k^(2), ..., ρ_k^(N)]
where ρ_k is the vector of correlation values corresponding to the k-th template signal and ρ_k^(n) is the correlation value of the n-th sub-band;
the N sub-band correlation values in ρ_k are fused by a weighted sum of squares, namely:
ρ̃_k = Σ_(n=1..N) w(n)·(ρ_k^(n))²
where ρ̃_k is the fused correlation output of the weighted sum of squares, ρ_k^(n) is the correlation value of the n-th sub-band, and w(n) is a weighting function;
the weighting function w(n) is defined as:
w(n) = n^(-a) + b, n ∈ [1, N]
where a and b are both constants.
6. The method for controlling a mechanical arm based on deep learning and man-machine interaction according to claim 5, wherein obtaining the operation intention of the user comprises:
Collecting brain wave data generated by the user; when the amount of data corresponding to the sine wave coding frequency on each target object exceeds M, a batch of user-specific templates is obtained, and new correlation values are generated using the user-specific templates, denoted ρ_k^new; each time a new user-specific template is collected, the previous user-specific template is discarded;
finally, S weighted correlation values corresponding to the sine wave coded stimulation frequencies on the S target objects are obtained, and the target object corresponding to the largest of them is the identified operation intention of the user;
The user-specific template is defined as:
X̄_i(c) = (1/M)·Σ_(m=1..M) x_i^(m,c), c = 1, 2, ..., C
where M is the amount of data collected for a batch of user-specific templates, C is the number of electroencephalogram signal channels, X̄_i is the user-specific template, and x_i^(m,c) is the electroencephalogram data of a single trial and a single channel;
The optimized correlation value is:
ρ_k^opt = α·ρ_k^new + (1-α)·ρ̃_k
where α is the update weight parameter and ρ_k^opt is the optimized correlation value;
The operation intention of the user is:
f_target = argmax_(k∈{1,...,S}) ρ_k^opt, provided that max(ρ_1^opt, ..., ρ_S^opt) > ρ_0
where ρ_0 is a preset threshold, f_target is the identified target ID, and ρ_S^opt is the optimized correlation value corresponding to the S-th target object.
7. The method for controlling a mechanical arm based on deep learning and man-machine interaction according to claim 1, wherein the method for obtaining the motion trajectory of the mechanical arm based on a motion planning method comprises the steps of:
Generating the running track by using a quintic polynomial interpolation, correcting the deviation between the running track and an expected path through proportional-differential control, and continuously executing the path planned by the autonomous movement by the mechanical arm when the user continuously outputs the operation intention, otherwise, stopping the movement of the mechanical arm;
the method for generating the running track comprises the following steps:
q(t) = a_0 + a_1·t + a_2·t² + a_3·t³ + a_4·t⁴ + a_5·t⁵
where the polynomial coefficients a_0, a_1, ..., a_5 are solved by setting boundary conditions, and q(t) is the running track;
The deviation from the desired path is corrected by proportional-derivative control as follows:
τ = K_p·(q_desired − q_actual) + K_d·(q̇_desired − q̇_actual)
where τ is the control input, K_p and K_d are the proportional and derivative gain matrices, q_desired and q̇_desired are the desired joint position and velocity, and q_actual and q̇_actual are the actual joint position and velocity, respectively.
8. The method for controlling a mechanical arm based on deep learning and man-machine interaction according to claim 1, wherein the driving each joint of the mechanical arm to perform the grabbing and placing tasks using the dynamics model comprises:
Obtaining an overall transformation matrix based on the transformation matrix of each joint of the mechanical arm, and describing forces and moments for generating required joint motions through the dynamic model, wherein the forces and moments comprise inertia, coriolis forces and gravity effects;
the transformation matrix T_i of each joint is:
T_i = [[cosθ_i, −sinθ_i·cosα_i, sinθ_i·sinα_i, a_i·cosθ_i],
       [sinθ_i, cosθ_i·cosα_i, −cosθ_i·sinα_i, a_i·sinθ_i],
       [0, sinα_i, cosα_i, d_i],
       [0, 0, 0, 1]]
where θ_i is the joint angle, d_i is the link displacement, a_i is the link length, and α_i is the link torsion angle;
The overall transformation matrix T from the base to the end effector is obtained by multiplying the transformation matrices of the individual joints:
T = T_1·T_2·T_3·T_4·T_5·T_6;
the dynamics model is described using Lagrange's method, specifically:
τ = M(θ)·θ̈ + C(θ, θ̇)·θ̇ + G(θ)
where τ is the joint moment vector, M(θ) is the joint space inertia matrix, C(θ, θ̇) is the Coriolis force and centrifugal force matrix, G(θ) is the gravity moment vector, θ̈ is the joint acceleration vector, and θ̇ is the joint velocity vector;
The joint space inertia matrix M(θ) represents the resistance of the robot's mass and configuration to acceleration, specifically:
M(θ) = [[m_11(θ), ..., m_16(θ)], ..., [m_61(θ), ..., m_66(θ)]]
where m_11(θ) is the inertial coupling effect between joint 1 and joint 1;
The Coriolis force and centrifugal force matrix C(θ, θ̇) reflects the velocity-dependent forces acting on the robot as it moves, specifically:
C(θ, θ̇) = [[c_11, ..., c_16], ..., [c_61, ..., c_66]]
where c_11 is the Coriolis and centrifugal coupling effect between joint 1 and joint 1;
the gravity moment vector G(θ) represents the gravity moments acting on the robot links, specifically:
G(θ) = [g_1(θ), g_2(θ), ..., g_6(θ)]^T
where g_1(θ) is the gravitational moment on joint 1.
9. A mechanical arm control system based on deep learning and man-machine interaction, which is used for realizing the mechanical arm control method based on deep learning and man-machine interaction according to any one of claims 1-8, and is characterized by comprising an operation interface generating module, a user operation intention decoding module and a mechanical arm motion control module;
The operation interface generation module is used for displaying pictures in the intelligent manufacturing scene with the transparency effect of the superposition sine wave codes to a user in real time to generate an operation interface;
the user operation intention decoding module is used for decoding brain waves by using an incremental self-learning algorithm to obtain the operation intention of a user;
the mechanical arm motion control module is used for controlling the mechanical arm to execute grabbing and placing tasks.
CN202411353138.XA 2024-09-26 2024-09-26 A robotic arm control method and system based on deep learning and human-computer interaction Active CN119077737B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411353138.XA CN119077737B (en) 2024-09-26 2024-09-26 A robotic arm control method and system based on deep learning and human-computer interaction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202411353138.XA CN119077737B (en) 2024-09-26 2024-09-26 A robotic arm control method and system based on deep learning and human-computer interaction

Publications (2)

Publication Number Publication Date
CN119077737A true CN119077737A (en) 2024-12-06
CN119077737B CN119077737B (en) 2025-09-26

Family

ID=93696970

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202411353138.XA Active CN119077737B (en) 2024-09-26 2024-09-26 A robotic arm control method and system based on deep learning and human-computer interaction

Country Status (1)

Country Link
CN (1) CN119077737B (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060195501A1 (en) * 2005-01-14 2006-08-31 Gregor Feldhaus Method and system for noise measurement with combinable subroutines for the mesurement, identificaiton and removal of sinusoidal interference signals in a noise signal
US20150051734A1 (en) * 2013-08-15 2015-02-19 Yu Zheng Human motion tracking control with strict contact force contstraints for floating-base humanoid robots
US20160242690A1 (en) * 2013-12-17 2016-08-25 University Of Florida Research Foundation, Inc. Brain state advisory system using calibrated metrics and optimal time-series decomposition
CN108227492A (en) * 2018-01-03 2018-06-29 华中科技大学 A kind of discrimination method of six degree of freedom serial manipulator end load kinetic parameter
CN110211180A (en) * 2019-05-16 2019-09-06 西安理工大学 A kind of autonomous grasping means of mechanical arm based on deep learning
CN111152212A (en) * 2019-12-05 2020-05-15 北京蒂斯科技有限公司 Mechanical arm movement track planning method and device based on optimal power
CN116038681A (en) * 2022-06-30 2023-05-02 北京理工大学 Method and device for identifying dynamic parameters of manipulator based on parameter separation
CN118161317A (en) * 2024-03-12 2024-06-11 北京理工大学 A brain-controlled hand exoskeleton method and device
CN118228052A (en) * 2024-04-02 2024-06-21 西安电子科技大学 Phase-locked time-shifted data enhancement method based on temporal local weighting
CN118342502A (en) * 2024-04-19 2024-07-16 中国人民解放军国防科技大学 Brain-computer interface-based collaborative perception and grasping control method and system for manipulators

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
李沫; 印力承; 闫天翼: "Particle filter tracking algorithm based on integral histogram", Optics & Optoelectronic Technology, no. 03, 10 June 2013 (2013-06-10), pages 45-48 *
江京; 王春慧; 印二威; 黄守鹏; 赵岩; 黄肖山; 田雨; 张绍尧: "Design and implementation of a brain-controlled unmanned vehicle system for planetary exploration", Space Medicine & Medical Engineering, no. 02, 15 April 2018 (2018-04-15), pages 237-242 *

Also Published As

Publication number Publication date
CN119077737B (en) 2025-09-26

Similar Documents

Publication Publication Date Title
Su et al. Recent advancements in multimodal human–robot interaction
Liu et al. A CNN-transformer hybrid recognition approach for sEMG-based dynamic gesture prediction
CN109062398B (en) A spacecraft rendezvous and docking method based on virtual reality and multimodal human-machine interface
Liu et al. Multimodal data-driven robot control for human–robot collaborative assembly
Liu et al. Frame mining: a free lunch for learning robotic manipulation from 3d point clouds
CN106227341A (en) Unmanned plane gesture interaction method based on degree of depth study and system
CN112990074A (en) VR-based multi-scene autonomous control mixed brain-computer interface online system
CN112518743B (en) Multi-mode neural decoding control system and method for on-orbit operation of space manipulator
CN112597967A (en) Emotion recognition method and device for immersive virtual environment and multi-modal physiological signals
CN111966217A (en) Unmanned aerial vehicle control method and system based on gestures and eye movements
CN112631173A (en) Brain-controlled unmanned platform cooperative control system
CN109009887A (en) A kind of man-machine interactive navigation system and method based on brain-computer interface
CN120552080A (en) A human-computer interaction system and method for robot training and data collection
Lou Crawling robot manipulator tracking based on gaussian mixture model of machine vision
CN113408443B (en) Gesture posture prediction method and system based on multi-view images
CN112936259B (en) A Human-Robot Collaboration Method Applicable to Underwater Robots
Li et al. Challenges and Trends in Egocentric Vision: A Survey
CN119077737B (en) A robotic arm control method and system based on deep learning and human-computer interaction
Zheng et al. CG-Recognizer: A biosignal-based continuous gesture recognition system
CN120295326A (en) A brain-controlled drone swarm method based on deep brain-computer collaborative fusion
Fu et al. Research on application of cognitive-driven human-computer interaction
Wei et al. A hybrid human-machine interface for hands-free control of an intelligent wheelchair
Tokmurziyev et al. GazeRace: Revolutionizing remote piloting with eye-gaze control
Parikh et al. Quadcopter control in three-dimensional space using SSVEP and motor imagery-based brain-computer interface
Arsenio et al. The whole world in your hand: Active and interactive segmentation

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant