
CN118303883B - Fatigue detection method based on characteristic fusion of electroencephalogram signals and electrooculogram signals - Google Patents


Info

Publication number
CN118303883B
CN118303883B (application number CN202410217556.XA)
Authority
CN
China
Prior art keywords
signal
layer
eeg
electroencephalogram
signals
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410217556.XA
Other languages
Chinese (zh)
Other versions
CN118303883A (en)
Inventor
李航
于烨玮
黄瑞
施柯丞
程洪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN202410217556.XA priority Critical patent/CN118303883B/en
Publication of CN118303883A publication Critical patent/CN118303883A/en
Application granted granted Critical
Publication of CN118303883B publication Critical patent/CN118303883B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • A61B5/18 — Devices for psychotechnics; testing reaction times; evaluating the psychological state of vehicle drivers or machine operators
    • A61B5/165 — Evaluating the state of mind, e.g. depression, anxiety
    • A61B5/372 — Analysis of electroencephalograms (under A61B5/369 Electroencephalography [EEG])
    • A61B5/374 — Detecting the frequency distribution of signals, e.g. detecting delta, theta, alpha, beta or gamma waves
    • A61B5/398 — Electrooculography [EOG], e.g. detecting nystagmus; Electroretinography [ERG]
    • A61B5/725 — Details of waveform analysis using specific filters therefor, e.g. Kalman or adaptive filters
    • A61B5/7264 — Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267 — Classification involving training the classification device
    • G06F18/24 — Pattern recognition: classification techniques
    • G06F18/253 — Fusion techniques of extracted features
    • G06F18/254 — Fusion techniques of classification results, e.g. of results related to the same input data
    • G06F18/256 — Fusion of results relating to different input data, e.g. multimodal recognition
    • G06N3/045 — Neural network architectures: combinations of networks
    • G06N3/09 — Learning methods: supervised learning


Abstract


The present invention discloses a fatigue detection method based on the fusion of EEG signals and EOG signals. The EEG signals and EOG signals of each subject in a normal state and a fatigue state are collected respectively and training samples are generated. A fatigue detection model including an EEG signal data dimensionality reduction processing module, a spatial attention feature extraction module, an EEG signal channel and time feature extraction module, an EEG signal feature dimensionality reduction module, an EOG signal time feature extraction module, an EOG signal channel and time feature extraction module, a feature splicing module and a classification module is constructed and trained using the training samples. When fatigue detection is required for a certain person, the EEG signals and EOG signals are obtained, and the EEG signals are preprocessed using the same method and then input into the trained fatigue detection model together with the EOG signals to obtain fatigue detection results. The present invention can better extract and retain spatiotemporal features and improve the accuracy of fatigue detection.

Description

Fatigue detection method based on characteristic fusion of electroencephalogram signals and electrooculogram signals
Technical Field
The invention belongs to the technical field of electroencephalogram signals, and particularly relates to a fatigue detection method based on characteristic fusion of an electroencephalogram signal and an electrooculogram signal.
Background
In conventional studies, fatigue is a state of physical and mental dysfunction and discomfort caused by overuse of the body. This state affects many behaviors in daily life, especially operations that demand high concentration and quick physical reaction, such as driving, flying and safety-critical activities. According to data from the National Highway Traffic Safety Administration, about 100,000 police-reported traffic accidents each year involve fatigued or drowsy driving, resulting in 1,550 deaths and 71,000 injuries. The risk posed by fatigued driving therefore cannot be neglected, and accurately detecting the fatigue state requires substantial research. Existing mass-produced fatigue-driving detection methods fall mainly into three categories: 1) detecting and analyzing steering-wheel operation characteristics; 2) analyzing the driver's facial features with a camera; and 3) examining the vehicle's trajectory characteristics. These methods detect fatigue reasonably well, but by the time any of them flags a fatigue state, the driver is already severely fatigued; when steering-wheel or trajectory anomalies are detected, the driver may already have made an error with serious consequences, and these methods cannot predict the fatigue state early enough to avert the danger. Electroencephalogram (EEG) signals can fill this gap.
In recent years, research on detecting a person's mental and physical state from EEG signals has progressed greatly. In fatigued driving in particular, the EEG signal is regarded as the 'gold standard' for fatigue detection: it is a direct and accurate expression of the human fatigue state, and is more accurate, real-time and reliable. In 2018, Garnelo et al. proposed the neural process model, which combines the advantages of neural networks and Gaussian processes: like a neural network it can effectively learn features and optimize their parameters, and like a Gaussian process it can make full use of the data, infer the data distribution and analyze the data effectively. In 2022, Fang et al. converted EEG signals into images through feature-band selection, energy calculation and spatial-channel construction, and classified the images with a CNN+LSTM network, achieving good results. This approach can display the active brain regions as images, reflects changes in brain activity more intuitively, and is easier to interpret.
However, fatigue detection with EEG signals also has open problems, such as the criterion for judging fatigue. Fatigue is currently assessed either subjectively or objectively. Subjective assessment judges the fatigue state from questionnaires (such as the Karolinska Sleepiness Scale), which allows a comprehensive analysis of the subject's state, but the results are easily influenced by the subject's own opinions and show relatively large deviations. Objective assessment of a driver's fatigue requires monitoring physiological data such as heartbeat, blood pressure and facial information; such data are more objective, but there are many divergent schemes for setting the fatigue threshold, and deep analysis and experiments are still needed. A more objective, effective and accurate fatigue detection method is therefore required.
Disclosure of Invention
The invention aims to overcome the defects of the prior art, and provides a fatigue detection method based on the characteristic fusion of an electroencephalogram signal and an electrooculogram signal, which adopts multi-stage characteristic fusion to better extract and retain space-time characteristics and improve the accuracy of fatigue detection.
In order to achieve the above object, the fatigue detection method based on the fusion of the characteristics of the electroencephalogram signal and the electrooculogram signal of the invention comprises the following steps:
S1, determining the test scene and the subjects according to actual needs, and collecting, for each subject in both the normal state and the fatigue state, a raw electroencephalogram signal of duration T and an electrooculogram signal EOG;
S2, preprocessing the raw electroencephalogram signal with a preset preprocessing method to obtain the electroencephalogram signal EEG;
S3, taking each corresponding pair of signals EEG and EOG as a group of input signals and assigning each group a label, where label=1 indicates that the subject is in the fatigue state and label=0 indicates that the subject is in the normal state, thereby obtaining the training samples;
S4, constructing a fatigue detection model comprising an electroencephalogram signal data dimension reduction processing module, a spatial attention feature extraction module, an electroencephalogram signal channel and time feature extraction module, an electroencephalogram signal feature dimension reduction module, an electrooculogram signal time feature extraction module, an electrooculogram signal channel and time feature extraction module, a feature splicing module and a classification module, wherein:
The electroencephalogram data dimension reduction processing module performs data dimension reduction on the electroencephalogram signal EEG and sends the dimension-reduced signal EEG_1 to the spatial attention feature extraction module;
The spatial attention feature extraction module extracts spatial features from the dimension-reduced signal EEG_1 with a spatial attention mechanism and sends the obtained spatial feature t_EEG to the electroencephalogram signal channel and time feature extraction module;
The electroencephalogram signal channel and time feature extraction module further extracts features from the received spatial feature t_EEG and sends the obtained feature f_EEG to the electroencephalogram signal feature dimension reduction module; it comprises a first convolution layer, a second convolution layer, a mean pooling layer and an activation function layer, wherein:
The first convolution layer performs a convolution operation on the spatial feature t_EEG and sends the obtained feature to the second convolution layer;
The second convolution layer performs a convolution operation on the received feature and sends the obtained feature to the mean pooling layer;
The mean pooling layer averages the received feature and sends the obtained feature to the activation function layer;
The activation function layer processes the received feature with a ReLU activation function and outputs the obtained feature as the electroencephalogram feature f_EEG;
The electroencephalogram signal feature dimension reduction module reduces the dimension of the received feature f_EEG to obtain the dimension-reduced electroencephalogram feature F_EEG, whose dimension is identical to that of the electrooculogram feature F_EOG, and outputs F_EEG to the feature splicing module;
The electrooculogram signal time feature extraction module extracts time features from the electrooculogram signal EOG and sends the obtained time feature t_EOG to the electrooculogram signal channel and time feature extraction module;
The electrooculogram signal channel and time feature extraction module further extracts features from the received time feature t_EOG and sends the obtained electrooculogram feature F_EOG to the feature splicing module; it comprises a convolution layer, a mean pooling layer and an activation function layer, wherein:
The convolution layer performs a convolution operation on the received time feature t_EOG and sends the obtained feature to the mean pooling layer;
The mean pooling layer averages the received feature and sends the obtained feature to the activation function layer;
The activation function layer processes the received feature with a ReLU activation function and outputs the obtained feature as the electrooculogram feature F_EOG;
The feature splicing module splices the electroencephalogram feature F_EEG and the electrooculogram feature F_EOG and sends the spliced feature F_con to the classification module;
The classification module classifies according to the spliced feature F_con to obtain a detection result indicating whether the person corresponding to the input signals is fatigued;
S5, training the fatigue detection model by adopting the training sample in the step S3 to obtain a trained fatigue detection model;
S6, when fatigue detection is required for a person, collecting that person's raw electroencephalogram signal of duration T and electrooculogram signal EOG', preprocessing the raw electroencephalogram signal with the same method as in step S2 to obtain the electroencephalogram signal EEG', and inputting EEG' and EOG' into the trained fatigue detection model to obtain the fatigue detection result.
In the fatigue detection method based on feature fusion of electroencephalogram and electrooculogram signals according to the invention, the test scene and subjects are determined according to actual needs, and the electroencephalogram and electrooculogram signals of each subject are collected in both the normal and the fatigue state. The preprocessed electroencephalogram signal and the corresponding electrooculogram signal serve as input signals and are labeled accordingly, yielding the training samples. A fatigue detection model comprising an electroencephalogram signal data dimension reduction processing module, a spatial attention feature extraction module, an electroencephalogram signal channel and time feature extraction module, an electroencephalogram signal feature dimension reduction module, an electrooculogram signal time feature extraction module, an electrooculogram signal channel and time feature extraction module, a feature splicing module and a classification module is constructed and trained on these samples. When fatigue detection is required for a person, the electroencephalogram and electrooculogram signals are acquired, the electroencephalogram signal is preprocessed with the same method, and both are input into the trained model to obtain the fatigue detection result.
The invention has the following beneficial effects:
1) The invention jointly exploits the high accuracy of electroencephalogram signals and the good cross-subject recognition accuracy of electrooculogram signals, detecting driver fatigue with a multimodal method and thereby improving the accuracy of fatigue detection;
2) The invention adopts different characteristic extraction networks to extract the characteristics aiming at the electroencephalogram signals and the electrooculogram signals, so that the extracted characteristics can better reflect the characteristics of the electroencephalogram signals and the electrooculogram signals, thereby further improving the accuracy of fatigue detection.
Drawings
FIG. 1 is a flowchart of a specific embodiment of a fatigue detection method based on the feature fusion of an electroencephalogram signal and an electrooculogram signal according to the present invention;
FIG. 2 is a block diagram of a fatigue detection model in the present invention;
FIG. 3 is a schematic diagram of the sampling electrode locations of the SEED-VIG dataset;
fig. 4 is a specific structural view of the fatigue detection model in the present embodiment;
FIG. 5 is a graph of the accuracy of recognition across subjects for the present invention and two comparison examples in this example.
Detailed Description
The following description of the embodiments of the invention is presented in conjunction with the accompanying drawings to give those skilled in the art a better understanding of the invention. It should be noted that detailed descriptions of known functions and designs are omitted below where they might obscure the present invention.
Examples
FIG. 1 is a flowchart of an embodiment of a fatigue detection method based on the feature fusion of an electroencephalogram signal and an electrooculogram signal. As shown in fig. 1, the specific steps of the fatigue detection method based on the feature fusion of the electroencephalogram signal and the electrooculogram signal comprise:
s101, acquiring an electroencephalogram signal and an electrooculogram signal:
Determine the test scene and the subjects according to actual needs, and collect, for each subject in both the normal state and the fatigue state, a raw electroencephalogram signal of duration T and an electrooculogram signal EOG.
In practical applications, the electroencephalogram and electrooculogram signals are collected with multiple electrodes, and the electrode arrangement (especially for the electroencephalogram signal) strongly influences the final fatigue detection performance, so the electrode positions should be determined according to actual requirements. For the electroencephalogram signal, the invention finds that the regions with the greatest influence on the fatigue recognition result are the left rear and the right front of the head, and the features recognized on these channels strongly affect recognition performance; the electroencephalogram electrodes should therefore preferentially be placed at the left rear and right front of the head.
S102, preprocessing an electroencephalogram signal:
In practical applications, electroencephalogram signals are usually acquired non-invasively; the acquired signals generally have a low signal-to-noise ratio and are affected by power-frequency interference, myoelectric noise and other noise. The raw electroencephalogram signal therefore needs to be preprocessed with a preset method to obtain the electroencephalogram signal EEG and highlight its characteristics. In this embodiment, the preprocessing method comprises:
1) Filter the raw electroencephalogram signal, including band-pass filtering and power-frequency (notch) filtering; the specific filter parameters are set according to actual needs.
2) Select the electroencephalogram bands according to the actual situation. The human brain mainly exhibits five kinds of brain waves: alpha, beta, gamma, delta and theta waves. After comparison, this embodiment selects the alpha, delta and theta waves from the filtered electroencephalogram signal, yielding the screened electroencephalogram signal.
3) Extracting differential entropy characteristics:
Differential entropy (DE) is the generalization of Shannon information entropy to continuous variables, calculated as:
h(X) = -∫_a^b p(x) log p(x) dx
where p(x) is the probability density function of the continuous variable x and [a, b] is its value interval.
For an electroencephalogram segment of a given length that approximately obeys a Gaussian distribution N(μ, σ²), the differential entropy reduces to:
h(X) = (1/2) log(2πeσ²)
Compute the differential entropy of the screened electroencephalogram signal; the resulting signal is used as the final electroencephalogram signal EEG.
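As an illustration, the following is a minimal sketch of this preprocessing chain in Python, assuming a 200 Hz sampling rate, a 1–50 Hz pass band, a 50 Hz notch filter and one DE value per channel and per selected band; the patent leaves the exact filter parameters to the practitioner.

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

FS = 200  # assumed sampling rate in Hz

def bandpass(x, lo, hi, fs=FS, order=4):
    # Band-pass filtering of step 1); zero-phase to avoid distorting the waveform.
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x, axis=-1)

def notch(x, f0=50.0, q=30.0, fs=FS):
    # Power-frequency (notch) filtering of step 1).
    b, a = iirnotch(f0, q, fs)
    return filtfilt(b, a, x, axis=-1)

def differential_entropy(x):
    # For an approximately Gaussian segment, DE = 0.5 * log(2*pi*e*sigma^2).
    var = np.var(x, axis=-1)
    return 0.5 * np.log(2 * np.pi * np.e * var)

def preprocess(raw):
    # raw: array of shape (channels, samples); returns (3, channels) DE features
    clean = notch(bandpass(raw, 1.0, 50.0))
    bands = {"alpha": (8, 13), "delta": (1, 4), "theta": (4, 8)}  # step 2)
    return np.stack([differential_entropy(bandpass(clean, lo, hi))
                     for lo, hi in bands.values()])               # step 3)
```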
S103, generating training samples:
Take each corresponding pair of signals EEG and EOG as a group of input signals and assign each group a label, where label=1 indicates that the subject is in the fatigue state and label=0 indicates that the subject is in the normal state, thereby obtaining the training samples.
S104, constructing a fatigue detection model:
To achieve accurate fatigue detection, the invention introduces a spatial attention architecture when constructing the fatigue detection model, further extracting the electroencephalogram features of different channels in the spatial signal; it also borrows the architectural idea of DenseNet and fuses multi-stage features to predict the result. Fig. 2 is a structural diagram of the fatigue detection model. As shown in fig. 2, the model comprises an electroencephalogram signal data dimension reduction processing module, a spatial attention feature extraction module, an electroencephalogram signal channel and time feature extraction module, an electroencephalogram signal feature dimension reduction module, an electrooculogram signal time feature extraction module, an electrooculogram signal channel and time feature extraction module, a feature splicing module and a classification module, wherein:
The electroencephalogram data dimension reduction processing module performs data dimension reduction on the electroencephalogram signal EEG and sends the dimension-reduced signal EEG_1 to the spatial attention feature extraction module. As shown in fig. 2, this module comprises a convolution layer, a mean pooling layer and an activation function layer, where:
The convolution layer convolves the electroencephalogram signal of each channel separately and sends the obtained feature signals to the mean pooling layer.
The mean pooling layer averages the received feature signals of all channels and sends the obtained feature signals to the activation function layer.
The activation function layer processes the received feature signals with a ReLU activation function and outputs the obtained feature signals as the dimension-reduced electroencephalogram signal EEG_1.
The spatial attention feature extraction module extracts spatial features from the dimension-reduced signal EEG_1 with a spatial attention mechanism and sends the obtained spatial feature t_EEG to the electroencephalogram signal channel and time feature extraction module. As shown in fig. 2, this module comprises a max pooling layer, an average pooling layer and a feature splicing layer, where:
The max pooling layer max-pools the dimension-reduced electroencephalogram signal and sends the obtained max-pooled feature to the feature splicing layer.
The average pooling layer averages the dimension-reduced electroencephalogram signal and sends the obtained average-pooled feature to the feature splicing layer.
The feature splicing layer splices the max-pooled and average-pooled features and outputs the spliced feature as the spatial feature t_EEG.
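A minimal PyTorch sketch of this module follows, under the assumption that the pooling operates across the channel dimension (as in CBAM-style spatial attention); the patent does not fix the pooling axis or the tensor layout.

```python
import torch
import torch.nn as nn

class SpatialAttentionFeature(nn.Module):
    """Max pooling + average pooling + feature splicing, as described above."""
    def forward(self, eeg1):                           # eeg1: (batch, channels, time)
        max_feat, _ = eeg1.max(dim=1, keepdim=True)    # max pooling layer
        avg_feat = eeg1.mean(dim=1, keepdim=True)      # average pooling layer
        t_eeg = torch.cat([max_feat, avg_feat], dim=1) # feature splicing layer
        return t_eeg                                   # spatial feature t_EEG: (batch, 2, time)
```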
The electroencephalogram signal channel and time feature extraction module further extracts features from the received spatial feature t_EEG and sends the obtained feature f_EEG to the electroencephalogram signal feature dimension reduction module. As shown in fig. 2, this module comprises a first convolution layer, a second convolution layer, a mean pooling layer and an activation function layer, where:
The first convolution layer convolves the spatial feature t_EEG and sends the obtained feature to the second convolution layer.
The second convolution layer convolves the received feature and sends the obtained feature to the mean pooling layer.
The mean pooling layer averages the received feature and sends the obtained feature to the activation function layer.
The activation function layer processes the received feature with a ReLU activation function and outputs the obtained feature as the electroencephalogram feature f_EEG.
The electroencephalogram signal feature dimension reduction module reduces the dimension of the received feature f_EEG to obtain the dimension-reduced electroencephalogram feature F_EEG, whose dimension is identical to that of the electrooculogram feature F_EOG, and outputs F_EEG to the feature splicing module. In this embodiment, this module comprises a first convolution layer, a second convolution layer, a mean pooling layer and an activation function layer, where:
The first convolution layer convolves the electroencephalogram feature f_EEG and sends the obtained feature to the second convolution layer.
The second convolution layer convolves the received feature and sends the obtained feature to the mean pooling layer.
The mean pooling layer averages the received feature and sends the obtained feature to the activation function layer.
The activation function layer processes the received feature with a ReLU activation function and outputs the obtained feature as the dimension-reduced electroencephalogram feature F_EEG.
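Both this module and the channel-and-time module above share a conv–conv–mean-pool–ReLU pattern; the sketch below shows that shared block, with illustrative channel counts and kernel sizes (the actual values appear only in Table 1 and are assumptions here).

```python
import torch.nn as nn

class ConvConvPoolBlock(nn.Module):
    """Shared pattern: two convolutions, mean pooling, ReLU activation."""
    def __init__(self, in_ch, mid_ch, out_ch, kernel=3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(in_ch, mid_ch, kernel, padding=kernel // 2),   # first convolution layer
            nn.Conv1d(mid_ch, out_ch, kernel, padding=kernel // 2),  # second convolution layer
            nn.AvgPool1d(2),                                         # mean pooling layer
            nn.ReLU(),                                               # activation function layer
        )

    def forward(self, x):      # x: (batch, in_ch, time)
        return self.body(x)
```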
The electrooculogram signal time feature extraction module extracts time features from the electrooculogram signal EOG and sends the obtained time feature t_EOG to the electrooculogram signal channel and time feature extraction module. As shown in fig. 2, this module comprises K channel convolution layers, a mean pooling layer and an activation function layer, where K is the number of electrooculogram channels:
Each channel convolution layer convolves one channel of the electrooculogram signal EOG and sends the obtained channel feature to the mean pooling layer.
The mean pooling layer averages the received K channel features and sends the obtained feature to the activation function layer.
The activation function layer processes the received feature with a ReLU activation function and outputs the obtained feature as the electrooculogram time feature t_EOG.
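The K per-channel convolutions can be realized compactly as a grouped 1-D convolution, which is equivalent to K independent channel convolutions. The sketch below assumes a tensor layout of (batch, K, time) and illustrative kernel and pooling sizes, none of which are specified in the patent.

```python
import torch.nn as nn

class EOGTimeFeature(nn.Module):
    """Per-channel convolution + mean pooling + ReLU, as described above."""
    def __init__(self, k_channels, kernel=5):
        super().__init__()
        # groups=k_channels gives one independent filter per EOG channel
        self.per_channel_conv = nn.Conv1d(k_channels, k_channels,
                                          kernel_size=kernel,
                                          groups=k_channels,
                                          padding=kernel // 2)
        self.pool = nn.AvgPool1d(kernel_size=2)  # mean pooling layer
        self.act = nn.ReLU()                     # activation function layer

    def forward(self, eog):                      # eog: (batch, K, time)
        return self.act(self.pool(self.per_channel_conv(eog)))  # t_EOG
```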
The electrooculogram signal channel and time feature extraction module further extracts features from the received time feature t_EOG and sends the obtained electrooculogram feature F_EOG to the feature splicing module. As shown in fig. 2, this module comprises a convolution layer, a mean pooling layer and an activation function layer, where:
The convolution layer convolves the received time feature t_EOG and sends the obtained feature to the mean pooling layer.
The mean pooling layer averages the received feature and sends the obtained feature to the activation function layer.
The activation function layer processes the received feature with a ReLU activation function and outputs the obtained feature as the electrooculogram feature F_EOG.
The feature splicing module splices the electroencephalogram feature F_EEG and the electrooculogram feature F_EOG and sends the spliced feature F_con to the classification module.
The classification module classifies according to the spliced feature F_con to obtain a detection result indicating whether the person corresponding to the input signals is fatigued. As shown in fig. 2, the classification module in this embodiment is implemented as two cascaded fully connected layers followed by a softmax layer.
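A minimal sketch of the tail of the network — feature splicing followed by two fully connected layers and softmax — is given below; the feature widths are placeholders, since the patent publishes the layer dimensions only in Table 1.

```python
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    """Feature splicing module + classification module (two FC layers, softmax)."""
    def __init__(self, feat_dim=64, hidden=32, n_classes=2):
        super().__init__()
        self.fc1 = nn.Linear(2 * feat_dim, hidden)
        self.fc2 = nn.Linear(hidden, n_classes)
        self.dropout = nn.Dropout(0.25)  # dropout rate stated in the embodiment

    def forward(self, f_eeg, f_eog):     # both: (batch, feat_dim)
        f_con = torch.cat([f_eeg, f_eog], dim=1)         # spliced feature F_con
        h = self.dropout(torch.relu(self.fc1(f_con)))
        return torch.softmax(self.fc2(h), dim=1)         # fatigue vs. normal
```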
S105, training a fatigue detection model:
Train the fatigue detection model with the training samples from step S103 to obtain the trained fatigue detection model.
During training, this embodiment uses a cross-entropy loss to measure how close the actual output of the fatigue detector is to the expected output. The cross-entropy loss H(p, q) is calculated as:
H(p,q) = -Σ_x [ p(x) log q(x) + (1 - p(x)) log(1 - q(x)) ]
where p(x) denotes the expected output for sample x and q(x) the actual output.
The Adam optimizer converges very quickly when training multi-layer neural networks, so it is chosen to train the model in this embodiment. The learning rate is 0.001 and the batch size is 30. In addition, since the model uses many convolution layers and is relatively deep, the dropout rate is set to 0.25 to avoid overfitting.
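A training-loop sketch reflecting these settings (Adam, learning rate 0.001, batch size 30) follows; the epoch count and the dataset interface are assumptions, and the cross-entropy is computed on the softmax probabilities that the classification module outputs.

```python
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader

def train(model, train_set, epochs=50, device="cpu"):  # epoch count is assumed
    # train_set is assumed to yield (eeg, eog, label) triples from step S103
    loader = DataLoader(train_set, batch_size=30, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
    model.to(device).train()
    for _ in range(epochs):
        for eeg, eog, label in loader:
            eeg, eog, label = eeg.to(device), eog.to(device), label.to(device)
            probs = model(eeg, eog)  # softmax probabilities from the model
            # cross-entropy H(p, q) applied to probabilities, as in the formula above
            loss = F.nll_loss(torch.log(probs + 1e-9), label)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```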
S106, fatigue detection:
When fatigue detection is required for a person, collect that person's raw electroencephalogram signal of duration T and electrooculogram signal EOG', preprocess the raw electroencephalogram signal with the same method as in step S102 to obtain the electroencephalogram signal EEG', and input EEG' and EOG' into the trained fatigue detection model to obtain the fatigue detection result.
To better demonstrate the technical effects of the invention, it was verified experimentally on a concrete example. This embodiment uses the public SEED-VIG dataset, published in 2017 by Shanghai Jiao Tong University, as the test dataset. The data were collected in a driving simulation: during acquisition, a screen placed in front of the subject showed a four-lane highway scene kept free of unnecessary elements. The subject controlled the simulated vehicle through a steering wheel and pedals, and the scene was updated synchronously according to the subject's operation. A straight, monotonous scene was adopted to better induce driving fatigue.
The SEED-VIG dataset was acquired with a Neuroscan system, and 23 subjects were tested. Most experiments were performed in the early afternoon, the usual nap period, when the circadian rhythm readily induces drowsiness. Each experiment lasted two hours. Eighteen electrode channels of the international 10-20 system, such as CP1, CP2 and CPZ, were selected for the electroencephalogram recording, and the electroencephalogram and electrooculogram signals were sampled at 1000 Hz and 200 Hz, respectively. FIG. 3 is a schematic diagram of the sampling electrode locations of the SEED-VIG dataset.
The SEED-VIG dataset includes a PERCLOS index recorded synchronously with an eye tracker during the experiments. The PERCLOS index, developed through repeated experimental investigation by the Carnegie Mellon Research Institute, effectively represents a person's drowsiness level and is calculated as:
PERCLOS = eye_close_time / total_time
where eye_close_time is the eye-closure time and total_time is the total duration, i.e. the sum of the blink, eye-closure and saccade (eye-jump) times.
The subject's state label is determined from the PERCLOS index; this embodiment sets the fatigue threshold to 0.35, so the data label is determined as:
perclos_label = 1 if PERCLOS ≥ 0.35 (fatigue state), and perclos_label = 0 otherwise (normal state).
A dataset comprising the electroencephalogram signal EEG, the electrooculogram signal EOG and the data label perclos_label is thus obtained.
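For illustration, label generation from the PERCLOS index with the 0.35 threshold of this embodiment can be sketched as follows (function and variable names are hypothetical):

```python
import numpy as np

def perclos(eye_close_time, blink_time, eye_jump_time):
    # PERCLOS = eye-closure time over total duration (blink + closure + saccade)
    total_time = blink_time + eye_close_time + eye_jump_time
    return eye_close_time / total_time

def perclos_label(p, threshold=0.35):
    # 1 = fatigue state, 0 = normal state
    return np.where(np.asarray(p) >= threshold, 1, 0)
```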
Fig. 4 shows the specific structure of the fatigue detection model in this embodiment. Table 1 lists the parameters of the model shown in fig. 4.
TABLE 1
In this embodiment, the fatigue detection model was trained with a learning rate of 0.001, a batch size of 30 and a dropout rate of 0.25. Experiments were run on an NVIDIA A100 GPU with Python 3.8, PyTorch 1.13.1 and CUDA 11.3.
To better show the performance of the invention, the training results of its fatigue detection model are compared with a convolutional neural network (CNN), a linear discriminant analysis (LDA) classifier and the EEGNet model. CNN, LDA and EEGNet were each trained and tested on the electroencephalogram signals EEG only, while the fatigue detection model of the invention uses EEG-EOG fusion, i.e. it is trained and tested on both the electroencephalogram signals EEG and the electrooculogram signals EOG. The LDA pipeline extracts features with the common spatial pattern (CSP) algorithm and then trains a linear classifier on these features. Accuracy and recall were selected to evaluate the performance of the proposed model. Table 2 compares the training results of the invention and the three comparison models.
Model | Within-subject accuracy | Within-subject recall | Cross-subject accuracy
CNN | 0.87 | 0.80 | 0.62
CSP+LDA | 0.72 | 0.72 | 0.60
EEGNet | 0.93 | 0.87 | 0.82
Proposed method | 0.97 | 0.96 | 0.87
TABLE 2
As shown in Table 2, the EEG-EOG fusion model outperforms traditional electroencephalogram-only fatigue detection methods in both within-subject and cross-subject recognition accuracy. Moreover, the drop from within-subject to cross-subject accuracy of the fusion model is clearly smaller than that of the other algorithms, demonstrating its stronger cross-subject generalization capability.
An ablation experiment was then designed for each module of the fusion model to verify the contributions of the electroencephalogram feature extraction modules, the electrooculogram feature extraction modules and the spatial attention feature extraction module.
Table 3 is a comparative table of the performance of the ablation experiments in this example.
TABLE 3
As can be seen from Table 3, the electroencephalogram and electrooculogram feature extraction modules greatly improve the fatigue detection accuracy of the fusion model, and adding spatial attention further strengthens the extraction of electroencephalogram features. In addition, the spatial attention feature extraction module effectively enhances the informative electroencephalogram features across channels and facilitates the final classification, and the depthwise separable convolutions used in the network improve recognition accuracy.
To verify the cross-subject recognition capability of the model, this embodiment follows the experimental design of D. Gao, K. Wang, M. Wang, J. Zhou and Y. Zhang, "SFT-Net: A Network for Detecting Fatigue From EEG Signals by Combining 4D Feature Flow and Attention Mechanism," IEEE Journal of Biomedical and Health Informatics, doi: 10.1109/JBHI.2023.3285268: the data of one subject is selected as the training set, and recognition accuracy is tested on the experimental data of the remaining subjects. This selection strategy ensures that the test set is strictly unseen data, making the evaluation more reliable. CNN and EEGNet were again selected as comparison models in this cross-subject experiment. FIG. 5 shows the cross-subject recognition accuracy of the invention and the two comparison models. As shown in fig. 5, the average cross-subject recognition accuracy of the invention is 87%, and the accuracy for subjects 1, 3, 8, 9, 16, 19, 20 and 23 reaches 90% or above.
While the foregoing describes illustrative embodiments of the present invention to help those skilled in the art understand it, the invention is not limited to the scope of these specific embodiments. Any changes that remain within the spirit and scope of the invention as defined by the appended claims shall fall within its protection.

Claims (5)

1. The fatigue detection method based on the characteristic fusion of the electroencephalogram signal and the electrooculogram signal is characterized by comprising the following steps of:
S1, determining the test scene and the subjects according to actual needs, and collecting, for each subject in both the normal state and the fatigue state, a raw electroencephalogram signal of duration T and an electrooculogram signal EOG;
S2, preprocessing the raw electroencephalogram signal with a preset preprocessing method to obtain the electroencephalogram signal EEG;
S3, taking each corresponding pair of signals EEG and EOG as a group of input signals and assigning each group a label, where label=1 indicates that the subject is in the fatigue state and label=0 indicates that the subject is in the normal state, thereby obtaining the training samples;
S4, constructing a fatigue detection model comprising an electroencephalogram signal data dimension reduction processing module, a spatial attention feature extraction module, an electroencephalogram signal channel and time feature extraction module, an electroencephalogram signal feature dimension reduction module, an electrooculogram signal time feature extraction module, an electrooculogram signal channel and time feature extraction module, a feature splicing module and a classification module, wherein:
The electroencephalogram data dimension reduction processing module performs data dimension reduction on the electroencephalogram signal EEG and sends the dimension-reduced signal EEG_1 to the spatial attention feature extraction module;
The spatial attention feature extraction module extracts spatial features from the dimension-reduced signal EEG_1 with a spatial attention mechanism and sends the obtained spatial feature t_EEG to the electroencephalogram signal channel and time feature extraction module; it comprises a max pooling layer, an average pooling layer and a feature splicing layer, wherein:
the max pooling layer max-pools the dimension-reduced electroencephalogram signal and sends the obtained max-pooled feature to the feature splicing layer;
the average pooling layer averages the dimension-reduced electroencephalogram signal and sends the obtained average-pooled feature to the feature splicing layer;
the feature splicing layer splices the max-pooled and average-pooled features and outputs the spliced feature as the spatial feature t_EEG;
The electroencephalogram signal channel and time feature extraction module further extracts features from the received spatial feature t_EEG and sends the obtained feature f_EEG to the electroencephalogram signal feature dimension reduction module; it comprises a first convolution layer, a second convolution layer, a mean pooling layer and an activation function layer, wherein:
the first convolution layer convolves the spatial feature t_EEG and sends the obtained feature to the second convolution layer;
the second convolution layer convolves the received feature and sends the obtained feature to the mean pooling layer;
the mean pooling layer averages the received feature and sends the obtained feature to the activation function layer;
the activation function layer processes the received feature with a ReLU activation function and outputs the obtained feature as the electroencephalogram feature f_EEG;
The electroencephalogram signal feature dimension reduction module reduces the dimension of the received feature f_EEG to obtain the dimension-reduced electroencephalogram feature F_EEG, whose dimension is identical to that of the electrooculogram feature F_EOG, and outputs F_EEG to the feature splicing module;
The electrooculogram signal time feature extraction module extracts time features from the electrooculogram signal EOG and sends the obtained time feature t_EOG to the electrooculogram signal channel and time feature extraction module; it comprises K channel convolution layers, a mean pooling layer and an activation function layer, where K is the number of electrooculogram channels, wherein:
each channel convolution layer convolves one channel of the electrooculogram signal EOG and sends the obtained channel feature to the mean pooling layer;
the mean pooling layer averages the received K channel features and sends the obtained feature to the activation function layer;
the activation function layer processes the received feature with a ReLU activation function and outputs the obtained feature as the electrooculogram time feature t_EOG;
The electrooculogram signal channel and time feature extraction module further extracts features from the received time feature t_EOG and sends the obtained electrooculogram feature F_EOG to the feature splicing module; it comprises a convolution layer, a mean pooling layer and an activation function layer, wherein:
the convolution layer convolves the received time feature t_EOG and sends the obtained feature to the mean pooling layer;
the mean pooling layer averages the received feature and sends the obtained feature to the activation function layer;
the activation function layer processes the received feature with a ReLU activation function and outputs the obtained feature as the electrooculogram feature F_EOG;
The feature splicing module splices the electroencephalogram feature F_EEG and the electrooculogram feature F_EOG and sends the spliced feature F_con to the classification module;
The classification module classifies according to the spliced feature F_con to obtain a detection result indicating whether the person corresponding to the input signals is fatigued;
S5, training the fatigue detection model by adopting the training sample in the step S3 to obtain a trained fatigue detection model;
S6, when fatigue detection is required for a person, collecting that person's raw electroencephalogram signal of duration T and electrooculogram signal EOG', preprocessing the raw electroencephalogram signal with the same method as in step S2 to obtain the electroencephalogram signal EEG', and inputting EEG' and EOG' into the trained fatigue detection model to obtain the fatigue detection result.
2. The fatigue detection method according to claim 1, wherein in step S1 the electroencephalogram electrodes are arranged at the left rear and the right front of the head.
3. The fatigue detection method according to claim 1, wherein the preprocessing method of the electroencephalogram signal in step S2 includes the steps of:
S2.1, filtering the raw electroencephalogram signal, including band-pass filtering and power-frequency filtering;
S2.2, selecting the alpha, delta and theta waves from the filtered electroencephalogram signal to obtain the screened electroencephalogram signal;
S2.3, calculating the differential entropy of the screened electroencephalogram signal and taking the resulting signal as the final electroencephalogram signal EEG.
4. The fatigue detection method according to claim 1, wherein the electroencephalogram data dimension reduction module in step S4 comprises a convolution layer, a mean pooling layer and an activation function layer, wherein:
the convolution layer convolves the electroencephalogram signal of each channel separately and sends the obtained feature signals to the mean pooling layer;
the mean pooling layer averages the received feature signals of all channels and sends the obtained feature signals to the activation function layer;
the activation function layer processes the received feature signals with a ReLU activation function and outputs the obtained feature signals as the dimension-reduced electroencephalogram signal.
5. The fatigue detection method according to claim 1, wherein the electroencephalogram signal feature dimension reduction module in step S4 comprises a first convolution layer, a second convolution layer, a mean pooling layer and an activation function layer, wherein:
the first convolution layer convolves the electroencephalogram feature f_EEG and sends the obtained feature to the second convolution layer;
the second convolution layer convolves the received feature and sends the obtained feature to the mean pooling layer;
the mean pooling layer averages the received feature and sends the obtained feature to the activation function layer;
the activation function layer processes the received feature with a ReLU activation function and outputs the obtained feature as the dimension-reduced electroencephalogram feature F_EEG.
CN202410217556.XA 2024-02-27 2024-02-27 Fatigue detection method based on characteristic fusion of electroencephalogram signals and electrooculogram signals Active CN118303883B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410217556.XA CN118303883B (en) 2024-02-27 2024-02-27 Fatigue detection method based on characteristic fusion of electroencephalogram signals and electrooculogram signals

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410217556.XA CN118303883B (en) 2024-02-27 2024-02-27 Fatigue detection method based on characteristic fusion of electroencephalogram signals and electrooculogram signals

Publications (2)

Publication Number Publication Date
CN118303883A CN118303883A (en) 2024-07-09
CN118303883B (en) 2025-06-10

Family

ID=91723243

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410217556.XA Active CN118303883B (en) 2024-02-27 2024-02-27 Fatigue detection method based on characteristic fusion of electroencephalogram signals and electrooculogram signals

Country Status (1)

Country Link
CN (1) CN118303883B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110772268A (en) * 2019-11-01 2020-02-11 哈尔滨理工大学 A driving fatigue state recognition method based on multimodal EEG signals and 1DCNN transfer
CN114781442A (en) * 2022-04-07 2022-07-22 成都信息工程大学 Fatigue classification method based on four-dimensional attention convolution cyclic neural network

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9771081B2 (en) * 2014-09-29 2017-09-26 The Boeing Company System for fatigue detection using a suite of physiological measurement devices
CN109472224A (en) * 2018-10-26 2019-03-15 蓝色传感(北京)科技有限公司 The fatigue driving detecting system merged based on EEG with EOG
CN109820525A (en) * 2019-01-23 2019-05-31 五邑大学 A driving fatigue recognition method based on CNN-LSTM deep learning model
CN111428699B (en) * 2020-06-10 2020-09-22 南京理工大学 Driving fatigue detection method and system combined with pseudo 3D convolutional neural network and attention mechanism
CN114533085B (en) * 2022-02-18 2024-05-28 北京工业大学 Attention mechanism-based EEG-fNIRS multi-mode space-time fusion classification method
CN115238796A (en) * 2022-07-26 2022-10-25 重庆邮电大学 Motor imagery electroencephalogram signal classification method based on parallel DAMSCN-LSTM
CN116763324A (en) * 2023-06-16 2023-09-19 中国矿业大学 Single-channel EEG signal sleep staging method based on multi-scale and multi-attention

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110772268A (en) * 2019-11-01 2020-02-11 哈尔滨理工大学 A driving fatigue state recognition method based on multimodal EEG signals and 1DCNN transfer
CN114781442A (en) * 2022-04-07 2022-07-22 成都信息工程大学 Fatigue classification method based on four-dimensional attention convolution cyclic neural network

Also Published As

Publication number Publication date
CN118303883A (en) 2024-07-09

Similar Documents

Publication Publication Date Title
Zhu et al. Vehicle driver drowsiness detection method using wearable EEG based on convolution neural network
CN114781442B (en) Fatigue classification method based on four-dimensional attention convolutional recurrent neural network
CN108216254B (en) Road anger emotion recognition method based on fusion of facial image and pulse information
Rajwal et al. Convolutional neural network-based EEG signal analysis: A systematic review
US20200367800A1 (en) Method for identifying driving fatigue based on cnn-lstm deep learning model
Hayawi et al. Driver's drowsiness monitoring and alarming auto-system based on EOG signals
Shalash Driver fatigue detection with single EEG channel using transfer learning
US20050277813A1 (en) Brain state recognition system
CN110584597A (en) Multi-channel electroencephalogram signal monitoring method based on time-space convolutional neural network and application
CN113545789B (en) A method for constructing an electroencephalogram (EEG) signal analysis model based on a CSP algorithm and a PSD algorithm, an electroencephalogram (EEG) signal analysis method and a system
CN118383768A (en) Multi-dimensional detection system for driver state
CN120130931A (en) Epileptic seizure detection method and system based on self-attention mechanism and GRU-LSTM fusion
CN118303883B (en) Fatigue detection method based on characteristic fusion of electroencephalogram signals and electrooculogram signals
Aghaei et al. Epileptic seizure detection based on video and EEG recordings
Asrithavalli et al. Epileptic seizures detection using fusion of artificial neural network with hybrid deep learning
Swapnil et al. An Ensemble Approach to Classify Mental Stress using EEG Based Time-Frequency and Non-Linear Features
Umsura et al. Relaxation state assessment on sleepy driver through EEG-based ensemble learning
Mamtha et al. EEG Signal processing and identification of P300 signals using deep learning
Gulenc et al. Diagnosis of Major Depressive Disorder Using EEG Signals
CN110269610A (en) A kind of prior-warning device of brain electrical anomaly signal
AU2021102045A4 (en) A system and method for recognizing the emotions based on electrical bio-signals and speech signals
Qiu et al. Driver Drowsiness Detection Using EEG and EOG with an Attention-CNN Framework
US20240021307A1 (en) Computer-Implemented Method for Detecting a Microsleep State of Mind of a Person by Processing at Least One EEG Trace Using an Artificial Intelligence Algorithm and System Configured to Implement Such Method
Lu et al. EEG-based drowsiness and alertness recognition using feature fusion and deep residual networks
CN114863403B (en) A fatigue driving monitoring method based on graph regularized extreme learning machine

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant