
CN109620244B - A Conditional Generative Adversarial Network and SVM-Based Approach for Infant Abnormal Behavior Detection

Info

Publication number
CN109620244B
CN109620244B (application CN201811494749.0A)
Authority
CN
China
Prior art keywords
abnormal
whole body
wavelet
normal
limbs
Prior art date
Legal status
Expired - Fee Related
Application number
CN201811494749.0A
Other languages
Chinese (zh)
Other versions
CN109620244A (en)
Inventor
王世刚
戴晓辉
赵岩
韦健
Current Assignee
Jilin University
Original Assignee
Jilin University
Application filed by Jilin University
Priority to CN201811494749.0A (2018-12-07)
Publication of CN109620244A (2019-04-16)
Application granted
Publication of CN109620244B (2021-07-30)
Current legal status: Expired - Fee Related
Anticipated expiration

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/103 Measuring devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb
    • A61B5/1113 Local tracking of patients, e.g. in a hospital or private home
    • A61B5/1114 Tracking parts of the body
    • A61B5/1126 Measuring movement of the entire body or parts thereof using a particular sensing technique
    • A61B5/1127 Measuring movement of the entire body or parts thereof using a particular sensing technique using markers
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235 Details of waveform analysis
    • A61B5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Veterinary Medicine (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Physiology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Public Health (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Dentistry (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Psychiatry (AREA)
  • Signal Processing (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A method for detecting abnormal infant behavior based on a conditional generative adversarial network (CGAN) and an SVM, in the field of video image processing and deep learning. The method judges whether an infant's behavior is abnormal by analyzing its motion trajectories in video. First, infant video is acquired, clipped to a suitable length, and converted into frame images; the limbs and the whole body are labeled to build a sample library. A conditional generative adversarial network then tracks the infant's limbs and whole body. The wavelet approximation waveform and wavelet power spectrum of each tracked motion trajectory are computed, and the resulting features are classified by a support vector machine (SVM) and combined into a final judgment. Because the method detects the motion trajectories of both the limbs and the whole body, it is more comprehensive than single-limb detection, and training on the wavelet domain and the power-spectrum domain together improves detection accuracy. Detecting abnormal infant behavior early enables early intervention, which is of great significance for preventing diseases such as infant cerebral palsy.


Description

Infant abnormal behavior detection method based on a conditional generative adversarial network and SVM
Technical Field
The invention belongs to the technical field of video image processing and deep learning, and particularly relates to a method for detecting abnormal behavior of infants based on a conditional generative adversarial network (CGAN) and an SVM.
Background
Abnormal infant behavior mainly refers to movement in which the small-amplitude, medium-speed movements in various directions with varying acceleration are absent, the movements are not appropriate for the infant's age, other movement forms (such as limb midline movements, hand-to-knee contact, visual searching, finger scratching, and grasping at clothes) are missing, and overall movement lacks fluency. Abnormal infant behavior corresponds to brain damage; in severe cases it can lead to cerebral palsy, and cerebral palsy can usually be diagnosed only after the child is one to two years old. Early detection of abnormal infant behavior, followed by timely intervention and treatment, therefore has strong practical significance.
To address this problem, researchers have proposed several methods for detecting abnormal behavior in infants, which can be broadly divided into three types: general movement quality assessment, wearable-sensor assessment, and assessment combined with pattern recognition. The first records the infant with a specific video protocol and judges whether behavior is abnormal by applying general movement quality assessment criteria to the recording; this approach relies mainly on observation and is somewhat subjective. The second fits the infant with sensor devices and observes their readings, but the wearable devices themselves interfere with the infant's movement to some extent, making the prediction inaccurate. The third uses a computer to extract the infant's motion features for pattern-recognition analysis, which does not interfere with movement and is objective; however, in existing work only a limited number of body parts are observed during feature extraction and recognition, whole-body movement is not analyzed, and the results are therefore limited in scope.
Because of these shortcomings, the existing methods struggle to achieve the desired results in practice, so improvement is needed.
Disclosure of Invention
The invention aims to provide an infant abnormal behavior detection method based on a conditional generative adversarial network (CGAN), combined with supervised SVM classification, to improve the accuracy of infant abnormal behavior detection.
An infant abnormal behavior detection method based on a conditional generative adversarial network and an SVM: a training sample library required for target tracking is constructed in advance, and the conditional generative adversarial network is used to track the infant's limbs and whole body; the training sample library contains the infant's labeled limbs and whole body. Motion trajectory information is extracted by wavelet approximation waveform and wavelet power spectrum analysis, and its features are classified by a support vector machine (SVM). The method comprises the following steps:
1.1 Acquire infant videos and apply uniform preprocessing;
1.2 Cut the infant video from step 1.1 into 15 s segments, name the segments uniformly, and also name the images converted into frames uniformly (a frame-extraction sketch follows this step);
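A minimal sketch of step 1.2 using OpenCV (an assumption; the patent does not name a tool): the video is cut into 15 s segments and each frame is written out under a uniform name. The paths and the naming pattern are illustrative.

```python
import os
import cv2

def split_into_frames(video_path, out_dir, segment_seconds=15):
    """Write frames as <out_dir>/segment_XXX/frame_XXXXX.png, one folder per 15 s segment."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0          # fall back if FPS metadata is missing
    frames_per_segment = int(round(fps * segment_seconds))
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        seg_dir = os.path.join(out_dir, f"segment_{frame_idx // frames_per_segment:03d}")
        os.makedirs(seg_dir, exist_ok=True)
        cv2.imwrite(os.path.join(seg_dir,
                                 f"frame_{frame_idx % frames_per_segment:05d}.png"), frame)
        frame_idx += 1
    cap.release()
```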
1.3 Infant motion trajectory tracking: for the frame images obtained in step 1.2, use a conditional generative adversarial network (CGAN) to track the motion trajectories of the infant's limbs and whole body, specifically comprising the following steps:
1.3.1 Construct the training sample library required for target tracking: label the infant's left hand, right hand, left leg, right leg, and whole body in the frame images obtained in step 1.2; the labeled limbs and whole body form the training sample library, which serves as the target data set input to the CGAN, with the corresponding label used as the condition Y;
1.3.2 Generative model design: randomly partition each frame image containing the infant to serve as the pseudo-target data set, and feed it together with the condition Y through convolutional layers into the discriminative model;
1.3.3 Discriminative model design: feed the target data set and the condition Y into the discriminative model to judge the limbs and whole body, then feed the pseudo-target data set into the discriminator and judge whether it is the target;
1.3.4 Judge whether the candidate is the target and compute the error so that it satisfies the following objectives:
optimizing D:

$$\max_D V(D,G) = \mathbb{E}_{x \sim p_{data}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$$

optimizing G:

$$\min_G V(D,G) = \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$$
wherein: v (D, G) represents a loss function; pdata (x) is the true sample distribution; pz (z) is the pseudo-sample distribution; d (x) represents the real sample data in the discriminator; d (g (z)) represents pseudo sample data in the discriminator; e represents expectation;
performing model parameter adjustment according to the optimization conditions, wherein parameters of the generated model G and the discrimination model D are shared;
1.3.5 if the error is too large, feeding back the error to the input of the generation model, reconstructing a pseudo target data set, judging again until the positions of the four limbs and the whole body of the baby in the pseudo target data set are found, and recording the positions and the motion tracks of the left hand, the right hand, the left leg, the right leg and the whole body of the baby in each frame;
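A minimal sketch, in PyTorch, of one adversarial update implementing the two objectives above. The `generator` and `discriminator` interfaces (a generator mapping a pseudo-target crop plus the condition Y to a candidate target, and a discriminator ending in a sigmoid) are illustrative assumptions, not the patent's exact networks.

```python
import torch
import torch.nn.functional as F

def cgan_training_step(generator, discriminator, opt_g, opt_d,
                       real_patches, pseudo_patches, cond_y):
    """One alternating update for the objectives of step 1.3.4.

    real_patches   : labeled target crops (limbs / whole body)
    pseudo_patches : random crops forming the pseudo-target data set
    cond_y         : condition vector encoding the body part
    The discriminator is assumed to end in a sigmoid, so its output lies in (0, 1).
    """
    # ---- update D: maximize E[log D(x)] + E[log(1 - D(G(z)))] ----
    opt_d.zero_grad()
    d_real = discriminator(real_patches, cond_y)
    d_fake = discriminator(generator(pseudo_patches, cond_y).detach(), cond_y)
    loss_d = F.binary_cross_entropy(d_real, torch.ones_like(d_real)) + \
             F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake))
    loss_d.backward()
    opt_d.step()

    # ---- update G: minimize E[log(1 - D(G(z)))], as written in the text ----
    opt_g.zero_grad()
    d_fake = discriminator(generator(pseudo_patches, cond_y), cond_y)
    loss_g = torch.log(1.0 - d_fake + 1e-8).mean()   # many implementations instead
    loss_g.backward()                                 # maximize log D(G(z)) for stability
    opt_g.step()
    return loss_d.item(), loss_g.item()
```

Repeating this update, and rebuilding the pseudo-target crops when the error remains large, corresponds to the feedback loop of step 1.3.5.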
1.4 Analyze the motion trajectory information: for the trajectories of the infant's limbs and whole body tracked in step 1.3, store the continuously changing y-axis coordinate positions during movement, and compute the wavelet approximation waveform and wavelet power spectrum of the continuous waveform formed by these y-axis positions, specifically comprising the following steps:
1.4.1 Because the x-axis coordinate changes little, only the y-axis coordinate change curve is selected for analysis. First perform wavelet approximation analysis: analyze the tracked waveform with the Haar wavelet to obtain the wavelet approximation waveform;
1.4.2 For the y-axis coordinate change curves of the limbs and whole body, obtain power spectrum information using a wavelet-based power spectrogram (a wavelet sketch follows this step);
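A minimal sketch of step 1.4.1, assuming the PyWavelets library: a five-level Haar decomposition (the "harr" wavelet named in the text, with the depth taken from section 3.1 below) whose coarsest approximation is reconstructed as the wavelet approximation waveform. `y_coords` is the per-frame y-coordinate trajectory of one body part.

```python
import numpy as np
import pywt

def haar_approximation(y_coords, level=5):
    """Mallat pyramid decomposition of a tracked y-coordinate trajectory:
    keep only the level-5 approximation coefficients and reconstruct."""
    y = np.asarray(y_coords, dtype=float)
    coeffs = pywt.wavedec(y, 'haar', level=level)
    approx_only = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
    # trim any padding so the smoothed curve matches the input length
    return pywt.waverec(approx_only, 'haar')[:len(y)]
```

For a 375-frame clip this returns a smoothed curve of the same length, whose samples can serve as the feature vector fed to the SVM in step 1.5.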
1.5 Extract feature vectors from the obtained wavelet approximation waveforms and wavelet power spectrograms, and train them with a support vector machine (SVM), specifically comprising the following steps:
1.5.1 Divide the samples into normal and abnormal for labeling, setting the normal sample label to 1 and the abnormal sample label to -1;
1.5.2 Divide the samples into a training set and a test set, normalize the data, and obtain the highest accuracy by tuning the SVM parameters c and g, thereby obtaining the optimal training model (a training sketch follows this step);
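A minimal sketch of step 1.5 using scikit-learn (an assumption; the patent does not name a library): normalize the feature vectors, then grid-search the RBF-kernel parameters C and gamma, which correspond to the "c" and "g" tuned in the text. The variable names and the parameter grid are illustrative.

```python
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_limb_svm(features, labels):
    """features: (n_samples, n_features) wavelet features for one body part;
    labels: +1 (normal) / -1 (abnormal)."""
    X_train, X_test, y_train, y_test = train_test_split(
        features, labels, test_size=0.3, stratify=labels, random_state=0)
    # normalize, then tune C ('c') and gamma ('g') of the RBF kernel
    model = make_pipeline(StandardScaler(), SVC(kernel='rbf'))
    grid = GridSearchCV(model,
                        {'svc__C': [0.1, 1, 10, 100],
                         'svc__gamma': [1e-3, 1e-2, 1e-1, 1]},
                        cv=5)
    grid.fit(X_train, y_train)
    acc = grid.score(X_test, y_test)   # per-part accuracy, used later to set the weights
    return grid.best_estimator_, acc, X_test, y_test
```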
1.6 Comprehensive judgment of abnormal infant behavior: according to the optimal training models obtained in step 1.5.2, set different weights for the different accuracies and perform a weighted judgment, specifically comprising the following steps:
1.6.1 For the SVM models trained on the wavelet approximation waveforms obtained in step 1.4.1, set different weight coefficients according to the different accuracies of the limbs and whole body, specifically: left upper limb A1: 0.35; right upper limb A2: 0.01; left lower limb A3: 0.2; right lower limb A4: 0.35; whole body A5: 0.09; the judgment result vectors of the limbs and whole body are denoted Y1 to Y5, computed as follows:
Y1=(test label+predict label)/2
where test label is the actual label of the test sample and predict label is the label predicted for the test sample; Y2 through Y5 are computed in the same way;
the five resulting vectors are weighted, as follows:
Y=0.35*Y1+0.01*Y2+0.2*Y3+0.35*Y4+0.09*Y5
where * denotes multiplication and Y is the judgment value predicted from the wavelet-waveform model; a judgment criterion is defined: if -1 < Y < -0.3 the infant's behavior is judged abnormal, if 0.3 < Y < 1 it is judged normal, and all other values are treated as judgment errors;
1.6.2 For the SVM models trained on the wavelet power spectra obtained in step 1.4.2, set different weight coefficients according to the different accuracies of the limbs and whole body, specifically: left upper limb P1: 0.35; right upper limb P2: 0.01; left lower limb P3: 0.35; right lower limb P4: 0.2; whole body P5: 0.09; the judgment result vectors of the limbs and whole body are denoted X1 to X5, computed as follows:
X1=(test label+predict label)/2
where test label is the actual label of the test sample and predict label is the label predicted for the test sample; X2 through X5 are computed in the same way;
the five resulting vectors are weighted, as follows:
X=0.35*X1+0.01*X2+0.35*X3+0.2*X4+0.09*X5
where * denotes multiplication and X is the judgment value predicted from the wavelet power spectrum; a judgment criterion is defined: if -1 < X < -0.3 the infant's behavior is judged abnormal, if 0.3 < X < 1 it is judged normal, and all other values are treated as judgment errors;
Finally, judge X and Y together: if a test sample satisfies at least one of the X and Y criteria, the judgment is taken as valid, and whether the infant's behavior is normal can be determined (a fusion sketch follows).
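A minimal sketch of the weighted fusion of step 1.6, using the stated weights; `test_labels` and `predict_labels` hold the five per-part labels (+1/-1) for one test sample, and the helper names are illustrative.

```python
import numpy as np

# weights from steps 1.6.1 (wavelet approximation, Y) and 1.6.2 (wavelet power spectrum, X),
# ordered: left upper limb, right upper limb, left lower limb, right lower limb, whole body
WEIGHTS_Y = np.array([0.35, 0.01, 0.2, 0.35, 0.09])
WEIGHTS_X = np.array([0.35, 0.01, 0.35, 0.2, 0.09])

def fused_score(test_labels, predict_labels, weights):
    """Per body part, Yk = (test label + predict label)/2, then the weighted sum."""
    parts = (np.asarray(test_labels) + np.asarray(predict_labels)) / 2.0
    return float(np.dot(weights, parts))

def judge(score):
    """Decision thresholds from steps 1.6.1 / 1.6.2."""
    if -1 < score < -0.3:
        return 'abnormal'
    if 0.3 < score < 1:
        return 'normal'
    return 'judgment error'

def combined_judgment(y_score, x_score):
    """Accept the result if at least one of the two criteria yields a decision;
    when both do, this sketch arbitrarily prefers the waveform-domain verdict Y."""
    y_verdict, x_verdict = judge(y_score), judge(x_score)
    return y_verdict if y_verdict != 'judgment error' else x_verdict
```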
The generative model design and discriminative model design of steps 1.3.2 and 1.3.3 specifically include the following steps:
2.1 Generative model design: 6 convolutional layers are used with the stride set to 1, and 6 pooling layers with a 2*2 pooling window; the network uses the rectified linear unit (ReLU) activation function, which gives good results and faster convergence. The operation is:
F(Z)=σ(W*Z+b)
where W is the convolution kernel, * denotes the convolution operation, Z is the feature vector, b is the bias, and σ is the ReLU activation function;
2.2 Discriminative model design: 5 convolutional layers are used with the stride set to 1, and 5 pooling layers with a 2*2 pooling window; the network likewise uses the ReLU activation function (a sketch of both convolutional backbones follows).
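A minimal sketch of the two convolutional stacks as described in 2.1 and 2.2 (stride-1 convolutions, 2*2 pooling windows, ReLU), written in PyTorch. The kernel size and channel widths are illustrative assumptions, since only the layer counts, stride, pooling size, and activation are specified; the output heads of the two models are omitted.

```python
import torch.nn as nn

def conv_pool_relu(in_ch, out_ch):
    # one block: stride-1 convolution, 2x2 pooling window, ReLU activation
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=1, padding=1),
        nn.MaxPool2d(kernel_size=2),
        nn.ReLU(inplace=True),
    )

def generator_backbone(in_ch=3, widths=(16, 32, 64, 128, 128, 128)):
    """6 convolutional layers + 6 pooling layers, as in step 2.1."""
    chans = (in_ch,) + widths
    return nn.Sequential(*[conv_pool_relu(chans[i], chans[i + 1]) for i in range(6)])

def discriminator_backbone(in_ch=3, widths=(16, 32, 64, 128, 128)):
    """5 convolutional layers + 5 pooling layers, as in step 2.2."""
    chans = (in_ch,) + widths
    return nn.Sequential(*[conv_pool_relu(chans[i], chans[i + 1]) for i in range(5)])
```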
The specific calculation of the wavelet approximation waveform and the wavelet power spectrogram in step 1.4 comprises the following steps:
3.1 Analyze the tracked waveform with the Haar wavelet: build a five-level pyramid with the Mallat pyramid decomposition algorithm of the discrete wavelet transform and extract the fifth-level wavelet approximation signal for the limbs and whole body, recorded respectively as: abnormal left upper limb: A01; abnormal right upper limb: A02; abnormal left lower limb: A03; abnormal right lower limb: A04; abnormal whole body: A05; normal left upper limb: A11; normal right upper limb: A12; normal left lower limb: A13; normal right lower limb: A14; normal whole body: A15;
3.2 For the y-axis coordinate change curves of the limbs and whole body, use a wavelet-based power spectrogram with the sampling length set to the total video frame length of 375, a sampling frequency of 1000, and a sampling interval of 1/1000; the resulting power spectrograms are recorded respectively as: abnormal left upper limb: P01; abnormal right upper limb: P02; abnormal left lower limb: P03; abnormal right lower limb: P04; abnormal whole body: P05; normal left upper limb: P11; normal right upper limb: P12; normal left lower limb: P13; normal right lower limb: P14; normal whole body: P15 (a sketch of this computation follows).
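A minimal sketch of the wavelet-based power spectrogram of section 3.2, again assuming PyWavelets: a continuous wavelet transform of the 375-sample trajectory at the stated sampling frequency of 1000, with the squared magnitude of the coefficients taken as power. The Morlet wavelet, the number of scales, and time-averaging of the scalogram are illustrative assumptions, since the patent does not specify how the spectrogram is formed.

```python
import numpy as np
import pywt

def wavelet_power_spectrum(y_coords, fs=1000.0, num_scales=64):
    """Continuous-wavelet power estimate of a y-coordinate trajectory
    (e.g. 375 samples at an assumed sampling frequency of 1000 Hz)."""
    scales = np.arange(1, num_scales + 1)
    coeffs, freqs = pywt.cwt(np.asarray(y_coords, dtype=float),
                             scales, 'morl', sampling_period=1.0 / fs)
    power = (np.abs(coeffs) ** 2).mean(axis=1)  # average power at each scale/frequency
    return freqs, power
```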
The invention adopts an infant abnormal behavior detection method based on a conditional generative adversarial network and an SVM. The acquired infant video is first preprocessed, then a conditional generative adversarial network (CGAN) is used to track the target motion trajectories of the infant's limbs and whole body in the video, and the resulting trajectory information is stored. The trajectory information is then processed with the wavelet transform: a sample set is built from the extracted wavelet approximation waveforms and trained with an SVM, and a second sample set is built from the wavelet power spectra of the trajectories and trained with an SVM. The two trained models are tested, and according to the difference in their accuracies, different weight parameters are set for a weighted judgment, yielding the optimal training result.
The invention detects motion trajectories from both the infant's limbs and whole body, so the detected information is more comprehensive than single-limb detection; semi-supervised learning with the CGAN makes trajectory tracking more accurate, and combining the wavelet domain with the power-spectrum domain makes the features more discriminative. An SVM then classifies the features and the detection results are weighted before judgment, reducing the false-detection rate. Detecting whether infant behavior is abnormal so that intervention can begin as early as possible is of great significance for preventing diseases such as infant cerebral palsy.
Drawings
FIG. 1 is a flow chart of the infant abnormal behavior detection method based on a conditional generative adversarial network and SVM
FIG. 2 is a flow chart for tracking a motion trajectory using CGAN
FIG. 3 is an image of single frame infant left upper limb tracking
FIG. 4 is a y-axis motion trace image of a detected baby
FIG. 5 is a diagram of approximate wavelet waveform obtained by wavelet transform
FIG. 6 is a schematic diagram of wavelet power spectrum
FIG. 7 is a flowchart for determining whether baby behavior is abnormal
Detailed Description
The following describes the implementation process of the present invention with reference to the attached drawings.
The infant abnormal behavior detection method based on a conditional generative adversarial network and SVM follows the overall flow shown in FIG. 1 and comprises the following steps:
1. Acquire an infant video and apply uniform preprocessing.
2. Cut the infant video from step 1 into 15 s segments, name the segments uniformly, and also name the images converted into frames uniformly.
3. Track the infant's motion trajectories: for the frame images obtained in step 2, use the conditional generative adversarial network CGAN to track the motion trajectories of the infant's limbs and whole body; the flow chart is shown in FIG. 2. This step specifically includes:
3.1 Construct the training sample library required for target tracking: label the infant's left hand, right hand, left leg, right leg, and whole body in the frame images obtained in step 2; the labeled limbs and whole body form the training sample library, which serves as the target data set input to the CGAN, with the corresponding label used as the condition Y;
3.2 Generative model design: randomly partition each frame image containing the infant to serve as the pseudo-target data set, and feed it together with the condition Y through convolutional layers into the discriminative model;
3.3 Discriminative model design: feed the target data set and the condition Y into the discriminative model to judge the limbs and whole body, then feed the pseudo-target data set into the discriminator and judge whether it is the target;
3.4 Judge whether the candidate is the target and compute the error so that it satisfies the following objectives:
optimizing D:

$$\max_D V(D,G) = \mathbb{E}_{x \sim p_{data}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$$

optimizing G:

$$\min_G V(D,G) = \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$$
wherein: v (D, G) represents a loss function; pdata (x) is the true sample distribution; pz (z) is the pseudo-sample distribution; d (x) represents the real sample data in the discriminator; d (g (z)) represents pseudo sample data in the discriminator; e represents expectation.
The purpose of the formula is to minimize the error of the generated model to make the generated false target as true as possible, i.e. to find the target position as possible and to maximize the error of the discriminant model.
Performing model parameter adjustment according to the optimization conditions, wherein parameters of the generated model G and the discrimination model D are shared;
3.5 if the error is too large, feeding back the error to the input of the generation model, reconstructing the pseudo target data set, judging again until the positions of the limbs and the whole body of the baby in the pseudo target data set are found, as shown in fig. 3, recording the positions and the motion tracks of the left hand, the right hand, the left leg, the right leg and the whole body of the baby in each frame, wherein the image is a single frame of image tracked by the left upper limb of the baby.
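A minimal sketch of turning the per-frame tracking results of step 3.5 into the y-coordinate signals analyzed in step 4; the per-frame result format (one bounding box per body part) and the helper names are illustrative assumptions.

```python
import numpy as np

BODY_PARTS = ('left_hand', 'right_hand', 'left_leg', 'right_leg', 'whole_body')

def y_trajectories(tracked_frames):
    """tracked_frames: list over frames of dicts mapping each body part to its
    bounding box (x, y, w, h); returns one y-coordinate signal per body part,
    using the vertical box centre since only the y-axis change is analyzed (step 4.1)."""
    signals = {part: [] for part in BODY_PARTS}
    for frame in tracked_frames:
        for part in BODY_PARTS:
            x, y, w, h = frame[part]
            signals[part].append(y + h / 2.0)   # vertical centre of the tracked box
    return {part: np.asarray(vals, dtype=float) for part, vals in signals.items()}
```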
4. Analyze the motion trajectory information: for the trajectories of the infant's limbs and whole body tracked in step 3, store the continuously changing y-axis coordinate positions during movement (as shown in FIG. 4), and compute the wavelet approximation waveform and wavelet power spectrum of the continuous waveform formed by these y-axis positions, specifically comprising the following steps:
4.1 Because the x-axis coordinate changes little, only the y-axis coordinate change curve is selected for analysis. First perform wavelet analysis: analyze the tracked waveform with the Haar wavelet to obtain the wavelet approximation waveform;
4.2 Obtain the power spectrum information of the y-axis coordinate change curves of the limbs and whole body using a wavelet-based power spectrogram.
5. Extract feature vectors from the obtained wavelet approximation waveforms and wavelet power spectrograms, and train them with a support vector machine (SVM), specifically comprising the following steps:
5.1 Divide the samples into normal and abnormal for labeling, setting the normal sample label to 1 and the abnormal sample label to -1;
5.2 Divide the samples into a training set and a test set, normalize the data, and obtain the highest accuracy by tuning the SVM parameters c and g, thereby obtaining the optimal training model.
6. Comprehensive judgment of abnormal infant behavior: according to the optimal training models obtained in step 5.2, set different weights for the different accuracies and perform a weighted judgment, specifically comprising the following steps:
6.1 For the SVM models trained on the wavelet approximation waveforms obtained in step 4.1, set different weight coefficients according to the different accuracies of the limbs and whole body, specifically: left upper limb A1: 0.35; right upper limb A2: 0.01; left lower limb A3: 0.2; right lower limb A4: 0.35; whole body A5: 0.09; the judgment result vectors of the limbs and whole body are denoted Y1 to Y5, computed as follows:
Y1=(test label+predict label)/2
where test label is the actual label of the test sample and predict label is the label predicted for the test sample; Y2 through Y5 are computed in the same way;
the five resulting vectors are weighted, as follows:
Y=0.35*Y1+0.01*Y2+0.2*Y3+0.35*Y4+0.09*Y5
where * denotes multiplication and Y is the judgment value predicted from the wavelet-waveform model; a judgment criterion is defined: if -1 < Y < -0.3 the infant's behavior is judged abnormal, if 0.3 < Y < 1 it is judged normal, and all other values are treated as judgment errors.
6.2 For the SVM models trained on the wavelet power spectra obtained in step 4.2, set different weight coefficients according to the different accuracies of the limbs and whole body, specifically: left upper limb P1: 0.35; right upper limb P2: 0.01; left lower limb P3: 0.35; right lower limb P4: 0.2; whole body P5: 0.09; the judgment result vectors of the limbs and whole body are denoted X1 to X5, computed as follows:
X1=(test label+predict label)/2
where test label is the actual label of the test sample and predict label is the label predicted for the test sample; X2 through X5 are computed in the same way.
The five resulting vectors are weighted, as follows:
X=0.35*X1+0.01*X2+0.35*X3+0.2*X4+0.09*X5
where * denotes multiplication and X is the judgment value predicted from the wavelet power spectrum; a judgment criterion is defined: if -1 < X < -0.3 the infant's behavior is judged abnormal, if 0.3 < X < 1 it is judged normal, and all other values are treated as judgment errors.
Finally, judge X and Y together (the flow chart is shown in FIG. 7): if the test sample satisfies at least one of the X and Y criteria, the judgment is taken as valid, and whether the infant's behavior is normal can be determined.
The generative model design and discriminative model design of the invention comprise the following steps:
Step A1, generative model design: 6 convolutional layers are used with the stride set to 1, and 6 pooling layers with a 2*2 pooling window; the network uses the rectified linear unit (ReLU) activation function, which gives good results and faster convergence. The operation is:
F(Z)=σ(W*Z+b)
wherein: w is the convolution kernel; is a convolution operation; z is a feature vector; b is an offset; σ is a ReLU activation function;
step A2, design of a discriminant model: wherein 5 layers of convolution layers are arranged, and the step length is set as 1; 5 layers of pooling layers, the size of the pooling window being 2 x 2; applied in the network is a correcting Linear unit ReLU (rectified Linear Unit) activation function.
The specific calculation of the wavelet approximate oscillogram and the wavelet power spectrogram in the invention comprises the following steps:
step B1, analyzing the tracked oscillogram by using hart wavelets, constructing a five-layer pyramid according to the Mallat pyramid decomposition algorithm of discrete wavelet transform, extracting a fifth-layer wavelet approximate waveform (as shown in fig. 5) corresponding to four limbs and the whole body, and respectively recording as: abnormal left upper limb: a01; abnormal right upper limb: a02; abnormal left lower limb: a03; abnormal right lower limb: a04; abnormal whole body: a05; normal left upper limb: a11; normal right upper limb: a12; normal left lower limb: a13; normal right lower limb: a14; normal whole body: A15.
step B2, for the y-axis coordinate change diagrams of limbs and the whole body, using a wavelet-based power spectrogram, wherein the set sampling length is the video total frame length 375, the sampling frequency is 1000, and the sampling interval is 1/1000, and the obtained power spectrograms (as shown in fig. 6) are respectively recorded as: abnormal left upper limb: p01; abnormal right upper limb: p02; abnormal left lower limb: p03; abnormal right lower limb: p04; abnormal whole body: p05; normal left upper limb: p11; normal right upper limb: p12; normal left lower limb: p13; normal right lower limb: p14; normal whole body: p15.

Claims (3)

1. An infant abnormal behavior detection method based on a conditional generative adversarial network and an SVM, characterized in that a training sample library required for target tracking is constructed in advance and the conditional generative adversarial network is used to track the infant's limbs and whole body, the training sample library comprising the infant's labeled limbs and whole body; motion trajectory information is extracted by wavelet approximation waveform and wavelet power spectrum analysis, and its features are then classified by a support vector machine SVM, comprising the following steps:

1.1 Acquire infant videos and apply uniform preprocessing;

1.2 Cut the infant video from step 1.1 into 15 s segments, name the segments uniformly, and also name the images converted into frames uniformly;

1.3 Infant motion trajectory tracking: for the frame images obtained in step 1.2, use the conditional generative adversarial network CGAN to track the motion trajectories of the infant's limbs and whole body, specifically comprising the following steps:

1.3.1 Construct the training sample library required for target tracking: label the infant's left hand, right hand, left leg, right leg, and whole body in the frame images obtained in step 1.2; the labeled limbs and whole body parts form the training sample library, which serves as the target data set input to the CGAN, with the corresponding label used as the condition Y;

1.3.2 Generative model design: randomly partition each frame image containing the infant to serve as the pseudo-target data set, and feed it together with the condition Y through convolutional layers into the discriminative model;

1.3.3 Discriminative model design: feed the target data set and the condition Y into the discriminative model to judge the limbs and whole body, then feed the pseudo-target data set into the discriminator and judge whether it is the target;

1.3.4 Judge whether the candidate is the target and compute the error so that it satisfies the following objectives:

optimizing D:

$$\max_D V(D,G) = \mathbb{E}_{x \sim p_{data}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$$

optimizing G:

$$\min_G V(D,G) = \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$$

where V(D, G) denotes the loss function; p_data(x) is the real sample distribution; p_z(z) is the pseudo-sample distribution; D(x) is the discriminator's output on real sample data; D(G(z)) is its output on pseudo-sample data; and E denotes expectation;

adjust the model parameters according to these optimization objectives, where the parameters of the generative model G and the discriminative model D are shared;

1.3.5 If the error is too large, feed it back to the input of the generative model, rebuild the pseudo-target data set, and judge again until the positions of the infant's limbs and whole body are found in the pseudo-target data set; record the positions and motion trajectories of the infant's left hand, right hand, left leg, right leg, and whole body in each frame;

1.4 Analyze the motion trajectory information: for the trajectories of the infant's limbs and whole body tracked in step 1.3, store the continuously changing y-axis coordinate positions during movement, and compute the wavelet approximation waveform and wavelet power spectrum of the continuous waveform formed by these y-axis positions, specifically comprising the following steps:

1.4.1 Because the x-axis coordinate changes little, only the y-axis coordinate change curve is selected for analysis. First perform wavelet approximation analysis: analyze the tracked waveform with the Haar wavelet to obtain the wavelet approximation waveform;

1.4.2 For the y-axis coordinate change curves of the limbs and whole body, obtain power spectrum information using a wavelet-based power spectrogram;

1.5 Extract feature vectors from the obtained wavelet approximation waveforms and wavelet power spectrograms, and train them with a support vector machine SVM, specifically comprising the following steps:

1.5.1 Divide the samples into normal and abnormal for labeling, setting the normal sample label to 1 and the abnormal sample label to -1;

1.5.2 Divide the samples into a training set and a test set, normalize the data, and obtain the highest accuracy by tuning the SVM parameters c and g, thereby obtaining the optimal training model;

1.6 Comprehensive judgment of abnormal infant behavior: according to the optimal training models obtained in step 1.5.2, set different weights for the different accuracies and perform a weighted judgment, specifically comprising the following steps:

1.6.1 For the SVM models trained on the wavelet approximation waveforms obtained in step 1.4.1, set different weight coefficients according to the different accuracies of the limbs and whole body, specifically: left upper limb A1: 0.35; right upper limb A2: 0.01; left lower limb A3: 0.2; right lower limb A4: 0.35; whole body A5: 0.09; the judgment result vectors of the limbs and whole body are denoted Y1 to Y5, computed as follows:

Y1 = (test label + predict label)/2

where test label is the actual label of the test sample and predict label is the label predicted for the test sample; Y2 through Y5 are computed in the same way;

the five resulting vectors are weighted as follows:

Y = 0.35*Y1 + 0.01*Y2 + 0.2*Y3 + 0.35*Y4 + 0.09*Y5

where * denotes multiplication and Y is the judgment value predicted from the wavelet-waveform model; a judgment criterion is defined: if -1 < Y < -0.3 the infant's behavior is judged abnormal, if 0.3 < Y < 1 it is judged normal, and all other values are treated as judgment errors;

1.6.2 For the SVM models trained on the wavelet power spectra obtained in step 1.4.2, set different weight coefficients according to the different accuracies of the limbs and whole body, specifically: left upper limb P1: 0.35; right upper limb P2: 0.01; left lower limb P3: 0.35; right lower limb P4: 0.2; whole body P5: 0.09; the judgment result vectors of the limbs and whole body are denoted X1 to X5, computed as follows:

X1 = (test label + predict label)/2

where test label is the actual label of the test sample and predict label is the label predicted for the test sample; X2 through X5 are computed in the same way;

the five resulting vectors are weighted as follows:

X = 0.35*X1 + 0.01*X2 + 0.35*X3 + 0.2*X4 + 0.09*X5

where * denotes multiplication and X is the judgment value predicted from the wavelet power spectrum; a judgment criterion is defined: if -1 < X < -0.3 the infant's behavior is judged abnormal, if 0.3 < X < 1 it is judged normal, and all other values are treated as judgment errors;

judge X and Y together: if the test sample satisfies at least one of the X and Y criteria, the judgment is taken as valid, and whether the infant's behavior is normal can be determined.

2. The infant abnormal behavior detection method based on a conditional generative adversarial network and an SVM according to claim 1, characterized in that the generative model design and discriminative model design of steps 1.3.2 and 1.3.3 specifically comprise the following steps:

2.1 Generative model design: 6 convolutional layers are used with the stride set to 1, and 6 pooling layers with a 2*2 pooling window; the network uses the rectified linear unit (ReLU) activation function, and the operation is:

F(Z) = σ(W*Z + b)

where W is the convolution kernel, * denotes the convolution operation, Z is the feature vector, b is the bias, and σ is the ReLU activation function;

2.2 Discriminative model design: 5 convolutional layers are used with the stride set to 1, and 5 pooling layers with a 2*2 pooling window; the network likewise uses the ReLU activation function.

3. The infant abnormal behavior detection method based on a conditional generative adversarial network and an SVM according to claim 1, characterized in that the specific calculation of the wavelet approximation waveform and the wavelet power spectrogram in step 1.4 comprises the following steps:

3.1 Analyze the tracked waveform with the Haar wavelet: build a five-level pyramid with the Mallat pyramid decomposition algorithm of the discrete wavelet transform and extract the fifth-level wavelet approximation signal for the limbs and whole body, recorded respectively as: abnormal left upper limb: A01; abnormal right upper limb: A02; abnormal left lower limb: A03; abnormal right lower limb: A04; abnormal whole body: A05; normal left upper limb: A11; normal right upper limb: A12; normal left lower limb: A13; normal right lower limb: A14; normal whole body: A15;

3.2 For the y-axis coordinate change curves of the limbs and whole body, use a wavelet-based power spectrogram with the sampling length set to the total video frame length of 375, a sampling frequency of 1000, and a sampling interval of 1/1000; the resulting power spectrograms are recorded respectively as: abnormal left upper limb: P01; abnormal right upper limb: P02; abnormal left lower limb: P03; abnormal right lower limb: P04; abnormal whole body: P05; normal left upper limb: P11; normal right upper limb: P12; normal left lower limb: P13; normal right lower limb: P14; normal whole body: P15.
CN201811494749.0A 2018-12-07 2018-12-07 A Conditional Generative Adversarial Network and SVM-Based Approach for Infant Abnormal Behavior Detection Expired - Fee Related CN109620244B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811494749.0A CN109620244B (en) 2018-12-07 2018-12-07 A Conditional Generative Adversarial Network and SVM-Based Approach for Infant Abnormal Behavior Detection


Publications (2)

Publication Number Publication Date
CN109620244A CN109620244A (en) 2019-04-16
CN109620244B true CN109620244B (en) 2021-07-30

Family

ID=66071933

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811494749.0A Expired - Fee Related CN109620244B (en) 2018-12-07 2018-12-07 A Conditional Generative Adversarial Network and SVM-Based Approach for Infant Abnormal Behavior Detection

Country Status (1)

Country Link
CN (1) CN109620244B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109615019B (en) * 2018-12-25 2022-05-31 吉林大学 Abnormal behavior detection method based on space-time automatic encoder
CN110236558A (en) * 2019-04-26 2019-09-17 平安科技(深圳)有限公司 Method, device, storage medium and electronic equipment for predicting infant development
CN110414306B (en) * 2019-04-26 2022-07-19 吉林大学 Baby abnormal behavior detection method based on meanshift algorithm and SVM
CN111985269B (en) * 2019-05-21 2024-12-03 顺丰科技有限公司 Detection model construction method, detection method, device, server and medium
CN110277170A (en) * 2019-06-28 2019-09-24 天津奥漫优悦科技有限公司 A kind of all-around exercises assessment system based on AI artificial intelligence
US11596807B2 (en) * 2019-11-25 2023-03-07 Accuray Incorporated Partial deformation maps for reconstructing motion-affected treatment dose
CN111353995B (en) * 2020-03-31 2023-03-28 成都信息工程大学 Cervical single cell image data generation method based on generation countermeasure network
TR202019150A1 (en) * 2020-11-27 2022-06-21 Ondokuz Mayis Ueniversitesi Cerebral palsy detection system in newborn infants.
CN113197570B (en) * 2021-05-07 2022-10-25 重庆大学 A knee-crawling motion posture analysis system for infants and young children to assist in the diagnosis of cerebral palsy
CN114358194B (en) * 2022-01-07 2024-11-19 吉林大学 Abnormal limb behavior detection method for autism spectrum disorder based on posture tracking
CN114999648B (en) * 2022-05-27 2023-03-24 浙江大学医学院附属儿童医院 Early screening system, equipment and storage medium for cerebral palsy based on baby dynamic posture estimation

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7539532B2 (en) * 2006-05-12 2009-05-26 Bao Tran Cuffless blood pressure monitoring appliance
US9232912B2 (en) * 2010-08-26 2016-01-12 The Regents Of The University Of California System for evaluating infant movement using gesture recognition
US11074495B2 (en) * 2013-02-28 2021-07-27 Z Advanced Computing, Inc. (Zac) System and method for extremely efficient image and pattern recognition and artificial intelligence platform
US9700222B2 (en) * 2011-12-02 2017-07-11 Lumiradx Uk Ltd Health-monitor patch
CN104083173B (en) * 2014-07-04 2015-11-18 吉林大学 A kind of quadruped movement observations system
US20160095524A1 (en) * 2014-10-04 2016-04-07 Government Of The United States, As Represented By The Secretary Of The Air Force Non-Contact Assessment of Cardiovascular Function using a Multi-Camera Array
CN104765959A (en) * 2015-03-30 2015-07-08 燕山大学 Computer vision based evaluation method for general movement of baby
WO2016164373A1 (en) * 2015-04-05 2016-10-13 Smilables Inc. Wearable infant monitoring device and system for determining the orientation and motions of an infant
CN105426919B (en) * 2015-11-23 2017-11-14 河海大学 The image classification method of non-supervisory feature learning is instructed based on conspicuousness
CN106230959A (en) * 2016-08-09 2016-12-14 西安科技大学 A kind of intelligence baby monitoring system
CN107679462B (en) * 2017-09-13 2021-10-19 哈尔滨工业大学深圳研究生院 A wavelet-based deep multi-feature fusion classification method
CN107684430A (en) * 2017-09-29 2018-02-13 上海市上海中学 Correcting device and its application method are detected based on Curie modules human body attitude
CN108764281A (en) * 2018-04-18 2018-11-06 华南理工大学 A kind of image classification method learning across task depth network based on semi-supervised step certainly
CN108836342A (en) * 2018-04-19 2018-11-20 北京理工大学 It is a kind of based on inertial sensor without feature human motion identification method
CN108567431A (en) * 2018-05-14 2018-09-25 浙江大学 A kind of intelligent sensing boots for measuring body gait and leg speed
CN108805188B (en) * 2018-05-29 2020-08-21 徐州工程学院 An Image Classification Method Based on Feature Recalibration Generative Adversarial Networks
CN108703760A (en) * 2018-06-15 2018-10-26 安徽中科智链信息科技有限公司 Human motion gesture recognition system and method based on nine axle sensors

Also Published As

Publication number Publication date
CN109620244A (en) 2019-04-16


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20210730

Termination date: 20211207