
CN118072394A - Gait recognition-based motion-obstructing disease classification method - Google Patents


Info

Publication number
CN118072394A
CN118072394A (application number CN202410235583.XA)
Authority
CN
China
Prior art keywords
gait
convolution
motion
channel
joint point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410235583.XA
Other languages
Chinese (zh)
Inventor
陈艳华
雷斯越
龙邹荣
陈家权
陈湘
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Seventh People's Hospital
Original Assignee
Chongqing Seventh People's Hospital
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Seventh People's Hospital filed Critical Chongqing Seventh People's Hospital
Priority to CN202410235583.XA priority Critical patent/CN118072394A/en
Publication of CN118072394A publication Critical patent/CN118072394A/en
Pending legal-status Critical Current

Landscapes

  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention provides a movement disorder classification method based on gait recognition, comprising the following steps. S1: collecting images and video of the subject from multi-angle cameras and constructing a training data set; S2: inputting the data into a human pose estimation network to obtain human skeleton joint point data; S3: processing the human skeleton joint point sequence data obtained in S2 — first normalizing it, plotting joint point coordinate curves over time, and finally converting the curves into the data format required by the action recognition network to construct a new training data set; S4: constructing a posture classification neural network; S5: collecting the patient's joint point data in real time, calculating gait parameters and displaying them. The invention uses a multi-view motion capture system to construct a gait data set of movement disorder diseases and a pose estimation network based on human joint points, and provides assisted classification for diseases that present movement disorders.

Description

Gait recognition-based motion-obstructing disease classification method
Technical Field
The invention relates to the field of computer vision, and in particular to a method for classifying movement disorder diseases based on gait recognition.
Background
Research shows that many diseases present clinical manifestations of dyskinesia, such as abnormal gait and balance disorders, owing to decline of cognitive and other functions after onset. Early accurate diagnosis and objective measurement of disease severity are critical to developing personalized treatment plans aimed at slowing or stopping progression, but for a variety of reasons patients are often not diagnosed and treated in time. In Parkinson's disease, PD typically produces slowed movement, known as bradykinesia, and stiffness, known as muscle rigidity, both of which are visible in the patient's gait and general posture. Early Parkinson's disease, however, often manifests only as a subclinical reduction in flexibility and is difficult to detect. Although Parkinson's disease has a low mortality rate, its disability rate is high and it greatly affects patients' daily lives. Other brain injury diseases, such as cerebral stroke, ataxia and cerebral palsy, also produce characteristic abnormal gaits, such as swaying while walking, festinating gait, scissors gait and waddling (duck) gait. If abnormal gait is detected early and correctly classified according to its characteristics, early treatment and optimized gait-correction training can help the physician restore the patient's walking function.
Currently, clinicians quantify these impairments with standardized rating scales such as the MDS-UPDRS, comprehensively evaluating a patient's neurological and functional impairment and gait abnormalities by observing and scoring how well the patient performs prescribed actions. This judgment method depends on the experience of domain experts and is often subjective, and it is usually applied only after the patient has already noticed his own gait abnormality and sought a diagnosis. Previous work aiming at objective evaluation of a patient's motor impairment has relied on expensive wearable professional devices that interfere with the patient's normal movement.
Disclosure of Invention
The invention aims to solve the above problems of the prior art and provide a gait recognition-based classification method to assist the classification of diseases that present movement disorders.
In order to achieve the above purpose, the present invention adopts the following technical scheme. A method for classifying movement disorder diseases based on gait recognition comprises the following steps:
S1: acquiring images and videos of multi-angle cameras of a person to be tested in a determined field, preprocessing the images and videos to obtain videos and images containing a complete gait cycle of a tested target, and constructing a training data set;
S2: inputting the preprocessed video and the preprocessed image into a human body posture estimation network to obtain human body skeleton joint point data containing human body activity information;
S3: processing the human skeleton joint point sequence data obtained in the step S2, firstly carrying out normalization processing on the human skeleton joint point sequence data, drawing a joint point coordinate information curve according to a time sequence, processing data such as wrong detection, missed detection and the like of left and right joint points, and finally processing the data into a proper data format according to the requirement of an action recognition network to construct a new training data set;
S4: constructing a posture classification neural network and training it on the data set obtained in step S3; the posture classification network outputs a probability value for each action, and the action type with the highest probability is the recognition result;
S5: the method comprises the steps of collecting joint point data of a patient in real time, calculating gait parameters and displaying the gait parameters.
Further, the pictures and videos of the subject's walking posture acquired in the test area in S1 need to include the typical pathological gaits exhibited by patients with ataxia, Parkinson's disease, cerebral stroke, cerebral palsy and gluteus medius weakness, as well as the gait of normal walking.
Further, in S3, normalization is performed on the coordinate point data obtained in S2, and curves of the x-axis coordinate are drawn, with the left and right shoulders as one group, the left and right ankles as one group, and the left and right eyes as one group.
Further, the posture classification network TRMGCN in S4 consists of a channel reconstruction unit, three TRM-GC units, and a downsampling unit between each pair of TRM-GC units.
Further, the channel reconstruction unit comprises a channel adaptation part and a recombination part; the channel adaptation part consists of a 4×1 2D convolution and layer normalization, and the recombination part comprises feature reconstruction and group normalization.
Further, the TRM-GC unit includes a time-dimension convolution, layer normalization, a 1×1 2D convolution, and a space-dimension convolution.
Further, the time-dimension convolution is divided into channel division, channel transformation and channel fusion parts. Channel division splits the features into two parts by a fixed ratio, each followed by a 1×1 2D convolution; channel transformation applies point convolution and group convolution respectively to the two parts output by channel division; channel fusion includes global pooling and soft-attention-guided feature fusion.
Further, the space-dimension convolution obtains graph features from the features containing human activity information using a multi-graph independent convolution structure, divided into a dynamic temporal-specific learning graph convolution and a global temporal learning convolution. An attention module is added to the global temporal learning convolution: an attention guide module S_t is obtained by inference, passed through a sigmoid function and multiplied point-wise with the original features to perform channel attention and obtain a 2D spatial map, and the 2D spatial map then performs spatial attention to obtain the final output features.
Further, the downsampling unit is added before the TRM-GC module at each level transition and consists of a convolution with kernel size and stride of 2×1.
Further, the gait parameters calculated from the human joint points in S5 include: pace, cadence, step length, step width, stance and swing times, and the joint-angle trend.
Further, the walking process of the subject is divided into four trigger events: forefoot ground-contact and forefoot-off events, and heel ground-contact and heel-off events. The pace can be calculated from the time the subject spends on one gait cycle, i.e. the time between two successive same-side heel ground-contact events; the cadence can be calculated from the number of gait cycles experienced per unit time, i.e. cadence = number of cycles / unit time; the step length can be calculated from the longitudinal straight-line distance between the two points at which the two sides successively trigger a forefoot ground-contact event (or a heel ground-contact event); the step width is calculated from the straight-line distance between the two legs with the coronal plane as reference; the stance time can be calculated from the time difference between the heel ground-contact event and the forefoot-off event of the same foot; the swing time can be calculated from the time difference between a forefoot-off event and the next heel ground-contact event of the same foot; the angle trend can be calculated from the joint angle with the knee joint as vertex during walking.
Compared with the prior art, the application has the following beneficial effects: the application provides a non-invasive and extensible movement disorder classification method based on gait recognition, in which a multi-view motion capture system is used to construct a gait data set of movement disorder diseases and a pose estimation network based on human joint points is constructed to assist in classifying diseases that present movement disorders.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention.
Drawings
FIG. 1 is a flow chart of the present application;
FIG. 2 is a view of the scene acquisition setup in step S1;
FIG. 3 is a block diagram of the posture classification network TRMGCN;
FIG. 4 is a diagram of a space-dimension convolution block;
FIG. 5 is a graph of left and right ankle joint motion.
Detailed Description
In order to make the technical means, creative features, objectives and effects of the present invention clearer and easier to understand, the invention is further described below with reference to the accompanying drawings and specific embodiments:
The invention provides a movement disorder classification method based on gait recognition, comprising the following steps. S1: acquiring images and video of the subject from multi-angle cameras in a fixed test area, preprocessing them to obtain video and images containing a complete gait cycle of the subject, and constructing a training data set. The whole acquisition setup is shown in fig. 2. Adhesive tape is stuck on the floor of the room to indicate the test area to the subject: one strip every 1 m, over an interval of 4 m. Based on simulation tests by the staff, the cameras (b in fig. 2) are placed in front of, at the upper-left corner of, and at the lower-left corner of the test area, ensuring that the three cameras together capture the complete walking posture of the subject within the 4 m span. To guarantee complete footage and ease subsequent video processing, the control device (a in fig. 2) triggers simultaneous recording by the cameras, and the video content is stored in the cameras for the next processing stage. Before formal recording, a band can be wrapped around the subject's knees at the position shown in fig. 2, ensuring that the knees are clearly visible while walking and are not blocked by clothing.
Images and video of the walking subject are acquired from the multi-angle cameras in the fixed test area and preprocessed to obtain video and images containing a complete gait cycle of the subject, and a training data set is constructed that includes the dyskinetic gaits of patients with cerebral stroke, ataxia, cerebral palsy, Parkinson's disease and gluteus medius weakness. S2: inputting the preprocessed video and images into a human pose estimation network to obtain human skeleton joint point data containing human activity information. The pose estimation network extracts the human joint coordinate data containing activity information from the image sequence; the posture classification network then further processes the coordinate sequence features, performs feature extraction and fusion, and outputs a predicted probability value for each action, the highest probability being the predicted action. The acquired raw image sequences and videos are preprocessed: first, image sequences in which no person is detected, or in which too much of the subject's body is missing from the frame, are removed; the remainder is then segmented into sub-segments while ensuring that each segment contains at least one complete gait cycle. A complete gait cycle includes a stance phase and a swing phase: the stance phase corresponds to the duration between heel strike and toe off of the same foot, and the swing phase starts at toe off and ends at heel contact of the same foot. The human body in the image sequence is identified and a rectangular bounding box is drawn in the frame; the box exhibits a "contract-expand-contract" change over the stance and swing periods, and when the bounding box has gone through one complete change process the subject in that segment can be considered to have passed through one complete gait cycle. S3: processing the human skeleton joint point sequence data obtained in step S2: first normalizing it, plotting joint point coordinate curves over time, correcting mis-detected and missed left and right joint points, and finally converting the data into the format required by the action recognition network to construct a new training data set. S4: constructing a posture classification neural network and training it on the data set obtained in step S3; the posture classification network outputs a probability value for each action, and the action type with the highest probability is the recognition result. S5: collecting the patient's joint point data in real time, calculating gait parameters and displaying them.
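The "contract-expand-contract" change of the bounding box described above can be detected automatically. As a minimal sketch (the function name, the minima-based heuristic and the `min_separation` parameter are illustrative assumptions, not the patent's exact rule), successive local minima of the box width can delimit one candidate gait cycle:

```python
import numpy as np

def gait_cycle_spans(box_widths, min_separation=10):
    """Find index spans covering one full 'contract-expand-contract' cycle
    of the subject's bounding-box width (hypothetical heuristic)."""
    w = np.asarray(box_widths, dtype=float)
    # local minima: frames where the box is narrower than both neighbours
    minima = [i for i in range(1, len(w) - 1) if w[i] < w[i - 1] and w[i] < w[i + 1]]
    # keep minima far enough apart to count as distinct "contract" phases
    kept = []
    for i in minima:
        if not kept or i - kept[-1] >= min_separation:
            kept.append(i)
    # consecutive minima delimit candidate gait cycles
    return [(a, b) for a, b in zip(kept, kept[1:])]
```

On a smoothly oscillating width signal this returns one span per full oscillation; real box widths would need smoothing first.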
The pictures and videos of subjects walking in the test area acquired in step S1 need to include the typical pathological gaits presented by patients with ataxia, Parkinson's disease, cerebral stroke, cerebral palsy and gluteus medius weakness. In S2, the image sequence processed in S1 is input into the human pose estimation network for recognition, and coordinate point data containing human activity information is output. Each coordinate point output in S2 is a triple (x, y, score), where x is the joint point's value on the x-axis of the frame's pixel coordinates, y is its value on the y-axis, and score is the confidence of the point output by the pose estimation network — the higher the confidence, the more accurate the predicted joint position. The coordinate sequence data output by S2 comprises 17 human skeleton joint points, specifically: nose, left eye, right eye, left ear, right ear, left shoulder, right shoulder, left elbow, right elbow, left wrist, right wrist, left hip, right hip, left knee, right knee, left ankle and right ankle;
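The 17-joint output with per-point confidence can be represented directly. A small sketch (the `min_score` threshold and function name are illustrative assumptions; the joint ordering is the one listed above):

```python
# The 17 skeleton joint points in the order given in the description;
# each network output point is a (x, y, score) triple.
JOINTS_17 = ["nose", "left_eye", "right_eye", "left_ear", "right_ear",
             "left_shoulder", "right_shoulder", "left_elbow", "right_elbow",
             "left_wrist", "right_wrist", "left_hip", "right_hip",
             "left_knee", "right_knee", "left_ankle", "right_ankle"]

def reliable_points(frame_points, min_score=0.3):
    """Keep only joints whose confidence passes a threshold
    (threshold value is an assumption, not from the patent)."""
    return {name: (x, y) for name, (x, y, s)
            in zip(JOINTS_17, frame_points) if s >= min_score}
```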
In S3, normalization is performed on the coordinate point data obtained in step S2, and curves of the x-axis coordinate are drawn, with the left and right shoulders as one group, the left and right ankles as one group, and the left and right eyes as one group. Concretely, the coordinate point data obtained in step S2 is processed as follows. First, to reduce errors, coordinate points with score = 0 are zeroed out, and the coordinate data is normalized. The normalized coordinates are x' = x / frame_wide and y' = y / frame_high, where frame_wide is the width of the frame image and frame_high is its height. A neck coordinate point is then fitted from the coordinate points of the left and right shoulders, giving 18 human skeleton joint points. These 18 joint points are further reordered into the sequence: nose, neck, right shoulder, right elbow, right wrist, left shoulder, left elbow, left wrist, right hip, right knee, right ankle, left hip, left knee, left ankle, left eye, right eye, left ear, right ear. Finally, x-axis coordinate curves are drawn, with the left and right shoulders as one group and the left and right ankles as one group.
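The normalization and neck-fitting steps can be sketched as follows. The division by frame width/height follows the description; the patent does not state how the neck is fitted from the two shoulders, so the midpoint used here is an assumption:

```python
def normalize_point(x, y, frame_wide, frame_high):
    """Scale pixel coordinates to [0, 1]: x' = x/frame_wide, y' = y/frame_high."""
    return x / frame_wide, y / frame_high

def fit_neck(left_shoulder, right_shoulder):
    """Fit a neck point from the two shoulders; the shoulder midpoint is a
    common choice and is assumed here (the patent does not specify)."""
    (lx, ly), (rx, ry) = left_shoulder, right_shoulder
    return ((lx + rx) / 2.0, (ly + ry) / 2.0)
```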
A function is set up to correct mis-detections of the left and right joint points automatically: when the left and right shoulder coordinate points of a frame are found to be swapped, the coordinate information of the left shoulder, left elbow and left wrist is automatically exchanged with that of the right shoulder, right elbow and right wrist; when the left and right ankle coordinate points of a frame are swapped, the information of the right hip, right knee and right ankle is exchanged with that of the left hip, left knee and left ankle; the four coordinate points of the head are handled in the same way. A training set is constructed from the processed coordinate point data: the file format is converted into the input format required by the posture classification network in S4, each action sequence is labeled, and training and test sets are built. Because the number of gait images for each dyskinesia disease is small, five-fold cross-validation is used for training. Specifically, the training set is divided equally into five parts; one part is taken as the validation set and the remaining four as the training set, and this is repeated for each of the five folds in the same way. The average of the five accuracies is finally taken as the accuracy of the training.
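The five-fold scheme described above can be sketched without any ML framework (function names and the contiguous split are illustrative; only the 5-way partition and the averaged accuracy come from the text):

```python
def five_fold_indices(n):
    """Split n sample indices into 5 folds; each fold serves once as the
    validation set, the other four as the training set."""
    fold_size = n // 5
    folds = [list(range(i * fold_size, (i + 1) * fold_size)) for i in range(5)]
    folds[-1].extend(range(5 * fold_size, n))   # remainder goes to last fold
    splits = []
    for k in range(5):
        val = folds[k]
        train = [i for j, f in enumerate(folds) if j != k for i in f]
        splits.append((train, val))
    return splits

def mean_accuracy(accuracies):
    """The reported accuracy is the mean over the five runs."""
    return sum(accuracies) / len(accuracies)
```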
The posture classification network TRMGCN in S4 consists of a channel reconstruction unit, three TRM-GC units, and a downsampling unit between each pair of TRM-GC units.
The channel reconstruction unit comprises a channel adaptation part and a recombination part; the channel adaptation part consists of a 4×1 2D convolution and layer normalization, and the recombination part comprises feature reconstruction and group normalization.
The TRM-GC units include a time-dimension convolution, layer normalization, a 1×1 2D convolution, and a space-dimension convolution.
The time-dimension convolution is divided into channel division, channel transformation and channel fusion parts. Channel division splits the features into two parts by a fixed ratio, each followed by a 1×1 2D convolution; channel transformation applies point convolution and group convolution respectively to the two parts output by channel division; channel fusion includes global pooling and soft-attention-guided feature fusion.
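The split/transform/fuse idea can be sketched in numpy. This is an illustrative assumption-laden simplification: the 1×1 convolutions are modelled as channel-mixing matrices with random stand-in weights, and a softmax over pooled branch statistics stands in for the soft attention; none of the weights are the patent's:

```python
import numpy as np

def temporal_unit(x, ratio=0.5, seed=0):
    """Sketch of channel division / transformation / soft-attention fusion
    over a (channels x time) feature map. Weights are random stand-ins."""
    rng = np.random.default_rng(seed)
    C, T = x.shape
    c1 = int(C * ratio)
    xa, xb = x[:c1], x[c1:]                  # channel division by ratio
    Wa = rng.standard_normal((c1, c1))
    Wb = rng.standard_normal((C - c1, C - c1))
    ya, yb = Wa @ xa, Wb @ xb                # per-branch channel transformation
    # channel fusion: global pooling, then soft attention over the branches
    p = np.array([ya.mean(), yb.mean()])
    e = np.exp(p - p.max())                  # numerically stable softmax
    wa, wb = e / e.sum()
    return np.concatenate([wa * ya, wb * yb], axis=0)
```

The output keeps the input's (channels × time) shape, as the unit sits inside a residual-style stack.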
The space-dimension convolution obtains graph features from the features containing human activity information using a multi-graph independent convolution structure, divided into a dynamic temporal-specific learning graph convolution and a global temporal learning convolution. The graph features are f_out = σ( Σ_{s=1}^{S} W_s f_in A_k^(s) ), where f_out represents the output features after the space-dimension convolution operation; σ(·) represents the spatial convolution at the corresponding point; W_s is a learnable convolution weight matrix; S is the total number of learnable weight matrices, s being the current one; and A_k represents the adjacency matrix used to construct the coordinate-point relations, with k denoting the different partitions. In the dynamic temporal-specific learning graph convolution, the specific adjacency matrix is constructed as A_k = A_self + A_in + A_out, where A_self represents self-connections, A_in inward connections, and A_out outward connections.
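A minimal numpy sketch of this graph convolution, assuming σ is a ReLU and using a toy three-joint skeleton (the edge direction convention for inward vs. outward connections is an assumption):

```python
import numpy as np

def sigma(z):
    # sigma(): the per-point nonlinearity; ReLU is assumed here
    return np.maximum(z, 0.0)

def spatial_graph_conv(f_in, adjacencies, weights):
    """f_out = sigma( sum_s W_s @ f_in @ A_s ): one space-dimension graph
    convolution over S weight-matrix / adjacency-partition pairs."""
    out = sum(W @ f_in @ A for W, A in zip(weights, adjacencies))
    return sigma(out)

# adjacency partitions as described: self, inward and outward connections
V = 3
edges = [(0, 1), (1, 2)]        # toy directed skeleton: joint 0 -> 1 -> 2
A_self = np.eye(V)
A_in = np.zeros((V, V)); A_out = np.zeros((V, V))
for p, c in edges:
    A_in[p, c] = 1.0            # connection pointing into the child joint
    A_out[c, p] = 1.0           # connection pointing back to the parent
```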
An attention module is added to the global temporal learning convolution, and an attention guide module is obtained by inference. The guide module S_t is passed through a sigmoid function and multiplied point-wise with the original features to perform channel attention, yielding a 2D spatial map, and the 2D spatial map then performs spatial attention to obtain the final output features. Specifically, the spatial attention module convolves the original coordinate-point features n times to obtain transformed features; global pooling is applied to the original and transformed features, and the pooled results are weighted and passed through an MLP to obtain the 1D channel attention guide module S_t = g(·), where g(·) represents a learned transformation. The guide module S_t, processed by the sigmoid function, is multiplied point-wise with the original features to perform channel attention, yielding a 2D spatial map and the corresponding feature expression; the 2D spatial map then performs spatial attention to obtain the final output features.
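The attention guide can be sketched in numpy. The formulas in this copy of the patent are incomplete, so the shapes, the single-layer MLP, and the way the 2D spatial map is derived from the gated features are all illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_guide(f, f_conv, mlp_w):
    """Sketch: global-pool the original features f and the convolved
    features f_conv, combine them through a (single-layer, illustrative)
    MLP into the 1D channel guide S_t, gate channels with sigmoid(S_t),
    then reuse the gated map for spatial attention."""
    pooled = f.mean(axis=(1, 2)) + f_conv.mean(axis=(1, 2))  # global pooling
    s_t = mlp_w @ pooled                                     # 1D channel guide S_t
    gated = sigmoid(s_t)[:, None, None] * f                  # channel attention
    spatial_map = sigmoid(gated.mean(axis=0))                # 2D spatial map
    return gated * spatial_map[None, :, :]                   # spatial attention
```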
The downsampling unit is added before the TRM-GC module at each level transition and consists of a convolution with kernel size and stride of 2×1.
The gait parameters calculated from the human joint points in S5 include: pace, cadence, step length, step width, stance and swing times, and the joint-angle trend.
The walking process of the subject is divided into four trigger events: forefoot ground-contact and forefoot-off events, and heel ground-contact and heel-off events. The pace can be calculated from the time the subject spends on one gait cycle, i.e. the time between two successive same-side heel ground-contact events; the cadence can be calculated from the number of gait cycles experienced per unit time, i.e. cadence = number of cycles / unit time; the step length can be calculated from the longitudinal straight-line distance between the two points at which the two sides successively trigger a forefoot ground-contact event (or a heel ground-contact event); the step width is calculated from the straight-line distance between the two legs with the coronal plane as reference; the stance time can be calculated from the time difference between the heel ground-contact event and the forefoot-off event of the same foot; the swing time can be calculated from the time difference between a forefoot-off event and the next heel ground-contact event of the same foot; the angle trend can be calculated from the joint angle with the knee joint as vertex during walking.
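The temporal parameters and the knee-angle trend above can be sketched from event frame indices (function and argument names, and the 30 fps value, are illustrative assumptions):

```python
import math

def gait_parameters(heel_strikes, toe_offs, fps=30.0):
    """Temporal gait parameters from same-side trigger-event frame indices,
    following the definitions above."""
    # gait-cycle time: between two successive same-side heel-contact events
    cycle_time = (heel_strikes[1] - heel_strikes[0]) / fps
    cadence = 1.0 / cycle_time                  # gait cycles per second
    # stance time: same-side heel contact -> same-side forefoot off
    stance_time = (toe_offs[0] - heel_strikes[0]) / fps
    swing_time = cycle_time - stance_time       # rest of the cycle
    return {"cycle_time": cycle_time, "cadence": cadence,
            "stance_time": stance_time, "swing_time": swing_time}

def knee_angle(hip, knee, ankle):
    """Joint angle (degrees) with the knee as vertex, for the angle trend."""
    v1 = (hip[0] - knee[0], hip[1] - knee[1])
    v2 = (ankle[0] - knee[0], ankle[1] - knee[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    cos_a = dot / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))
```

For example, same-side heel strikes at frames 0 and 60 with toe-off at frame 36 give a 2 s cycle split into 1.2 s stance and 0.8 s swing.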
Finally, it is noted that the above embodiments are only intended to illustrate the technical solution of the present invention, not to limit it. Although the present invention has been described in detail with reference to the preferred embodiments, those skilled in the art should understand that modifications and equivalents may be made without departing from the spirit and scope of the technical solution, all of which are intended to be covered by the scope of the claims of the present invention.

Claims (11)

1. A method for classifying movement disorder diseases based on gait recognition, characterized by comprising the following steps:
S1: acquiring images and videos of multi-angle cameras of a person to be tested in a determined field, preprocessing the images and videos to obtain videos and images containing a complete gait cycle of a tested target, and constructing a training data set;
S2: inputting the preprocessed video and the preprocessed image into a human body posture estimation network to obtain human body skeleton joint point data containing human body activity information;
S3: processing the human skeleton joint point sequence data obtained in the step S2, firstly carrying out normalization processing on the human skeleton joint point sequence data, drawing a joint point coordinate information curve according to a time sequence, processing data such as wrong detection, missed detection and the like of left and right joint points, and finally processing the data into a proper data format according to the requirement of an action recognition network to construct a new training data set;
S4: constructing a posture classification neural network and training it on the data set obtained in step S3; the posture classification network outputs a probability value for each action, and the action type with the highest probability is the recognition result;
S5: the method comprises the steps of collecting joint point data of a patient in real time, calculating gait parameters and displaying the gait parameters.
2. The method for classifying movement disorder diseases based on gait recognition according to claim 1, wherein: the pictures and videos of the subject's walking posture acquired in the test area in S1 need to include the typical pathological gaits exhibited by patients with ataxia, Parkinson's disease, cerebral stroke, cerebral palsy and gluteus medius weakness, as well as the gait of normal walking.
3. The method for classifying movement disorder diseases based on gait recognition according to claim 1, wherein: in S3, normalization is performed on the coordinate point data obtained in S2, and curves of the x-axis coordinate are drawn, with the left and right shoulders as one group, the left and right ankles as one group, and the left and right eyes as one group.
4. The method for classifying movement disorder diseases based on gait recognition according to claim 1, wherein: the posture classification network TRMGCN in S4 consists of a channel reconstruction unit, three TRM-GC units, and a downsampling unit between each pair of TRM-GC units.
5. The method for classifying movement disorder diseases based on gait recognition according to claim 4, wherein: the channel reconstruction unit comprises a channel adaptation part and a recombination part; the channel adaptation part consists of a 4×1 2D convolution and layer normalization, and the recombination part comprises feature reconstruction and group normalization.
6. The method for classifying movement disorder diseases based on gait recognition according to claim 4, wherein: the TRM-GC units include a time-dimension convolution, layer normalization, a 1×1 2D convolution, and a space-dimension convolution.
7. The method for classifying movement disorder diseases based on gait recognition according to claim 6, wherein: the time-dimension convolution is divided into channel division, channel transformation and channel fusion parts; channel division splits the features into two parts by a fixed ratio, each followed by a 1×1 2D convolution; channel transformation applies point convolution and group convolution respectively to the two parts output by channel division; channel fusion includes global pooling and soft-attention-guided feature fusion.
8. The method for classifying movement disorder diseases based on gait recognition according to claim 6, wherein: the space-dimension convolution obtains graph features from the features containing human activity information using a multi-graph independent convolution structure, divided into a dynamic temporal-specific learning graph convolution and a global temporal learning convolution; an attention module is added to the global temporal learning convolution, an attention guide module S_t is obtained by inference, passed through a sigmoid function and multiplied point-wise with the original features to perform channel attention and obtain a 2D spatial map, and the 2D spatial map then performs spatial attention to obtain the final output features.
9. The method for classifying movement disorder diseases based on gait recognition according to claim 4, wherein: the downsampling unit is added before the TRM-GC module at each level transition and consists of a convolution with kernel size and stride of 2×1.
10. The method for classifying a motion-obstructive disease based on gait recognition according to claim 4, wherein: the gait parameters calculated based on the human body joint points in S5 include: pace, cadence, step length, step width, stance time, swing time, and joint-angle trend.
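The joint-angle trend of claim 10 reduces, per frame, to the angle at a vertex joint formed by two body segments. A minimal sketch, assuming 2D joint-point coordinates and the knee as the vertex (per claim 11); the function also works unchanged for 3D coordinates.

```python
import numpy as np

def joint_angle(hip, knee, ankle):
    """Angle at the knee (vertex) between the knee->hip and knee->ankle
    segments, in degrees, from 2D or 3D joint-point coordinates."""
    u = np.asarray(hip, float) - np.asarray(knee, float)
    v = np.asarray(ankle, float) - np.asarray(knee, float)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Fully extended leg: hip, knee, ankle collinear -> 180 degrees
print(joint_angle([0, 2], [0, 1], [0, 0]))   # 180.0
# Right-angle flexion
print(joint_angle([0, 1], [0, 0], [1, 0]))   # 90.0
```

Evaluating this per video frame over the detected hip/knee/ankle joint points yields the angle-versus-time curve, i.e. the joint-angle trend.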
11. The method for classifying a motion-obstructive disease based on gait recognition according to claim 4, wherein: the walking process of the tested person is divided into four triggering events: a forefoot ground-contact event, a forefoot off event, a heel ground-contact event and a heel off event; the pace in the gait parameters can be calculated from the time the tested person spends on one gait cycle, namely the time between two successive heel ground-contact events on the same side; the cadence can be calculated from the number of gait cycles experienced per unit time, i.e., cadence = unit time / gait cycle time; the step length can be calculated from the longitudinal straight-line distance between the two contact points when the two sides of the tested person successively trigger forefoot ground-contact (heel ground-contact) events; the step width is calculated as the straight-line distance between the two legs measured in the coronal plane; the stance time can be calculated from the time difference between the heel ground-contact event and the forefoot off event of the same-side foot; the swing time can be calculated from the time difference between the forefoot off event of one foot and the next heel ground-contact event of that foot; the joint-angle trend can be calculated from the joint angle with the knee joint as the vertex during walking.
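The temporal gait parameters of claim 11 follow directly from the event timestamps. The sketch below assumes hypothetical event times (seconds) for one foot's heel ground-contact and forefoot off events; swing time is taken as the remainder of the cycle after stance, the common definition.

```python
import numpy as np

# Hypothetical timestamps (s) of right-heel ground-contact events
heel_strike_r = np.array([0.0, 1.1, 2.2, 3.3])
# Hypothetical forefoot off events of the same (right) foot
toe_off_r = np.array([0.7, 1.8, 2.9])

# Gait cycle time = interval between successive same-side heel contacts
cycle = np.diff(heel_strike_r)              # [1.1, 1.1, 1.1]
pace = cycle.mean()                         # time per gait cycle (s)
cadence = 60.0 / pace                       # gait cycles per minute

# Stance time = heel contact to forefoot off of the same foot
stance = toe_off_r - heel_strike_r[:3]      # [0.7, 0.7, 0.7]
# Swing time = remainder of the cycle
swing = cycle - stance                      # [0.4, 0.4, 0.4]
```

With a known walking distance, step length and step width would come from the longitudinal and coronal-plane distances between the joint-point positions at the corresponding contact events.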
CN202410235583.XA 2024-03-01 2024-03-01 Gait recognition-based motion-obstructing disease classification method Pending CN118072394A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410235583.XA CN118072394A (en) 2024-03-01 2024-03-01 Gait recognition-based motion-obstructing disease classification method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410235583.XA CN118072394A (en) 2024-03-01 2024-03-01 Gait recognition-based motion-obstructing disease classification method

Publications (1)

Publication Number Publication Date
CN118072394A true CN118072394A (en) 2024-05-24

Family

ID=91107279

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410235583.XA Pending CN118072394A (en) 2024-03-01 2024-03-01 Gait recognition-based motion-obstructing disease classification method

Country Status (1)

Country Link
CN (1) CN118072394A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118861765A (en) * 2024-07-04 2024-10-29 中国医学科学院北京协和医院 A method and device for predicting human movement intention in rehabilitation training
CN118507072A (en) * 2024-07-09 2024-08-16 中国人民解放军总医院第三医学中心 Method, apparatus, medium and program product for predicting glaucoma based on gait characteristics
CN118507072B (en) * 2024-07-09 2024-10-22 中国人民解放军总医院第三医学中心 Method, apparatus, medium and program product for predicting glaucoma based on gait characteristics

Similar Documents

Publication Publication Date Title
US9996739B2 (en) System and method for automatic gait cycle segmentation
CN118072394A (en) Gait recognition-based motion-obstructing disease classification method
Hossain et al. Deepbbwae-net: A cnn-rnn based deep superlearner for estimating lower extremity sagittal plane joint kinematics using shoe-mounted imu sensors in daily living
JP7057589B2 (en) Medical information processing system, gait state quantification method and program
CN112401834B (en) Movement-obstructing disease diagnosis device
González et al. Comparison between passive vision-based system and a wearable inertial-based system for estimating temporal gait parameters related to the GAITRite electronic walkway
CN112438723B (en) Cognitive function evaluation method, cognitive function evaluation device and storage medium
CN111444879A (en) A method and system for recognizing actions for autonomous rehabilitation of joint strain
TWI848685B (en) Intelligent gait analyzer
CN117883074A (en) Parkinson's disease gait quantitative analysis method based on human body posture video
CN115105062B (en) Hip and knee joint coordination evaluation method, device and system and storage medium
CN112568898A (en) Method, device and equipment for automatically evaluating injury risk and correcting motion of human body motion based on visual image
CN117333932A (en) Methods, equipment, equipment and media for identifying sarcopenia based on machine vision
CN117690583B (en) Interactive management system and method for rehabilitation nursing based on Internet of Things
CN118822947A (en) Breathing pattern detection method, device and controller based on deep learning neural network
CN112115923A (en) Multichannel time sequence gait analysis algorithm based on direct feature extraction
CN116530976A (en) A method of human gait monitoring
CN116543455A (en) Method, equipment and medium for establishing parkinsonism gait damage assessment model and using same
CN117158952A (en) Method for realizing three-dimensional gait feature extraction and abnormal gait evaluation
CN118749953A (en) A method for extracting gait features and identifying abnormal gait in traditional Chinese medicine inspection
CN118177780A (en) A method and system for identifying bad posture gait based on convolutional neural network
US12029550B2 (en) 3D human body joint angle prediction method and system using 2D image
WO2020161947A1 (en) Physical health condition image analysis device, method, and system
CN114743664A (en) Gait analysis and determination learning-based spinal cervical spondylosis auxiliary diagnosis system
CN104331705B (en) Automatic detection method for gait cycle through fusion of spatiotemporal information

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication