
CN113239803A - Dead reckoning positioning method based on pedestrian motion state recognition - Google Patents


Info

Publication number
CN113239803A
CN113239803A (application CN202110521228.5A)
Authority
CN
China
Prior art keywords
train
acctest
motion state
pedestrian
test
Prior art date
Legal status
Pending
Application number
CN202110521228.5A
Other languages
Chinese (zh)
Inventor
邓平
赵荣鑫
朱飞翔
王浩祥
吴明辉
Current Assignee
Southwest Jiaotong University
Original Assignee
Southwest Jiaotong University
Priority date
Filing date
Publication date
Application filed by Southwest Jiaotong University filed Critical Southwest Jiaotong University
Priority to CN202110521228.5A
Publication of CN113239803A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C19/00 Gyroscopes; Turn-sensitive devices using vibrating masses; Turn-sensitive devices without moving masses; Measuring angular rate using gyroscopic effects
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/18 Stabilised platforms, e.g. by gyroscope
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations
    • G01C21/206 Instruments for performing navigational calculations specially adapted for indoor navigation
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01P MEASURING LINEAR OR ANGULAR SPEED, ACCELERATION, DECELERATION, OR SHOCK; INDICATING PRESENCE, ABSENCE, OR DIRECTION, OF MOVEMENT
    • G01P15/00 Measuring acceleration; Measuring deceleration; Measuring shock, i.e. sudden change of acceleration
    • G01P15/18 Measuring acceleration; Measuring deceleration; Measuring shock, i.e. sudden change of acceleration in two or more dimensions
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Social Psychology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Evolutionary Computation (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Navigation (AREA)

Abstract



The invention discloses a dead reckoning positioning method based on pedestrian motion state recognition, comprising the steps of: constructing a pedestrian motion state recognition classification model, identifying the pedestrian motion state, performing step frequency detection, step length estimation, course estimation and dead reckoning. The beneficial effects of the invention are: for pedestrian motion in two-dimensional space, five types of motion states are covered and only three-axis acceleration and three-axis gyroscope data are collected to realize pedestrian navigation and positioning, making the dead reckoning method more practical and more conducive to application and development in real application environments. Modeling the five-class motion state recognition model, and using it to recognize motion states, is more complete and accurate. Gait detection and step frequency detection need not change the detection method according to the motion state, are more versatile and accurate, and minimize coupling with the human motion state recognition method. The dead reckoning can accept more complex types of motion states and has stronger applicability.


Description

Dead reckoning positioning method based on pedestrian motion state recognition
Technical Field
The invention relates to the technical field of positioning, in particular to a dead reckoning positioning method based on pedestrian motion state identification.
Background
With the rapid development of information technology, location-based services have gradually penetrated many aspects of daily life. Indoor positioning has received much attention over the last decade as one of the most challenging technologies in Location Based Services (LBS). In contrast to outdoor environments, where global navigation satellite systems are an indispensable and even dominant technology, indoor positioning faces a series of challenges such as severe multipath effects, non-line-of-sight propagation, high signal attenuation and noise interference. Reliable, high-precision indoor navigation positioning schemes still face many difficulties, and pedestrian indoor navigation positioning technology is key to the success of various positioning services. In recent years, thanks to advances in semiconductor technology, micro sensors such as accelerometers, gyroscopes and magnetometers have been widely integrated into smart devices, so pedestrian indoor navigation positioning based on inertial navigation technology has received increasing attention.
However, existing pedestrian inertial navigation positioning technology places certain requirements and limitations on the motion posture of the person being positioned, generally considering only basic motion states such as forward walking or jogging. Varied motion states such as left and right striding or backward walking, which may occur when firefighters and similar personnel carry out indoor disaster relief, are rarely identified and handled by existing algorithms, so positioning performance degrades rapidly and such algorithms are difficult to apply to accurate pedestrian navigation positioning in these scenarios.
Document 1: Liu Yu, Zhou Sai, Li Yunmei, et al. Three-dimensional autonomous navigation positioning algorithm based on human multi-directional movement [J]. Journal of Chinese Inertial Technology, 2016, 24(04): 449-453. The initial phase of different human motions is used to distinguish motion states including walking, backward walking, and left and right walking. However, this motion detection mode has poor reliability, and its detection capability is strongly tied to the initial phase of the motion.
Document 2: Human motion state recognition based on smartphone built-in sensors [J]. Journal on Communications, 2019, 40(03): 157-169. The method recognizes actions including walking, running and riding, and completes motion classification with a Support Vector Machine (SVM).
Document 3: application number CN107084718A, titled "Indoor positioning method based on pedestrian dead reckoning". That invention uses a smartphone as the positioning platform and the smartphone's built-in Inertial Measurement Unit (IMU) as the positioning device, but it does not consider complex pedestrian motion postures, places strict requirements on the pedestrian's motion, and cannot meet the positioning needs of ordinary people in daily life.
Document 4: application number CN109827577B, titled "High-precision inertial navigation positioning algorithm based on motion state detection". That invention realizes a positioning algorithm based on foot-mounted recognition of slow walking and slow running; only the speed of pedestrian motion is considered in motion state detection, and the foot-bound positioning scheme requires equipment mounted on the feet of the person being positioned, which is inconvenient in actual use.
Disclosure of Invention
The invention provides a dead reckoning positioning method based on pedestrian motion state identification, which achieves accurate navigation positioning of pedestrians in various motion states by identifying and handling common pedestrian motion states and by improving and correcting existing inertial navigation positioning technology.
The technical scheme for realizing the purpose of the invention is as follows:
a dead reckoning positioning method based on pedestrian motion state identification comprises the following steps:
step one, constructing a pedestrian motion state identification classification model: acquiring training data of pedestrians in five motion states, and constructing a pedestrian motion state recognition classification model; the five motion states are walking, jogging, left striding, right striding and reversing; the training data comprises three-axis acceleration data;
step two, identifying the pedestrian motion state: collecting test data of pedestrians, and identifying a pedestrian motion state by using a pedestrian motion state identification classification model; the test data comprises triaxial acceleration data and triaxial gyroscope data;
step three, step frequency detection to obtain the single step frequency: performing step frequency detection on the triaxial acceleration data acquired in step two to obtain the single step frequency;
step four, step length estimation: if the pedestrian motion state identified in step two is walking, left striding, right striding or reversing, the single step length is estimated with a linear step model combined with the single step frequency; if the identified motion state is jogging, the single step length is estimated with the Weinberg nonlinear step model combined with the single step frequency;
step five, course estimation: the course angle is calculated by integrating the triaxial gyroscope data acquired in step two, using angular velocity converted through a quaternion-based coordinate transformation, and course angle correction is completed with a heuristic drift elimination algorithm;
step six, dead reckoning: and setting the initial position coordinates and the initial course angle of the pedestrian, and updating the position of the pedestrian in the five motion states according to the pedestrian motion state obtained in the step two, the single step length obtained in the step four and the course angle corrected in the step five.
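Step four above pairs a linear model with the Weinberg nonlinear model. As a hedged sketch of these two standard estimators (the coefficients a, b and K below are illustrative tuning constants, not values disclosed in this patent):

```python
def linear_step_length(step_freq, a=0.3, b=0.2):
    """Linear step model: step length as an affine function of step frequency.
    a and b are illustrative coefficients, typically fitted per pedestrian."""
    return a * step_freq + b

def weinberg_step_length(acc_max, acc_min, k=0.48):
    """Weinberg nonlinear model: L = K * (acc_max - acc_min) ** (1/4),
    using the per-step maximum and minimum of the resultant acceleration."""
    return k * (acc_max - acc_min) ** 0.25
```

The cadence-driven linear model serves walking, striding and reversing steps, while the amplitude-based Weinberg model suits the stronger vertical dynamics of jogging.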
According to a further technical scheme, in the step one, a pedestrian motion state identification classification model is constructed, and the method comprises the following steps:
step 1-1: collect training samples and preprocess them: collect the triaxial acceleration data AccTrain_x, AccTrain_y and AccTrain_z of pedestrians in the five motion states and combine them to obtain the resultant acceleration data AccTrain; apply acceleration filtering to each to obtain AccTrain'_x, AccTrain'_y, AccTrain'_z and AccTrain';
step 1-2: training sample segmentation: AccTrain'_x, AccTrain'_y, AccTrain'_z and AccTrain' in the five motion states are each segmented to form the training sample sets Train_x[i], Train_y[i], Train_z[i] and Train[i], where i = 1 to n numbers the training sample sets and

n = ⌊N_train / n_c⌋

is the number of training sample sets, N_train is the total number of training sample points, n_c is the number of sample points in each training sample set, and ⌊·⌋ denotes rounding down; the labels TrainLabels of the training sample sets characterize the motion state category of each set;
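The fixed-length segmentation of step 1-2, producing ⌊N_train / n_c⌋ windows of n_c points each, can be sketched as follows (the toy signal is illustrative):

```python
def segment(signal, n_c):
    """Split a 1-D signal into floor(len(signal) / n_c) consecutive windows
    of n_c samples each; leftover trailing samples are discarded (rounding down)."""
    n = len(signal) // n_c
    return [signal[i * n_c:(i + 1) * n_c] for i in range(n)]

windows = segment(list(range(10)), 4)  # floor(10 / 4) = 2 windows of 4 samples
```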
step 1-3: calculating time domain characteristic values of a training sample set;
the time domain characteristic values of the i-th training sample set are as follows:
a statistic of Train_x[i], Train_y[i], Train_z[i] and Train[i] is taken as the time domain characteristic values F1_i, F2_i, F3_i and F4_i, and a second statistic of the same four sets as F5_i, F6_i, F7_i and F8_i; by analogy, the variance, mode, maximum, minimum, interquartile range and third quartile of Train_x[i], Train_y[i], Train_z[i] and Train[i] are taken as the time domain characteristic values F9_i to F32_i, and their skewness, kurtosis and mean absolute error as F36_i to F47_i; the cross-correlation coefficient of Train_x[i] and Train_y[i], that of Train_y[i] and Train_z[i], and that of Train_z[i] and Train_x[i] are taken as F33_i, F34_i and F35_i;
the intermediate variable M4 of the Hjorth parameters of Train_x[i], Train_y[i], Train_z[i] and Train[i] is taken as the time domain characteristic values F48_i to F51_i;
with Train_x[i] as the input set object, F(1) is obtained as the time domain characteristic value F52_i; the defining equation of F(1) appears only as an image in the source document; in it, d[k] denotes the k-th datum in the input set object d and N_d denotes the size of d, i.e. N_d = n_c;
Step 1-4: screening effective characteristics of five types of motion states: processing time domain features F1Said time domain feature F1For time domain eigenvalues F1iThe marking of (2): according to the motion state of the training sample set characterized by the TrainLabels, n time domain characteristic values F of n training sample sets1iRespectively extracting time domain characteristic values of five types of motion states; respectively combining time domain characteristic values corresponding to the five types of motion states, and drawing a characteristic curve graph of five characteristic curves corresponding to the five types of motion states after reordering in sequence; in the characteristic curve graph, the horizontal axis/the vertical axis are the serial numbers of the time domain characteristic values after the time domain characteristic values are rearranged in sequence, and the vertical axis/the horizontal axis are the time domain characteristic values; if there is more than one characteristic curve among the five characteristic curves which is not crossed with other characteristic curves, the time domain characteristic F1Screening effective characteristics of the motion state corresponding to more than one characteristic curve which is not crossed with other characteristic curves; otherwise, time domain feature F1Effective characteristics of five motion states are not screened;
processing time domain features F according to a similar method2~F52Screening effective characteristics of five types of motion states;
step 1-5: establish the effective feature matrix Q_Train of size n x m; the values in the first row are the m time domain characteristic values, computed for the first training sample set, that correspond to the effective features of the five motion states screened in step 1-4, and so on for the remaining rows;
step 1-6: establish the pedestrian motion state recognition classification model: the normalized Q_Train is used as the features for training the SVM model and TrainLabels as the training sample labels, and the pedestrian motion state recognition classification model SVM-OVO is established with the SVM one-versus-one multi-classifier (OVO); the kernel function of the SVM is a linear kernel.
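Steps 1-5 and 1-6 correspond to standard normalize-then-train tooling; a minimal sketch assuming scikit-learn, where the synthetic Q_train matrix and random labels stand in for the patent's screened effective-feature matrix and TrainLabels:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
q_train = rng.normal(size=(100, 34))         # n sample sets x m effective features
train_labels = rng.integers(0, 5, size=100)  # five motion-state classes

scaler = MinMaxScaler()                      # normalization of the feature matrix
q_norm = scaler.fit_transform(q_train)

# Linear-kernel SVM; multiclass SVC is trained one-vs-one (OVO) internally,
# and decision_function_shape="ovo" exposes the pairwise decision values.
svm_ovo = SVC(kernel="linear", decision_function_shape="ovo")
svm_ovo.fit(q_norm, train_labels)
pred = svm_ovo.predict(q_norm)
```

At recognition time, test features must be scaled with the same fitted scaler before calling `predict`.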
According to a further technical scheme, in the second step, the motion state of the pedestrian is identified, and the method comprises the following steps:
step 2-1: collect a test sample and preprocess it: collect the pedestrian triaxial acceleration data AccTest_x, AccTest_y and AccTest_z and combine them to obtain the resultant acceleration AccTest; apply acceleration filtering to each to obtain AccTest'_x, AccTest'_y, AccTest'_z and AccTest'; the sampling frequency of the test samples equals that of the training samples;
step 2-2: test sample segmentation: AccTest'_x, AccTest'_y, AccTest'_z and AccTest' are each segmented to form the test sample sets Test_x[i'], Test_y[i'], Test_z[i'] and Test[i'], where i' = 1 to n' numbers the test sample sets and

n' = ⌊N_test / n_c⌋

is the number of test sample sets, N_test is the total number of sampling points of the test sample, n_c is the number of sample points in each training sample set, and ⌊·⌋ denotes rounding down; the labels TestLabels of the test sample sets characterize the motion state category of each set;
step 2-3: calculate the effective characteristic values of the test sample sets;
the effective characteristic values of the i'-th test sample set are obtained as follows: from the time domain features of the i'-th test sample set, select those screened as effective features of the five motion states in the pedestrian motion state recognition classification model, and calculate them by the same method used for the corresponding time domain characteristic values of the training sample sets;
step 2-4: establish the effective feature matrix Q_Test of the test sample sets, of size n' x m; the values in the first row are the m effective characteristic values of the first test sample set calculated in step 2-3, and so on for the remaining rows;
step 2-5: recognize the pedestrian motion state: the normalized Q_Test is input into the pedestrian motion state recognition classification model SVM-OVO, and the labels TestLabels of the test sample sets are assigned the predicted values, giving the motion state category of each test sample set.
In a further technical solution, step two further includes correcting the motion state categories of the test sample sets:
step 2-6: let the values of the labels TestLabels of the first, second, third and fourth test sample sets be W1, W2, W3 and W4 respectively;
step 2-7: if the condition W2 ≠ W3 && W2 ≠ W4 && W3 ≠ W4 && W1 = W2 is satisfied, or if that condition is not satisfied but W2 = W4 && W2 ≠ W3 && W3 ≠ W4 is, correct the value of the label TestLabels corresponding to W3 to the value of the label TestLabels corresponding to W2; otherwise make no correction;
step 2-8: return to step 2-6 with the window shifted by one set, i.e. let the values of the labels TestLabels of the second, third, fourth and fifth test sample sets be W1, W2, W3 and W4 respectively; then execute step 2-7;
step 2-9: continue in the same way as step 2-8 to complete the correction of the values of the labels TestLabels of all test sample sets.
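Steps 2-6 to 2-9 amount to sliding a four-label window over the predictions and overwriting an isolated outlier W3 with its neighbour W2. A pure-Python sketch under this reading (the exact boolean conditions in the extracted source are partially garbled, so this is an interpretation, not the verbatim patented rule):

```python
def correct_labels(labels):
    """Slide a window of four consecutive predicted labels (W1..W4).
    If W3 disagrees with its neighbours while W1 == W2, or while W2 == W4,
    treat W3 as a misrecognition and overwrite it with W2."""
    out = list(labels)
    for s in range(len(out) - 3):
        w1, w2, w3, w4 = out[s:s + 4]
        cond_a = w2 != w3 and w2 != w4 and w3 != w4 and w1 == w2
        cond_b = w2 == w4 and w2 != w3 and w3 != w4
        if cond_a or cond_b:
            out[s + 2] = w2
    return out

fixed = correct_labels([1, 1, 2, 1, 1])  # the isolated "2" is corrected to 1
```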
In a preferred technical scheme, in the third step, step frequency detection is performed to obtain a single step frequency, and the method comprises the following steps:
step 3-1: collect a test sample and preprocess it: collect the pedestrian triaxial acceleration data AccTest_x, AccTest_y and AccTest_z and combine them to obtain the resultant acceleration AccTest; apply acceleration filtering to AccTest to obtain AccTest';
step 3-2: peak detection: if AccTest'[j-1] < AccTest'[j] and AccTest'[j] ≥ AccTest'[j+1], then AccTest'[j] is labeled as a peak; AccTest'[j] denotes the filtered resultant acceleration of the j-th sampling point, with j ∈ [2, N_test - 1] and N_test the total number of sampling points of the test sample; set the minimum peak threshold σ1 and the inter-peak sampling point spacing constraint σ2: peaks whose value is less than σ1 are rejected, and when the spacing between several peaks is less than σ2 only the peak with the maximum value is kept and the others are discarded, yielding n_peak peaks; the indices of AccTest' corresponding to the detected peaks are recorded in an array PeakIndex, where PeakIndex[l] is the index of AccTest' corresponding to the l-th peak and 1 ≤ l ≤ n_peak;
Step 3-3: searching the left zero point and the right zero point of each peak value: the first peak value PeakIndex [ l ]]Corresponding AccTest' [ PeakIndex [ l ]]]The value, the first zero found by the forward search is designated PeakIndex [ l ]]Left zero point Z of1[l]The second zero found by the backward search is designated PeakIndex [ l ]]Right zero point Z of2[l];
The search range for the left zero is defined as: if it is searched forward to PeakIndex [ l ]]-σ3The index position has not yet obtained the left zeroThen, the search is stopped, let AccTest' [ PeakIndex [ l ]]-σ3]Is a left zero point Z1[l],σ3Is a threshold value;
the search range for the right zero point is defined as: if search backward to PeakIndex [ l ]]+σ4If the index position has not obtained the right zero point, the search is stopped, let AccTest' [ PeakIndex [ l ]]+σ4]Is a right zero point Z2[l],σ4Is a threshold value;
the zero point is as follows:
if the conditions that Acctest 'j-1 <0 and Acctest' j > is more than or equal to 0 or Acctest 'j-1 >0 and Acctest' j > is less than or equal to 0 are met,
AccTest' j is zero;
step 3-4: calculate the step frequency of each single step: the l-th peak corresponds to the l-th step, and the step frequency of the l-th step is f_s[l] = Z2[l] - Z1[l].
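The peak detection and zero search of steps 3-2 to 3-4 can be sketched as follows; the thresholds σ1 to σ4 and the toy acceleration trace are illustrative, and zeros are detected as sign changes per the definition above:

```python
def is_zero(acc, j):
    """Zero crossing at index j: the signal changes sign between j-1 and j."""
    return (acc[j - 1] < 0 <= acc[j]) or (acc[j - 1] > 0 >= acc[j])

def detect_peaks(acc, sigma1, sigma2):
    """Local maxima of the filtered resultant acceleration at or above sigma1;
    among peaks spaced closer than sigma2 samples, keep only the largest."""
    kept = []
    for j in range(1, len(acc) - 1):
        if acc[j - 1] < acc[j] >= acc[j + 1] and acc[j] >= sigma1:
            if kept and j - kept[-1] < sigma2 and acc[j] > acc[kept[-1]]:
                kept[-1] = j                       # replace the smaller close peak
            elif not kept or j - kept[-1] >= sigma2:
                kept.append(j)
    return kept

def step_duration(acc, p, sigma3, sigma4):
    """Samples between the left and right zeros around peak index p,
    with the search bounded by sigma3 (forward) and sigma4 (backward)."""
    left = next((j for j in range(p, max(p - sigma3, 1) - 1, -1)
                 if is_zero(acc, j)), p - sigma3)
    right = next((j for j in range(p + 1, min(p + sigma4, len(acc) - 1) + 1)
                  if is_zero(acc, j)), p + sigma4)
    return right - left

acc = [-1, 2, 5, 2, -1, -2, 1, 6, 1, -1]
peaks = detect_peaks(acc, sigma1=3, sigma2=2)
```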
In a preferred technical scheme, in the step six, dead reckoning, the method comprises: after the initial position coordinates and the initial course angle of the pedestrians are set, the positions of the pedestrians in the five motion states are updated according to the following formula,
X[k] = X[k-1] - q_k · L[k] · cos(θ_k + p_k · π/2)
Y[k] = Y[k-1] - q_k · L[k] · sin(θ_k + p_k · π/2)
where X[k] and Y[k] denote the position of the pedestrian at the k-th step on the two-dimensional navigation plane, θ_k denotes the course angle of the k-th step, and L[k] denotes the step length of the k-th step; if the motion state of the k-th step is reversing, q_k takes 1, otherwise -1; if the motion state of the k-th step is left striding or right striding, p_k takes 1 or -1 respectively, otherwise p_k takes 0.
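With the caveat that the published position-update equation appears only as an image in the extracted source, an update consistent with the q_k/p_k parameter description can be sketched as follows (the sign conventions for left versus right striding are illustrative assumptions):

```python
import math

def update_position(x, y, step_len, heading, state):
    """One dead-reckoning update: q flips the travel direction for reversing,
    p rotates the heading by +/- 90 degrees for left/right striding."""
    q = 1.0 if state == "reverse" else -1.0
    p = {"left": 1.0, "right": -1.0}.get(state, 0.0)
    ang = heading + p * math.pi / 2.0
    return x - q * step_len * math.cos(ang), y - q * step_len * math.sin(ang)

x, y = update_position(0.0, 0.0, 0.7, 0.0, "walk")  # one forward step along x
```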
Compared with the prior art, the invention has the beneficial effects that:
1. Aiming at the motion states of pedestrians in two-dimensional space, five motion states are covered and only three-axis acceleration and three-axis gyroscope data are collected to realize pedestrian navigation and positioning, so the dead reckoning method is more practical and more favorable for application and development in real application environments.
2. The modeling of the five types of motion state recognition models and the recognition of the motion states by using the models are more complete and accurate.
3. The gait detection and the step frequency detection do not need to change the detection method according to the motion state behaviors, are more universal and accurate, and reduce the coupling with the human motion state identification method to the maximum extent.
4. And the dead reckoning can accept more complicated motion state types, and the applicability is stronger.
Drawings
FIG. 1 is a schematic diagram of the main steps of an embodiment.
Fig. 2 is a flow chart of gait detection.
Fig. 3 is a schematic diagram of step frequency detection.
Fig. 4 is a diagram illustrating the results of gait detection and step frequency detection.
FIG. 5 is a diagram illustrating the values of the acceleration x-axis f (1).
Fig. 6 is a flow chart of SVM classification.
Fig. 7 is a schematic diagram illustrating a motion state recognition error recognition result correction.
FIG. 8 is a diagram illustrating a multi-motion state location update.
Fig. 9 is a diagram of a manner in which a locater holds a smartphone.
Fig. 10 is a diagram of positioning effect in a multi-motion state.
Detailed Description
The method first completes data acquisition and preprocessing for the pedestrian to be positioned through a portable waist-mounted IMU positioning device. Before motion state identification, an SVM multi-classification model of pedestrian motion states is constructed; once the model is established, the pedestrian's real-time motion state is identified under it. Individual misidentifications are then corrected through an adjacent-gait correlation constraint method based on the laws of human motion, achieving accurate identification of the motion state during positioning. Finally, on the basis of the traditional PDR algorithm, navigation positioning of complex pedestrian motion states in a two-dimensional plane is completed by improving the step length estimation algorithm, correcting the course angle with the HDE (Heuristic Drift Elimination) algorithm, and designing a position update algorithm based on multiple motion states.
The specific embodiment for realizing the purpose of the invention mainly comprises the following steps:
A. IMU positioning data acquisition: three-axis acceleration and three-axis gyroscope data are collected through IMU equipment carried by a person to be positioned, and preprocessing such as moving average filtering is carried out on the resultant acceleration data after the gravity is removed.
B. Establishing a motion state classification model: first, motion state features are constructed: time domain features of the acceleration x-axis, y-axis, z-axis and three-axis resultant acceleration data are extracted in the 5 motion states of walking, jogging, left striding, right striding and reversing, yielding 52 feature objects numbered F1 to F52 in extraction order; then the effective features are screened and ineffective features filtered out to reduce their interference with the high-precision classification recognition rate, with the number of effective feature objects finally used for motion state classification set at 34; next, the feature matrix is constructed from the effective feature objects and normalized to eliminate the feature-weight imbalance caused by differing magnitudes across measurement units; finally, the human motion state recognition classification model is constructed with the SVM multi-classifier (OVO), and a linear kernel function is selected as the SVM kernel.
C. Step frequency detection: using the resultant acceleration waveform, pedestrian motion detection in multiple motion states is first completed with a single-peak detection method combined with the minimum peak threshold constraint σ1 and the pedestrian motion period interval constraint σ2; single-step left and right zero detection is then performed through the zero search rule, completing single-step frequency detection.
D. Human motion state identification: firstly, in the positioning process, performing feature extraction on the acceleration data preprocessed in the step A according to the effective feature objects screened in the step B and constructing a corresponding feature matrix, then, completing normalization processing of feature extraction in the positioning process by combining the features normalized in the step B, and finally, on the basis of the constructed SVM-OVO classification model, recognizing the motion state of the pedestrian in motion and obtaining a corresponding recognition result.
E. And (3) modifying the motion state identification result: on the basis of the preliminary identification of the motion state, the correction of individual error identification results is realized by an adjacent gait correlation constraint method based on the human motion law.
F. Pedestrian dead reckoning in multiple motion states: first, estimate the step length with a linear or nonlinear step-length model based on the final motion-state recognition result; then compute the heading angle by integrating the angular velocity converted through the quaternion coordinate system, and complete heading-angle correction with the heuristic drift elimination (HDE) algorithm; finally, complete the pedestrian's dead reckoning and positioning in the two-dimensional plane with a dead-reckoning algorithm based on motion-state recognition.
The process of the present invention is further described in detail below with reference to specific examples.
The steps of establishing the motion state classification model are as follows:
A. Setting the motion states of the positioning person: the positioning person completes the acquisition of training data in a two-dimensional indoor scene; the motion modes of the positioning person comprise walking, jogging, left striding, right striding and stepping backward;
B. Collecting training data: the positioning person collects training data with an IMU (Inertial Measurement Unit) attitude sensor bound to or carried at the front of the waist, which outputs raw (unpreprocessed) acceleration sensor data, including the triaxial acceleration data AccTrain_x, AccTrain_y and AccTrain_z. The sampling frequency f_c may be set to 50 Hz.
C. Training data preprocessing: the obtained AccTrain_x, AccTrain_y and AccTrain_z are combined to obtain the resultant acceleration AccTrain,

AccTrain[i] = sqrt(AccTrain_x[i]² + AccTrain_y[i]² + AccTrain_z[i]²),

where AccTrain[i] is the resultant acceleration at time i. Acceleration filtering is performed with a moving-average filter; the filter inputs are AccTrain_x, AccTrain_y, AccTrain_z and AccTrain, and the corresponding filtered outputs are AccTrain'_x, AccTrain'_y, AccTrain'_z and AccTrain'. The filter equation is:

y[i] = (1/J) · Σ_{k=0}^{J−1} x[i−k],
where x[i] and y[i] respectively denote the input and output at time i; M is the size of the input object set; J is the filter window size and can take the value 3.
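The combination and filtering steps above can be sketched as follows; the causal moving-average form is an assumption, since the filter equation is rendered as an image in the original:

```python
import math

def resultant_acceleration(ax, ay, az):
    # AccTrain[i] = sqrt(AccTrain_x[i]^2 + AccTrain_y[i]^2 + AccTrain_z[i]^2)
    return [math.sqrt(x * x + y * y + z * z) for x, y, z in zip(ax, ay, az)]

def moving_average(x, J=3):
    # each output averages the current sample with the J-1 preceding ones
    # (fewer at the start of the sequence, where the window is truncated)
    out = []
    for i in range(len(x)):
        window = x[max(0, i - J + 1):i + 1]
        out.append(sum(window) / len(window))
    return out
```

For example, moving_average([1.0, 2.0, 3.0, 4.0], J=3) yields [1.0, 1.5, 2.0, 3.0].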
D. Training sample segmentation: the filtered AccTrain'_x, AccTrain'_y, AccTrain'_z and AccTrain' are each segmented every n_c sampling points to form the training sample sets Train_x[i], Train_y[i], Train_z[i] and Train[i]. Here i ranges from 1 to n, and n is defined as:

n = ⌊N_train / n_c⌋,

where ⌊·⌋ denotes rounding down, N_train is the total number of training-sample sampling points, and n_c is the size of a single training sample set; in the present invention n_c = 1.5·f_c. In addition, the array TrainLabels represents the true motion states corresponding to the different training sample sets, taking integer values from 1 to 5, where 1, 2, 3, 4 and 5 denote walking, jogging, left striding, right striding and stepping backward, respectively. TrainLabels[i] represents the true motion state of the i-th training sample set.
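The segmentation rule (consecutive windows of n_c samples, n = ⌊N_train/n_c⌋ windows, remainder discarded) can be sketched as:

```python
def segment(seq, n_c):
    # split a filtered sequence into n = floor(len(seq) / n_c) consecutive
    # windows of n_c samples each; the trailing remainder is discarded
    n = len(seq) // n_c
    return [seq[k * n_c:(k + 1) * n_c] for k in range(n)]
```

With f_c = 50 Hz and n_c = 1.5·f_c = 75, a 10 s recording (500 samples) yields ⌊500/75⌋ = 6 windows.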
E. Extracting effective features of the training sample sets: the obtained Train_x[i], Train_y[i], Train_z[i] and Train[i] serve as input. Time-domain features are extracted from Train_x[i], Train_y[i], Train_z[i] and Train[i], giving 52 feature objects in total, numbered F1~F52 in extraction order; then the effective features are screened and the invalid features filtered out to reduce their interference with high-accuracy classification, leaving 34 effective feature objects finally used for motion-state classification. The specific rules of feature extraction and feature screening are as follows:
1) Feature extraction
The objects of feature extraction are the mean, absolute-value mean (absolute values are taken first, then averaged), variance, mode, maximum, minimum, interquartile range, third quartile, cross-correlation coefficient, skewness, kurtosis and mean absolute error, marked F1~F47 in sequence; for each feature, the objects are numbered in the input order Train_x[i], Train_y[i], Train_z[i], Train[i], and the cross-correlation coefficients follow the computation order Train_x with Train_y, Train_y with Train_z, and Train_z with Train_x. Furthermore, the intermediate variable M4 of the Hjorth parameters is introduced as features F48~F51, numbered in the input order Train_x[i], Train_y[i], Train_z[i], Train[i]. A custom feature f(1) is introduced as feature F52, whose only input object is Train_x. M4 and f(1) are computed as:
[formula for M4 — rendered as an image in the original]

[formula for f(1) — rendered as an image in the original]
where d[k] denotes the k-th datum in the input set object d, and N_d is the size of the set d. One can set N_d = n_c; d takes each of Train_x[i], Train_y[i], Train_z[i] and Train[i] in turn, and i ranges from 1 to n.
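A few of the time-domain features named above, sketched for a single sample window; the M4 and f(1) formulas are rendered as images in the original, so they are omitted here:

```python
import statistics

def cross_correlation(a, b):
    # Pearson cross-correlation coefficient of two equal-length windows
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den

def window_features(w):
    # mean, absolute-value mean, variance, max and min of one window;
    # the full model extracts 52 such feature objects per window
    return {
        "mean": statistics.fmean(w),
        "abs_mean": statistics.fmean(abs(v) for v in w),
        "var": statistics.pvariance(w),
        "max": max(w),
        "min": min(w),
    }
```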
2) Feature screening
For each feature object, every motion class has a sequence of feature values indexed by the training-sample-set number; these values are combined into characteristic curves, and plotting them for each extracted feature object yields one characteristic-curve graph per feature, each containing the five curves corresponding to the five motion states. If the curve of some motion state shows no obvious crossing with the curves of the remaining motion states, that feature can serve as an effective feature of that motion state. The numbers of the 34 feature objects retained after screening for motion-state classification are shown in the following table.
Table 1. Effective features

[table rendered as an image in the original]
F. Normalizing the training-sample-set feature matrix: the obtained feature values of the effective feature objects form a feature matrix Q_Train of size n × m, which is normalized to eliminate the feature-weight imbalance caused by feature values of different measurement units differing greatly in magnitude; each row of Q_Train is mapped into the interval [−1, 1]. Here m is the total number of effective feature objects.
G. Establishing the motion state classification model: the normalized Q_Train is input as the features for training the SVM model, and TrainLabels is input as the training sample labels for training the SVM model. The human-motion-state recognition and classification model SVM-OVO is built with the SVM one-versus-one multi-classification model (One-Versus-One, OVO), with a linear kernel function as the SVM kernel.
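The one-versus-one multi-class scheme used by the SVM-OVO model can be illustrated as follows; a nearest-centroid rule stands in for each pairwise SVM, purely to show the OVO voting structure (training a real linear SVM per class pair would be voted over the same way):

```python
from itertools import combinations
from statistics import fmean

def train_ovo(samples, labels):
    # one stand-in "binary classifier" (a centroid pair) per class pair
    classes = sorted(set(labels))
    centroid = {c: [fmean(col) for col in
                    zip(*(s for s, l in zip(samples, labels) if l == c))]
                for c in classes}
    return [(a, b, centroid[a], centroid[b]) for a, b in combinations(classes, 2)]

def predict_ovo(pairs, x):
    # each pairwise classifier casts one vote; the majority vote wins
    votes = {}
    for a, b, ca, cb in pairs:
        da = sum((xi - ci) ** 2 for xi, ci in zip(x, ca))
        db = sum((xi - ci) ** 2 for xi, ci in zip(x, cb))
        winner = a if da <= db else b
        votes[winner] = votes.get(winner, 0) + 1
    return max(votes, key=votes.get)
```

With 5 motion-state classes, C(5,2) = 10 pairwise classifiers vote on each sample.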
After the human motion state classification model is established, the positioning process comprises the following steps:
A. Collecting test data: the positioning person collects test data with an IMU (Inertial Measurement Unit) attitude sensor bound to or carried at the front of the waist, which outputs raw (unpreprocessed) sensor data, including the triaxial acceleration data AccTest_x, AccTest_y, AccTest_z and the triaxial gyroscope data GyrTest_x, GyrTest_y, GyrTest_z. The sampling frequency is f_c and the total number of sampling points is N_Test; f_c should be set consistently with the motion-state recognition model establishment step.
B. Preprocessing test data: the obtained AccTest_x, AccTest_y and AccTest_z are combined to obtain the resultant acceleration AccTest,

AccTest[i] = sqrt(AccTest_x[i]² + AccTest_y[i]² + AccTest_z[i]²),

where AccTest[i] is the resultant acceleration at the i-th sampling point. Acceleration filtering is performed with a moving-average filter; the filter inputs are AccTest_x, AccTest_y, AccTest_z and AccTest, and the corresponding filtered outputs are AccTest'_x, AccTest'_y, AccTest'_z and AccTest'. The filter equation is:

y[i] = (1/J) · Σ_{k=0}^{J−1} x[i−k],
where x[i] and y[i] respectively denote the input and output at time i; M is the size of the input object set; J is the filter window size and takes the value 3.
C. Step frequency detection: AccTest' is input. In normal positioning, each single-step period of the positioning person in any of the 5 motion states contains a rising phase and a falling phase. Drawing on conventional step-frequency detection algorithms, the invention completes single-step motion detection (i.e., gait detection) in the multi-motion state solely by detecting acceleration peaks. For a given AccTest'[i], i ∈ [2, N_Test − 1], if it satisfies

AccTest'[i−1] < AccTest'[i] ≤ AccTest'[i+1],

then AccTest'[i] is marked as a peak. The peak minimum threshold σ1 and the inter-peak sampling-point interval constraint σ2 are set: peaks whose value is less than σ1 are rejected, and when the sampling-point interval between several peaks is less than σ2, only the peak with the largest value is retained and the rest are discarded. The aforementioned detection yields the final number of retained peaks n_peak, which also represents the number of single steps contained in the motion; the corresponding sampling-point index positions are recorded, from smallest to largest, in the n_peak-element array PeakIndex. For a given PeakIndex[i], 1 ≤ i ≤ n_peak, PeakIndex[i] is the peak position of the i-th step. Taking the sampling-point index given by PeakIndex[i] as reference, the first zero point found before AccTest'[PeakIndex[i]] is marked as the left zero point of that step, and the right zero point is the second zero point found after the peak of that step. For a given AccTest'[i], i ∈ [2, N_Test − 1], if it satisfies
AccTest'[i−1] < 0 and AccTest'[i] > 0,

or

AccTest'[i−1] > 0 and AccTest'[i] ≤ 0,

then AccTest'[i] is a zero point. The maximum zero-point search ranges on the left and right, σ3 and σ4, are set. By this method the left zero point Z1[i] and right zero point Z2[i] of each single step are obtained, where i denotes the i-th step, i ∈ [1, n_peak]. By computing
fs[i]=Z2[i]-Z1[i],
fs[i] indicates the step frequency of the i-th step.
The detection results are shown in fig. 3: during positioning, the acceleration of the positioning person varies periodically; each single-step motion corresponds to one gait cycle, and the acceleration within each single step has one peak, so finding a peak indicates a single step. The peak positions found initially include all peaks, among them some invalid peaks caused by acceleration jitter when there is no motion, hence the peak minimum threshold σ1. Because several peaks may be detected within a single step, the sampling-point interval between peaks is constrained by σ2. After discarding the peaks that do not satisfy these conditions, the remaining peaks and their corresponding index positions can be calibrated as single steps. Finally, the step frequency is computed by searching for the zero points before and after each peak position.
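The gait-detection and zero-search rules above can be sketched as follows; where the text is ambiguous, assumptions are made: the peak condition is taken as a local maximum, and the first zero crossing on each side of the peak is returned:

```python
def detect_peaks(acc, sigma1, sigma2):
    # local maxima above the minimum peak threshold sigma1; among peaks
    # closer than sigma2 samples, only the largest is kept (-> PeakIndex)
    candidates = [i for i in range(1, len(acc) - 1)
                  if acc[i - 1] < acc[i] >= acc[i + 1] and acc[i] >= sigma1]
    kept = []
    for i in candidates:
        if kept and i - kept[-1] < sigma2:
            if acc[i] > acc[kept[-1]]:
                kept[-1] = i
        else:
            kept.append(i)
    return kept  # n_peak = len(kept), the number of single steps

def zeros_around_peak(acc, p, sigma3, sigma4):
    # left zero Z1: first zero crossing searching backward from peak p
    # (within sigma3 samples); right zero Z2: searching forward (within
    # sigma4); the single-step frequency is then fs = Z2 - Z1
    def is_zero(i):
        return acc[i - 1] < 0 <= acc[i] or acc[i - 1] > 0 >= acc[i]
    left = next((i for i in range(p, max(0, p - sigma3), -1) if is_zero(i)), None)
    right = next((i for i in range(p + 1, min(len(acc), p + sigma4)) if is_zero(i)), None)
    return left, right
```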
D. Test sample segmentation: the filtered AccTest'_x, AccTest'_y, AccTest'_z and AccTest' are each segmented every n_c sampling points to form the test sample sets Test_x[i], Test_y[i], Test_z[i] and Test[i]. Here i ranges from 1 to n, and n is defined as:

n = ⌊N_Test / n_c⌋,

where ⌊·⌋ denotes rounding down. In addition, the array TestLabels represents the motion states corresponding to the different test sample sets, taking integer values from 1 to 5, where 1, 2, 3, 4 and 5 denote walking, jogging, left striding, right striding and stepping backward, respectively. TestLabels[i] represents the motion state of the i-th test sample set. The assignment of TestLabels is completed in step G.
E. Extracting test-sample features: Test_x[i], Test_y[i], Test_z[i] and Test[i] are input; the feature values of the effective feature objects obtained during establishment of the human-motion-state model are computed and arranged in order into a feature matrix Q_Test of size n × m, where m is the total number of effective feature objects obtained in the motion-state model establishment step.
F. Test-sample feature normalization: Q_Test is input, and each row of Q_Test is normalized into the interval [−1, 1]. The preprocessing of the test samples should be consistent with that of the training samples, i.e., the maximum and minimum values used should be those of the training set.
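Consistent scaling of the test features with the training set can be sketched as min-max mapping into [−1, 1] using the minima and maxima of Q_Train (per-feature, i.e. per-column, scaling is assumed here):

```python
def fit_minmax(train_rows):
    # per-feature (per-column) minima and maxima learned from Q_Train
    cols = list(zip(*train_rows))
    return [min(c) for c in cols], [max(c) for c in cols]

def apply_minmax(rows, lo, hi):
    # map each feature into [-1, 1] using the TRAINING min/max, so the
    # test feature matrix is scaled consistently with the training one
    return [[-1.0 + 2.0 * (v - l) / (h - l) if h > l else 0.0
             for v, l, h in zip(r, lo, hi)]
            for r in rows]
```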
G. Human motion state recognition: the normalized Q_Test is input. On the basis of the established SVM-OVO classification model, the pedestrian's motion state is recognized and the corresponding recognition result is output; that is, the recognition process completes the assignment of TestLabels.
H. Correcting partially erroneous motion-state recognition results: the assigned TestLabels is input, and individual erroneous recognition results in TestLabels are corrected by an adjacent-gait correlation constraint method based on the laws of human motion. Taking the current index t of the TestLabels array as reference, denote the label values at indices t−3 : t of TestLabels as W1, W2, W3 and W4; the value to be corrected is W3. Fig. 7 shows the conditions to be corrected, in which state 1, state 2 and state 3 represent three different motion states. The label value of W3 in TestLabels is corrected to W2 under the following conditions:

① W2 ≠ W3 and W2 ≠ W4 and W3 ≠ W4 and W1 = W2;

② condition ① is not satisfied, and W2 = W4 and W2 ≠ W3 and W3 ≠ W4;

③ in all other cases, no correction is made.

Here t takes values from 3 to n. In particular, when t = 3, W1 is not assigned and condition ① is skipped. The whole correction process is executed in the order of the values of t.
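The correction rules above can be sketched as a single pass over the label array (window indexing follows the t−3 : t description, with t running from 3 to n):

```python
def correct_labels(test_labels):
    # W1..W4 = TestLabels[t-3 .. t] (1-based t); W3 is corrected to W2
    # under condition (1) or (2); at t = 3 there is no W1 and condition
    # (1) is skipped, as stated in the text
    lab = list(test_labels)
    for t in range(3, len(lab) + 1):
        w2, w3, w4 = lab[t - 3], lab[t - 2], lab[t - 1]
        w1 = lab[t - 4] if t > 3 else None
        cond1 = (w1 is not None and w2 != w3 and w2 != w4
                 and w3 != w4 and w1 == w2)
        cond2 = (not cond1) and w2 == w4 and w2 != w3 and w3 != w4
        if cond1 or cond2:
            lab[t - 2] = w2
    return lab
```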
I. Step-length estimation: the preprocessed AccTest', the single-step frequency f_s and the corrected TestLabels are input, and on this basis the step length is estimated with a linear or nonlinear step-length model. For the low-speed motion states of walking, left striding, right striding and stepping backward, a linear step-length model determined by step frequency and acceleration variance is adopted, given by:
L[k]=A+B·fs[k]+C·var(a)
For the jogging motion state, the Weinberg nonlinear step-length model, determined only by the acceleration extrema, is adopted, as follows:

L[k] = K · (a_max − a_min)^(1/4)

In the above formulas, L[k] denotes the step length of the k-th step, f_s[k] is the step frequency of the k-th step, var(a) denotes the single-step acceleration variance, A, B and C are constants obtained by training, K is a constant, and a_max and a_min denote the maximum and minimum of the single-step acceleration.
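The two models can be combined in one sketch, using the trained constants reported in the experiment section and the state codes introduced above (2 = jogging):

```python
from statistics import pvariance

def step_length(state, fs_k, acc_window,
                A=0.1616, B=0.2370, C=0.0139, K=0.5885):
    if state == 2:
        # jogging -> Weinberg model: L = K * (a_max - a_min)^(1/4)
        return K * (max(acc_window) - min(acc_window)) ** 0.25
    # walking / left stride / right stride / backward -> linear model:
    # L = A + B * fs + C * var(a)
    return A + B * fs_k + C * pvariance(acc_window)
```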
J. Heading estimation: GyrTest_x, GyrTest_y and GyrTest_z are input. The heading angle ψ is computed by integrating the angular velocity converted through the quaternion coordinate system, and heading-angle correction is completed with the Heuristic Drift Elimination (HDE) algorithm. The quaternion is solved with the fourth-order Runge–Kutta method and is expressed as:

Q = q0 + q1·i + q2·j + q3·k,

where Q denotes the quaternion, q0, q1, q2 and q3 are real numbers, and i, j, k are the imaginary units. After introducing the quaternion, the rotation matrix C_b^n from the b-frame to the n-frame expressed by the quaternion can be written as:

C_b^n = | q0²+q1²−q2²−q3²  2(q1·q2−q0·q3)   2(q1·q3+q0·q2)  |
        | 2(q1·q2+q0·q3)   q0²−q1²+q2²−q3²  2(q2·q3−q0·q1)  |
        | 2(q1·q3−q0·q2)   2(q2·q3+q0·q1)   q0²−q1²−q2²+q3² |

With the rotation matrix, the gyroscope data conversion is completed:

[ω_x^n  ω_y^n  ω_z^n]^T = C_b^n · [ω_x  ω_y  ω_z]^T,

where ω_x, ω_y and ω_z denote the x-, y- and z-axis angular velocities acquired by the gyroscope in the b-frame, i.e., GyrTest_x, GyrTest_y and GyrTest_z, and ω_x^n, ω_y^n and ω_z^n are the corresponding angular velocities converted to the n-frame. The heading angle ψ is then obtained by integrating ω_z^n.
Heuristic heading compensation based on the turning angular velocity is completed with the HDE algorithm. Based on the HDE algorithm, the invention divides the pedestrian positioning process into 8 principal directions, i.e., the principal-direction interval Δ is 45°, and the system compensation factor i_c is set to 0.01.
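The frame conversion at the heart of the heading estimate can be sketched as follows (quaternion propagation by fourth-order Runge–Kutta and the HDE correction are omitted; this only shows the standard quaternion-to-rotation-matrix conversion and one integration step):

```python
def quat_to_dcm(q0, q1, q2, q3):
    # rotation matrix C_b^n from the b-frame to the n-frame, expressed
    # by the (unit) quaternion Q = q0 + q1*i + q2*j + q3*k
    return [
        [q0*q0 + q1*q1 - q2*q2 - q3*q3, 2*(q1*q2 - q0*q3), 2*(q1*q3 + q0*q2)],
        [2*(q1*q2 + q0*q3), q0*q0 - q1*q1 + q2*q2 - q3*q3, 2*(q2*q3 - q0*q1)],
        [2*(q1*q3 - q0*q2), 2*(q2*q3 + q0*q1), q0*q0 - q1*q1 - q2*q2 + q3*q3],
    ]

def heading_increment(q, w_body, dt):
    # rotate the body-frame gyroscope rates into the n-frame and integrate
    # the z component over one sample, giving the heading-angle increment
    C = quat_to_dcm(*q)
    wz_n = sum(C[2][j] * w_body[j] for j in range(3))
    return wz_n * dt
```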
K. Dead reckoning: based on the motion-state recognition result, the single-step-length estimation result and the corrected heading angle, the pedestrian's position update in the multi-motion state is completed, realizing positioning and navigation. As shown in fig. 8, the pedestrian dead-reckoning position update formula in the multi-motion state defined by the invention is:

X[k] = X[k−1] − q_k · L[k] · cos(θ_k + p_k·π/2)
Y[k] = Y[k−1] − q_k · L[k] · sin(θ_k + p_k·π/2)

where X[k] and Y[k] denote the position of the pedestrian's k-th step on the two-dimensional navigation plane, θ_k denotes the heading of the k-th step, L[k] denotes the step length of the k-th step, and q_k and p_k take different values according to the motion state: if the step is in the backward state, q_k takes 1, otherwise −1; if the step state is a left stride, p_k takes 1, if a right stride, p_k takes −1, otherwise p_k takes 0.
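One position update under the state codes above (1 = walking … 5 = backward) can be sketched as follows; the sign convention is an assumption, since the update equation is rendered as an image in the original:

```python
import math

def pdr_update(x, y, step_len, theta, state):
    # q_k = 1 in the backward state (direction reversed), otherwise -1;
    # p_k = +1 / -1 offsets the heading by +/-90 degrees for left / right
    # strides, and 0 otherwise
    q_k = 1 if state == 5 else -1
    p_k = {3: 1, 4: -1}.get(state, 0)
    heading = theta + p_k * math.pi / 2
    return (x - q_k * step_len * math.cos(heading),
            y - q_k * step_len * math.sin(heading))
```

A forward walking step of 1 m at heading 0 advances the position along +x; the same step in the backward state moves it along −x, and a left stride moves it along +y.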
The experiments using the method of the invention were as follows:
Experimental site: the rectangular square in front of the library of Southwest Jiaotong University.
Experimental equipment: a smartphone is adopted as the positioning device. With reference to the waist-bound method of fixing the positioning person's equipment, the positioning person completes data acquisition in the positioning process with a handheld smartphone, as in fig. 9. The data sampling rate of the smartphone's inertial devices is set to 50 Hz, and the sensors used for positioning comprise an accelerometer and a gyroscope; the moving-average filter window size N used in data filtering is set to 3; the minimum peak threshold constraint parameter and the pedestrian motion-period interval constraint parameter of gait detection are set, the former to 2 m/s²; the maximum search ranges of the left and right zero points in step-frequency detection are 12 and 25 sampling points, respectively; the specific gait-detection process is shown in fig. 2, the gait-detection effect in fig. 4, and the step-frequency detection result in fig. 3; the step-length-estimation model parameters are obtained by offline training of different positioning persons, and in this experiment the specific parameters A, B, C and K take the values 0.1616, 0.2370, 0.0139 and 0.5885, respectively; the principal-direction angular interval Δ of the heuristic drift elimination (HDE) algorithm is 30°, and the feedback coefficient is set to 0.01.
Experimental contents: after wearing the positioning device, the experimenter holds the phone and randomly changes walking posture during the experiment; the main motion modes are walking and jogging. The experiment site is the rectangular square facing the library of the Xipu campus of Southwest Jiaotong University, on which a rectangular-motion experiment is carried out; the total length of the reference path is about 262.12 m. Following the system flow of fig. 1, the specific motion-state recognition algorithm model is shown in fig. 6, the feature-value diagram of the model-training module in fig. 5, the gait-constraint-based correction conditions for erroneous human-motion-state recognition results in fig. 7, the design of the PDR-based multi-motion-state dead-reckoning method in fig. 8, and the reckoning result in fig. 10.
Experimental purpose: to simulate unpredictable motion-state switching of a pedestrian during positioning in a common, regular indoor positioning scenario, so as to recognize the pedestrian's motion state, complete adaptive pedestrian dead reckoning, and examine the positioning performance of autonomous inertial navigation under complex motion states.
Experimental results: as shown by the positioning effect in fig. 10, the reckoning result of the pedestrian dead-reckoning algorithm based on multi-motion-state recognition designed by the invention is essentially consistent with the actual route, and the pedestrian's motion state can be recognized accurately. In repeated comparison experiments under the same conditions, the average recognition accuracy reaches 99.37% and the average positioning error 1.28%; the adaptability and reliability of the algorithm are superior to those of existing pedestrian dead-reckoning algorithms for complex motion states.
Compared with existing indoor positioning technologies, the invention has the following obvious advantages:

First, the invention requires no advance planning or deployment, places low demands on equipment, and offers high flexibility in algorithm application; pedestrian autonomous navigation in common scenarios can be realized directly with a smartphone, which is of important reference value for research on cheap, practical pedestrian inertial navigation products.

Second, the invention fully considers the different motion modes and laws of the human body, divides pedestrian motion in two-dimensional space into the 5 states of walking, jogging, left striding, right striding and stepping backward, and adopts a step-frequency detection method suitable for all of these motion states.

Third, the invention fully considers the motion state of the positioning person during navigation, recognizes the motion state in the positioning process, and corrects the recognition result using the laws of human motion, realizing high-accuracy pedestrian motion-state recognition.

Fourth, step-length estimation in the pedestrian's multiple motion states is completed with a combined linear and nonlinear step-length estimation model, adapting more accurately to the pedestrian's different motion-state modes.

Fifth, the invention improves the traditional PDR model using the heading changes caused by the different motion modes, realizing accurate pedestrian dead reckoning in the multi-motion state.

Claims (6)

1.一种基于行人运动状态识别的航迹推算定位方法,其特征在于,包括:1. a dead track reckoning positioning method based on pedestrian motion state identification, is characterized in that, comprises: 步骤一,构建行人运动状态识别分类模型:Step 1, build a pedestrian motion state recognition and classification model: 采集行人在五类运动状态下的训练数据,构建行人运动状态识别分类模型;所述五类运动状态为行走、慢跑、左跨步、右跨步和倒退;所述训练数据包括三轴加速度数据;Collect the training data of pedestrians in five types of motion states, and build a pedestrian motion state recognition and classification model; the five types of motion states are walking, jogging, left stride, right stride and backwards; the training data includes three-axis acceleration data ; 步骤二,识别行人运动状态:Step 2: Identify the motion state of pedestrians: 采集行人的测试数据,使用行人运动状态识别分类模型,识别行人运动状态;所述测试数据包括三轴加速度数据和三轴陀螺仪数据;Collect the test data of pedestrians, use the pedestrian motion state identification and classification model to identify the pedestrian motion state; the test data includes three-axis acceleration data and three-axis gyroscope data; 步骤三,进行步频检测,得到单步步频:Step 3: Perform step frequency detection to obtain a single-step step frequency: 将步骤二采集得到的三轴加速度数据进行步频检测,得到单步步频;Perform cadence detection on the triaxial acceleration data collected in step 2 to obtain a single-step cadence; 步骤四,步长估计:Step 4, step size estimation: 如步骤二识别得到的行人运动状态为行走、左跨步、右跨步或倒退,则采用线性步长模型结合单步步频估计单步步长;如步骤二识别得到的行人运动状态为慢跑,则采用Weinberg非线性步长模型结合单步步频估计单步步长;If the pedestrian motion state identified in step 2 is walking, left stride, right stride or backwards, the linear step size model combined with the single-step stride frequency is used to estimate the single-step stride length; if the pedestrian motion state identified in step 2 is jogging , the Weinberg nonlinear step size model combined with the single-step step frequency is used to estimate the single-step step size; 步骤五,航向估计:Step 5, heading estimation: 将步骤二采集得到的三轴陀螺仪数据,采用基于四元数坐标系转换的角速度进行积分解算出航向角,并采用启发式偏移消除算法完成航向角修正;The three-axis gyroscope data collected in step 2 is used to integrate the angular velocity based 
on the transformation of the quaternion coordinate system to calculate the heading angle, and the heuristic offset elimination algorithm is used to complete the heading angle correction; 步骤六,航迹推算:Step 6, dead reckoning: 设置行人初始位置坐标和初始航向角,根据步骤二得到的行人运动状态、步骤四得到的单步步长和步骤五修正后的航向角,进行行人在五类运动状态下的位置更新。Set the pedestrian's initial position coordinates and initial heading angle, and update the pedestrian's position in five types of motion states according to the pedestrian motion state obtained in step 2, the single-step step length obtained in step 4, and the heading angle corrected in step 5. 2.如权利要求1所述的一种基于行人运动状态识别的航迹推算定位方法,其特征在于,所述步骤一,构建行人运动状态识别分类模型,其方法为:2. a kind of dead reckoning positioning method based on pedestrian motion state identification as claimed in claim 1, is characterized in that, described step 1, constructs pedestrian motion state recognition classification model, and its method is: 步骤1-1:采集训练样本,进行预处理:采集行人在五类运动状态下的三轴加速度数据AccTrainx、AccTrainy和AccTrainz,并合并得到合加速度数据AccTrain;分别进行加速度滤波处理,得到AccTrain′x、AccTrain′y、AccTrain′z和AccTrain′;Step 1-1: Collect training samples and perform preprocessing: collect the three-axis acceleration data AccTrain x , AccTrain y and AccTrain z of pedestrians in five types of motion states, and combine them to obtain the resultant acceleration data AccTrain; respectively perform acceleration filtering to obtain AccTrain' x , AccTrain' y , AccTrain' z and AccTrain'; 步骤1-2:训练样本分割:将五类运动状态下的AccTrain′x、AccTrain′y、AccTrain′z和AccTrain′分别进行分割后构成训练样本集Trainx[i]、Trainy[i]、Trainz[i]和Train[i];其中,i=1~n为训练样本集的编号,
Figure FDA0003064037890000011
为训练样本集数量,Ntrain为训练样本采样点的总数,nc为每个训练样本集的采样点数,[·]表示向下取整;使用训练样本集的标签TrainLabels表征训练样本集的运动状态的类别;
Step 1-2: Training sample segmentation: AccTrain′ x , AccTrain′ y , AccTrain′ z and AccTrain′ under five types of motion states are divided into training sample sets Train x [i], Train y [i], Train z [i] and Train[i]; where i=1~n is the number of the training sample set,
Figure FDA0003064037890000011
is the number of training sample sets, N train is the total number of training sample sampling points, n c is the number of sampling points in each training sample set, [ ] means rounded down; use the label TrainLabels of the training sample set to represent the movement of the training sample set the category of the state;
步骤1-3:计算训练样本集的时域特征值;Step 1-3: Calculate the time-domain eigenvalues of the training sample set; 第i个训练样本集的时域特征值为:The temporal eigenvalues of the i-th training sample set are: 分别以Trainx[i]、Trainy[i]、Trainz[i]和Train[i]的均值作为时域特征值F1i、F2i、F3i和F4i,分别以Trainx[i]、Trainy[i]、Trainz[i]和Train[i]的绝对值均值作为时域特征值F5i、F6i、F7i和F8i;以此类推,分别以Trainx[i]、Trainy[i]、Trainz[i]和Train[i]的方差、众数、最大值、最小值、四分位距和第三四分位数作为时域特征值F9i~F32i,分别以Trainx[i]、Trainy[i]、Trainz[i]和Train[i]的偏度、峰度和平均绝对误差作为时域特征值F36i~F47i;分别以Trainx[i]与Trainy[i]的互相关系数、Trainy[i]与Trainz[i]的互相关系数和Trainz[i]与Trainx[i]的互相关系数作为时域特征值F33i、F34i和F35iTake the mean of Train x [i], Train y [i], Train z [i] and Train[i] as time domain eigenvalues F 1i , F 2i , F 3i and F 4i , respectively, take Train x [i] , Train y [i], Train z [i] and Train[i] absolute value mean as time domain eigenvalues F 5i , F 6i , F 7i and F 8i ; and so on, take Train x [i], The variance, mode, maximum value, minimum value, interquartile range and third quartile of Train y [i], Train z [i] and Train[i] are taken as time domain eigenvalues F 9i ~F 32i , The skewness, kurtosis and mean absolute error of Train x [i], Train y [i], Train z [i] and Train[ i ] are taken as time domain eigenvalues F 36i ~F 47i respectively; The cross-correlation coefficient of i] and Train y [i], the cross-correlation coefficient of Train y [i] and Train z [i], and the cross-correlation coefficient of Train z [i] and Train x [i] are taken as time domain eigenvalues F 33i , F 34i and F 35i ; 分别以Trainx[i]、Trainy[i]、Trainz[i]和Train[i]引入Hjorth参数的中间变量M4作为时域特征值F48i~F51i
Figure FDA0003064037890000021
以Trainx[i]作为输入集合对象得到f(1)作为时域特征值F52i
Figure FDA0003064037890000022
其中,d[k]表示输入集合对象d中的第k个数据,Nd为输入集合对象d的大小,即Nd=nc
Take Train x [i], Train y [i], Train z [i] and Train[i] to introduce the intermediate variable M 4 of the Hjorth parameter as the time domain eigenvalues F 48i ~F 51i ,
Figure FDA0003064037890000021
Take Train x [i] as the input set object to obtain f(1) as the time domain eigenvalue F 52i ,
Figure FDA0003064037890000022
Wherein, d[k] represents the kth data in the input set object d, and N d is the size of the input set object d, that is, N d =n c ;
步骤1-4:筛选五类运动状态的有效特征:Steps 1-4: Screen the valid features of five categories of motion states: 处理时域特征F1,所述时域特征F1为时域特征值F1i的标记:根据TrainLabels所表征的训练样本集的运动状态,从n个训练样本集的n个时域特征值F1i中分别提取五类运动状态的时域特征值;将五类运动状态对应的时域特征值分别组合并重新依次排序后,绘制成五类运动状态所对应的五条特征曲线的特征曲线图;所述特征曲线图中,横轴/纵轴为时域特征值重新依次排序后的序号,纵轴/横轴为时域特征值;如五条特征曲线中,存在一条以上不与其它特征曲线交叉的特征曲线,则时域特征F1筛选为不与其它特征曲线交叉的那一条以上特征曲线所对应的运动状态的有效特征;否则,时域特征F1不筛选为五类运动状态的有效特征;Process the time domain feature F 1 , which is the label of the time domain feature value F 1i : according to the motion state of the training sample set represented by TrainLabels, from the n time domain feature values F of the n training sample sets In 1i , the time-domain eigenvalues of the five types of motion states are extracted respectively; the time-domain eigenvalues corresponding to the five types of motion states are respectively combined and reordered, and then drawn into a characteristic curve diagram of five characteristic curves corresponding to the five types of motion states; In the characteristic curve diagram, the horizontal axis/vertical axis is the sequence number of the time domain characteristic values after reordering, and the vertical axis/horizontal axis is the time domain characteristic value; for example, among the five characteristic curves, there is more than one characteristic curve that does not intersect with other characteristic curves. 
characteristic curve, then the time domain feature F 1 is screened as the valid feature of the motion state corresponding to one or more characteristic curves that do not intersect with other feature curves; otherwise, the time domain feature F 1 is not screened as the valid feature of the five types of motion states ; 按照类同的方法处理时域特征F2~F52,筛选五类运动状态的有效特征;The time domain features F 2 to F 52 are processed according to the similar method, and the effective features of five types of motion states are screened; 步骤1-5:建立有效特征矩阵QTrain,所述有效特征矩阵QTrain的大小为n×m;其中,第一行的值是第一个训练样本集根据步骤1-4筛选的五类运动状态的有效特征所对应的m个时域特征值,以此类推;Step 1-5: establish an effective feature matrix Q Train , and the size of the effective feature matrix Q Train is n×m; wherein, the value of the first row is the five types of sports screened by the first training sample set according to steps 1-4 m time-domain eigenvalues corresponding to the valid features of the state, and so on; 步骤1-6:建立行人运动状态识别分类模型:将归一化处理后的QTrain作为SVM模型训练所用特征,将TrainLabels作为SVM模型训练所用训练样本标签,利用SVM一对一多分类器(OVO)建立行人运动状态识别分类模型SVM-OVO;其中,SVM的核函数为线性核函数。Step 1-6: Establish a pedestrian motion state recognition and classification model: use the normalized Q Train as the feature used for SVM model training, use TrainLabels as the training sample label for SVM model training, and use the SVM one-to-one multi-classifier (OVO). ) to establish a pedestrian motion state recognition and classification model SVM-OVO; wherein, the kernel function of SVM is a linear kernel function.
3. The dead reckoning positioning method based on pedestrian motion state recognition according to claim 2, wherein said step two, recognizing the pedestrian motion state, comprises:

Step 2-1: Collect test samples and preprocess them: collect the pedestrian's triaxial acceleration data AccTestx, AccTesty and AccTestz, and combine them to obtain the resultant acceleration AccTest; apply acceleration filtering to each of them to obtain AccTest′x, AccTest′y, AccTest′z and AccTest′; the sampling frequency of the test samples equals that of the training samples;

Step 2-2: Segment the test samples: divide AccTest′x, AccTest′y, AccTest′z and AccTest′ respectively to form the test sample sets Testx[i′], Testy[i′], Testz[i′] and Test[i′], where i′ = 1~n′ is the index of a test sample set, n′ = ⌊NTest/nc⌋ is the number of test sample sets, NTest is the total number of test sample sampling points, nc is the number of sampling points in each training sample set, and ⌊·⌋ denotes rounding down; use the labels TestLabels of the test sample sets to represent the category of the motion state of each test sample set;

Step 2-3: Compute the valid feature values of the test sample sets: for the i′-th test sample set, select from its time-domain features the valid features of the five categories of motion states of the pedestrian motion state recognition and classification model, and compute them by the same method used to compute the time-domain feature values of the training sample sets;

Step 2-4: Establish the valid feature matrix QTest of the test sample sets, of size n′×m, where the first row contains the m valid feature values computed in Step 2-3 for the first test sample set, and so on;

Step 2-5: Recognize the pedestrian motion state: input the normalized QTest into the pedestrian motion state recognition and classification model SVM-OVO, assign values to the labels TestLabels of the test sample sets, and obtain the motion-state category of each test sample set.
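The segmentation rule of Step 2-2 (n′ = ⌊NTest/nc⌋ fixed-length, non-overlapping windows) can be sketched as follows; the window length and the synthetic signal are illustrative placeholders.

```python
# Hedged sketch of Step 2-2: cutting a filtered acceleration signal into
# n' = floor(N_Test / n_c) consecutive, non-overlapping sample sets.
import numpy as np

def segment(signal: np.ndarray, n_c: int) -> list:
    """Return the n' = floor(len(signal) / n_c) fixed-length segments."""
    n_prime = len(signal) // n_c          # [.] : round down
    return [signal[i * n_c:(i + 1) * n_c] for i in range(n_prime)]

acc_test = np.arange(1050, dtype=float)   # stand-in for AccTest' (N_Test = 1050)
sets = segment(acc_test, n_c=100)
print(len(sets), sets[0].shape)           # 10 segments of 100 samples each
```

The trailing 50 samples that do not fill a complete window are simply dropped, which matches the floor in n′ = ⌊NTest/nc⌋.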
4. The dead reckoning positioning method based on pedestrian motion state recognition according to claim 3, wherein said step two further comprises correcting the motion-state categories of the test sample sets, namely:

Step 2-6: Let the values of the labels TestLabels of the first, second, third and fourth test sample sets equal W1, W2, W3 and W4, respectively;

Step 2-7: If W2≠W3 && W2≠W4 && W3≠W4 && W1=W2 is satisfied, or if it is not satisfied but W2=W4 && W2≠W3 && W3≠W4 is satisfied, then correct the value of the label TestLabels corresponding to W3 to the value of the label TestLabels corresponding to W2; otherwise, make no correction;

Step 2-8: Return to Step 2-6 and replace it with: let the values of the labels TestLabels of the second, third, fourth and fifth test sample sets equal W1, W2, W3 and W4, respectively; then perform Step 2-7;

Step 2-9: Continue in the same manner as Step 2-8 to complete the correction of the values of the labels TestLabels of the test sample sets.

5. The dead reckoning positioning method based on pedestrian motion state recognition according to claim 1, wherein said step three, performing cadence detection to obtain the single-step cadence, comprises:

Step 3-1: Collect test samples and preprocess them: collect the pedestrian's triaxial acceleration data AccTestx, AccTesty and AccTestz, and combine them to obtain the resultant acceleration AccTest; apply acceleration filtering to the resultant acceleration AccTest to obtain AccTest′;

Step 3-2: Peak detection: if AccTest′[j−1] < AccTest′[j] and AccTest′[j] ≥ AccTest′[j+1], mark AccTest′[j] as a peak; AccTest′[j] is the filtered resultant acceleration at the j-th sample point, j∈[2, NTest−1], where NTest is the total number of test sample sampling points; set a minimum peak threshold σ1 and an inter-peak sampling-point interval constraint σ2; discard peaks whose value is below σ1, and when the sampling-point interval between multiple peaks is smaller than σ2, keep only the peak with the largest value and discard the rest, obtaining npeak peaks; record the indices of AccTest′ corresponding to the detected peaks in the array PeakIndex, where PeakIndex[l] is the index of AccTest′ corresponding to the l-th peak, 1 ≤ l ≤ npeak;

Step 3-3: Search for the left zero and the right zero of each peak: starting from the value AccTest′[PeakIndex[l]] corresponding to the l-th peak PeakIndex[l], mark the first zero found searching forward as the left zero Z1[l] of PeakIndex[l], and the second zero found searching backward as the right zero Z2[l] of PeakIndex[l];

The search range for the left zero is limited as follows: if no left zero has been found by index position PeakIndex[l]−σ3, stop searching and let AccTest′[PeakIndex[l]−σ3] be the left zero Z1[l], where σ3 is a threshold;

The search range for the right zero is limited as follows: if no right zero has been found by index position PeakIndex[l]+σ4, stop searching and let AccTest′[PeakIndex[l]+σ4] be the right zero Z2[l], where σ4 is a threshold;

A zero is defined as: if AccTest′[j−1]<0 and AccTest′[j]≥0, or AccTest′[j−1]>0 and AccTest′[j]≤0, then AccTest′[j] is a zero;

Step 3-4: Compute the cadence of each single step: the l-th peak corresponds to the l-th step, and the cadence of the l-th step is fs[l] = Z2[l] − Z1[l].
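The correction rule of Steps 2-6 to 2-9 is, in effect, a sliding four-label window that repairs an isolated outlier label W3. A minimal sketch (function and variable names are illustrative, not from the patent):

```python
# Hedged sketch of Steps 2-6 to 2-9: slide a window of four consecutive
# labels (W1..W4) over TestLabels and overwrite an isolated W3 with W2
# whenever one of the two trigger conditions of Step 2-7 holds.
def correct_labels(labels: list) -> list:
    labels = list(labels)                  # work on a copy
    for i in range(len(labels) - 3):       # Steps 2-6/2-8/2-9: slide the window
        w1, w2, w3, w4 = labels[i:i + 4]
        cond_a = w2 != w3 and w2 != w4 and w3 != w4 and w1 == w2
        cond_b = w2 == w4 and w2 != w3 and w3 != w4
        if cond_a or (not cond_a and cond_b):   # Step 2-7
            labels[i + 2] = w2             # correct W3 to W2
    return labels

print(correct_labels([1, 1, 2, 1, 1]))     # → [1, 1, 1, 1, 1]
```

In the example the isolated label 2, surrounded by motion state 1 on both sides, is rewritten to 1; sequences with no isolated outlier pass through unchanged.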
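Steps 3-2 to 3-4 can be sketched as follows. The thresholds σ1..σ4 and the synthetic 2 Hz "gait" signal are illustrative choices, and the peak test is written as a local maximum (AccTest′[j−1] < AccTest′[j] ≥ AccTest′[j+1]).

```python
# Hedged sketch of Steps 3-2 to 3-4: threshold-and-interval peak detection
# on the filtered resultant acceleration, zero-crossing search around each
# peak, and per-step cadence f_s[l] = Z2[l] - Z1[l] (in samples).
import numpy as np

def step_cadences(acc, sigma1=0.5, sigma2=20, sigma3=30, sigma4=30):
    # Step 3-2: candidate peaks = local maxima above the sigma1 threshold
    peaks = [j for j in range(1, len(acc) - 1)
             if acc[j - 1] < acc[j] >= acc[j + 1] and acc[j] >= sigma1]
    kept = []                              # enforce the sigma2 interval:
    for j in peaks:                        # keep the largest peak in a cluster
        if kept and j - kept[-1] < sigma2:
            if acc[j] > acc[kept[-1]]:
                kept[-1] = j
        else:
            kept.append(j)

    def zero_left(j):                      # Step 3-3: first zero found forward
        for k in range(j, max(j - sigma3, 0), -1):
            if acc[k - 1] < 0 <= acc[k] or acc[k - 1] > 0 >= acc[k]:
                return k
        return max(j - sigma3, 0)          # sigma3 fallback

    def zero_right(j):                     # second zero found backward
        zeros = [k for k in range(j + 1, min(j + sigma4, len(acc) - 1) + 1)
                 if acc[k - 1] < 0 <= acc[k] or acc[k - 1] > 0 >= acc[k]]
        return zeros[1] if len(zeros) > 1 else min(j + sigma4, len(acc) - 1)

    # Step 3-4: cadence of the l-th step
    return [zero_right(j) - zero_left(j) for j in kept]

t = np.arange(0, 4, 0.02)                  # 50 Hz sampling, 4 s of signal
acc = np.sin(2 * np.pi * 2 * t)            # ~2 Hz stand-in for AccTest'
print(step_cadences(acc))
```

For this 2 Hz signal at 50 Hz, each detected step spans about 25 samples, so every reported cadence is close to one gait period.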
6. The dead reckoning positioning method based on pedestrian motion state recognition according to claim 1, wherein said step six, dead reckoning, comprises: after setting the pedestrian's initial position coordinates and initial heading angle, updating the pedestrian's position in the five categories of motion states according to the following formula,

[Position update formula: Figure FDA0003064037890000051]

where X[k] and Y[k] denote the pedestrian's position in the two-dimensional navigation plane at the k-th step, θk denotes the heading angle of the k-th step, and L[k] denotes the step length of the k-th step; if the motion state of the k-th step is walking backward, qk takes 1, otherwise qk takes −1; if the motion state of the k-th step is a left sidestep or a right sidestep, pk takes 1 or −1 respectively, otherwise pk takes 0.
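The patent's exact update equations are contained in an image (Figure FDA0003064037890000051) not reproduced here. The sketch below is therefore only one plausible PDR update consistent with the stated roles of qk (backward flag) and pk (sidestep sign): sidesteps offset the heading by pk·90° and backward motion flips the displacement sign via −qk. The sign conventions are assumptions, not the claimed formula.

```python
# Hedged sketch of a dead-reckoning position update consistent with the
# parameter descriptions in claim 6. The actual patented formula is in an
# image and may differ; the +/- conventions here are assumptions.
import math

def pdr_update(x, y, theta, step_len, q, p):
    """One PDR step.

    q = 1 for backward motion, -1 otherwise (so -q is +1 when moving forward);
    p = +1 / -1 for a left / right sidestep, 0 otherwise (heading offset p*90 deg).
    """
    heading = theta + p * math.pi / 2      # sidestep walks perpendicular to theta
    x += -q * step_len * math.cos(heading)
    y += -q * step_len * math.sin(heading)
    return x, y

x, y = 0.0, 0.0
x, y = pdr_update(x, y, theta=0.0, step_len=0.7, q=-1, p=0)   # forward step
x, y = pdr_update(x, y, theta=0.0, step_len=0.7, q=1, p=0)    # backward step
print(round(x, 3), round(y, 3))            # back at the origin: 0.0 0.0
```

A forward step followed by an equal backward step at the same heading cancels out, which is the behaviour the qk flag is described as producing.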
CN202110521228.5A 2021-05-13 2021-05-13 Dead reckoning positioning method based on pedestrian motion state recognition Pending CN113239803A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110521228.5A CN113239803A (en) 2021-05-13 2021-05-13 Dead reckoning positioning method based on pedestrian motion state recognition


Publications (1)

Publication Number Publication Date
CN113239803A true CN113239803A (en) 2021-08-10

Family

ID=77134016


Country Status (1)

Country Link
CN (1) CN113239803A (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130311134A1 (en) * 2012-05-18 2013-11-21 Trx Systems, Inc. Method for step detection and gait direction estimation
CN107084718A (en) * 2017-04-14 2017-08-22 桂林电子科技大学 Indoor Positioning Method Based on Pedestrian Dead Reckoning
CN109459028A (en) * 2018-11-22 2019-03-12 东南大学 A kind of adaptive step estimation method based on gradient decline
CN110766726A (en) * 2019-10-17 2020-02-07 重庆大学 Visual positioning and dynamic tracking method for moving target of large bell jar container under complex background
CN110940972A (en) * 2019-12-09 2020-03-31 中国民航大学 Method for extracting S-mode signal arrival time of multi-preamble pulse combined filtering detection


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
王存华: "Research on Indoor Positioning Technology Based on Geomagnetic Fingerprint Maps and PDR", China Masters' Theses Full-text Database, Information Science and Technology, no. 2020, 15 December 2020 (2020-12-15), pages 140-89 *
邓平: "A Pedestrian Dead Reckoning Method Based on Human Motion State Recognition", Journal of Chinese Inertial Technology, vol. 29, no. 1, 15 February 2021 (2021-02-15), pages 16-22 *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113790735A (en) * 2021-08-20 2021-12-14 北京自动化控制设备研究所 Pedestrian single-step division method in complex motion state
CN113790722A (en) * 2021-08-20 2021-12-14 北京自动化控制设备研究所 A pedestrian stride modeling method based on time-frequency feature extraction from inertial data
CN113790722B (en) * 2021-08-20 2023-09-12 北京自动化控制设备研究所 Pedestrian step length modeling method based on inertial data time-frequency domain feature extraction
CN113790735B (en) * 2021-08-20 2023-09-12 北京自动化控制设备研究所 Pedestrian single-step dividing method under complex motion state
CN114061616A (en) * 2021-10-22 2022-02-18 北京自动化控制设备研究所 An adaptive wave peak detection pedometer method
CN116026357A (en) * 2021-10-26 2023-04-28 中移物联网有限公司 A positioning method, device, electronic equipment and storage medium
CN114459469A (en) * 2022-01-14 2022-05-10 北京信息科技大学 Multi-motion-state navigation method and device and intelligent wearable equipment
CN114459469B (en) * 2022-01-14 2023-05-23 北京信息科技大学 Multi-motion state navigation method and device and intelligent wearable equipment
CN116092193A (en) * 2023-02-14 2023-05-09 重庆邮电大学 Pedestrian track reckoning method based on human motion state identification
CN116518971A (en) * 2023-04-27 2023-08-01 武汉大学 Ubiquitous positioning signal enhancement-based personalized PDR positioning method and system
CN116518971B (en) * 2023-04-27 2025-09-09 武汉大学 Ubiquitous positioning signal enhancement-based personalized PDR positioning method and system

Similar Documents

Publication Publication Date Title
CN113239803A (en) Dead reckoning positioning method based on pedestrian motion state recognition
Ryu et al. Automated action recognition using an accelerometer-embedded wristband-type activity tracker
Zeinali et al. IMUNet: Efficient regression architecture for inertial IMU navigation and positioning
CN103997572B (en) A kind of step-recording method based on mobile phone acceleration sensor data and device
Wang et al. Recent advances in pedestrian navigation activity recognition: A review
CN109579853A (en) Inertial navigation indoor orientation method based on BP neural network
CN107016342A (en) A kind of action identification method and system
CN103699795A (en) Exercise behavior identification method and device and exercise intensity monitoring system
CN106874874A (en) Motion state identification method and device
Alrazzak et al. A survey on human activity recognition using accelerometer sensor
Ko et al. CNN and bi-LSTM based 3D golf swing analysis by frontal swing sequence images
CN106970705A (en) Motion capture method, device and electronic equipment
CN114022956A (en) A multi-dimensional intelligent method for judging the effect of fitness movements
CN109186594A (en) The method for obtaining exercise data using inertial sensor and depth camera sensor
CN111062412B (en) Novel intelligent shoe intelligent recognition method for indoor pedestrian movement speed
Uddin et al. SmartSpaghetti: Accurate and robust tracking of Human's location
Zeinali et al. IMUNet: Efficient regression architecture for IMU navigation and positioning
CN116189382A (en) Fall detection method and system based on inertial sensor network
İsmail et al. Human activity recognition based on smartphone sensor data using CNN
US20230397838A1 (en) System, apparatus and method for activity classification
CN105879301B (en) A kind of upper extremity exercise recognition methods towards intelligent Dumbbell
CN119110705A (en) Gait data processing method and system
Kawakura et al. Grouping Method Using Graph Theory for Agricultural Workers Engaging in Manual Tasks
Xia et al. Real-time recognition of human daily motion with smartphone sensor
Bao et al. Operation action recognition using wearable devices with inertial sensors

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20210810
