[go: up one dir, main page]

CN119810928B - A kayaking action recognition method, system, storage medium and program product based on data analysis - Google Patents

A kayaking action recognition method, system, storage medium and program product based on data analysis Download PDF

Info

Publication number
CN119810928B
CN119810928B CN202510309008.4A CN202510309008A CN119810928B CN 119810928 B CN119810928 B CN 119810928B CN 202510309008 A CN202510309008 A CN 202510309008A CN 119810928 B CN119810928 B CN 119810928B
Authority
CN
China
Prior art keywords
neural network
kayak
data
kayaking
motion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202510309008.4A
Other languages
Chinese (zh)
Other versions
CN119810928A (en
Inventor
李梦
刘国辉
王梦琳
邹奕町
李嘉文
张璐琦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Sport University
Original Assignee
Chengdu Sport University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Sport University filed Critical Chengdu Sport University
Priority to CN202510309008.4A priority Critical patent/CN119810928B/en
Publication of CN119810928A publication Critical patent/CN119810928A/en
Application granted granted Critical
Publication of CN119810928B publication Critical patent/CN119810928B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Image Analysis (AREA)

Abstract

本发明涉及皮划艇动作识别技术领域,公开了一种基于数据分析的皮划艇动作识别方法、系统、存储介质和程序产品,所述方法包括获取皮划艇动作数据;所述皮划艇动作数据包括皮划艇力学数据、皮划艇运动学数据和皮划艇环境数据;采用基于稀疏约束的全连接神经网络对皮划艇动作数据进行特征提取,获得皮划艇动作特征;采用基于自相似特征的高阶神经网络对皮划艇动作特征进行皮划艇动作识别。本发明能够提升皮划艇动作识别的分类准确性。

The present invention relates to the technical field of kayak motion recognition, and discloses a kayak motion recognition method, system, storage medium and program product based on data analysis, wherein the method comprises obtaining kayak motion data; the kayak motion data comprises kayak mechanical data, kayak kinematic data and kayak environmental data; a fully connected neural network based on sparse constraints is used to extract features of the kayak motion data to obtain kayak motion features; a high-order neural network based on self-similar features is used to perform kayak motion recognition on the kayak motion features. The present invention can improve the classification accuracy of kayak motion recognition.

Description

Kayak action recognition method, system, storage medium and program product based on data analysis
Technical Field
The invention relates to the technical field of kayak motion recognition, in particular to a kayak motion recognition method based on data analysis.
Background
Kayak sport is a complex athletic and technical activity involving core actions and technical evaluations that require comprehensive analysis in combination with multidimensional data of mechanical, kinematic, and environmental factors. Such data include mechanical parameters of the blade in the water (e.g., lift, drag, reaction, etc.), kinematic parameters (e.g., acceleration, angular velocity, change in direction of motion, etc.), and environmental factors (e.g., water flow rate, water temperature, etc.). Effective data collection and analysis is of great significance to the assessment of athlete action normalization and technical improvement. However, in the prior art, kayak data analysis still faces a number of technical bottlenecks.
The invention discloses a power distribution network data analysis method and system based on machine learning, and the method comprises the steps of obtaining a power distribution network data set to be processed, carrying out data enhancement and feature extraction on the power distribution network data set to be processed by using discrete wavelet transformation to obtain data features of a plurality of power node data, inputting the data features of the plurality of power node data into an integrated learning data classification model, and outputting a power distribution network data classification result, wherein the integrated learning data classification model is obtained by combining a plurality of classifiers through an integrated learning algorithm, carrying out time sequence analysis and numerical value anomaly analysis on the power distribution network data classification result to obtain the running state of each power node, realizing data classification and state analysis on the power distribution network data of multiple types, improving the efficiency and accuracy of the power distribution network data classification, and realizing the state analysis on the power nodes.
The invention patent of China with publication number CN118863740A provides an intelligent material estimating and managing system based on the Internet of things and machine learning, the system monitors the weight and the picture of materials in real time by the technology of the Internet of things, identifies images by using a machine learning algorithm, automatically classifies the materials and measures and calculates the stock quantity, the system comprises a perception layer, a data analysis layer, an application layer, a user interaction layer and an infrastructure layer, and the intelligent stock management, automatic purchasing and stock reminding functions are realized.
The above prior art uses the traditional SMOTE algorithm to generate corresponding data, so that it is difficult to ensure the diversity and uniformity of the data, and especially when the traditional SMOTE algorithm described in the above prior art is used to generate kayak motion data, the method is limited to simple interpolation of a few samples of kayak motion data, and is lack of sufficient coverage of the distribution of the kayak motion data samples, so that model deviation is easy to be caused.
Meanwhile, in the prior art, when the corresponding data features are processed by using the traditional neural network, gradient disappearance is easy to occur, and particularly when the traditional neural network is applied to kayak motion data features, the problem of gradient explosion or local optimal solution is easy to occur, the number of attributes of general kayak data can reach tens or hundreds, the extracted kayak motion data features can have redundancy, the calculated amount is large, and the model stability and generalization capability are easy to be poor.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides a kayak action recognition method based on data analysis.
In order to achieve the aim of the invention, the invention adopts the following technical scheme:
In a first aspect, a kayak motion recognition method based on data analysis includes the steps of:
acquiring kayak motion data, wherein the kayak motion data comprises kayak mechanical data, kayak kinematic data and kayak environment data;
On the basis of the traditional neural network, adopting a neuron constraint sparsification and dynamic shearing activation strategy, and constraining the connection of each neuron in the training process of the neural network to force the neural network to only reserve the connection with the most information content in each layer, so that the calculation amount can be reduced, the overfitting can be avoided, and the robustness and generalization capability of the neural network model are improved.
And adopting a high-order neural network based on self-similar characteristics to identify the kayak motion characteristics.
Further, the feature extraction is performed on the kayak motion data by adopting a fully connected neural network based on sparse constraint, so as to obtain kayak motion features, which comprises the following steps:
initializing parameters of a fully-connected neural network;
performing standardized processing on kayak action data;
forward propagating kayak data input into the neural network through each layer of the neural network, wherein the output of each layer is used as the input of the next layer, and finally outputting a group of high-dimensional characteristic representations through layer-by-layer transmission;
According to the error of the feature extraction result, establishing a total loss function by adopting a sparsity constraint term and a constraint term of an activation function for training;
The neural network automatically adjusts the structure of the neural network according to different stages of kayak data by adopting adjustable sparse factors and activation parameters in the training process;
repeating the steps until the preset iteration stopping condition is met.
Further, according to the error of the feature extraction result, and adopting a sparsity constraint term and a constraint term of an activation function to establish a total loss function for training, specifically:
Wherein, As a total loss function; the number of samples input to the neural network for the current lot; Is the first True labels of the individual samples; the prediction output of the model; is the L2 norm; regularization coefficients for L2; a freunds Luo Beini norm of the weight matrix; Is a Friedel Luo Beini Stonex; is a sparsification constraint coefficient; is a sparsity control term; calculating norms for non-zero connection numbers; Is the total number of layers of the neural network.
Further, the adjustable sparse factor and the activation parameter are adopted in the training process, so that the neural network automatically adjusts the structure of the neural network according to different stages of kayak data, specifically:
Wherein, The updated sparse factor in the network; Is a sparse factor in the neural network; And Super parameters for controlling sparsity and shear strength rate of change; Is an error term when the neural network; is the neural network of An activation output of the layer; is a clipping factor in the updated neural network; is a shear factor in a neural network; is the L2 norm.
Further, the kayak motion recognition method for the kayak motion features by adopting the high-order neural network based on the self-similar features comprises the following steps:
initializing parameters of a high-order neural network model;
in the training process of the high-order neural network, calculating a self-similarity structure in kayak data through a self-similarity loss function;
the error is reversely propagated through each layer by a back propagation algorithm, and the gradient of a loss function of the higher-order neural network relative to the weight parameter of the higher-order neural network is calculated;
dynamically adjusting the learning rate of the higher-order neural network in each gradient updating process by adopting a self-adaptive learning rate mechanism;
repeating the steps until the preset iteration stopping condition is met.
Further, before the feature data reach the decision layer, in the training process of the higher-order neural network, calculating a self-similarity structure in the kayak data through a self-similarity loss function, wherein the self-similarity structure specifically comprises the following steps:
Wherein, Is a self-similarity loss function; For the number of samples before expansion; Is a self-similarity weight; Is the first Sample and the first input to the higher order neural networkSimilarity measures between samples input to the higher order neural network; to be input to the higher order neural network sample Weights of the individuals; Is the first Sample number of the samples input to the higher order neural networkA plurality of features; Is the first Sample number of the samples input to the higher order neural networkA plurality of features; is the L2 norm; Is the number of features.
Further, after the kayak motion data is acquired, the data expansion is performed on the kayak motion data based on the SMOTE method guided by quantum probability, and the method comprises the following steps:
Setting quantum states according to the existing minority samples, wherein each sample corresponds to one quantum bit, and superposition of the quantum states represents all possible sample states;
Adjusting the probability amplitude of each quantum bit through a probability amplitude modulation strategy in quantum computation;
performing quantum measurement, randomly selecting states and collapsing according to the modulated quantum state probability distribution, and generating new kayak data points;
Correcting the newly generated kayak data points through a topological invariance strategy;
repeating the steps until the preset iteration stopping condition is met.
Further, the probability amplitude of each qubit is adjusted through a probability amplitude modulation strategy in quantum computation, specifically:
Wherein, To be adjusted toProbability amplitude of individual samples; Parameters for controlling the sharpness of the distribution; Is the first The distance of each sample from its nearest neighbor; Is the first Complex probability magnitudes for individual samples; Is the first Complex probability magnitudes for individual samples; To the number of samples before expansion.
In a second aspect, a kayak motion recognition system based on data analysis, comprising:
the data acquisition module is used for acquiring the action data of the kayak;
The system comprises a first data feature extraction module, a second data feature extraction module and a third data feature extraction module, wherein the first data feature extraction module performs feature extraction on kayak motion data by adopting a fully-connected neural network based on sparse constraint to obtain kayak motion features;
The data identification module is used for identifying the action characteristics of the kayak;
And the output module is used for outputting the result of the kayak action recognition.
In a third aspect, a computer readable storage medium storing instructions, the storage medium storing a computer program or instructions which, when executed by an image processing apparatus, implement a kayak motion recognition method based on data analysis of the first aspect.
In a fourth aspect, a computer program product comprises computer program code which, when run, causes a processor to perform a kayak action recognition method of the first aspect based on data analysis.
The invention has the following beneficial effects:
1. according to the invention, a SMOTE algorithm based on quantum probability guidance is adopted, the expansion of few samples is realized by superposition of quantum states and probability amplitude modulation, more uniform and various samples are generated, and a topology invariance strategy is used for correcting newly generated samples, so that the consistency of the expanded samples and original data in numerical values and topology structures is ensured.
2. The invention provides a neural network algorithm based on sparse constraint, which optimizes a feature extraction process, reduces computational complexity, avoids overfitting, improves stability and generalization capability of a network through dynamic shearing activation functions and neuron sparsification constraint strategies, and automatically optimizes feature selection capability of the neural network according to a training process by adopting a mechanism for dynamically adjusting sparse factors and activation parameters.
3. The invention provides a high-order neural network classification algorithm based on self-similarity characteristics, which combines a self-similarity loss function to strengthen the recognition capability of a classifier on a kayak action mode, adopts weighted similarity measurement to combine local and global information, dynamically adjusts the calculation of the classifier on the characteristic similarity, and improves the classification accuracy.
Drawings
FIG. 1 is a schematic flow diagram of a kayak motion recognition method based on data analysis;
FIG. 2 is a graph of accuracy versus different regularization methods;
FIG. 3 is a graph comparing model performance at different sparsity.
Detailed Description
The following description of the embodiments of the present invention is provided to facilitate understanding of the present invention by those skilled in the art, but it should be understood that the present invention is not limited to the scope of the embodiments, and all the inventions which make use of the inventive concept are protected by the spirit and scope of the present invention as defined and defined in the appended claims to those skilled in the art.
As shown in fig. 1, an embodiment of the present invention provides a kayak motion recognition method based on data analysis, including the following steps S1 to S3:
S1, acquiring kayak motion data, wherein the kayak motion data comprises kayak mechanical data, kayak kinematic data and kayak environment data;
In an alternative embodiment of the present invention, the kayak data relates to the mechanics, kinematics and environmental factors of the kayak motion, and the sources of the kayak data acquisition mainly include kayak mechanics data, kayak kinematics data, and kayak environmental data.
The kayak mechanical data is acquired from a mechanical sensing module arranged on the blade, and specifically comprises information such as measuring the lift force, resistance and reaction force of the blade in water, and the resultant force of the blade;
The kayak kinematic data adopts a triaxial acceleration sensor and an angular velocity sensor which are arranged in a paddle rod to collect the kayak kinematic data, wherein the kayak kinematic data comprises information such as acceleration, angular velocity, movement direction change and the like;
The kayak environmental data adopts devices such as a water flow speed sensor, a temperature sensor and the like to collect information such as environmental variables and the like.
The kayak data are transmitted to a kayak data storage platform in real time through a wireless sensing network, and are connected to sensors through wireless modules, so that mechanical, kinematic and kayak environment data are monitored and transmitted in real time;
all the collected kayak data is stored in a standardized structured format, in particular CSV format.
In one embodiment, the attributes of the kayak data include:
Ra represents lift Force (FL), specifically force of a blade in water perpendicular to water surface, da represents resistance Force (FD), specifically water flow resistance of the blade in water, fwater a represents reaction force (Fwater), specifically reaction force of water to the blade, FHe a represents resultant force (FHe), specifically resultant force of lift force and resistance, acca represents acceleration, specifically acceleration of a paddle of an athlete, ωa represents angular velocity, specifically rotational velocity of the paddle, Δθa represents direction change, specifically angle change of the paddle, va represents water flow velocity, specifically water flow velocity of the athlete, ta represents temperature, specifically water temperature of a movement area, SLa represents attitude angle, specifically rowing attitude angle of the athlete (such as paddle angle, torso attitude, etc.).
It should be noted that, this embodiment is merely for illustrating one type of kayak data format and category of the present invention, in practical application, the number of the kayak data attributes is usually more than 10 attributes, and the number of the kayak data attributes may be tens or hundreds.
Further, the collected kayak data is marked by manual marking, and in one embodiment, the marked types comprise a standard and standard training state of the kayak, an nonstandard training state of the kayak, a non-operating state of the kayak and other states of the kayak.
It can be appreciated that in the task of the invention, acquisition, labeling and preprocessing of kayak training data are time-consuming and labor-consuming, and insufficient training samples easily result in poor generalization ability of the model, and influence the accuracy of the model. It can be appreciated that in the task of the invention, acquisition, labeling and preprocessing of kayak training data are time-consuming and labor-consuming, and insufficient training samples easily result in poor generalization ability of the model, and influence the accuracy of the model.
According to the invention, a sample generation is carried out by adopting a SMOTE algorithm based on quantum probability guidance, a new sample is generated by interpolation among few types of samples by the traditional SMOTE algorithm, but the sample distribution uniformity and diversity are often limited, and the interpolation step is guided by utilizing the quantum probability distribution, so that the newly generated sample can better cover a potential kayak data space, the kayak data diversity is kept, and the generalization capability of a model on the new sample is enhanced.
Specifically, the sample generation method based on the SMOTE algorithm guided by quantum probability is expressed as follows:
S101, setting quantum states according to the existing few types of samples, wherein each sample corresponds to one quantum bit, superposition of the quantum states represents all possible sample states, and defining the initial quantum states to be expressed as:
in the formula, Representing the quantum states of the entire sample set,In order to expand the number of samples before expansion,Is the firstThe complex probability amplitude of the individual samples,Is the corresponding quantum ground state.
S102, adjusting the probability amplitude of each quantum bit through a probability amplitude modulation strategy in quantum computation to enable the probability amplitude to reflect probability distribution generated by a sample, and particularly optimizing the generation probability of a new sample point according to the local density and the class unbalance of each sample point so as to more accurately reflect the distribution of a few classes in a feature space, wherein the probability amplitude modulation mode is expressed as follows:
in the formula, Is the probability amplitude after the adjustment,Is the firstThe distance of each sample from its nearest neighbor; First, the Euclidean distance between each sample and its nearest neighbor; Is the first Complex probability magnitudes for individual samples; e is a natural constant here in order to control the parameters of the sharpness of the distribution.
In one embodiment, the distance between a sample and its nearest neighbor reflects the similarity and difference between neighboring samples, and the probability amplitude modulation is made more sensitive to subtle changes between samples by using a sine term to increase the nonlinear characteristics of the calculated distance, expressed as:
in the formula, AndRespectively the firstSample number and its neighborsThe first sample is atThe characteristic value of the dimension is used to determine,Is the dimension of the feature of the kayak sample dataset to be expanded,Is a parameter that adjusts for nonlinear effects.
S103, performing quantum measurement, namely randomly selecting states and collapsing according to the modulated quantum state probability distribution to generate new kayak data points, wherein in the process, the random nature of quantum calculation helps to explore and generate diversified samples, the problems of over fitting and sample deviation possibly caused in a traditional SMOTE algorithm are overcome, and the quantum state measurement and collapsing process is expressed as:
in the formula, Indicating the post-measurement thThe probability that the state of the individual samples is selected.
Further, after quantum measurement, the state will be based onCollapsing to the corresponding quantum ground state, generating new kayak data points based thereon, the manner in which the new sample points are generated is expressed as:
in the formula, For a new sample point to be generated,AndRespectively the firstSample number and its neighborsA number of samples of the sample were taken,Is an interpolation coefficient randomly extracted from the [0,1] uniform distribution.
S104, correcting newly generated samples through a topology invariance strategy, ensuring that each new sample is reasonable in value and consistent in the overall structure of the kayak data, and specifically, checking and correcting any new sample possibly destroying the original topology structure of the kayak data by using the topology mapping of the kayak data, wherein the topology correction mode is expressed as follows:
in the formula, For a new sample point after topology correction,Is a corrected intensity parameter; is a mapping function aimed at adjusting the new sample to make it more consistent with the topology of the original kayak dataset. Preferably, the method comprises the steps of, Set to 0.2.
In one embodiment, the mapping function is calculated as:
in the formula, Is the number of reference samples that are to be taken,Is the firstThe impact weight of the individual samples on the new samples,Is a parameter for adjusting the sensitivity of the activation function,Is the learning rate of the sample generation,As a hyperbolic tangent function.
S105, merging the corrected new generation sample with the original kayak data set, and ensuring that the kayak data expanded set can be effectively used for subsequent model training, wherein the new generation sample is expressed as:
in the formula, For the expanded kayak data set,Is the original kayak dataset.
S2, performing feature extraction on kayak motion data by adopting a fully-connected neural network based on sparse constraint to obtain kayak motion features;
In an alternative embodiment of the present invention, the present invention uses a 6-layer fully connected neural network for feature extraction, and in the prior art, some schemes use neural networks for feature extraction, and in some neural network structures, problems of gradient disappearance, gradient explosion or sinking into a locally optimal solution may be encountered, which affects the training stability and the performance of the model. According to the invention, a neural network algorithm based on sparse constraint is adopted as a feature extraction model, on the basis of a traditional neural network, a neuron constraint sparsification and dynamic shearing activation strategy is adopted, and in the training process of the neural network, the connection of each neuron is constrained, so that the neural network is forced to only keep the connection with the most information content in each layer, thus the calculated amount can be reduced, the overfitting can be avoided, and the robustness and generalization capability of the neural network model are improved.
Specifically, the training process of the neural network algorithm based on the sparse constraint is as follows:
S201, initializing parameters of a neural network, wherein each layer of the neural network comprises a plurality of neurons, the connection number of each layer of neurons is subjected to sparsification constraint, and the configuration is provided Representing neural network NoThe weight matrix of the layer is used to determine,Representing neural network NoThe bias term of the layer, the initialization mode is expressed as:
in the formula, Is the neural network ofA weight matrix of the layer; representing a mean of zero and a variance of Is a normal distribution of (2); Is normally distributed; initializing variances for parameters of the neural network; representing compliance with a particular distribution; is the neural network of Bias terms of the layers. Preferably, the method comprises the steps of,Set to 0.001.
S202, scaling the range of the kayak data input into the neural network to a uniform scale through standardized processing on the expanded kayak training data, so as to avoid the problem of gradient explosion or disappearance in the training process, wherein the standardized mode is expressed as follows:
in the formula, For kayak data input to the neural network, i.e., expanded kayak data; a mean value of kayak data input to the neural network; Standard deviation for kayak data input to the neural network; is normalized kayak data.
S203, forward propagation is carried out on kayak data input into the neural network through each layer of the neural network, the output of each layer is used as the input of the next layer, a group of high-dimensional characteristic representations are finally output through layer-by-layer transmission, and the forward propagation mode is expressed as follows:
in the formula, Is the neural network ofThe active output of the layer, i.e. neural network NoAn activation input of a layer; is the neural network of The active output of the layer, i.e. neural network NoAn activation input of a layer; The function is activated for dynamic clipping.
In one embodiment, the dynamic clipping activation function adjusts sparsity and activation values according to the output of each layer of the neural network, so as to optimize the activation mode of the neural network in different training phases, and the calculation mode is expressed as:
in the formula, The result of the linear transformation is specifically the input of a dynamic shear activation function; is a shear strength factor; is a shear threshold; To indicate the function, if Then 1, otherwise 0; As a function of the maximum value. Preferably, the method comprises the steps of, The setting is made to be 0.3,Set to 0.1.
S204, the training target of the neural network is to minimize the error of the generated feature representation in the feature extraction process, the loss function not only considers the error of the feature extraction result, but also focuses on the optimal sparsity of the neural network structure by adopting a sparsity constraint term and a constraint term of an activation function, and the calculation mode of the loss function is expressed as follows:
in the formula, Is the total loss of the neural network; Is the first True labels of the individual samples; The method comprises the steps of calculating a feature vector obtained after feature extraction of a neural network by a preset Softmax for prediction output of a model; is the L2 norm; regularization coefficients for L2; The weight matrix is a French Luo Beini norm, and the weight size is represented; Is a Friedel Luo Beini Stonex; is a sparsification constraint coefficient; For sparsity control term, represent the first A non-zero number of connections of layer neurons; calculating norms for non-zero connection numbers; The number of samples input to the neural network for the current lot. Preferably, the method comprises the steps of, The setting is made to be 0.3,Set to 0.4.
Further, based on the loss function, updating the neural network parameters in an error back propagation mode, and calculating the gradient of the loss to the weight of each layer through a chain rule in the error back propagation process, wherein the calculation mode of the gradient update is expressed as follows for the weight and bias of each layer:
in the formula, Representing the partial derivative; is the neural network of Error terms of the layers.
Further, the calculation mode of the error term of the neural network is expressed as:
in the formula, Is the derivative of the dynamic shear activation function; is the neural network of Transposition of the weight matrix of the layer; is the neural network of Error terms of the layers.
S205, as the training process is carried out, the sparsity and the activation strategy of the neural network are dynamically adjusted, specifically, when the kayak training data become more complex, the neural network adjusts the number of activated features through a dynamic shearing activation function, more features are allowed to participate in the learning process of the neural network, meanwhile, the sparsity constraint is dynamically relaxed or strengthened to ensure that the neural network maintains proper feature selectivity in the learning process, specifically, the neural network can automatically adjust the structure of the neural network according to different stages of the kayak data by adopting adjustable sparsity factors and activation parameters, and the calculation mode is expressed as follows:
in the formula, AndSuper parameters for controlling sparsity and shear strength rate of change; The sparse factor in the neural network enables the neural network to dynamically control the sparsity of each layer according to the current training progress, and the adaptability of the model under a specific task is enhanced; the updated sparse factor in the network; is a shear factor in neural networks; is a clipping factor in the updated neural network; is an error term when the neural network. Preferably, the method comprises the steps of, AndSet to 0.3 and 0.4, respectively.
S206, repeating the steps until a preset iteration stopping condition is met, namely, model training is completed. In one embodiment, the preset stop iteration condition is that a preset maximum number of iterations is reached, preferably the preset maximum number of iterations is set to 1000.
In fig. 2, in order to verify the effectiveness of the neural network regularization method, the advantage of the sparse constraint in terms of maintaining the performance of the model is verified, and compared with the traditional L2 regularization method, dropout method and regularization-free method, experimental results show that the accuracy curve of the technique shows a stable rising trend, and finally reaches about 93%, and the convergence speed is fastest, while the traditional L2 regularization has an overcomplex problem, the accuracy is lower than that of the technique, and the Dropout method fluctuates greatly in the initial stage of training, and finally stabilizes at about 90%, the regularization-free method has the lowest accuracy and the slowest convergence, and the regularization necessity is verified, so that the technique effectively prevents overcompletion while maintaining the capacity of the model through dynamic sparse constraint.
In fig. 3, in order to verify the improvement of the dynamic sparsity constraint on the parameter efficiency, by comparing the model performance under different sparsities, the test result shows that the performance of the conventional method is reduced when the sparsity is more than 0.6, while the technology maintains the highest accuracy under each sparsity, and the performance is optimal in the range of 0.5-0.7 sparsity, thereby embodying the self-adaptive advantage.
And S3, performing photovoltaic power generation equipment energy consumption assessment on the photovoltaic power generation equipment energy consumption dimension reduction characteristics by adopting a high-order neural network classification model.
In an optional embodiment of the invention, the method adopts a high-order neural network classification algorithm based on self-similarity characteristics as a classifier model, and on the basis of a traditional high-order neural network, the method can strengthen the learning of the self-similarity characteristics in the training process by utilizing the self-similarity loss function, thereby improving the recognition capability of the model on action modes.
Specifically, the training process of the high-order neural network classification algorithm based on the self-similar characteristics is as follows:
S301, initializing a higher-order neural network model, wherein the model comprises an input layer of a higher-order neural network, 3 hidden layers of the higher-order neural network and an output layer of the higher-order neural network, the input layer of the higher-order neural network receives kayak data from a feature processing module, the hidden layers of the higher-order neural network carry out nonlinear transformation by adopting a specific activation function, the output layer of the higher-order neural network is used for probability output of multi-classification problems, and specifically, the kayak data input to the higher-order neural network is defined as ,Is the number of samples input to the higher order neural network,Is the characteristic dimension of each sample input to the higher order neural network for the firstNeuron outputs of the layers are calculated as:
in the formula, Is the first order neural networkThe weight matrix of the layer is used to determine,Is the first order neural networkLayer input (for layer 0 i.e. input layer,),Is the first order neural networkThe bias term of the layer is used,Is the first order neural networkThe pre-activation output of the layer.
Further, the pre-activation output of the higher order neural network is subjected to nonlinear transformation by using an activation function to obtain a layer output, which is expressed as:
in the formula, Is the first of the higher order neural networkThe output of the layer is provided with,Is an activation function of the higher order neural network.
Note that, in the conventional higher order neural network, the weight matrixIs randomly initialized from a certain distribution (such as normal distribution), and for better convergence in training, the invention adopts a weight initialization strategy based on kayak data characteristics, which is expressed as follows:
in the formula, Higher order neural network first calculated for data characterization by kayakThe weight matrix of the layer is used to determine,Is the first order neural networkThe characteristic dimension of the layer; Is the first order neural network The first learning coefficient of the layer is a training parameter.
And when calculating the bias of each layer, the characteristic relation with the input kayak data is adopted, so that the calculation of the bias item is more flexible, the bias item not only depends on the output of the upper layer, but also is adjusted according to the specific characteristics of the input kayak data, and the calculation mode is expressed as follows:
in the formula, Is the first order neural networkA second learning coefficient of the layer is a training parameter; is a weight coefficient associated with each feature, Representing the first input into the higher order neural network kayak dataAnd features.
Moreover, the activation functions (such as ReLU and Sigmoid) adopted by the traditional high-order neural network can cause the problem of gradient disappearance or gradient explosion under certain conditions, and the method adopts the self-adaptive activation functions, combines the characteristics of the high-order neural network layer and the distribution condition of kayak data, and adopts the calculation mode as follows:
in the formula, As a hyperbolic tangent function; Representing the activation response degree of each layer as an adaptive adjustment factor; Is the first order neural network A third learning coefficient of the layer is a training parameter; activating the number for ReLU; is the average of the similarity between all samples input to the higher order neural network.
S302, in the training process of the higher-order neural network, enhancing the learning capability of the model by calculating the self-similarity structure in the kayak data, wherein the self-similarity features in the kayak data represent that certain information has similar feature modes on different scales or time periods, and the self-similarity loss function is specifically realized by optimizing the self-similarity loss function, and the self-similarity loss function is used for measuring the similarity between different samples and is set for two samplesAnd,For a sample that is first input to the higher order neural network,For the second sample input to the higher order neural network, the similarity between the two in the feature space is expressed asThe similarity is calculated by a cosine similarity measurement mode, and the calculation mode of defining a self-similarity loss function is expressed as follows:
in the formula, Is a self-similarity loss function; for self-similarity weights, the samples are represented And sampleSimilarity importance between the two; Is the first Sample and the first input to the higher order neural networkSimilarity measures between samples input to the higher order neural network.
In one embodiment, for the first sample input to the higher order neural network and the second sample input to the higher order neural network, in order to measure the similarity of the two samples, a weighted similarity measure is used in combination with local and global information in the kayak data, so that the model can dynamically adjust the similarity measure according to the contribution of each feature in the training process, thereby capturing the self-similarity of the kayak data more accurately, and the calculation mode is expressed as:
in the formula, Is the first input to the higher order neural network sampleWeights of the numbers represent the importance of different features to similarity calculation; sample first input to higher order neural network A plurality of features; sample of the second input to the higher order neural network And features.
S303, after the forward propagation and the self-similarity structure are adopted, the higher-order neural network performs error feedback, the prediction performance of the model is evaluated by using a loss function by calculating the loss between the predicted value and the actual label, the error is propagated back through each layer by a back propagation algorithm, and the mode of calculating the gradient of the loss function of the higher-order neural network relative to the weight parameter of the higher-order neural network is expressed as follows:
in the formula, A loss function for a higher order neural network; Is the first order neural network Transpose of the output of the layer,The gradient effect on the weight for the self-similarity loss.
S304, in order to ensure the stability of the model training process, a self-adaptive learning rate mechanism is adopted to dynamically adjust the learning rate according to the current gradient size and change condition in each gradient updating process, when the gradient change is larger, the learning rate is properly reduced to avoid unstable training caused by over-fast updating, when the gradient change is smaller, the learning rate is properly increased to accelerate convergence, and the adjustment mode of the learning rate of the high-order neural network is expressed as:
in the formula, The initial learning rate is the initial learning rate of the high-order neural network; Is the first The learning rate of the iterative higher order neural network; Is the first Factors controlling the change of learning rate of the secondary iteration are training parameters; Is the first The L2 norm of the gradient of the second iteration. Preferably, the method comprises the steps of,Set to 0.01.
S305, repeating the steps until a preset iteration stopping condition is met, namely, model training is completed. In one embodiment, the preset stop iteration condition is that a preset maximum number of iterations is reached, preferably the preset maximum number of iterations is set to 1000.
After model training is completed, the training state of the kayak is classified by using the trained model, in one embodiment, the collected original kayak data is input into the model in the feature processing module after training to perform feature processing, and further, the processed features are input into the model in the decision module to perform classification, so that a classification result is obtained. In the present embodiment, the classification categories include a standard standardized training state of the kayak, an irregular training state of the kayak, a non-operating state of the kayak, and other states of the kayak.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the principles and embodiments of the present invention have been described in detail in the foregoing application of the principles and embodiments of the present invention, the above examples are provided for the purpose of aiding in the understanding of the principles and concepts of the present invention and may be varied in many ways by those of ordinary skill in the art in light of the teachings of the present invention, and the above descriptions should not be construed as limiting the invention.
Those of ordinary skill in the art will recognize that the embodiments described herein are for the purpose of aiding the reader in understanding the principles of the present invention and should be understood that the scope of the invention is not limited to such specific statements and embodiments. Those of ordinary skill in the art can make various other specific modifications and combinations from the teachings of the present disclosure without departing from the spirit thereof, and such modifications and combinations remain within the scope of the present disclosure.

Claims (5)

1.一种基于数据分析的皮划艇动作识别方法,其特征在于,包括以下步骤:1. A kayaking action recognition method based on data analysis, characterized in that it comprises the following steps: 获取皮划艇动作数据;所述皮划艇动作数据包括皮划艇力学数据、皮划艇运动学数据和皮划艇环境数据;Acquire kayak motion data; the kayak motion data includes kayak mechanics data, kayak kinematics data and kayak environment data; 采用基于稀疏约束的全连接神经网络对皮划艇动作数据进行特征提取,获得皮划艇动作特征;所述特征提取的方法包括:A fully connected neural network based on sparse constraints is used to extract features from kayaking motion data to obtain kayaking motion features; the feature extraction method includes: 初始化全连接神经网络的参数;Initialize the parameters of the fully connected neural network; 对皮划艇动作数据进行标准化处理;Standardize kayaking motion data; 将输入神经网络的皮划艇数据经过神经网络的每一层进行前向传播,每一层的输出都将作为下一层的输入,通过逐层传递,最终输出一组高维特征表示;The kayaking data input into the neural network is forward propagated through each layer of the neural network. The output of each layer will be used as the input of the next layer, and finally a set of high-dimensional feature representations will be output through layer-by-layer transmission; 根据特征提取结果的误差,并采用稀疏性约束项和激活函数的约束项建立总损失函数进行训练;According to the error of the feature extraction result, the total loss function is established for training by using the sparsity constraint term and the activation function constraint term; 在训练过程中采用可调节的稀疏因子和激活参数使得神经网络根据皮划艇数据的不同阶段自动调整自身的结构;The use of adjustable sparsity factors and activation parameters during training allows the neural network to automatically adjust its structure according to the different stages of the kayaking data; 重复迭代上述步骤,直至满足预设的停止迭代条件;Repeat the above steps until the preset stop iteration condition is met; 对皮划艇动作特征进行特征降维,获得皮划艇动作特征;Perform feature dimension reduction on kayaking action features to obtain kayaking action features; 对所述皮划艇动作特征进行皮划艇动作识别;Performing kayaking action recognition on the kayaking action features; 所述根据特征提取结果的误差,并采用稀疏性约束项和激活函数的约束项建立总损失函数进行训练,具体为:The error of the feature extraction result is based on the sparsity constraint term and the activation function constraint term to establish a total loss function for training, specifically: 其中,为总损失函数;为当前批次输入到神经网络的样本数量;为第个样本的真实标签;为模型的预测输出;为L2范数;为L2正则化系数;为权重矩阵的弗罗贝尼斯范数;为弗罗贝尼斯范数;为稀疏化约束系数;为稀疏性控制项;为非零连接数计算范数;为神经网络的总层数; in, is the total loss function; The number of samples input to the neural network for the current batch; For the The true labels of samples; is the prediction output of the model; is the L2 norm; is the L2 regularization coefficient; is the Frobenius norm of the weight matrix; is the Frobenius norm; is the sparsification constraint coefficient; is the sparsity control term; Compute the norm for non-zero connectivity numbers; is the total number of layers of the neural network; 所述在训练过程中采用可调节的稀疏因子和激活参数使得神经网络根据皮划艇数据的不同阶段自动调整自身的结构,具体为:The adjustable sparse factor and activation parameters are used in the training process to enable the neural network to automatically adjust its structure according to different stages of the kayaking data, specifically: 其中,为更新后的经网络中的稀疏因子;为神经网络中的稀疏因子;为控制稀疏性和剪切强度变化速率的超参数;为当神经网络的误差项;为神经网络第层的激活输出;为更新后的神经网络中的剪切因子;为神经网络中的剪切因子;为L2范数。 , in, is the sparse factor in the updated warp network; is the sparse factor in the neural network; and is a hyperparameter that controls the rate of change of sparsity and shear strength; is the error term of the neural network; For the neural network The activation output of the layer; is the shear factor in the updated neural network; is the shear factor in the neural network; is the L2 norm. 2.根据权利要求1所述的一种基于数据分析的皮划艇动作识别方法,其特征在于,采用基于自相似特征的高阶神经网络对皮划艇动作特征进行识别。2. A kayaking motion recognition method based on data analysis according to claim 1, characterized in that a high-order neural network based on self-similar features is used to recognize kayaking motion features. 3.根据权利要求1所述的一种基于数据分析的皮划艇动作识别方法,其特征在于,获取皮划艇动作数据后,基于量子概率引导的SMOTE方法对皮划艇动作数据进行数据扩充。3. According to a kayaking motion recognition method based on data analysis as described in claim 1, it is characterized in that after obtaining the kayaking motion data, the kayaking motion data is expanded based on the SMOTE method guided by quantum probability. 4.一种存储指令的计算机可读存储介质,其特征在于,所述存储介质中存储有计算机程序或指令,当所述计算机程序或指令被图像处理装置执行时,实现如权利要求1-3中任一项所述的一种基于数据分析的皮划艇动作识别方法。4. A computer-readable storage medium storing instructions, characterized in that a computer program or instruction is stored in the storage medium, and when the computer program or instruction is executed by an image processing device, a kayaking action recognition method based on data analysis as described in any one of claims 1 to 3 is implemented. 5.一种计算机程序产品,其特征在于,所述计算机程序产品包括:计算机程序代码,当所述计算机程序代码运行时,使得处理器执行权利要求1-3中任一项所述的一种基于数据分析的皮划艇动作识别方法。5. A computer program product, characterized in that the computer program product comprises: a computer program code, which, when executed, enables a processor to execute a kayaking motion recognition method based on data analysis according to any one of claims 1 to 3.
CN202510309008.4A 2025-03-17 2025-03-17 A kayaking action recognition method, system, storage medium and program product based on data analysis Active CN119810928B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202510309008.4A CN119810928B (en) 2025-03-17 2025-03-17 A kayaking action recognition method, system, storage medium and program product based on data analysis

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202510309008.4A CN119810928B (en) 2025-03-17 2025-03-17 A kayaking action recognition method, system, storage medium and program product based on data analysis

Publications (2)

Publication Number Publication Date
CN119810928A CN119810928A (en) 2025-04-11
CN119810928B true CN119810928B (en) 2025-05-16

Family

ID=95262976

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202510309008.4A Active CN119810928B (en) 2025-03-17 2025-03-17 A kayaking action recognition method, system, storage medium and program product based on data analysis

Country Status (1)

Country Link
CN (1) CN119810928B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114202803A (en) * 2021-12-17 2022-03-18 北方工业大学 Multi-stage human body abnormal action detection method based on residual error network

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE112018006630T5 (en) * 2017-12-28 2020-09-24 Intel Corporation VISUAL FOG

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114202803A (en) * 2021-12-17 2022-03-18 北方工业大学 Multi-stage human body abnormal action detection method based on residual error network

Also Published As

Publication number Publication date
CN119810928A (en) 2025-04-11

Similar Documents

Publication Publication Date Title
CN118468413B (en) A building safety prediction method based on digital twins and big data
US20240013048A1 (en) Method and System for Solving QUBO Problems with Hybrid Classical-Quantum Solvers
US12320794B1 (en) Layout optimization method of water quality monitoring points based on rf-c-som clustering algorithm
CN119249240B (en) Big data analysis method for photovoltaic power generation
CN115661550B (en) Graph data category unbalanced classification method and device based on generation of countermeasure network
CN106503867A (en) A kind of genetic algorithm least square wind power forecasting method
Luo et al. Learning from the past: Continual meta-learning with Bayesian graph neural networks
CN111241289B (en) Text clustering method based on graph theory and SOM network
CN118469158B (en) Equipment maintenance cost estimation method and equipment
CN118070682B (en) Artificial intelligence-based damage assessment method and device for screw bolt lifting rings
CN116538127B (en) Axial flow fan and control system thereof
CN119150152B (en) Mine safety-oriented equipment predictive maintenance monitoring method and electronic equipment
CN118917555B (en) Equipment comprehensive efficiency evaluation method and device based on industrial interconnection
WO2025025222A1 (en) Gene regulatory network inference method based on spatiotemporal transcriptomic data
CN120064986A (en) Battery state evaluation method based on machine learning
CN119557745A (en) Electrical equipment fault diagnosis method and device based on artificial intelligence
Gupta et al. Impact of too many neural network layers on overfitting
CN119810928B (en) A kayaking action recognition method, system, storage medium and program product based on data analysis
CN117312865B (en) Nonlinear dynamic optimization-based data classification model construction method and device
Liu et al. A new ART-counterpropagation neural network for solving a forecasting problem
CN118968272A (en) Method, device, electronic device and storage medium for identifying underwater objects
CN111914915A (en) Data classifier integration method and device based on support vector machine and storage medium
CN119293410A (en) Energy efficiency evaluation and optimization method and device for cleaning equipment based on artificial intelligence
CN118535888A (en) A method and device for predicting water level in a pumping station
Ding et al. Evolving neural network using hybrid genetic algorithm and simulated annealing for rainfall-runoff forecasting

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant