CN119474644A - A small sample regression prediction method for tunnel blasting effect - Google Patents
Classifications
- G06F17/18 — Complex mathematical operations for evaluating statistical data, e.g. average values, frequency distributions, probability functions, regression analysis
- G06N3/006 — Artificial life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
- G06N3/0464 — Convolutional networks [CNN, ConvNet]
- G06N3/08 — Neural-network learning methods
- Y02T10/40 — Engine management systems
Abstract
The application provides a small-sample regression prediction method for tunnel blasting effect, comprising the following steps: collecting pre-blasting feature data and post-blasting effect data; normalizing the data; selecting the input features of the regression prediction model by gray relational analysis; building a generative adversarial network model and augmenting the training data set; building a multi-kernel Gaussian process regression prediction model of the blasting effect; searching the hyperparameters of the prediction model with a sparrow search algorithm improved by multi-strategy integration; and predicting new blasting effects with the model under the optimal parameters. The method addresses the difficulties that blasting engineering data are hard to collect, the sample count is small, and the data are imbalanced, with many good-effect samples and few poor-effect samples. It improves the accuracy and efficiency of blasting-effect prediction, maximizes the generalization ability of the prediction, and safeguards the construction safety and efficiency of engineering projects.
Description
Technical Field
The invention belongs to the technical field of tunnel blasting and in particular relates to a small-sample regression prediction method for tunnel blasting effect.
Background
The drill-and-blast method is widely used in mountain tunnel engineering thanks to its efficiency, economy, flexibility and wide range of application. However, because blasting designs, tunnel structures and surrounding-rock properties differ, the impact of the explosion on the blast-hole walls also differs, and poor blasting outcomes such as tunnel overbreak/underbreak, insufficient settlement of the bench muck pile, and oversized fragments can occur, affecting construction safety and efficiency. Predicting the blasting effect before the blasting operation and adjusting the blasting design in a timely and reasonable way therefore has important engineering value.
For the blasting-effect prediction problem, most existing methods predict the characteristic parameters of the blasting effect from pre-blasting feature data with a deep-learning algorithm alone. However, blasting engineering data are hard to collect and the sample count is small; moreover, the data are imbalanced, with many good-effect samples and few poor-effect samples. Deep-learning algorithms usually need large amounts of data to train and tune the weight and bias parameters of the neural network so that the model can extract the features and patterns of the data. This conflicts with the data characteristics of blasting engineering and makes it difficult to obtain an optimal prediction model. How to fully account for the small-sample nature of blasting-process feature data and achieve accurate prediction of the tunnel blasting effect is therefore a technical problem to be solved in this field.
Disclosure of Invention
The invention aims to solve the technical problems described in the background and provides a small-sample regression prediction method for tunnel blasting effect, comprising the following steps:
S1, collect pre-blasting feature data and post-blasting effect data;
S2, normalize the data and select the input features of the regression prediction model by gray relational analysis;
S3, build a generative adversarial network model and augment the training data set;
S4, build a multi-kernel Gaussian process regression prediction model of the blasting effect;
S5, search the hyperparameters of the prediction model with a sparrow search algorithm improved by multi-strategy integration;
S6, predict the blasting effect.
In a preferred embodiment, in step S1, the collected pre-blasting feature data are features related to the blasting effect, including blasting parameters, surrounding-rock parameters and tunnel-design parameters, and may include, but are not limited to, surrounding-rock class, rock compressive strength, rock integrity coefficient, cross-sectional area, drilling depth, lookout (external insertion) angle, specific charge, cut angle, and perimeter-hole spacing, expressed in the following matrix form:
X = (x_ij), i = 1, …, n, j = 1, …, m,
where n is the number of collected pre-blasting feature records and m is the dimension of the pre-blasting features;
the collected blasting-effect data are features that reflect the quality of the blasting effect and specifically include, but are not limited to, overbreak/underbreak amount, advance, half-cast ratio and fragmentation, expressed in the following matrix form:
Y = (y_ij), i = 1, …, n, j = 1, …, p,
where n is the number of collected blasting-effect records and p is the dimension of the blasting-effect features;
the data normalization in step S2 is:
x_norm = (x − x_min) / (x_max − x_min),
where x_norm ∈ [0, 1] is the normalized value, x the original value, and x_max, x_min the maximum and minimum of the parameter sequence.
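The min-max normalization above can be sketched in Python; the helper name and the column-wise treatment of the feature matrix are illustrative assumptions, not part of the patent text:

```python
import numpy as np

def min_max_normalize(x):
    """Scale each column of x to [0, 1], as in step S2.

    x: (n, m) array of raw feature data.
    Returns the normalized array plus the per-column (min, max)
    needed to de-normalize predictions later.
    """
    x = np.asarray(x, dtype=float)
    x_min = x.min(axis=0)
    x_max = x.max(axis=0)
    # Guard against constant columns to avoid division by zero.
    span = np.where(x_max > x_min, x_max - x_min, 1.0)
    return (x - x_min) / span, (x_min, x_max)
```

Keeping the per-column (min, max) pairs allows predictions made on normalized inputs to be mapped back to physical units.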
In a preferred embodiment, the specific process of selecting the model input features in step S2 is as follows:
first, the two-pole differences are computed by comparing each comparison-feature column with the reference-feature column:
a = min_m min_k |y_kp − x_km|, b = max_m max_k |y_kp − x_km|,
where a and b are the two-pole minimum and maximum differences, y_kp is the k-th row of the p-th reference-feature column, and x_km is the k-th row of the m-th comparison-feature column;
then the relational degree is calculated:
r_mp = (1/n) Σ_k (a + ρ·b) / (|y_kp − x_km| + ρ·b),
where r_mp is the relational degree between the m-th comparison feature and the p-th reference feature, i.e. between the pre-blasting feature parameter and the blasting effect, and ρ is the resolving coefficient;
In a preferred scheme, step S2 further comprises sorting the relational degrees of the pre-blasting features in descending order, setting a feature-selection dimension Ni, and, after relational-degree screening, taking the Ni comparison features with the highest relational degree as the input feature data;
the screened input feature data and the blasting-effect data are then divided into a training data set and a test data set.
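The gray relational screening in step S2 can be sketched as follows; the function names, the assumption that both arrays are already normalized, and the default ρ = 0.5 are illustrative:

```python
import numpy as np

def gray_relational_degree(X, y, rho=0.5):
    """Gray relational degree of each comparison-feature column of X
    against one reference (blasting-effect) column y.

    X: (n, m) normalized comparison features; y: (n,) normalized
    reference sequence; rho: resolving coefficient, typically 0.5.
    Returns an array of length m; larger means stronger relation.
    """
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float).reshape(-1, 1)
    delta = np.abs(X - y)                    # pointwise differences
    a, b = delta.min(), delta.max()          # two-pole min and max
    xi = (a + rho * b) / (delta + rho * b)   # relational coefficients
    return xi.mean(axis=0)                   # degree per feature

def select_features(X, y, ni):
    """Keep the ni columns with the highest relational degree."""
    order = np.argsort(gray_relational_degree(X, y))[::-1]
    return X[:, order[:ni]], order[:ni]
```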
In a preferred scheme, in the generative adversarial network model built in step S3, the generator network comprises a one-dimensional convolution layer, a fully connected layer and an activation-function layer;
the generator takes as input a noise vector of dimension (Ni+No) drawn from a Gaussian distribution and outputs a generated sample of dimension (Ni+No); the discriminator network comprises two fully connected layers and a Sigmoid activation layer, and its output is a scalar between 0 and 1, the discriminator's probability estimate that the input vector is real data, i.e. a measure of how closely the input matches the real-data distribution;
during training, the discriminator loss is the sum of its losses on the real blasting data and on the generated samples, while the generator loss is computed from the discriminator's judgment of whether the generated data are real; the loss function is the binary cross-entropy loss:
BCELoss = −(y·log(p) + (1 − y)·log(1 − p)),
where y is the true label (1 for real blasting data, 0 for generated samples) and p is the probability estimate output by the discriminator's Sigmoid activation layer.
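A minimal numeric sketch of the binary cross-entropy loss above; an actual GAN would use a deep-learning framework's built-in BCE, so this helper is purely illustrative:

```python
import numpy as np

def bce_loss(p, y, eps=1e-12):
    """Binary cross-entropy: -(y*log(p) + (1-y)*log(1-p)), averaged.

    p: discriminator probabilities (Sigmoid outputs);
    y: labels, 1 for real blasting data, 0 for generated samples.
    """
    p = np.clip(np.asarray(p, dtype=float), eps, 1.0 - eps)
    y = np.asarray(y, dtype=float)
    return float(np.mean(-(y * np.log(p) + (1 - y) * np.log(1 - p))))

# Discriminator loss = BCE on real data (label 1) + BCE on fakes (label 0);
# generator loss scores its fakes against label 1, to fool the discriminator.
```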
In a preferred scheme, during the adversarial training of the GAN, the generator gradually improves its ability to produce realistic data while the discriminator improves its ability to tell real blasting samples from generated ones. After the maximum number of training iterations is reached, the generator model is saved, and its generated samples are used to augment the original training data set, yielding an augmented data set containing both generated and real samples.
In a preferred scheme, the Gaussian process regression prediction model of the blasting effect in step S4 is built as follows:
Gaussian process regression is a non-parametric Bayesian regression method, with regression model
y = f(x) + ε,
where x is the input pre-blasting feature, y the actually observed blasting effect, and ε a noise term with ε ~ N(0, σ_n²); f(x) is the predicted blasting effect, assumed to follow the Gaussian process f(x) ~ GP(m(x), k(x, x*)), whose mean function is determined by the training data and whose covariance function is determined by the kernel:
m(x) = E[f(x)], k(x, x*) = E[(f(x) − m(x))(f(x*) − m(x*))],
where x and x* denote any two pre-blasting feature vectors, m is the mean function, E the mathematical expectation, and k the covariance function;
the prior distribution of the actually observed blasting effect y is therefore
y ~ N(0, K + σ_n²·I_n),
where X is the pre-blasting feature data of the training set, K = K(X, X) = (k_ij) is an n×n positive-definite covariance matrix whose entry k_ij = k(x_i, x_j) measures the correlation between x_i and x_j, K** = K(X*, X*), K* = K(X*, X) is the covariance matrix between the test points and the training set, and I_n is the n-th order identity matrix. The posterior distribution of the predicted value follows:
f* | X, y, X* ~ N(f̄*, cov(f*)), f̄* = K*(K + σ_n²·I_n)^(−1)·y, cov(f*) = K** − K*(K + σ_n²·I_n)^(−1)·K*ᵀ,
where f̄* is the mean of the predicted value and cov(f*) its variance.
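The posterior mean and variance above translate directly into NumPy; this sketch handles one test point, and the noise level sigma_n is an assumed parameter:

```python
import numpy as np

def gpr_posterior(K, k_star, k_ss, y, sigma_n=0.1):
    """Zero-mean GP posterior for a single test point.

    K:      (n, n) training covariance K(X, X).
    k_star: (n,)   cross-covariances K(X, x*).
    k_ss:   scalar k(x*, x*).
    y:      (n,)   observed blasting-effect values.
    sigma_n: noise standard deviation of the term ε.
    """
    A = K + sigma_n**2 * np.eye(K.shape[0])            # K + σ_n² I_n
    alpha = np.linalg.solve(A, y)
    mean = k_star @ alpha                              # posterior mean of f*
    var = k_ss - k_star @ np.linalg.solve(A, k_star)   # posterior variance
    return mean, var
```

With sigma_n = 0 the model interpolates the training data exactly; a positive sigma_n shrinks the mean toward the prior and leaves residual variance.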
In a preferred scheme, the fusion kernel of the multi-kernel Gaussian process regression prediction model in step S4 is constructed as follows (a typical parameterization, with the hyperparameters numbered δ1–δ9):
(1) RBF kernel, for extracting local features of the data, with covariance function
k_RBF(x_i, x_j) = δ1·exp(−‖x_i − x_j‖² / (2·δ2²));
(2) Matern kernel, a generalization of the RBF kernel, for extracting local correlation features, with covariance function (for ν = 3/2)
k_Matern(x_i, x_j) = δ3·(1 + √3·‖x_i − x_j‖/δ4)·exp(−√3·‖x_i − x_j‖/δ4);
(3) linear kernel, describing linear correlation between the data, with covariance function
k_l(x_i, x_j) = δ5 + δ6·x_iᵀx_j;
(4) polynomial kernel, describing nonlinear relationships between the data, with covariance function
k_P(x_i, x_j) = (δ7·x_iᵀx_j + δ8)^δ9;
δ1–δ9 are the kernel hyperparameters to be set;
the kernels are combined into the fusion kernel
k_M(x_i, x_j) = δ10·k_RBF(x_i, x_j) + δ11·k_Matern(x_i, x_j) + δ12·k_l(x_i, x_j) + δ13·k_P(x_i, x_j),
where δ10–δ13 are the combination weights of the different kernels;
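A hedged sketch of the fusion kernel k_M for one pair of inputs; the length scale, offset, degree and weights stand in for the patent's δ1–δ13, whose exact assignment the text does not spell out, and the Matern kernel is taken at ν = 3/2:

```python
import numpy as np

def fused_kernel(xi, xj, w=(1.0, 1.0, 1.0, 1.0), ell=1.0, c=1.0, degree=2):
    """Weighted sum of RBF, Matern-3/2, linear and polynomial kernels."""
    xi, xj = np.asarray(xi, float), np.asarray(xj, float)
    d = np.linalg.norm(xi - xj)
    k_rbf = np.exp(-d**2 / (2 * ell**2))       # local similarity
    s = np.sqrt(3.0) * d / ell
    k_matern = (1 + s) * np.exp(-s)            # Matern, nu = 3/2
    k_lin = xi @ xj                            # linear correlation
    k_poly = (xi @ xj + c) ** degree           # nonlinear relationship
    return w[0]*k_rbf + w[1]*k_matern + w[2]*k_lin + w[3]*k_poly
```

Because each summand is a valid kernel and the weights are non-negative, the weighted sum remains a valid (positive semi-definite) kernel.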
finally, the augmented data set containing generated and real samples is divided into a training set, a validation set and a test set: the training set trains the multi-kernel Gaussian process regression model, the validation set tunes the hyperparameters and selects the optimal model, and the test set evaluates the performance and generalization ability of the model. The model performance indices are:
MAE = (1/n)·Σ_i |y_i − y′_i|, RMSE = √((1/n)·Σ_i (y_i − y′_i)²), R² = 1 − Σ_i (y_i − y′_i)² / Σ_i (y_i − ȳ)²,
where MAE, RMSE and R² are the mean absolute error, root-mean-square error and coefficient of determination, y_i and y′_i are the true and predicted values, and ȳ is the sample mean.
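The three indices can be computed with a small NumPy helper (illustrative, following the standard definitions):

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Return (MAE, RMSE, R^2) for validation/test predictions."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_true - y_pred
    mae = float(np.mean(np.abs(err)))
    rmse = float(np.sqrt(np.mean(err**2)))
    r2 = 1.0 - float(np.sum(err**2) / np.sum((y_true - y_true.mean())**2))
    return mae, rmse, r2
```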
In a preferred scheme, the sparrow search algorithm in step S5 is improved by multi-strategy integration; the strategies comprise a chaotic-mapping initialization mechanism, dynamic adaptive weights, an improved scout position update, and fused Cauchy mutation with random opposition-based learning. The hyperparameter search proceeds as follows:
S501, set the hyperparameters δ1–δ13 of the fusion kernel as the search variables of the sparrow algorithm; the polynomial-kernel hyperparameter δ9 is the highest polynomial degree, with value range {1, 2, 3, 4, 5, 6}; the remaining hyperparameters range over [0, 1]; set the number of finders, the proportion of scouts, and the safety threshold;
S502, initialize the sparrow population positions with the ICMIC (iterative chaotic map with infinite collapses) model:
z_{k+1} = sin(α / z_k),
where Z is the generated chaotic sequence, x_i is the i-th individual of the sparrow population generated from the chaotic sequence, and x_lb, x_ub are the lower and upper bounds of each individual in the search dimension; the polynomial hyperparameter δ9 must still be rounded after initialization.
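ICMIC-based initialization can be sketched as follows; the control parameter alpha, the seed z0 and the linear mapping of the chaotic value into [x_lb, x_ub] are illustrative choices, not values fixed by the patent:

```python
import numpy as np

def icmic_init(pop_size, dim, lb, ub, alpha=2.0, z0=0.7):
    """Initialize a sparrow population with the ICMIC chaotic map
    z_{k+1} = sin(alpha / z_k), mapped linearly into [lb, ub]."""
    lb = np.asarray(lb, dtype=float)
    ub = np.asarray(ub, dtype=float)
    pop = np.empty((pop_size, dim))
    z = z0
    for i in range(pop_size):
        for j in range(dim):
            z = np.sin(alpha / z)                         # chaotic iteration
            pop[i, j] = lb[j] + (ub[j] - lb[j]) * (z + 1.0) / 2.0
    return pop
```

The chaotic sequence spreads initial individuals over the search space more evenly than a plain uniform draw, which is the motivation the patent gives for this mechanism.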
S503, compute the fitness of each individual in the population, sort by fitness, and determine the best and worst individuals; the fitness value of each sparrow is computed as:
fFIT=RMSE+MAE+(1-R2);
S504, update the finder positions using a finder position update with adaptive weight:
where ω is the dynamic adaptive weight of the finder position change, k ∈ [0, 2] controls the weight, T_max is the maximum number of iterations, the update acts on the position of the i-th sparrow in dimension j at iteration t, and the best and worst fitness values of the population at iteration t enter the weight; R ∈ [0, 1] is the alarm value and ST ∈ [0.5, 1] the safety threshold; when R < ST the sparrows forage by global search, and when R ≥ ST they perform a random walk following a normal distribution;
S505, update the joiner positions:
where X_best is the best-fitness position among the current finders and X_worst the current globally worst position; A⁺ = Aᵀ(AAᵀ)^(−1), where A is a row vector of the same dimension as a sparrow individual whose elements are randomly assigned 1 or −1; n is the population size; when i ≤ n/2 the joiner actively follows the finder toward a better foraging position, and when i > n/2 it abandons its current poor foraging position;
S506, randomly select m×n sparrows as scouts, where m is the proportion of scouts in the population, to take on the early-warning task; the traditional scout update formula easily drives the scout position out of control and away from the optimum,
so the scout position is updated with the following optimized formula:
where β is a random number drawn from a normal distribution with mean 0 and variance 1;
the improved update formula means that a sparrow at the current optimal position flies to a random position between the optimal and worst positions, while any other sparrow flies to a random position between itself and the optimal position;
S507, introduce a random factor and use random opposition-based learning to keep the algorithm from falling into a local optimum and to improve population diversity:
where the opposition solution of the optimal solution at iteration t is computed from the bounds x_ub, x_lb and a random value r ∈ [0, 1], giving the position of the optimal individual after random opposition-based learning at iteration t; b1 is an information-exchange control parameter;
S508, simultaneously introduce a Cauchy mutation strategy to counter the tendency of the algorithm to fall into a local optimum caused by the joiners and finders competing for food:
where cauchy(0, 1) is the standard Cauchy distribution;
S509, choose how to update the optimal individual position through a dynamic selection strategy:
when rand(1, e) > P_s, the position update based on the random opposition-based learning strategy is selected; otherwise the position update based on the Cauchy mutation perturbation strategy is selected;
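Steps S507–S509 perturb the best individual by one of two strategies chosen dynamically. A hedged sketch follows; the selection probability p_s and the exact random-opposition formula follow common sparrow-algorithm improvement variants, since the equations are not fully reproduced in the text:

```python
import numpy as np

def perturb_best(x_best, lb, ub, p_s=0.5, rng=None):
    """Dynamic choice between random opposition-based learning and
    Cauchy mutation for the current best individual (S507-S509)."""
    rng = np.random.default_rng() if rng is None else rng
    x_best = np.asarray(x_best, dtype=float)
    lb = np.asarray(lb, dtype=float)
    ub = np.asarray(ub, dtype=float)
    if rng.random() > p_s:
        # Random opposition-based learning: reflect the best solution.
        r = rng.random(x_best.shape)
        x_new = lb + ub - r * x_best
    else:
        # Cauchy mutation: heavy-tailed jump to escape local optima.
        x_new = x_best + x_best * rng.standard_cauchy(x_best.shape)
    return np.clip(x_new, lb, ub)   # stay inside the search bounds
```

The heavy tail of the Cauchy distribution occasionally produces large jumps, while the opposition step explores the mirror region of the current best; alternating the two is what the dynamic selection strategy is for.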
S510, judge whether the maximum number of iterations has been reached; if so, continue to the next step, otherwise repeat from S503;
S511, output the optimal individual position, i.e. the optimal hyperparameter combination of the multi-kernel Gaussian process regression prediction model of the blasting effect.
In a preferred scheme, in step S6, the blasting effect is predicted by the multi-kernel Gaussian process regression model built with the optimal hyperparameter combination output in step S5; new data are input to obtain the predicted blasting effect.
For small-sample blasting-effect prediction, the method offers high accuracy, a simple prediction-model structure and strong generalization ability, specifically:
1) For blasting data characterized by many normal samples, few abnormal samples and a small overall data volume, the generative adversarial network supplements the data set and reduces the training difficulty of the prediction model;
2) For the high dimensionality of the small sample and insufficient nonlinear prediction accuracy, multi-kernel Gaussian process regression improves the generalization ability and prediction accuracy of the model;
3) For setting the mixed-kernel hyperparameters of the multi-kernel Gaussian process regression model, the sparrow search algorithm improved by multi-strategy integration finds the optimal hyperparameters, further enhancing the prediction accuracy.
Drawings
Fig. 1 is the overall flow chart.
Fig. 2 is the gray relational analysis chart.
Fig. 3 is the architecture diagram of data augmentation with the generative adversarial network.
Fig. 4 shows the iteration process of the optimal fitness.
Fig. 5 compares the prediction effects.
Detailed Description
As shown in figs. 1 to 5, the small-sample regression prediction method for tunnel blasting effect is implemented through the following steps:
S1.1, select pre-blasting feature data related to the blasting effect according to expert experience, including blasting parameters, surrounding-rock parameters and tunnel-design parameters; these may include, but are not limited to, surrounding-rock class, rock compressive strength, rock integrity coefficient, cross-sectional area, drilling depth, lookout (external insertion) angle, specific charge, cut angle and perimeter-hole spacing, expressed in the matrix form X = (x_ij),
where n is the number of collected pre-blasting feature records and m is the dimension of the pre-blasting features.
S1.2, collect post-blasting feature data that reflect the quality of the blasting effect, which may specifically include, but are not limited to, overbreak/underbreak amount, advance, half-cast ratio and fragmentation, expressed in the matrix form Y = (y_ij),
where n is the number of collected blasting-effect records and p is the dimension of the blasting-effect features.
S2.1, normalize the collected data:
x_norm = (x − x_min) / (x_max − x_min),
where x_norm ∈ [0, 1] is the normalized value, x the original value, and x_max, x_min the maximum and minimum of the parameter sequence.
S2.2, take the pre-blasting features as comparison-feature columns and the post-blasting effect data as reference-feature columns, and compute the relational degree between each pre-blasting feature and the blasting effect by gray relational analysis. First compute the two-pole differences between the comparison and reference columns:
a = min_m min_k |y_kp − x_km|, b = max_m max_k |y_kp − x_km|,
where a and b are the two-pole minimum and maximum differences, y_kp is the k-th row of the p-th reference-feature column, and x_km is the k-th row of the m-th comparison-feature column.
S2.3, compute the relational degree:
r_mp = (1/n) Σ_k (a + ρ·b) / (|y_kp − x_km| + ρ·b),
where r_mp is the relational degree between the m-th comparison feature and the p-th reference feature, i.e. between the pre-blasting feature parameter and the blasting effect, and ρ is the resolving coefficient, typically 0.5.
S2.4, sort the relational degrees of the pre-blasting features in descending order (see fig. 2), set the feature-selection dimension Ni, and, after screening, take the Ni comparison features with the highest relational degree as input feature data. Divide the screened input feature data (dimension Ni) and blasting-effect data (dimension No) into a training data set and a test data set.
S3.1, build the generative adversarial network model. The generator consists of a one-dimensional convolution layer, a fully connected layer and an activation-function layer; it takes as input a noise vector of dimension (Ni+No) drawn from a Gaussian distribution and outputs a generated sample of dimension (Ni+No). The discriminator comprises two fully connected layers and a Sigmoid activation layer.
S3.2, train the discriminator and generator networks. The discriminator loss is the sum of its losses on the real blasting data and on the generated samples; the generator loss is computed from the discriminator's judgment of whether the generated data are real. The loss function is the binary cross-entropy loss:
BCELoss = −(y·log(p) + (1 − y)·log(1 − p)),
where y is the true label, 1 for real blasting data and 0 for generated samples, and p is the probability estimate output by the discriminator's Sigmoid activation layer.
S3.3, save the generator network. Input Gaussian noise vectors to produce generated samples that augment the original data set, yielding the augmented data set. The architecture for obtaining augmented data with the generative adversarial network is shown in fig. 3.
S4.1, build the Gaussian process regression model:
y = f(x) + ε,
where x is the input pre-blasting feature, y the actually observed blasting effect, and ε a noise term with ε ~ N(0, σ_n²); f(x) is the predicted blasting effect, assumed to follow the Gaussian process f(x) ~ GP(m(x), k(x, x*)), whose mean function is determined by the training data and whose covariance function by the kernel:
m(x) = E[f(x)], k(x, x*) = E[(f(x) − m(x))(f(x*) − m(x*))],
where x and x* denote any two pre-blasting feature vectors, m is the mean function, E the mathematical expectation and k the covariance function.
S4.2, obtain the prior distribution of the actually observed blasting effect y:
y ~ N(0, K + σ_n²·I_n),
where X is the pre-blasting feature data of the training set, K = K(X, X) = (k_ij) is an n×n positive-definite covariance matrix with k_ij = k(x_i, x_j) measuring the correlation between x_i and x_j, K** = K(X*, X*), K* = K(X*, X) is the covariance matrix between the test points and the training set, and I_n is the n-th order identity matrix.
S4.3, obtain the posterior distribution of the predicted value:
f* | X, y, X* ~ N(f̄*, cov(f*)), f̄* = K*(K + σ_n²·I_n)^(−1)·y, cov(f*) = K** − K*(K + σ_n²·I_n)^(−1)·K*ᵀ,
where f̄* is the mean of the predicted value and cov(f*) its variance.
S4.4, construct the fusion kernel (a typical parameterization, with the hyperparameters numbered δ1–δ9):
1) RBF kernel, for extracting local features of the data: k_RBF(x_i, x_j) = δ1·exp(−‖x_i − x_j‖² / (2·δ2²));
2) Matern kernel, a generalization of the RBF kernel, also for extracting local correlation features (for ν = 3/2): k_Matern(x_i, x_j) = δ3·(1 + √3·‖x_i − x_j‖/δ4)·exp(−√3·‖x_i − x_j‖/δ4);
3) linear kernel, describing linear correlation between the data: k_l(x_i, x_j) = δ5 + δ6·x_iᵀx_j;
4) polynomial kernel, describing nonlinear relationships between the data: k_P(x_i, x_j) = (δ7·x_iᵀx_j + δ8)^δ9,
where δ1–δ9 are the kernel hyperparameters to be set.
The kernels are combined into the fusion kernel
kM(xi,xj)=δ10·kRBF(xi,xj)+δ11·kMatern(xi,xj)+δ12·kl(xi,xj)+δ13·kP(xi,xj),
where δ10–δ13 are the combination weights of the different kernels.
S4.5, divide the augmented data set containing generated and real samples into a training set and a validation set: the training set trains the multi-kernel Gaussian process regression model, and the validation set tunes the hyperparameters and selects the optimal model. The performance indices are:
MAE = (1/n)·Σ_i |y_i − y′_i|, RMSE = √((1/n)·Σ_i (y_i − y′_i)²), R² = 1 − Σ_i (y_i − y′_i)² / Σ_i (y_i − ȳ)²,
where MAE, RMSE and R² are the mean absolute error, root-mean-square error and coefficient of determination, y_i and y′_i are the true and predicted values, and ȳ is the sample mean.
S5.1, set the hyperparameters δ1–δ13 of the fusion kernel as the search variables of the sparrow algorithm. The polynomial-kernel hyperparameter δ9 is the highest polynomial degree, with value range {1, 2, 3, 4, 5, 6}; the remaining hyperparameters range over [0, 1]. Set the number of finders, the proportion of scouts and the safety threshold.
S5.2, initialize the sparrow population positions with the ICMIC chaotic map:
z_{k+1} = sin(α / z_k),
where Z is the generated chaotic sequence, x_i is the i-th individual of the sparrow population generated from the chaotic sequence, and x_lb, x_ub are the lower and upper bounds of each individual in the search dimension. In particular, the polynomial hyperparameter δ9 must still be rounded after initialization.
S5.3, compute the fitness of each individual in the population. Build the multi-kernel Gaussian process regression model of step S4 from each sparrow's parameters, train and validate it on the augmented data set, compute the performance indices and fitness, sort by fitness, and determine the best and worst individuals. The fitness value of each sparrow is:
fFIT=RMSE+MAE+(1-R2)
S5.4 updating the finder position:
Wherein ω is the dynamic self-adaptive weight of the position change of the finder, k is the control weight value range of [0,2]; T max represents the maximum iteration number of the iteration; representing the position information of the ith sparrow when the j-dimensional iteration number is t; the method comprises the steps of respectively representing optimal fitness and worst fitness values of population individuals when iteration times are t, representing early warning values by R epsilon [0,1], representing a safety threshold value by ST epsilon [0.5,1], carrying out global search foraging by sparrows when R is less than ST, carrying out random walk by the sparrows in a regular too-distributed mode when R is more than ST, and carrying out random number obeying normal distribution by Q.
S5.5 updating the enrollee location:
in the formula, The position with the optimal fitness value in the current discoverer; The method comprises the steps of obtaining a current global fitness worst position, wherein A +=AT(AAT)-1, A represents a single column vector with the same dimension as a sparrow individual, an internal element is formed by a 1-1 random set, n is the population size, when i is less than or equal to n/2, a user actively follows a finder to move towards a better foraging position, and when i is more than n/2, the user gets rid of the current worse foraging position.
S5.6: Randomly select m × n sparrows as scouts, where m is the proportion of scouts in the population, to carry out the early-warning task, and update the scout positions:
where X_best^t is the best individual position at iteration t and β is a random number following a normal distribution with mean 0 and variance 1. Under the improved position-update formula, a sparrow located at the current best position flies to a random point between the best and worst positions; otherwise, it flies to a random point between its own position and the best position.
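The scout-update equation is also an image in the original; a form consistent with the prose description above (fly between best and worst if already at the best position, otherwise toward the best) can be sketched as:

```latex
X_{i,j}^{t+1} =
\begin{cases}
X_{best}^{t} + \beta \cdot \left( X_{worst}^{t} - X_{best}^{t} \right), & f_i = f_b,\\[4pt]
X_{i,j}^{t} + \beta \cdot \left( X_{best}^{t} - X_{i,j}^{t} \right), & f_i \ne f_b,
\end{cases}
```

with β ∼ N(0, 1) as stated; this reconstruction is an assumption, not the patent's verbatim formula.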
S5.7: Introduce a random factor and apply random reverse (opposition-based) learning to keep the algorithm from falling into a local optimum and to improve population diversity:
where the left-hand side is the reverse solution of the best solution at the t-th iteration; x_ub and x_lb are the upper and lower bounds; r is a random value in [0, 1]; and b₁ is the information-exchange control parameter. The resulting term denotes the position of the best individual after random reverse learning at iteration t.
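A minimal sketch of one common form of random reverse (opposition-based) learning, x' = x_lb + x_ub − r·x, blended with the original position through the control parameter b₁; the function name and the blending step are assumptions, since the patent's exact formula is not reproduced in this text:

```python
import random

def random_opposition(x_best, x_lb, x_ub, b1=0.5):
    """Random reverse learning on the best solution (sketch).

    x_best: best individual's position vector; x_lb, x_ub: per-dimension
    lower/upper bounds; b1: assumed information-exchange control parameter
    blending the original and reversed positions.
    """
    opposed = []
    for x, lb, ub in zip(x_best, x_lb, x_ub):
        r = random.random()                    # r in [0, 1)
        x_op = lb + ub - r * x                 # random reverse of x
        x_new = b1 * x + (1.0 - b1) * x_op     # exchange information with original
        opposed.append(min(max(x_new, lb), ub))  # clamp back into bounds
    return opposed
```
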
S5.8: Introduce a Cauchy mutation strategy to counteract the algorithm's tendency to fall into a local optimum as the joiners and finders compete for food:

where cauchy(0, 1) denotes the standard Cauchy distribution.
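A minimal sketch of the Cauchy mutation step, assuming the common multiplicative form x' = x·(1 + cauchy(0, 1)); a standard Cauchy sample is drawn by inverse-CDF transform, tan(π(u − 1/2)). The heavy tail produces occasional large jumps that help escape local optima:

```python
import math
import random

def cauchy_mutation(x_best):
    """Perturb the best individual with standard Cauchy noise (sketch).

    Each component is scaled by (1 + c), where c ~ cauchy(0, 1) is sampled
    via the inverse CDF tan(pi * (u - 0.5)) with u uniform on [0, 1).
    """
    return [x * (1.0 + math.tan(math.pi * (random.random() - 0.5)))
            for x in x_best]
```
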
S5.9: Update the best individual position through a dynamic selection strategy: if rand(1, e) > P_s, the position is updated with the random reverse-learning strategy; otherwise, it is updated with the Cauchy mutation perturbation strategy.
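A minimal sketch of the dynamic selection in S5.9; rand(1, e) is read here as a uniform draw on [1, e], and P_s is left as a parameter because the patent's definition of it is not reproduced in this text:

```python
import math
import random

def select_update(p_s=2.0):
    """Dynamically choose between the two perturbation strategies (sketch).

    p_s: assumed selection threshold; the draw is uniform on [1, e],
    following a literal reading of rand(1, e) in the text.
    """
    r = random.uniform(1.0, math.e)
    if r > p_s:
        return "random_reverse_learning"   # opposition-based update (S5.7)
    return "cauchy_mutation"               # Cauchy perturbation update (S5.8)
```
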
S5.10: Determine whether the maximum number of iterations has been reached; if not, return to S5.3, otherwise continue. The iteration history of the best fitness is shown in Fig. 4.
S5.11: Output the best individual position, i.e., the optimal hyper-parameter combination of the blasting-effect multi-kernel Gaussian process regression prediction model, and build the optimal multi-kernel Gaussian process regression prediction model under this combination through steps S4.1 to S4.5.
S6: Save the optimal blasting-effect multi-kernel Gaussian process regression prediction model under the optimal hyper-parameter combination, and input the test data set to obtain its blasting-effect predictions. Compare the prediction accuracy against commonly used models: SVM (support vector machine), BP (back-propagation neural network), RF (random forest), and GPR (single-kernel Gaussian process regression), versus this patent's model, MGPR (see Fig. 5).
The foregoing embodiments merely illustrate the technical solution of the present invention and do not limit it. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described therein may be modified, or some of their technical features replaced by equivalents, without such modifications or substitutions departing in essence from the spirit and scope of the technical solutions of the embodiments of the present invention.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202411512790.1A CN119474644A (en) | 2024-10-28 | 2024-10-28 | A small sample regression prediction method for tunnel blasting effect |
Publications (1)
Publication Number | Publication Date |
---|---|
CN119474644A true CN119474644A (en) | 2025-02-18 |
Family
ID=94569153
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202411512790.1A Pending CN119474644A (en) | 2024-10-28 | 2024-10-28 | A small sample regression prediction method for tunnel blasting effect |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN119474644A (en) |
Cited By (3)

Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN119846485A * | 2025-03-21 | 2025-04-18 | Southwest University of Science and Technology | Transformer-based lithium battery health state estimation method |
CN120086956A * | 2025-05-06 | 2025-06-03 | Shandong University of Science and Technology | A tunnel blasting vibration velocity prediction method and system integrating multi-factor intelligent optimization |
CN120086956B * | 2025-05-06 | 2025-07-11 | Shandong University of Science and Technology | Tunnel blasting vibration speed prediction method and system integrating multi-factor intelligent optimization |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||