
CN119474644A - A small sample regression prediction method for tunnel blasting effect - Google Patents


Info

Publication number
CN119474644A
CN119474644A (application CN202411512790.1A)
Authority
CN
China
Prior art keywords
data
blasting
blasting effect
kernel
effect
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202411512790.1A
Other languages
Chinese (zh)
Inventor
陈培帅
刘哲
唐启
杨钊
姬付全
曹昂
杨林
唐湘隆
彭松林
于锦
周伟
袁青
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Rail Transit Branch Of China Communications Construction Co ltd
CCCC Second Harbor Engineering Co
Original Assignee
Rail Transit Branch Of China Communications Construction Co ltd
CCCC Second Harbor Engineering Co
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Rail Transit Branch Of China Communications Construction Co ltd, CCCC Second Harbor Engineering Co filed Critical Rail Transit Branch Of China Communications Construction Co ltd
Priority: CN202411512790.1A
Publication: CN119474644A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/18Complex mathematical operations for evaluating statistical data, e.g. average values, frequency distributions, probability functions, regression analysis
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/004Artificial life, i.e. computing arrangements simulating life
    • G06N3/006Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Abstract

The application provides a small-sample regression prediction method for tunnel blasting effect, comprising the following steps: collecting pre-blasting feature data and post-blasting effect data; normalizing the data and selecting the input features of the regression prediction model by grey relational analysis; establishing a generative adversarial network (GAN) model and augmenting the training data set; establishing a multi-kernel Gaussian process regression prediction model of the blasting effect; searching the hyperparameters of the prediction model with a multi-strategy improved sparrow search algorithm; and predicting new blasting effects with the model under the optimal parameters. The application addresses the difficulties that blasting engineering data are hard to collect, the sample size is small, and the data are imbalanced (many samples with good effect, few with poor effect). It improves the accuracy and efficiency of blasting effect prediction, strengthens the generalization ability of the prediction model to the greatest extent, and safeguards the construction safety and efficiency of engineering projects.

Description

Small sample regression prediction method for tunnel blasting effect
Technical Field
The invention belongs to the technical field of tunnel blasting, and particularly relates to a small sample regression prediction method for tunnel blasting effect.
Background
The drill-and-blast method is widely used in mountain tunnel engineering for its efficiency, economy, flexibility and wide applicability. However, because blasting designs, tunnel structures and surrounding-rock properties differ, the impact of the explosion on the blast-hole walls also differs, and poor blasting results such as tunnel overbreak and underbreak, insufficient collapse of bench blasting muck piles, and oversized blocks can occur, affecting construction safety and efficiency. Predicting and judging the blasting effect before the blasting operation, and adjusting the blasting design reasonably and in time, therefore has important engineering value.
For the problem of blasting effect prediction, most methods use only pre-blasting feature data and a deep learning algorithm to predict the characteristic parameters of the blasting effect. However, blasting engineering data are difficult to collect and the sample size is small; at the same time the data are imbalanced, with many samples of good effect and few samples of poor effect. Deep learning algorithms usually require large amounts of data to train and adjust the weight and bias parameters of the neural network so that the model can extract the features and patterns of the data. This conflicts with the data characteristics of blasting engineering and makes it difficult to obtain an optimal prediction model. How to fully account for the small-sample character of blasting process feature data and achieve accurate prediction of the tunnel blasting effect is therefore a technical problem to be solved in this field.
Disclosure of Invention
The invention aims to solve the technical problems in the background and provides a small-sample regression prediction method for tunnel blasting effect, comprising the following steps:
S1, collecting pre-blasting feature data and post-blasting effect data;
S2, normalizing the data and selecting input features of the regression prediction model by grey relational analysis;
S3, establishing a generative adversarial network (GAN) model and augmenting the training data set;
S4, establishing a multi-kernel Gaussian process regression prediction model of the blasting effect;
S5, searching the hyperparameters of the prediction model with the multi-strategy improved sparrow search algorithm;
S6, predicting the blasting effect.
In a preferred embodiment, in step S1, the collected pre-blasting feature data are features related to the blasting effect, including blasting parameters, surrounding-rock parameters and tunnel design parameters, which may include, but are not limited to, surrounding rock grade, rock compressive strength, rock integrity coefficient, cross-sectional area, drilling depth, lookout angle, specific charge, cut angle and perimeter-hole spacing, expressed in the following matrix form:

X = [x_ij] ∈ R^(n×m),

where n is the number of collected pre-blasting feature records and m is the dimension of the pre-blasting features.
The collected blasting effect data are features that reflect the quality of the blasting effect, specifically including, but not limited to, overbreak/underbreak volume, advance per round, half-cast ratio and fragmentation, expressed in the following matrix form:

Y = [y_ij] ∈ R^(n×p),

where n is the number of collected blasting effect records and p is the dimension of the blasting effect features.
The data normalization method in step S2 is:

x_norm = (x - x_min) / (x_max - x_min),

where x_norm ∈ [0,1] is the normalized data value, x is the original data value, and x_max and x_min are the maximum and minimum of the parameter sequence.
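As a minimal illustration, the min-max normalization above can be sketched in NumPy (column-wise scaling per feature is assumed; the function name is illustrative):

```python
import numpy as np

def min_max_normalize(x):
    """Scale each feature column of x into [0, 1]:
    x_norm = (x - x_min) / (x_max - x_min)."""
    x = np.asarray(x, dtype=float)
    x_min = x.min(axis=0)   # per-column minimum
    x_max = x.max(axis=0)   # per-column maximum
    return (x - x_min) / (x_max - x_min)
```

Each column is scaled independently so that features with different physical units become comparable before the grey relational analysis.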
In a preferred embodiment, the specific process of selecting the model input features in step S2 includes:
First, the ranges between each comparison feature column and the reference feature column are calculated:

Δ_mp(k) = |y_kp - x_km|,  a = min_m min_k Δ_mp(k),  b = max_m max_k Δ_mp(k),

where a and b are the two-pole minimum and maximum differences, y_kp is the k-th row of the p-th reference feature column, and x_km is the k-th row of the m-th comparison feature column;
then the relational degree is calculated:

ξ_mp(k) = (a + ρ·b) / (Δ_mp(k) + ρ·b),  r_mp = (1/n) Σ_{k=1..n} ξ_mp(k),

where r_mp is the relational degree between the m-th comparison feature and the p-th reference feature, i.e. between a pre-blasting feature parameter and the blasting effect, and ρ is the resolution coefficient;
In a preferred embodiment, step S2 further includes sorting the relational degrees of the pre-blasting features in descending order, setting the feature selection dimension Ni, and taking the Ni comparison features with the highest relational degrees as the input feature data after screening;
the screened input feature data and the blasting effect data are then divided into a training data set and a test data set.
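The grey relational screening can be sketched as follows (an illustrative NumPy implementation against one reference column; the function name and column-wise layout are assumptions):

```python
import numpy as np

def grey_relational_degree(compare, reference, rho=0.5):
    """Grey relational grade of each comparison feature column against one
    reference (blasting-effect) column; rho is the resolution coefficient."""
    compare = np.asarray(compare, dtype=float)                    # (n, m), normalized
    reference = np.asarray(reference, dtype=float).reshape(-1, 1) # (n, 1)
    delta = np.abs(compare - reference)       # absolute differences
    a, b = delta.min(), delta.max()           # two-pole min / max differences
    xi = (a + rho * b) / (delta + rho * b)    # relational coefficients
    return xi.mean(axis=0)                    # grade per feature column
```

Sorting the returned grades in descending order and keeping the top Ni columns reproduces the screening step; e.g. `np.argsort(-r)[:Ni]` gives the selected column indices.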
In a preferred embodiment, in the generative adversarial network model established in step S3, the generator network consists of a one-dimensional convolution layer, fully connected layers and activation function layers.
The generator takes as input a noise vector of dimension (Ni+No) drawn from a Gaussian distribution and outputs a generated sample of dimension (Ni+No). The discriminator network consists of two fully connected layers and a Sigmoid activation layer; its output is a scalar between 0 and 1 representing the discriminator's probability estimate that the input vector is real data, i.e. the similarity between the input distribution and the real data distribution.
During training, the discriminator loss is the sum of its losses on real blasting data and on generated samples, while the generator loss is computed from the discriminator's judgment of whether the generated data are real. The loss function is the binary cross-entropy loss:
BCELoss = -(y·log(p) + (1-y)·log(1-p)),
where y is the true label (1 for real blasting data, 0 for generated samples) and p is the probability estimate output by the discriminator's Sigmoid activation layer.
In a preferred embodiment, during the adversarial training of the GAN, the generator gradually improves its ability to produce realistic data while the discriminator improves its ability to tell real blasting samples from generated ones. After the maximum number of training iterations is reached, the generator model is saved, and its generated samples are used to augment the original training data set, yielding an augmented data set containing both generated and real samples.
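The discriminator and generator losses described above can be sketched numerically (an illustrative NumPy version of the binary cross-entropy bookkeeping; the network forward passes themselves are omitted, and the function names are assumptions):

```python
import numpy as np

def bce_loss(y, p, eps=1e-12):
    """Binary cross-entropy -(y*log(p) + (1-y)*log(1-p)), averaged over a batch."""
    y = np.asarray(y, dtype=float)
    p = np.clip(np.asarray(p, dtype=float), eps, 1 - eps)  # avoid log(0)
    return float(np.mean(-(y * np.log(p) + (1 - y) * np.log(1 - p))))

def discriminator_loss(p_real, p_fake):
    """Real samples are labelled 1, generated samples 0; the two losses are summed."""
    p_real = np.asarray(p_real, dtype=float)
    p_fake = np.asarray(p_fake, dtype=float)
    return bce_loss(np.ones_like(p_real), p_real) + bce_loss(np.zeros_like(p_fake), p_fake)

def generator_loss(p_fake):
    """The generator tries to make the discriminator output 1 on fake samples."""
    p_fake = np.asarray(p_fake, dtype=float)
    return bce_loss(np.ones_like(p_fake), p_fake)
```

As training progresses, a discriminator output of 0.5 on generated samples (generator loss ≈ ln 2) indicates the discriminator can no longer distinguish generated data from real data.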
In a preferred embodiment, the Gaussian process regression prediction model of the blasting effect in step S4 is established as follows:
Gaussian process regression is a non-parametric Bayesian regression method with model expression:
y = f(x) + ε,
where x is the input pre-blasting feature vector, y is the actually observed blasting effect, and ε is a noise variable with ε ~ N(0, σ_n²). f(x) is the predicted blasting effect, assumed to follow a Gaussian process f(x) ~ GP(m(x), k(x, x_*)) whose mean function is determined by the training data and whose covariance function is determined by the kernel function:

m(x) = E[f(x)],
k(x, x_*) = E[(f(x) - m(x))(f(x_*) - m(x_*))],

where x and x_* are any two pre-blasting feature vectors, m is the mean function, E the mathematical expectation, and k the covariance function.
The prior distribution of the observed blasting effect y is therefore:

y ~ N(m(X), K + σ_n²·I_n),

where X is the pre-blasting feature data of the training set, K = K(X, X) = [k_ij] is a positive definite n×n covariance matrix whose element k_ij = k(x_i, x_j) measures the correlation between x_i and x_j, k_** = K(X_*, X_*), K_* = K(X_*, X) is the covariance matrix between the test points and the training set, and I_n is the n-th order identity matrix. The posterior distribution of the predicted value is then:

f̄_* = K_* (K + σ_n²·I_n)^(-1) y,
cov(f_*) = k_** - K_* (K + σ_n²·I_n)^(-1) K_*^T,

where f̄_* is the mean of the predicted values and cov(f_*) the variance.
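The posterior computation can be sketched as follows (a zero-mean prior is assumed for simplicity, and the kernel matrices are taken as inputs; the Cholesky-based solve is a standard numerical choice, not stated in the patent):

```python
import numpy as np

def gpr_predict(K, K_star, k_star_star, y, sigma_n2):
    """Posterior mean and covariance of a zero-mean Gaussian process:
    mean = K_* (K + sigma_n^2 I)^{-1} y
    cov  = k_** - K_* (K + sigma_n^2 I)^{-1} K_*^T
    """
    n = K.shape[0]
    A = K + sigma_n2 * np.eye(n)
    L = np.linalg.cholesky(A)                         # stable solve via Cholesky
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = K_star @ alpha                             # posterior mean
    v = np.linalg.solve(L, K_star.T)
    cov = k_star_star - v.T @ v                       # posterior covariance
    return mean, cov
```

With near-zero noise the posterior interpolates the training data: predicting at a training point returns (almost exactly) its observed value with vanishing variance.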
In a preferred embodiment, the fused kernel of the multi-kernel Gaussian process regression prediction model in step S4 is constructed as follows:
(1) RBF kernel, for extracting local data features, with covariance function:

k_RBF(x_i, x_j) = δ_1² · exp(-‖x_i - x_j‖² / (2δ_2²));

(2) Matern kernel, a generalization of the RBF kernel, for extracting local correlation features, with covariance function (the smoothness ν = 3/2 form):

k_Matern(x_i, x_j) = δ_3² · (1 + √3·‖x_i - x_j‖/δ_4) · exp(-√3·‖x_i - x_j‖/δ_4);

(3) linear kernel, for describing the linear correlation between data, with covariance function:

k_l(x_i, x_j) = δ_5 · (x_i · x_j) + δ_6;

(4) polynomial kernel, for describing nonlinear relations between data, with covariance function:

k_P(x_i, x_j) = (δ_7 · (x_i · x_j) + δ_8)^(δ_9),

where δ_1 ~ δ_9 are the kernel hyperparameters to be set.
The kernels are combined into the fused kernel:
k_M(x_i,x_j) = δ_10·k_RBF(x_i,x_j) + δ_11·k_Matern(x_i,x_j) + δ_12·k_l(x_i,x_j) + δ_13·k_P(x_i,x_j),
where δ_10 ~ δ_13 are the covariance weights of the different kernels.
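The fused kernel can be sketched as below. The Matern-3/2 smoothness and the mapping of δ_1..δ_13 onto specific kernel parameters are assumptions (the patent text names the four kernels and the weighted sum but does not show the formula images); the dictionary keys d1..d13 are illustrative:

```python
import numpy as np

def sq_dists(Xa, Xb):
    """Pairwise squared Euclidean distances between row vectors."""
    return ((Xa[:, None, :] - Xb[None, :, :]) ** 2).sum(-1)

def fused_kernel(Xa, Xb, d):
    """Weighted sum of RBF, Matern-3/2, linear and polynomial kernels."""
    r2 = sq_dists(Xa, Xb)
    r = np.sqrt(r2)
    k_rbf = d["d1"] ** 2 * np.exp(-r2 / (2 * d["d2"] ** 2))      # local features
    s = np.sqrt(3.0) * r / d["d4"]
    k_mat = d["d3"] ** 2 * (1 + s) * np.exp(-s)                  # Matern-3/2 (assumed nu)
    k_lin = d["d5"] * (Xa @ Xb.T) + d["d6"]                      # linear correlation
    k_pol = (d["d7"] * (Xa @ Xb.T) + d["d8"]) ** d["d9"]         # nonlinear relation
    return (d["d10"] * k_rbf + d["d11"] * k_mat
            + d["d12"] * k_lin + d["d13"] * k_pol)
```

Because each component kernel is symmetric, the fused kernel matrix on a training set is symmetric, and with non-negative weights it remains a valid covariance function.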
Finally, the augmented data set containing generated and real samples is divided into a training set, a validation set and a test set: the training set trains the multi-kernel Gaussian process regression prediction model, the validation set is used to tune the hyperparameters and select the optimal model, and the test set evaluates the performance and generalization ability of the model. The performance evaluation indices are:

MAE = (1/n) Σ |y_i - y'_i|,
RMSE = sqrt((1/n) Σ (y_i - y'_i)²),
R² = 1 - Σ (y_i - y'_i)² / Σ (y_i - ȳ)²,

where MAE, RMSE and R² are the mean absolute error, root mean square error and coefficient of determination, y_i and y'_i are the true and predicted values, and ȳ is the sample mean.
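The three evaluation indices can be computed directly (a short NumPy sketch; the function name is illustrative):

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Return (MAE, RMSE, R^2) for a pair of true / predicted vectors."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mae = np.mean(np.abs(y_true - y_pred))            # mean absolute error
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))   # root mean square error
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot                        # coefficient of determination
    return mae, rmse, r2
```

These same three quantities feed the fitness function f_FIT = RMSE + MAE + (1 - R²) used later in step S503.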
In a preferred embodiment, the multi-strategy improved sparrow search algorithm in step S5 integrates the following strategies: a chaotic mapping mechanism, dynamic adaptive weights, an improved scout position update, and fused Cauchy mutation and random reverse learning. The hyperparameter search proceeds as follows:
S501, the hyperparameters δ_1 ~ δ_13 of the fused kernel are set as the search solution of the sparrow algorithm; the polynomial kernel hyperparameter δ_9 is the highest degree of the polynomial, with value range {1,2,3,4,5,6}, and the remaining hyperparameters take values in [0,1]; the number of finders, the proportion of scouts and the safety threshold are set;
S502, the sparrow population positions are initialized with the ICMIC chaotic map:

z_{k+1} = sin(a / z_k),  x_i = x_lb + (z_i + 1)/2 · (x_ub - x_lb),

where Z is the generated chaotic sequence, x_i is the i-th individual of the sparrow population generated from the chaotic mapping sequence, and x_lb and x_ub are the lower and upper bounds of each individual in each search dimension; the polynomial hyperparameter δ_9 still needs to be rounded after initialization.
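Chaotic initialization can be sketched as follows. The ICMIC (infinite collapse) iteration z_{k+1} = sin(a/z_k) keeps its values in [-1, 1], which are then mapped affinely onto the search bounds; the constants a and z0 and the affine mapping are illustrative assumptions:

```python
import numpy as np

def icmic_init(pop_size, dim, x_lb, x_ub, a=2.0, z0=0.7):
    """Initialize a population with the ICMIC map z_{k+1} = sin(a / z_k)."""
    x_lb = np.asarray(x_lb, dtype=float)
    x_ub = np.asarray(x_ub, dtype=float)
    z = np.empty((pop_size, dim))
    val = z0
    for i in range(pop_size):
        for j in range(dim):
            val = np.sin(a / val)   # chaotic iteration, values in [-1, 1]
            z[i, j] = val
    # map [-1, 1] affinely onto [x_lb, x_ub] per dimension
    return x_lb + (z + 1.0) / 2.0 * (x_ub - x_lb)
```

Compared with uniform random initialization, the chaotic sequence spreads the initial population more evenly over the search space, which is the motivation for this strategy.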
S503, the fitness of each individual in the population is calculated, the individuals are sorted by fitness, and the best and worst individuals are determined. The fitness of each sparrow is:
f_FIT = RMSE + MAE + (1 - R²);
S504, the finder positions are updated with a position update rule carrying a dynamic adaptive weight:
where ω is the dynamic adaptive weight of the finder position change, k ∈ [0,2] controls the weight, T_max is the maximum number of iterations, X_{i,j}^t is the position of the i-th sparrow in dimension j at iteration t, and f_best^t and f_worst^t are the best and worst fitness values of the population at iteration t; R ∈ [0,1] is the alarm value and ST ∈ [0.5,1] the safety threshold: when R < ST the sparrows forage by global search, and when R ≥ ST they perform a random walk following a normal distribution;
S505, the joiner positions are updated:
where X_P^{t+1} is the position with the best fitness among the current finders, X_worst^t is the current globally worst position, A^+ = A^T(AA^T)^{-1} with A a row vector of the same dimension as a sparrow individual whose elements are randomly set to 1 or -1, and n is the population size; when i ≤ n/2 the joiner actively follows the finder toward a better foraging position, and when i > n/2 it leaves its current poor foraging position.
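The joiner update described above can be sketched from its textual description. Since the patent's formula image is not shown, this follows the canonical sparrow search algorithm form (i > n/2: flee the worst spot; otherwise: follow the best finder via A^+ = A^T(AA^T)^{-1}); function and variable names are illustrative:

```python
import numpy as np

def update_joiners(pop, fits, X_best, X_worst, rng):
    """Canonical SSA joiner (follower) position update."""
    n, dim = pop.shape
    new_pop = pop.copy()
    order = np.argsort(fits)              # joiners ranked by fitness (best first)
    for rank, i in enumerate(order, start=1):
        if rank > n / 2:
            # worse half: flee the current worst foraging position
            Q = rng.normal()              # standard normal random number
            new_pop[i] = Q * np.exp((X_worst - pop[i]) / rank ** 2)
        else:
            # better half: follow the best finder
            A = rng.choice([-1.0, 1.0], size=dim)   # random +/-1 row vector
            A_plus = A / (A @ A)                    # A^T (A A^T)^{-1} for a row vector
            new_pop[i] = X_best + (np.abs(pop[i] - X_best) @ A_plus) * np.ones(dim)
    return new_pop
```

For a ±1 row vector A, the pseudo-inverse A^+ reduces to A divided by the dimension, which is why the follower step collapses to a scalar projection broadcast over all dimensions.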
S506, m×n sparrows are randomly selected as scouts, where m is the proportion of scouts in the population, to undertake the early-warning task. Because the traditional scout update formula easily drives the scout position out of bounds and away from the optimum,
the scout position is updated with the following improvement:
where β is a random number following a normal distribution with mean 0 and variance 1;
the improved position update formula means that a sparrow at the current optimal position flies to a random position between the optimal and worst positions, while any other sparrow flies to a random position between itself and the optimal position;
S507, random factors are introduced and a random reverse learning strategy is used to keep the algorithm from falling into a local optimum and to improve population diversity:
where x̂_best^t is the reverse solution of the optimal solution at iteration t, x_ub and x_lb are the upper and lower bounds, r is a random value in [0,1], x'_best^t is the position of the optimal individual after random reverse learning at iteration t, and b_1 is an information exchange control parameter;
S508, a Cauchy mutation strategy is introduced at the same time to relieve the algorithm's tendency to fall into a local optimum caused by the competition for food between joiners and finders:
where cauchy(0,1) is the standard Cauchy distribution;
S509, the optimal individual position is updated through a dynamic selection strategy:
when rand(1, e) > P_s, the position update based on the random reverse learning strategy is chosen; otherwise, the position update based on the Cauchy mutation perturbation strategy is chosen;
S510, whether the maximum number of iterations has been reached is checked; if so, the next step continues, otherwise the procedure repeats from S503;
S511, the optimal individual position, i.e. the optimal hyperparameter combination of the multi-kernel Gaussian process regression prediction model of the blasting effect, is output.
In a preferred embodiment, in step S6, blasting effect prediction uses the multi-kernel Gaussian process regression model built with the optimal hyperparameter combination output in step S5, and the blasting effect is obtained by inputting new data.
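The overall hyperparameter search of steps S501-S511 can be sketched schematically. This is a simplified stand-in that keeps only initialization, fitness evaluation with f_FIT, and movement toward the best individual; the patented multi-strategy improvements (chaotic initialization, adaptive weights, scout update, reverse learning, Cauchy mutation) are deliberately omitted, and all names are illustrative:

```python
import numpy as np

def fitness(rmse, mae, r2):
    """f_FIT = RMSE + MAE + (1 - R^2), as defined in step S503."""
    return rmse + mae + (1.0 - r2)

def hyperparameter_search(evaluate, dim, x_lb, x_ub, pop_size=20, iters=50, seed=0):
    """Schematic population-based search over kernel hyperparameters.
    evaluate(x) must fit the model with hyperparameters x and return
    (rmse, mae, r2) on the validation set."""
    rng = np.random.default_rng(seed)
    x_lb = np.asarray(x_lb, dtype=float)
    x_ub = np.asarray(x_ub, dtype=float)
    pop = rng.uniform(x_lb, x_ub, size=(pop_size, dim))      # initial population
    fits = np.array([fitness(*evaluate(x)) for x in pop])
    best = pop[fits.argmin()].copy()
    for _ in range(iters):
        # random perturbation plus a pull toward the best individual
        step = rng.normal(scale=0.1, size=pop.shape) * (x_ub - x_lb)
        pop = np.clip(pop + step + 0.5 * (best - pop), x_lb, x_ub)
        fits = np.array([fitness(*evaluate(x)) for x in pop])
        if fits.min() < fitness(*evaluate(best)):
            best = pop[fits.argmin()].copy()
    return best
```

In the patented method, `evaluate` would train the multi-kernel GPR on the augmented training set and score it on the validation set, and the population update would follow the finder/joiner/scout rules of S504-S509 instead of this simple pull-toward-best step.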
For small-sample blasting effect prediction, the method has the advantages of high accuracy, a simple prediction model structure and strong generalization ability. Specifically:
1) For the characteristics of the blasting data (many normal samples, few abnormal samples, small total data volume), a generative adversarial network is used to supplement the data set, reducing the training difficulty of the prediction model;
2) For the high feature dimensionality of the small sample and insufficient nonlinear prediction accuracy, multi-kernel Gaussian process regression is used, improving the generalization ability and prediction accuracy of the model;
3) For the setting of the mixed-kernel hyperparameters in the multi-kernel Gaussian process regression model, the multi-strategy improved sparrow search algorithm searches for the optimal hyperparameters, further enhancing the prediction accuracy.
Drawings
Fig. 1 is the overall flow chart.
Fig. 2 is the grey relational analysis chart.
Fig. 3 is the architecture diagram of data augmentation with the generative adversarial network.
Fig. 4 is the iteration curve of the optimal fitness.
Fig. 5 is the comparison chart of prediction effects.
Detailed Description
As shown in Figs. 1 to 5, the small-sample regression prediction method for tunnel blasting effect is implemented through the following steps:
S1.1, pre-blasting feature data related to the blasting effect are selected according to expert experience, including blasting parameters, surrounding-rock parameters and tunnel design parameters, which may include, but are not limited to, surrounding rock grade, rock compressive strength, rock integrity coefficient, cross-sectional area, drilling depth, lookout angle, specific charge, cut angle and perimeter-hole spacing, expressed in the matrix form:

X = [x_ij] ∈ R^(n×m),

where n is the number of collected pre-blasting feature records and m is the dimension of the pre-blasting features.
S1.2, feature data reflecting the quality of the blasting effect are collected after blasting, specifically including, but not limited to, overbreak/underbreak volume, advance per round, half-cast ratio and fragmentation, expressed in the matrix form:

Y = [y_ij] ∈ R^(n×p),

where n is the number of collected blasting effect records and p is the dimension of the blasting effect features.
S2.1, the collected data are normalized:

x_norm = (x - x_min) / (x_max - x_min),

where x_norm ∈ [0,1] is the normalized data value, x is the original data value, and x_max and x_min are the maximum and minimum of the parameter sequence.
S2.2, the pre-blasting features are taken as the comparison feature columns and the post-blasting effect data as the reference feature columns, and the relational degree between each pre-blasting feature and the blasting effect is computed by grey relational analysis. The ranges between the comparison and reference columns are calculated first:

Δ_mp(k) = |y_kp - x_km|,  a = min_m min_k Δ_mp(k),  b = max_m max_k Δ_mp(k),

where a and b are the two-pole minimum and maximum differences, y_kp is the k-th row of the p-th reference feature column, and x_km is the k-th row of the m-th comparison feature column.
S2.3, the relational degree is calculated:

ξ_mp(k) = (a + ρ·b) / (Δ_mp(k) + ρ·b),  r_mp = (1/n) Σ_{k=1..n} ξ_mp(k),

where r_mp is the relational degree between the m-th comparison feature and the p-th reference feature, i.e. between a pre-blasting feature parameter and the blasting effect, and ρ is the resolution coefficient, usually 0.5.
S2.4, the relational degrees of the pre-blasting features are sorted in descending order (see Fig. 2), the feature selection dimension Ni is set, and the Ni comparison features with the highest relational degrees are taken as input feature data after screening. The screened input feature data (dimension Ni) and blasting effect data (dimension No) are divided into a training data set and a test data set.
S3.1, the generative adversarial network model is established. The generator network consists of a one-dimensional convolution layer, fully connected layers and activation function layers; it takes a Gaussian noise vector of dimension (Ni+No) as input and outputs a generated sample of dimension (Ni+No). The discriminator network consists of two fully connected layers and a Sigmoid activation layer.
S3.2, the discriminator and generator networks are trained. The discriminator loss is the sum of its losses on real blasting data and generated samples; the generator loss is computed from the discriminator's judgment of whether the generated data are real. The loss function is the binary cross-entropy loss:
BCELoss = -(y·log(p) + (1-y)·log(1-p))
where y is the true label (1 for real blasting data, 0 for generated samples) and p is the probability estimate output by the discriminator's Sigmoid activation layer.
S3.3, the generator network is saved. Gaussian noise vectors are fed to the generator to produce generated samples that augment the original data set, yielding the augmented data set. The specific architecture for data augmentation with the generative adversarial network is shown in Fig. 3.
S4.1, the Gaussian process regression model is established:
y = f(x) + ε
where x is the input pre-blasting feature vector, y is the actually observed blasting effect, and ε is a noise variable with ε ~ N(0, σ_n²). f(x) is the predicted blasting effect, assumed to follow a Gaussian process f(x) ~ GP(m(x), k(x, x_*)) whose mean function is determined by the training data and whose covariance function is determined by the kernel function:

m(x) = E[f(x)],  k(x, x_*) = E[(f(x) - m(x))(f(x_*) - m(x_*))],

where x and x_* are any two pre-blasting feature vectors, m is the mean function, E the mathematical expectation, and k the covariance function.
S4.2, the prior distribution of the observed blasting effect y is obtained:

y ~ N(m(X), K + σ_n²·I_n),

where X is the pre-blasting feature data of the training set, K = K(X, X) = [k_ij] is a positive definite n×n covariance matrix with k_ij = k(x_i, x_j) measuring the correlation between x_i and x_j, k_** = K(X_*, X_*), K_* = K(X_*, X) is the covariance matrix between the test points and the training set, and I_n is the n-th order identity matrix.
S4.3, the posterior distribution of the predicted value is obtained:

f̄_* = K_* (K + σ_n²·I_n)^(-1) y,  cov(f_*) = k_** - K_* (K + σ_n²·I_n)^(-1) K_*^T,

where f̄_* is the mean of the predicted values and cov(f_*) the variance.
S4.4, the fused kernel function is constructed:
1) RBF kernel, for extracting local data features, with covariance function:

k_RBF(x_i, x_j) = δ_1² · exp(-‖x_i - x_j‖² / (2δ_2²));

2) Matern kernel, a generalization of the RBF kernel, also for extracting local correlation features, with covariance function (the smoothness ν = 3/2 form):

k_Matern(x_i, x_j) = δ_3² · (1 + √3·‖x_i - x_j‖/δ_4) · exp(-√3·‖x_i - x_j‖/δ_4);

3) linear kernel, for describing the linear correlation between data, with covariance function:

k_l(x_i, x_j) = δ_5 · (x_i · x_j) + δ_6;

4) polynomial kernel, for describing nonlinear relations between data, with covariance function:

k_P(x_i, x_j) = (δ_7 · (x_i · x_j) + δ_8)^(δ_9),

where δ_1 ~ δ_9 are the kernel hyperparameters to be set.
The kernels are combined into the fused kernel:
k_M(x_i,x_j) = δ_10·k_RBF(x_i,x_j) + δ_11·k_Matern(x_i,x_j) + δ_12·k_l(x_i,x_j) + δ_13·k_P(x_i,x_j)
where δ_10 ~ δ_13 are the covariance weights of the different kernels.
S4.5, the augmented data set containing generated and real samples is divided into a training set and a validation set; the training set trains the multi-kernel Gaussian process regression prediction model, and the validation set is used to tune the hyperparameters and select the optimal model. The performance evaluation indices are:

MAE = (1/n) Σ |y_i - y'_i|,  RMSE = sqrt((1/n) Σ (y_i - y'_i)²),  R² = 1 - Σ (y_i - y'_i)² / Σ (y_i - ȳ)²,

where MAE, RMSE and R² are the mean absolute error, root mean square error and coefficient of determination, y_i and y'_i are the true and predicted values, and ȳ is the sample mean.
S5.1, the hyperparameters δ_1 ~ δ_13 of the fused kernel are set as the search solution of the sparrow algorithm. The polynomial kernel hyperparameter δ_9 is the highest degree of the polynomial, with value range {1,2,3,4,5,6}; the remaining hyperparameters take values in [0,1]. The number of finders, the proportion of scouts and the safety threshold are set.
S5.2, the sparrow population positions are initialized with the ICMIC chaotic map:

z_{k+1} = sin(a / z_k),  x_i = x_lb + (z_i + 1)/2 · (x_ub - x_lb),

where Z is the generated chaotic sequence, x_i is the i-th individual of the sparrow population generated from the chaotic mapping sequence, and x_lb and x_ub are the lower and upper bounds of each individual in each search dimension. In particular, the polynomial hyperparameter δ_9 still needs to be rounded after initialization.
S5.3, the fitness of each individual in the population is calculated. A multi-kernel Gaussian process regression prediction model is built with the procedure of step S4 from each sparrow's parameters, and the augmented data set is input for training and validation. The performance indices and fitness function are computed, the individuals are sorted by fitness, and the best and worst individuals are determined. The fitness of each sparrow is:
f_FIT = RMSE + MAE + (1 - R²)
S5.4, the finder position is updated:
where ω is the dynamic adaptive weight of the finder position change; k is a control weight with value range [0,2]; T max is the maximum number of iterations; the position of the i-th sparrow in dimension j at iteration t and the best and worst fitness values of the population at iteration t also enter the update; R ∈ [0,1] is the alarm value and ST ∈ [0.5,1] is the safety threshold: when R < ST the sparrows conduct a global foraging search, whereas when R > ST they perform a random walk following a normal distribution; Q is a random number obeying a normal distribution.
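The patent's adaptive-weight variant of this update (with ω and k) is not reproduced in full here, so the sketch below shows the classical sparrow search algorithm (SSA) producer update that it modifies; variable names and the default ST are illustrative.

```python
import numpy as np

def update_finders(X, n_finders, T_max, ST=0.8):
    # Classical SSA producer (finder) update, one iteration.
    R = np.random.rand()                     # alarm value R in [0, 1]
    for i in range(n_finders):
        if R < ST:                           # safe: wide-ranging global search
            alpha = np.random.rand() + 1e-12
            X[i] = X[i] * np.exp(-(i + 1) / (alpha * T_max))
        else:                                # danger: normally distributed random walk
            Q = np.random.randn()
            X[i] = X[i] + Q
    return X
```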
S5.5, the joiner position is updated:
where the first quantity in the formula is the position with the best fitness among the current finders and the second is the current position with the worst global fitness; A + = A T (AA T ) -1 , where A is a single column vector of the same dimension as a sparrow individual, whose elements are randomly set to 1 or −1; n is the population size. When i ≤ n/2, the joiner actively follows the finder toward a better foraging position; when i > n/2, the joiner leaves its current poor foraging position.
S5.6, m × n sparrows are randomly selected as scouts, where m is the proportion of scouts in the population; they undertake the early-warning task and their positions are updated:
where the quantity in the formula is the best individual position at iteration t, and β is a random number obeying a normal distribution with mean 0 and variance 1. The improved position update rule states that if a sparrow is at the current best position, it flies to a random position between the best and the worst positions; otherwise, it flies to a random position between itself and the best position.
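The textual rule above can be implemented directly as follows; this is a sketch of the described behaviour only, since the patent's exact update formula (including the normal random factor β) is not reproduced here.

```python
import numpy as np

def update_scouts(X, scout_idx, f_vals, x_best, x_worst):
    # Improved scout update: the scout holding the best fitness jumps to a
    # random point between the best and worst positions; other scouts move
    # toward the best position.
    f_best = f_vals.min()
    for i in scout_idx:
        r = np.random.rand()
        if np.isclose(f_vals[i], f_best):
            X[i] = x_best + r * (x_worst - x_best)
        else:
            X[i] = X[i] + r * (x_best - X[i])
    return X
```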
S5.7, a random factor is introduced, and random opposition-based learning is used to keep the algorithm from falling into a local optimum and to improve population diversity:
where the first quantity in the formula is the opposite solution of the best solution at the t-th iteration; x ub、x lb are the upper and lower bounds; r is a random value from 0 to 1; the second quantity is the position of the best individual after random opposition-based learning at iteration t, and b 1 is an information-exchange control parameter.
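Random opposition-based learning is commonly written as x′ = r·(x_ub + x_lb) − x; the sketch below assumes that common form (the patent's exact formula is not reproduced) and clips the opposite point back into the search bounds.

```python
import numpy as np

def random_opposition(x_best, x_lb, x_ub):
    # Opposite point of the current best solution with respect to the bounds,
    # scaled by a random factor r in [0, 1].
    r = np.random.rand(*np.shape(x_best))
    x_opp = r * (x_ub + x_lb) - x_best
    return np.clip(x_opp, x_lb, x_ub)
```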
S5.8, a Cauchy mutation strategy is further introduced to mitigate the tendency of the algorithm to fall into a local optimum caused by joiners and finders competing for food:
where cauchy(0, 1) is the standard Cauchy distribution;
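A common form of the Cauchy mutation, assumed here since the patent's exact formula is not reproduced, perturbs the best position with heavy-tailed standard Cauchy noise, whose occasional long jumps help escape local optima.

```python
import numpy as np

def cauchy_mutation(x_best):
    # x' = x_best + x_best * cauchy(0, 1), elementwise heavy-tailed perturbation
    return x_best + x_best * np.random.standard_cauchy(np.shape(x_best))
```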
S5.9, the best individual position is updated through a dynamic selection strategy:
if rand(1, e) > P s , the position update based on the random opposition-based learning strategy is selected; otherwise, the position update based on the Cauchy mutation perturbation strategy is selected.
S5.10, whether the maximum number of iterations has been reached is checked: if not, the procedure returns to S5.3; otherwise it continues to the next step. The iteration history of the best fitness is shown in fig. 4.
S5.11, the best individual position, i.e. the optimal hyperparameter combination of the blasting-effect multi-kernel Gaussian process regression prediction model, is output, and the optimal blasting-effect multi-kernel Gaussian process regression prediction model under this hyperparameter combination is established through steps S4.1 to S4.5.
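As one possible realization (not the patent's own implementation), scikit-learn's kernel arithmetic can assemble a weighted fused kernel from RBF, Matérn and dot-product kernels, with a squared dot-product standing in for a degree-2 polynomial kernel; the weights and length scales are placeholders for the δ values found by the sparrow search, and the data are synthetic.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, Matern, DotProduct, ConstantKernel as C

# Placeholder weights standing in for delta_10..delta_13 from the sparrow search
fused = (C(0.5) * RBF(length_scale=1.0)
         + C(0.3) * Matern(length_scale=1.0, nu=1.5)
         + C(0.1) * DotProduct()
         + C(0.1) * DotProduct() ** 2)   # stand-in for a polynomial kernel

rng = np.random.default_rng(0)
X = rng.random((30, 5))                  # synthetic normalized pre-blast features
y = X @ rng.random(5) + 0.01 * rng.standard_normal(30)

# optimizer=None keeps the kernel hyperparameters fixed, mirroring the idea
# that they are supplied by the external sparrow search rather than refit.
gpr = GaussianProcessRegressor(kernel=fused, alpha=1e-6, optimizer=None,
                               normalize_y=True)
gpr.fit(X, y)
mean, std = gpr.predict(X[:3], return_std=True)
```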
S6, the optimal blasting-effect multi-kernel Gaussian process regression prediction model under the optimal hyperparameter combination is saved, the test data set is input to obtain the blasting-effect predictions for the test set, and the prediction accuracy is compared against commonly used prediction models, namely SVM (support vector machine), BP (back-propagation neural network), RF (random forest) and GPR (single-kernel Gaussian process regression), versus the model of this patent (MGPR) (see fig. 5).
The foregoing embodiments are merely intended to illustrate the technical solution of the present invention, not to limit it. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features replaced by equivalents, and that such modifications or substitutions do not depart in essence from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A small-sample regression prediction method for tunnel blasting effect, characterized by comprising the following steps:
S1, collecting pre-blast feature data and post-blast blasting effect data;
S2, normalizing the data and selecting the input features of the regression prediction model using grey relational analysis;
S3, building a generative adversarial network model to augment and expand the training data set;
S4, building a multi-kernel Gaussian process regression prediction model of the blasting effect;
S5, searching the hyperparameters of the prediction model with a sparrow algorithm based on multi-strategy integrated optimization;
S6, predicting the blasting effect.

2. The method according to claim 1, characterized in that: in step S1, the collected pre-blast feature data are feature data related to the blasting effect, selected from expert experience and covering blasting parameters, surrounding rock parameters and tunnel design, specifically including: surrounding rock class, rock mass compressive strength, rock mass integrity coefficient, cross-sectional area, drilling depth, external insertion angle, blasting unit consumption, trenching angle and peripheral hole spacing, expressed in matrix form, where n is the number of collected pre-blast feature records and m is the dimension of the pre-blast features;
the collected post-blast effect data are feature data reflecting the quality of the blast, specifically including but not limited to: over-excavation and under-excavation, footage (advance per round), half-hole rate and block size (fragmentation), also expressed in matrix form, where n is the number of collected blasting-effect records and p is the dimension of the blasting-effect features;
the data normalization in step S2 is x_norm = (x − x_min)/(x_max − x_min), where x_norm ∈ [0,1] is the normalized value, x is the original value, and x_max, x_min are respectively the maximum and minimum values of the parameter column.

3. The method according to claim 1, characterized in that: the specific process of selecting the model input features in step S2 comprises:
first, computing the extreme differences between the comparison feature columns and the reference feature columns:
a = min_k min_m |y_kp − x_km|, b = max_k max_m |y_kp − x_km|,
where a and b are the two-level minimum and maximum differences, y_kp is the k-th row of the p-th reference feature column, and x_km is the k-th row of the m-th comparison feature column;
then computing the relational degree:
r_mp = (1/n) Σ_k (a + ρb)/(|y_kp − x_km| + ρb),
where r_mp is the relational degree between the m-th comparison feature and the p-th reference feature, i.e. between the pre-blast feature parameters and the blasting effect, and ρ is the resolution coefficient.

4. The method according to claim 1, characterized in that: step S2 further comprises: ranking the relational degrees of the pre-blast features in descending order, setting a feature selection dimension, and taking the Ni comparison features with the highest relational degree as the input feature data after relational-degree screening;
the screened input feature data and the blasting effect data are then divided into a training data set and a test data set.

5. The method according to claim 1, characterized in that: in the generative adversarial network model built in step S3, the generator network comprises a one-dimensional convolutional layer, a fully connected layer and an activation function layer;
the generator input is an (Ni+No)-dimensional noise vector drawn from a Gaussian distribution, and its output is an (Ni+No)-dimensional generated sample; the discriminator network comprises two fully connected layers and a Sigmoid activation function layer, and outputs a scalar between 0 and 1 representing the discriminator's probability estimate that the input data vector is real data, i.e. the similarity between the distribution of the input data and that of the real data;
during training, the discriminator loss is the sum of the losses on the real blasting data and on the generated samples; the generator loss is computed from whether the discriminator judges the data generated by the generator to be real; the loss function is the binary cross-entropy loss:
BCELoss = −(y log(p) + (1 − y) log(1 − p)),
where y is the true label (1 for real blasting data, 0 for generated samples) and p is the output probability estimate of the discriminator's Sigmoid activation layer.

6. The method according to claim 1, characterized in that: during the adversarial training of the GAN, the generator network progressively improves its ability to generate realistic data, while the discriminator improves its ability to judge whether data are real blasting samples; after the maximum number of training iterations is reached, the generator model is saved, and its generated samples are used to augment the original training data set, yielding an augmented data set containing both generated and real samples.

7. The method according to claim 1, characterized in that: the blasting-effect Gaussian process regression prediction model in step S4 is built as follows:
Gaussian process regression is a non-parametric Bayesian regression method whose regression model is y = f(x) + ε, where x is the input pre-blast feature, y is the actually observed blasting effect, and ε is a noise variable, ε ~ N(0, σ_n²); f(x) is the predicted blasting effect, assumed to follow a Gaussian process f(x) ~ GP(m(x), k(x, x*)), whose mean function is determined by the training data and whose covariance function is determined by the kernel: m(x) = E[f(x)], k(x, x*) = E[(f(x) − m(x))(f(x*) − m(x*))], where x and x* are any two distinct pre-blast feature records;
the prior distribution of the observed blasting effect y is then y ~ N(0, K + σ_n² I_n), where X is the pre-blast feature data of the training set, K = K(X, X) = [k_ij] is an n×n positive-definite covariance matrix with k_ij = k(x_i, x_j) measuring the correlation between x_i and x_j, k** = k(X*, X*), K* = K(X*, X) is the n×1 covariance matrix between the test point and the training set, and I_n is the n-order identity matrix; the posterior distribution of the prediction is obtained as
f* | X, y, X* ~ N(f̄*, cov(f*)), f̄* = K*(K + σ_n² I_n)^{-1} y, cov(f*) = k** − K*(K + σ_n² I_n)^{-1} K*^T,
where f̄* is the mean of the predicted value and cov(f*) its variance.

8. The method according to claim 1, characterized in that: in the blasting-effect multi-kernel Gaussian process regression prediction model of step S4, the fused kernel function is constructed from: an RBF kernel, used to extract local features of the data; a Matérn kernel, a generalization of the RBF kernel, used to extract locally correlated characteristics; a linear kernel, used to describe linear correlations between data; and a polynomial kernel, used to describe nonlinear relationships between data, where δ1~δ9 are the kernel hyperparameters to be set;
the kernels form the fused kernel as
k_M(x_i, x_j) = δ10·k_RBF(x_i, x_j) + δ11·k_Matern(x_i, x_j) + δ12·k_l(x_i, x_j) + δ13·k_P(x_i, x_j),
where δ10~δ13 are the covariance weights of the different kernels;
finally, the augmented data set containing generated and real samples is divided into a training set, a validation set and a test set: the training set is used to train the multi-kernel Gaussian process regression prediction model, the validation set is used to tune the model hyperparameters and select the best model, and the test set is used to evaluate model performance and generalization ability; the performance indices are MAE, RMSE and R², i.e. the mean absolute error, the root mean square error and the coefficient of determination, where y_i, y′_i are respectively the true sample value and the predicted value and ȳ is the sample mean.

9. The method according to claim 1, characterized in that: in the multi-strategy integrated-optimization sparrow algorithm of step S5, the optimization strategies include: a chaotic mapping mechanism, dynamic adaptive weights, an improved scout position-update rule, and fused Cauchy mutation and random opposition-based learning strategies; the hyperparameter search specifically comprises:
S501, setting the hyperparameters δ1~δ13 of the fused kernel function as the search solution of the sparrow algorithm; the polynomial kernel hyperparameter δ9 is the highest polynomial degree with value range [1,2,3,4,5,6], and the remaining hyperparameters take values in [0,1]; the number of finders, the proportion of scouts and the safety threshold are set;
S502, initializing the sparrow population positions through the ICMIC chaotic mapping model, where Z is the generated chaotic sequence, x_i is the i-th individual of the sparrow population generated from the chaotic mapping sequence, and x_lb, x_ub are respectively the lower and upper bounds of each individual in the search dimension; the polynomial hyperparameter δ9 must still be rounded after initialization;
S503, computing the fitness of the individuals in the population, sorting by fitness and determining the best and worst individuals; the fitness of each sparrow is calculated as f_FIT = RMSE + MAE + (1 − R²);
S504, updating the finder positions with an adaptive-weight update rule, where ω is the dynamic adaptive weight of the finder position change, k is a control weight with value range [0,2], T_max is the maximum number of iterations, the position of the i-th sparrow in dimension j at iteration t and the best and worst fitness values of the population at iteration t enter the update; R ∈ [0,1] is the alarm value and ST ∈ [0.5,1] the safety threshold: when R < ST the sparrows conduct a global foraging search, and when R > ST they perform a random walk following a normal distribution; Q is a normally distributed random number;
S505, updating the joiner positions, where the update uses the position with the best fitness among the current finders and the current position with the worst global fitness; A+ = A^T(AA^T)^{-1}, with A a single column vector of the same dimension as a sparrow individual whose elements are randomly set to 1 or −1; n is the population size: when i ≤ n/2 the joiner actively follows the finder toward a better foraging position, and when i > n/2 it leaves its current poor foraging position;
S506, randomly selecting m×n sparrows as scouts, where m is the proportion of scouts in the population, to undertake the early-warning task; the traditional scout update formula easily lets the scout positions run out of control and deviate from the optimum, so an optimized update is used, in which the best individual position at iteration t and a random number β obeying a normal distribution with mean 0 and variance 1 enter the update; the improved rule states that a sparrow at the current best position flies to a random position between the best and worst positions, while any other sparrow flies to a random position between itself and the best position;
S507, introducing a random factor and using random opposition-based learning to keep the algorithm from falling into a local optimum and to improve population diversity, where the opposite solution of the best solution at the t-th iteration is computed from the bounds x_ub, x_lb and a random value r from 0 to 1; the position of the best individual after random opposition-based learning at iteration t is obtained, with b1 an information-exchange control parameter;
S508, simultaneously introducing a Cauchy mutation strategy to mitigate the tendency of the algorithm to fall into a local optimum caused by joiners and finders competing for food, where cauchy(0,1) is the standard Cauchy distribution;
S509, selecting the update of the best individual position through a dynamic selection strategy: when rand(1,e) > P_s, the position update based on the random opposition-based learning strategy is selected, otherwise the position update based on the Cauchy mutation perturbation strategy;
S510, checking whether the maximum number of iterations has been reached: if so, continue to the next step, otherwise repeat from S503;
S511, outputting the best individual position, i.e. the optimal hyperparameter combination of the blasting-effect multi-kernel Gaussian process regression prediction model.

10. The method according to claim 1, characterized in that: in step S6, the blasting-effect prediction uses the multi-kernel Gaussian process regression prediction model built with the optimal hyperparameter combination output in step S5; by inputting new data, the blasting effect is predicted.
CN202411512790.1A 2024-10-28 2024-10-28 A small sample regression prediction method for tunnel blasting effect Pending CN119474644A (en)

Publications (1)

Publication Number Publication Date
CN119474644A true CN119474644A (en) 2025-02-18

Family

ID=94569153


Cited By (3)

* Cited by examiner, † Cited by third party

Publication number Priority date Publication date Assignee Title
CN119846485A * 2025-03-21 2025-04-18 Southwest University of Science and Technology Transformer-based lithium battery health state estimation method
CN120086956A * 2025-05-06 2025-06-03 Shandong University of Science and Technology A tunnel blasting vibration velocity prediction method and system integrating multi-factor intelligent optimization
CN120086956B * 2025-05-06 2025-07-11 Shandong University of Science and Technology Tunnel blasting vibration speed prediction method and system integrating multi-factor intelligent optimization


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination