CN117875408A - Federated learning method for a spiking neural network for flaw detection - Google Patents
- Publication number
- CN117875408A (application CN202410285505.0A)
- Authority
- CN
- China
- Prior art keywords
- neural network
- branch
- network branch
- pulse
- product image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/098—Distributed learning, e.g. federated learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
- G06N3/0455—Auto-encoder networks; Encoder-decoder networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/049—Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
Abstract
Description
Technical Field
The present invention relates to the technical fields of federated learning and spiking neural networks, and in particular to a federated learning method for a spiking neural network oriented to defect detection.
Background Art
As edge devices proliferate in industrial settings, small edge devices are used ever more widely; in factory environments, for example, cameras, sensors, and Raspberry Pis monitor the production stages. These small devices have some computing capability, but it is limited, and the question is how to exploit it to detect appearance defects in products on the production line and thereby raise the yield rate. Traditional distributed learning schemes and large deep-learning models are ill-suited to such small edge devices, so a new distributed collaborative learning method that supports lightweight model deployment on small edge devices is needed.
With the advent of third-generation neural network models, data in a spiking neural network (SNN) is encoded in the spatio-temporal pattern of neuron spike signals: the network's inputs and outputs, and the information passed between neurons, are expressed as the spikes each neuron emits and the timing of those spikes, with neurons running in parallel. By mimicking the spike coding and synaptic plasticity of biological nervous systems, SNNs achieve event-driven computation with characteristics closer to those of biological neural systems. Their low energy consumption, small computational footprint, and fast inference make them well suited to small industrial edge devices. However, the discontinuity of the spiking neuron model, the complexity of the coding, and the uncertainty of the network structure make a complete mathematical description of the whole network difficult, and noise spikes arise easily, so it is hard to bound the network's computational scale and accuracy.
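The event-driven behaviour described above can be illustrated with a leaky integrate-and-fire (LIF) neuron, a common spiking-neuron model; the patent does not fix the neuron model, and the decay, threshold, and reset values below are illustrative assumptions.

```python
def lif_simulate(input_current, threshold=1.0, decay=0.9, v_reset=0.0):
    """Simulate one leaky integrate-and-fire neuron over discrete time steps.

    The membrane potential leaks by `decay` and integrates the input;
    crossing the threshold emits a spike and resets the potential, which
    is what makes the computation event-driven."""
    v = 0.0
    spikes = []
    for i_t in input_current:
        v = decay * v + i_t        # leaky integration of the input current
        if v >= threshold:         # threshold crossing -> emit a spike
            spikes.append(1)
            v = v_reset            # hard reset after firing
        else:
            spikes.append(0)
    return spikes

train = lif_simulate([0.3, 0.4, 0.5, 0.0, 0.9, 0.9])
```

The binary spike train, together with when each spike occurs, is the neuron's entire output, which is why SNN inference can be so cheap on edge hardware.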
Summary of the Invention
The present invention provides a federated learning method for a spiking neural network oriented to defect detection, with the aim of improving the defect-detection accuracy of small edge devices.
To achieve the above object, the present invention provides a federated learning method for a spiking neural network for defect detection, comprising:
Step 1: acquire product images through multiple local industrial devices and pre-process them to obtain receptive-field matrix encodings and defect-type labels;
Step 2: the central server distributes the constructed initial federated spiking neural network, comprising an artificial-neural-network (ANN) branch and a spiking-neural-network (SNN) branch, to each local industrial device and assigns each device a time window. The output of the ANN branch's activation layer and the output of every time step of the SNN branch are connected to the input of a multiplier in the SNN branch; the multiplier fuses the features output by the ANN branch with the features output at each time step of the SNN branch, and the ANN branch shares the SNN branch's network parameters;
Step 3: each local industrial device forward-propagates the product image through the ANN branch, forward-propagates the receptive-field matrix encoding through the SNN branch within the time window, and, during the SNN branch's forward pass, fuses the ANN branch's output features with the features output at each time step of the SNN branch;
Step 4: the local industrial device builds a loss function from the features output at all time steps of the SNN branch's last layer and the defect-type labels, and trains the SNN branch by backpropagation using the backpropagation mechanism, the dynamic-feedback adaptive-threshold mechanism, and the loss function, obtaining a local federated spiking neural network;
Step 5: the central server aggregates the network parameters of the local federated spiking neural networks uploaded by the local industrial devices into a global model, and updates each device's assigned time window to the window used at the current round of parameter aggregation;
Step 6: check whether the global model satisfies the preset training condition; if so, training ends; otherwise, transmit the global model, as the initial federated spiking neural network of step 2, together with the updated time windows, to the local industrial devices and return to step 3.
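Step 5's parameter aggregation is not spelled out in the text; a common choice, sketched here under that assumption, is data-size-weighted federated averaging (the FedAvg rule):

```python
import numpy as np

def fedavg(client_params, client_sizes):
    """Aggregate per-client parameter lists into a global model by
    data-size-weighted averaging over each layer (the FedAvg rule)."""
    total = sum(client_sizes)
    weights = [n / total for n in client_sizes]
    return [
        sum(w * layer for w, layer in zip(weights, layers))
        for layers in zip(*client_params)  # iterate layer-by-layer across clients
    ]

# two clients holding 1 and 3 samples, one parameter "layer" each
global_model = fedavg(
    [[np.array([1.0, 2.0])], [np.array([3.0, 4.0])]],
    client_sizes=[1, 3],
)
```

Weighting by local data-set size keeps devices with more samples from being drowned out by smaller ones; an unweighted mean is the special case of equal sizes.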
Further, pre-processing the product images to obtain the receptive-field matrix encodings and defect-type labels comprises:
Step 11: denoise the product image with an autoencoder to obtain a denoised image, then enhance its contrast by histogram equalization to obtain an image with highlighted defect regions, and label it to obtain the defect-type label;
Step 12: average the images with highlighted defect regions to obtain an average pixel-value matrix;
Step 13: build a Gaussian mixture model from the average pixel-value matrix and optimize its model parameters with the expectation-maximization (EM) algorithm;
Step 14: compute each Gaussian component's probability density over the image plane from the optimized Gaussian mixture model to obtain the receptive-field matrix;
Step 15: encode the enhanced product image from the receptive-field matrix and the average pixel-value matrix to obtain the receptive-field matrix encoding.
Further, step 11 comprises:
feeding the product image into an autoencoder consisting of an encoder and a decoder;
compressing the product image with the encoder to obtain a low-dimensional feature vector;
reconstructing from the low-dimensional feature vector with the decoder to obtain the denoised product image;
computing the grayscale histogram of the denoised product image;
equalizing the histogram to obtain an equalized histogram;
remapping pixel gray levels according to the equalized histogram;
enhancing the contrast of the denoised product image according to the remapped gray levels to obtain the image with highlighted defect regions;
labeling the image with highlighted defect regions to obtain the defect-type label.
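The histogram-equalization stage of step 11 can be sketched as follows for an 8-bit grayscale image; the autoencoder denoising stage is omitted, and the CDF-based lookup table below is the textbook variant, an assumption about the exact remapping used.

```python
import numpy as np

def equalize(gray):
    """Histogram-equalize an 8-bit grayscale image.

    Build the intensity histogram, form its cumulative distribution,
    and remap pixels through a lookup table so intensities spread over
    the full 0-255 range, which makes defect regions stand out."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()  # cumulative count at the first occupied bin
    lut = np.clip(
        np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255.0), 0, 255
    ).astype(np.uint8)
    return lut[gray]

img = np.array([[52, 52], [154, 255]], dtype=np.uint8)
out = equalize(img)
```

After remapping, the darkest occupied intensity maps to 0 and the brightest to 255, so low-contrast defect regions occupy a wider slice of the gray range.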
Further, step 13 comprises:
building the Gaussian mixture model from the average pixel-value matrix;
initializing the parameters of the Gaussian mixture model;
vectorizing the average pixel-value matrix and defining the objective of the expectation-maximization algorithm as the expected complete-data log-likelihood
Q(\mu, \sigma^2) = \sum_i \sum_{k=1}^{K} p(z_i = k \mid x_i)\,[\log \pi_k + \log \mathcal{N}(x_i;\, \mu_k, \sigma_k^2)]
where \mu_k and \sigma_k^2 denote the mean and variance of the k-th Gaussian component (the variances are given a common initial value), the number of components K is initialized to the number of defect classes, z_i is the latent component-assignment variable, and x_i is a pixel value of the average pixel-value matrix;
in the E step, repeatedly computing the probability that each pixel value belongs to each Gaussian component;
in the M step, repeatedly maximizing the objective to re-estimate the Gaussian parameters, iterating until a Gaussian mixture model satisfying the preset optimization criterion is obtained, then stopping.
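The E/M iteration of step 13 can be sketched for a one-dimensional mixture over pixel values; the quantile initialisation and iteration count are illustrative assumptions, not choices fixed by the text.

```python
import numpy as np

def em_gmm_1d(x, k, iters=50):
    """Fit a 1-D Gaussian mixture to pixel values with EM.

    E step: compute each point's responsibility under every component.
    M step: re-estimate mixing weights, means, and variances from them."""
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))  # spread initial means over the data
    var = np.full(k, x.var() + 1e-6)
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E step: r[i, j] = P(component j | x_i)
        dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M step: maximise the expected complete-data log-likelihood
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return mu, var, pi

# two well-separated pixel populations, e.g. background vs. defect
x = np.concatenate([np.full(50, 10.0), np.full(50, 200.0)])
mu, var, pi = em_gmm_1d(x, k=2)
```

With the number of components set to the number of defect classes, each fitted Gaussian's density over the image plane then yields one channel of the receptive-field matrix of step 14.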
Further, the receptive-field matrix encoding is given by \tilde{P}_{ij} = \bar{P}_{ij} + G_{ij}, where \bar{P}_{ij} denotes the element in row i, column j of the normalized average pixel-value matrix, \tilde{P}_{ij} the corresponding element of the average pixel-value matrix with the receptive-field matrix added, and G the receptive-field matrix.
Further, the central server computes each local industrial device's time window for the next round of communication through a trade-off function whose inputs are the device's time window in the current round, two parameters that tune the trade-off, the device's actual local training time, and its training-set accuracy.
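The text fixes only the inputs of the trade-off function, not its form; one plausible combination, sketched here as an assumption, lengthens the window toward the device's actual training time while keeping a share of the old window that shrinks as accuracy rises.

```python
def next_time_window(window, train_time, accuracy, alpha=0.5, beta=0.5):
    """Illustrative trade-off for the next-round time window.

    `alpha` weights the device's actual training time (slow devices get
    more time); `beta` weights the carried-over share of the old window,
    damped by (1 - accuracy) so well-trained devices need less of it.
    The concrete formula is an assumption; the patent fixes only the
    inputs (previous window, two parameters, training time, accuracy)."""
    return alpha * train_time + beta * (1.0 - accuracy) * window

# a device that took 12 s against a 10 s window, at 80 % training accuracy
new_window = next_time_window(10.0, 12.0, 0.8)
```

Any monotone combination of the same four inputs would serve the stated purpose: balancing wall-clock cost per round against how much local training each device still needs.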
Further, in the backpropagation expression: L denotes the loss function; W_l the weights of the l-th layer of the SNN branch; a the output value of the ReLU activation in the ANN branch; O^t the output of the SNN branch at time step t; f(\cdot) the linear differential equation of the SNN branch; sign(\cdot) a sign function used to map the ANN branch's activation positions onto the feature map; O_l^t the output of the l-th layer of the SNN branch at time step t; and b_l the bias term of the l-th layer's neurons in the ANN branch.
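Because spike generation is non-differentiable, SNN backpropagation conventionally substitutes a smooth surrogate derivative in the backward pass; the fast-sigmoid-style surrogate below is one common choice and is an assumption here, since the text does not name the surrogate it uses.

```python
import numpy as np

def spike_forward(v, threshold=1.0):
    """Heaviside spike generation: 1 if the membrane potential reaches
    the threshold, else 0. Non-differentiable at the threshold."""
    return (v >= threshold).astype(float)

def spike_surrogate_grad(v, threshold=1.0, k=2.0):
    """Fast-sigmoid-style surrogate derivative used in place of the
    Heaviside's zero/undefined gradient during the backward pass; `k`
    controls how sharply the gradient peaks at the threshold."""
    return k / (2.0 * (1.0 + k * np.abs(v - threshold)) ** 2)

spikes = spike_forward(np.array([0.5, 1.5]))
grad_at_threshold = spike_surrogate_grad(np.array([1.0]))[0]
```

The surrogate is largest exactly at the threshold and decays away from it, so weight updates concentrate on neurons whose potentials were near the firing decision.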
Further, the design of the dynamic-feedback adaptive-threshold mechanism comprises:
defining the average membrane-potential strength of the i-th local industrial device in round r;
defining the round-r effective-spike value and noise-spike value, which measure the average numbers of effective spikes and noise spikes respectively;
defining, from the average membrane-potential strength, the effective-spike count, and the noise-spike count, the round-r noise activation factor;
building the dynamic-feedback adaptive-threshold update from the noise activation factor: the round-(r+1) threshold is obtained from the round-r threshold via an adjustment factor applied to the round-r noise activation factor, which is in turn computed from the round-r loss value, the noise-spike count, the effective-spike count, and the average membrane-potential strength, with a constant ε = 1e-6 in the denominator to prevent division by zero.
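The exact combination of those ingredients is not recoverable from the text; the sketch below is one illustrative form consistent with the stated inputs (loss, noise- and effective-spike counts, average membrane potential, ε = 1e-6, and an adjustment factor).

```python
def noise_activation_factor(loss, n_noise, n_effective, mean_potential, eps=1e-6):
    """Illustrative round-r noise activation factor: grows with the loss
    and the noise-spike count, normalised by the effective-spike count
    and the average membrane-potential strength; eps = 1e-6 keeps the
    denominator non-zero, as in the patent."""
    return loss * n_noise / (n_effective * mean_potential + eps)

def update_threshold(threshold, gamma, eta=0.1):
    """Raise the firing threshold in proportion to the noise activation
    factor, so rounds with many noise spikes suppress spurious firing;
    `eta` plays the role of the adjustment factor."""
    return threshold * (1.0 + eta * gamma)

gamma = noise_activation_factor(loss=2.0, n_noise=5, n_effective=10, mean_potential=1.0)
new_threshold = update_threshold(1.0, gamma)
```

The feedback loop is the point: a noisy, high-loss round drives the threshold up, which cuts noise spikes in the next round and lets the threshold settle back down.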
Further, when the global model satisfies the preset training condition, the product image to be inspected is fed into the federated spiking neural network for detection, yielding the defect-detection result.
The above scheme of the present invention has the following beneficial effects:
The present invention acquires product images through multiple local industrial devices and pre-processes them into receptive-field matrix encodings and defect-type labels; the central server distributes to each device an initial federated spiking neural network, comprising an ANN branch and an SNN branch whose features are fused by a multiplier at every time step and which share network parameters, and assigns each device a time window; each device forward-propagates the product image through the ANN branch and the receptive-field matrix encoding through the SNN branch within its time window, fusing the two branches' features at every time step of the SNN branch's forward pass; it then builds a loss function from the features output at all time steps of the SNN branch's last layer and the defect-type labels and trains the branch via the backpropagation mechanism, the dynamic-feedback adaptive-threshold mechanism, and the loss function to obtain a local federated spiking neural network; the central server aggregates the uploaded network parameters into a global model and updates the assigned time windows to those of the current aggregation round, iterating until the preset training condition is met. Compared with the prior art, fusing the ANN branch's features with the SNN branch's features at every time step during the forward pass effectively mitigates the spike-noise problem of the SNN branch, while training the SNN branch with the backpropagation mechanism, the dynamic-feedback adaptive-threshold mechanism, and the loss function built from all time steps of its last layer realizes lightweight distributed collaborative training and overcomes the low accuracy of spiking neural networks in defect detection.
Other beneficial effects of the present invention will be described in detail in the detailed-description section below.
Brief Description of the Drawings
FIG. 1 is a schematic flow chart of an embodiment of the present invention;
FIG. 2 is a schematic diagram of the model framework of an embodiment of the present invention.
Detailed Description
To make the technical problems to be solved, the technical solutions, and the advantages of the present invention clearer, they are described in detail below with reference to the accompanying drawings and specific embodiments. The described embodiments are obviously only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art from these embodiments without creative effort fall within the scope of protection of the present invention.
In the description of the present invention, it should be noted that orientation or positional terms such as "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", and "outer" are based on the orientations or positional relationships shown in the drawings, serve only to facilitate and simplify the description, and do not indicate or imply that the devices or elements referred to must have a specific orientation or be constructed and operated in a specific orientation; they therefore must not be understood as limiting the present invention. The terms "first", "second", and "third" are used for descriptive purposes only and must not be understood as indicating or implying relative importance.
It should further be noted that, unless expressly specified and limited otherwise, the terms "mounted", "connected", and "coupled" are to be understood broadly: the connection may be a locking connection, a detachable connection, or an integral connection; a mechanical or an electrical connection; a direct connection or an indirect connection through an intermediate medium; or an internal communication between two elements. A person of ordinary skill in the art can understand the specific meanings of these terms in the present invention according to the specific circumstances.
In addition, the technical features of the different embodiments of the present invention described below may be combined with one another as long as they do not conflict.
In view of the above problems, the present invention provides a federated learning method for a spiking neural network oriented to defect detection.
In an embodiment of the present invention, the federated-learning framework comprises a central server and multiple local clients.
The federated-learning process is as follows:
The local industrial devices act as the local clients of the federated learning. In an embodiment they may be small industrial edge devices, such as cameras, sensors, or Raspberry Pis on a production line, that collect product images as local data sets; the products may be steel, cloth, hot-rolled strip, and the like. Each device trains the initial federated spiking neural network distributed by the central server on its local data set to obtain a local federated spiking neural network.
In an embodiment, the central server may be a computing device such as a desktop computer, notebook, handheld computer, server, server cluster, or cloud server; it constructs the initial federated spiking neural network, assigns time windows to the local industrial devices, and aggregates the local federated spiking neural networks uploaded by the devices into a global model.
Finally, the global model performs appearance-defect detection on the product images to be inspected. For example, when the global model inspects steel images, the detection result indicates whether the steel has scratches, cracks, and the like; when it inspects cloth images, the result indicates whether the cloth has stains, pinholes, or patches.
As shown in FIGS. 1 and 2, an embodiment of the present invention provides a federated learning method for a spiking neural network for defect detection, comprising:
Step 1: acquire product images through multiple local industrial devices and pre-process them to obtain receptive-field matrix encodings and defect-type labels;
Step 2: the central server distributes the constructed initial federated spiking neural network, comprising an ANN branch and an SNN branch, to each local industrial device and assigns each device a time window. The output of the ANN branch's activation layer and the output of every time step of the SNN branch are connected to the input of a multiplier in the SNN branch; the multiplier fuses the features output by the ANN branch with the features output at each time step of the SNN branch, and the ANN branch shares the SNN branch's network parameters;
Step 3: each local industrial device forward-propagates the product image through the ANN branch, forward-propagates the receptive-field matrix encoding through the SNN branch within the time window, and, during the SNN branch's forward pass, fuses the ANN branch's output features with the features output at each time step of the SNN branch;
Step 4: the local industrial device builds a loss function from the features output at all time steps of the SNN branch's last layer and the defect-type labels, and trains the SNN branch by backpropagation using the backpropagation mechanism, the dynamic-feedback adaptive-threshold mechanism, and the loss function, obtaining a local federated spiking neural network;
Step 5: the central server aggregates the network parameters of the local federated spiking neural networks uploaded by the local industrial devices into a global model, and updates each device's assigned time window to the window used at the current round of parameter aggregation;
Step 6: check whether the global model satisfies the preset training condition; if so, training ends; otherwise, transmit the global model, as the initial federated spiking neural network of step 2, together with the updated time windows, to the local industrial devices and return to step 3.
Specifically, pre-processing the product images to obtain the receptive-field matrix encodings and defect-type labels comprises:
Step 11: denoise the product image with an autoencoder to obtain a denoised image, then enhance its contrast by histogram equalization to obtain an image with highlighted defect regions, and label it to obtain the defect-type label;
Step 12: average the images with highlighted defect regions to obtain an average pixel-value matrix;
Step 13: build a Gaussian mixture model from the average pixel-value matrix and optimize its model parameters with the expectation-maximization algorithm;
Step 14: compute each Gaussian component's probability density over the image plane from the optimized Gaussian mixture model to obtain the receptive-field matrix;
Step 15: encode the enhanced product image from the receptive-field matrix and the average pixel-value matrix to obtain the receptive-field matrix encoding.
More specifically, Step 11 comprises:
inputting the product image into an autoencoder, the autoencoder comprising an encoder and a decoder;
compressing the product image with the encoder to obtain a low-dimensional feature vector;
inputting the low-dimensional feature vector into the decoder for reconstruction to obtain the denoised product image;
computing the grayscale histogram of the denoised product image;
equalizing the histogram to obtain an equalized histogram;
remapping the pixel gray levels according to the equalized histogram;
enhancing the contrast of the denoised product image according to the remapped gray levels to obtain a product image with highlighted defect regions;
labeling the product image with highlighted defect regions to obtain the defect-type label.
In the embodiment of the present invention, considering that the product image may be affected by noise, the on-site environment, and other factors, the product image is first preprocessed to improve the quality of the input image.

First, an autoencoder is used for image denoising. The autoencoder is an unsupervised deep learning model composed of an encoder and a decoder, each typically realized as a multi-layer neural network. The encoder maps the input product image to a low-dimensional feature:

$z = f_{\mathrm{enc}}(x)$

where $x$ denotes the pixel-value tensor of the product image and $z$ denotes the low-dimensional feature.

The decoder reconstructs the denoised product image from the low-dimensional feature:

$\hat{x} = f_{\mathrm{dec}}(z)$

where $\hat{x}$ denotes the denoised product image.

The training objective of the autoencoder is to minimize the reconstruction loss:

$\mathcal{L}_{\mathrm{rec}} = \lVert x - \hat{x} \rVert^2$

Through training, the denoised product image is made as close as possible to the original image, thereby removing the noise.

Histogram equalization is then used to enhance the contrast of the image and thereby highlight the background and defect regions, as follows.

Compute the grayscale histogram of the image:

$p(r_k) = \frac{n_k}{N}, \quad k = 0, 1, \dots, L-1$

where $n_k$ is the number of pixels with gray level $r_k$ and $N$ is the total number of pixels.

Equalize the histogram via the cumulative distribution:

$s_k = (L-1) \sum_{j=0}^{k} p(r_j)$

Remap each pixel gray level according to the equalized histogram:

$r_k \mapsto \mathrm{round}(s_k)$

where $L$ denotes the number of gray levels. Histogram equalization expands the grayscale range of the image and enhances contrast: the gray levels of the background region are relatively concentrated, so after equalization the contrast between background and defect regions increases, highlighting the defect regions.
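The equalization steps above can be sketched in NumPy as follows; this is a minimal illustration of the histogram-equalization stage only (the autoencoder is assumed to be trained separately), using 8-bit gray levels and made-up image data:

```python
import numpy as np

def equalize_histogram(img, levels=256):
    """Histogram-equalize an 8-bit grayscale image (2-D uint8 array)."""
    # Grayscale histogram: count of pixels n_k at each gray level r_k.
    hist = np.bincount(img.ravel(), minlength=levels)
    # Normalized histogram p(r_k) = n_k / N.
    p = hist / img.size
    # Cumulative distribution scaled to the full gray range: s_k = (L-1) * sum p.
    cdf = np.cumsum(p)
    mapping = np.round((levels - 1) * cdf).astype(np.uint8)
    # Remap every pixel through the equalized mapping.
    return mapping[img]

# A low-contrast image: values squeezed into [100, 120].
rng = np.random.default_rng(0)
low_contrast = rng.integers(100, 121, size=(32, 32)).astype(np.uint8)
enhanced = equalize_histogram(low_contrast)
```

After equalization the pixel values spread over nearly the full 0–255 range, which is the contrast-stretching effect the text describes.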
Specifically, Step 12 comprises:

The enhanced product images $I_n$ of each local industrial device are averaged to obtain the average pixel-value matrix:

$\bar{I} = \frac{1}{N} \sum_{n=1}^{N} I_n$

where $\bar{I}$ denotes the average pixel-value matrix, $N$ denotes the number of enhanced product images, and $I_n$ denotes the $n$-th enhanced product image.

The average pixel-value matrix is then normalized:

$\bar{I}'_{ij} = \frac{\bar{I}_{ij} - \bar{I}_{\min}}{\bar{I}_{\max} - \bar{I}_{\min}}$

where $\bar{I}_{ij}$ denotes the element in row $i$, column $j$ of the average pixel-value matrix, and $\bar{I}_{\max}$ and $\bar{I}_{\min}$ denote its maximum and minimum pixel values, respectively.
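The averaging and min-max normalization of Step 12 reduce to two array operations; a small sketch with illustrative data:

```python
import numpy as np

# N enhanced product images from one device (illustrative random data).
rng = np.random.default_rng(1)
images = rng.uniform(0, 255, size=(5, 16, 16))  # N=5 images of 16x16 pixels

# Average pixel-value matrix: elementwise mean over the N images.
avg = images.mean(axis=0)

# Min-max normalization to [0, 1].
avg_norm = (avg - avg.min()) / (avg.max() - avg.min())
```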
Specifically, Step 13 comprises:

Construct a Gaussian mixture model from the average pixel-value matrix:

$p(x) = \sum_{k=1}^{K} \pi_k\, \mathcal{N}(x \mid \mu_k, \sigma_k^2)$

where $\mathcal{N}(\cdot)$ denotes the Gaussian density and $K$ denotes the number of defect classes.

From this, the average pixel-value matrix fused with the receptive field matrix, denoted $\tilde{I}$, is obtained, where $R$ denotes the receptive field matrix.

The parameters of the Gaussian mixture model are optimized with the expectation-maximization (EM) algorithm.

It should be noted that the EM algorithm adopted in the embodiment of the present invention is a conventional iterative optimization algorithm that alternates between two steps. The first is the expectation step (E-step), which uses the current estimates of the hidden variables to compute their expected values under the likelihood; the second is the maximization step (M-step), which maximizes the likelihood obtained in the E-step to update the parameter values. The parameter estimates found in the M-step are used in the next E-step, and the process alternates until the model converges.

The embodiment of the present invention optimizes the parameters of the Gaussian mixture model with the EM algorithm as follows:

Initialize the parameters $\theta = \{\pi_k, \mu_k, \sigma_k^2\}$ of the Gaussian mixture model;

Vectorize the average pixel-value matrix $\bar{I}'$ into a sequence of pixel values $x_1, \dots, x_M$;

Define the objective function of the EM algorithm as the log-likelihood:

$\mathcal{L}(\theta) = \sum_{m=1}^{M} \log \sum_{k=1}^{K} \pi_k\, \mathcal{N}(x_m \mid \mu_k, \sigma_k^2)$

where $\mu_k$ denotes the mean of each Gaussian component and $\sigma_k^2$ its variance; the number of Gaussian components is initialized to the number of defect classes $K$.

In the E-step, the probability that each pixel value belongs to each Gaussian component is computed repeatedly; this hidden parameter (responsibility) is:

$\gamma_{mk}^{(t)} = \frac{\pi_k^{(t)}\, \mathcal{N}(x_m \mid \mu_k^{(t)}, \sigma_k^{2(t)})}{\sum_{j=1}^{K} \pi_j^{(t)}\, \mathcal{N}(x_m \mid \mu_j^{(t)}, \sigma_j^{2(t)})}$

where $t$ denotes the iteration index and $x_m$ denotes a pixel value of the average pixel-value matrix.

In the M-step, the objective function is maximized repeatedly, giving the parameter estimates of the Gaussian components:

$\mu_k^{(t+1)} = \frac{\sum_{m} \gamma_{mk}^{(t)} x_m}{\sum_{m} \gamma_{mk}^{(t)}}, \qquad \sigma_k^{2(t+1)} = \frac{\sum_{m} \gamma_{mk}^{(t)} \bigl(x_m - \mu_k^{(t+1)}\bigr)^2}{\sum_{m} \gamma_{mk}^{(t)}}, \qquad \pi_k^{(t+1)} = \frac{1}{M} \sum_{m} \gamma_{mk}^{(t)}$

The computation stops once a Gaussian mixture model meeting the preset optimization requirement is obtained.
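The E-step and M-step updates above can be sketched for a one-dimensional pixel-value mixture as follows. This is a minimal NumPy illustration, not the patented procedure itself; the quantile-based initialization and the two-cluster test data are assumptions:

```python
import numpy as np

def fit_gmm_em(x, k, iters=50):
    """Fit a 1-D Gaussian mixture to samples x with k components via EM."""
    n = x.size
    pi = np.full(k, 1.0 / k)                       # mixing weights
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))  # spread-out initial means
    var = np.full(k, x.var() + 1e-6)               # component variances
    for _ in range(iters):
        # E-step: responsibilities gamma[m, j] of component j for sample m.
        diff = x[:, None] - mu[None, :]
        dens = np.exp(-0.5 * diff**2 / var) / np.sqrt(2 * np.pi * var)
        num = pi * dens
        gamma = num / num.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances.
        nk = gamma.sum(axis=0)
        pi = nk / n
        mu = (gamma * x[:, None]).sum(axis=0) / nk
        var = (gamma * (x[:, None] - mu)**2).sum(axis=0) / nk + 1e-6
    return pi, mu, var

# Two well-separated pixel populations (e.g. background vs. defect).
rng = np.random.default_rng(2)
samples = np.concatenate([rng.normal(0.2, 0.02, 500), rng.normal(0.8, 0.02, 500)])
pi, mu, var = fit_gmm_em(samples, k=2)
```

With well-separated populations the recovered means converge close to the true centers 0.2 and 0.8, and the mixing weights sum to one.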
Specifically, Step 14 comprises:
computing the probability density of each Gaussian component over the image plane from the optimized Gaussian mixture model to obtain the receptive field matrix $R$; the receptive field matrix strengthens the receptive field of the defect regions and weakens the interference caused by background information.
More specifically, Step 15 comprises:

In the embodiment of the present invention, the above processing yields the normalized average pixel-value matrix $\bar{I}'$ and the receptive field matrix $R$. Given a threshold, and since the defect detection task is concerned mainly with the defect regions of the image rather than the background information, the enhanced product image is encoded according to the receptive field matrix and the average pixel-value matrix: defect positions in the product image are encoded as 1 and background positions as 0. In the resulting receptive field matrix encoding, $\bar{I}'_{ij}$ denotes the element in row $i$, column $j$ of the normalized average pixel-value matrix, and $\tilde{I}_{ij}$ denotes the element in row $i$, column $j$ of the average pixel-value matrix fused with the receptive field matrix.
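The binary encoding of Step 15 can be sketched as below. Since the exact comparison rule of the source formula is not reproduced in the text, the rule used here (weight the normalized pixel values by the receptive field response and compare against the given threshold) is an assumption for illustration only, as are all array names:

```python
import numpy as np

# Illustrative inputs: normalized average pixel values and receptive field matrix.
rng = np.random.default_rng(3)
avg_norm = rng.uniform(0.0, 1.0, size=(8, 8))  # normalized average pixel values
rf = rng.uniform(0.0, 1.0, size=(8, 8))        # receptive field (GMM densities)
theta = 0.5                                    # given threshold (assumed)

# Assumed encoding rule: positions with a strong receptive-field-weighted
# response are treated as defect (1), the rest as background (0).
fused = avg_norm * rf                 # fuse pixel values with the receptive field
code = (fused >= theta).astype(int)   # 1 = defect, 0 = background
```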
Specifically, in the embodiment of the present invention, the spiking neural network branch mimics the way information is transmitted between biological neurons: information is conducted in the form of neural spikes, which offers low energy consumption and a small computational load. The spiking neural network branch adopts the Leaky Integrate-and-Fire (LIF) neuron model, whose iterative update can be written as:

$u_i^{n}(t) = \lambda\, u_i^{n}(t-1)\bigl(1 - o_i^{n}(t-1)\bigr) + \sum_j w_{ij}^{n}\, o_j^{n-1}(t)$

where $\lambda$ is the leaky factor, the spiking neural network branch has $L$ layers, $w_{ij}^{n}$ denotes the weight connecting neuron $j$ of layer $n-1$ to neuron $i$ of layer $n$, and $o_j^{n-1}(t)$ denotes the output of neuron $j$ in layer $n-1$ at time $t$. The output of neuron $i$ in layer $n$ at time $t$ can then be expressed as:

$o_i^{n}(t) = \begin{cases} 1, & u_i^{n}(t) \ge V_{th}^{p} \\ 0, & \text{otherwise} \end{cases}$

where $V_{th}^{p}$ denotes the firing threshold at the $p$-th iteration round.
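The iterative LIF update above can be simulated in a few lines; the weights, input spike trains, leak factor, and threshold below are illustrative values, not parameters from the patent:

```python
import numpy as np

def lif_layer(spikes_in, w, lam=0.8, v_th=1.0):
    """Run one layer of LIF neurons over T time steps.

    spikes_in: (T, n_in) binary input spikes; w: (n_in, n_out) weights.
    Returns the (T, n_out) binary output spike train.
    """
    T = spikes_in.shape[0]
    n_out = w.shape[1]
    u = np.zeros(n_out)          # membrane potentials
    out = np.zeros((T, n_out))
    for t in range(T):
        if t > 0:
            u = u * (1 - out[t - 1])   # hard-reset neurons that fired
        u = lam * u + spikes_in[t] @ w  # leak, then integrate weighted input
        out[t] = (u >= v_th).astype(float)  # fire where u crosses the threshold
    return out

rng = np.random.default_rng(4)
x = (rng.uniform(size=(10, 6)) < 0.5).astype(float)  # T=10 steps, 6 inputs
w = rng.uniform(0.1, 0.4, size=(6, 3))
spikes = lif_layer(x, w)
```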
Specifically, the activation function of the artificial neural network branch in the embodiment of the present invention is the ReLU function, and its parameters are shared with those of the spiking neural network branch. The mapping of an intermediate layer is:

$a_i^{n} = \mathrm{ReLU}\Bigl(\sum_j w_{ij}^{n} a_j^{n-1} + b_i^{n}\Bigr)$

where $w_{ij}^{n}$ denotes the weight connecting neuron $j$ of layer $n-1$ to neuron $i$ of layer $n$, $b_i^{n}$ denotes the bias term of layer $n$, and $a_i^{n}$ denotes the output of neuron $i$ of the artificial neural network branch after the activation function. On the local industrial device, $w_{ij}^{n}$ and $b_i^{n}$ are shared with the weight parameters of the spiking neural network branch.
Considering that the input of the spiking neural network branch has temporal structure, the embodiment of the present invention uses a batch normalization method updated per time step:

$\hat{x}^{n}(t) = \gamma^{n}(t)\, \frac{x^{n}(t) - \mu_{\mathcal{B}}^{n}(t)}{\sqrt{\sigma_{\mathcal{B}}^{2,n}(t) + \epsilon}}$

where $\gamma^{n}(t)$ denotes the learned weight of the batch normalization layer, $\epsilon$ is a constant preventing the denominator from being zero, and $\mu_{\mathcal{B}}^{n}(t)$ and $\sigma_{\mathcal{B}}^{2,n}(t)$ denote the mean and variance of the batch samples of layer $n$ at time $t$. For the update of the batch normalization layer, given the loss $\mathcal{L}$, the gradient of the weight $\gamma^{n}(t)$ can be expressed as:

$\frac{\partial \mathcal{L}}{\partial \gamma^{n}(t)} = \sum_{\mathcal{B}} \frac{\partial \mathcal{L}}{\partial \hat{x}^{n}(t)} \cdot \frac{x^{n}(t) - \mu_{\mathcal{B}}^{n}(t)}{\sqrt{\sigma_{\mathcal{B}}^{2,n}(t) + \epsilon}}$

Accordingly, the learned weight of the batch normalization layer at time step $t$ is updated as:

$\gamma^{n}(t) \leftarrow \gamma^{n}(t) - \eta\, \frac{\partial \mathcal{L}}{\partial \gamma^{n}(t)}$

where $\eta$ is the learning rate.
In the embodiment of the present invention, the spiking neural network branch received by the local industrial device is trained with the spike-based spatio-temporal back-propagation (STBP) algorithm, which propagates gradients through both the spatial (layer) and temporal (time step) dimensions. Because the step output $o = H(u - V_{th})$ is non-differentiable, the STBP algorithm designs a rectangular function as a surrogate spike gradient for back-propagation:

$\frac{\partial o}{\partial u} \approx \frac{1}{a}\, \mathrm{sign}\Bigl(\lvert u - V_{th} \rvert < \frac{a}{2}\Bigr)$

where $a$ denotes the width of the gradient.
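The rectangular surrogate gradient above is a one-liner; a minimal NumPy illustration (the sample membrane potentials are made up):

```python
import numpy as np

def rect_surrogate_grad(u, v_th=1.0, a=1.0):
    """Rectangular surrogate of d(spike)/d(u): 1/a inside |u - v_th| < a/2, else 0."""
    return (np.abs(u - v_th) < a / 2).astype(float) / a

u = np.array([0.0, 0.6, 0.9, 1.0, 1.4, 2.0])
g = rect_surrogate_grad(u)  # non-zero only in a window centred on the threshold
```

Potentials far from the threshold receive zero gradient, which is what keeps STBP training stable while still letting near-threshold neurons learn.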
In the embodiment of the present invention, in order to balance the computing-power heterogeneity and training accuracy of the local industrial devices, a dynamic time window parameter T is designed so that each local industrial device can adaptively adjust T according to its training performance and device capability. The specific process is as follows:
(1) Set the initial time window $T_0$: the central server assigns each local industrial device an initial time window, which may be allocated according to a global performance estimate, device performance, data scale, and other factors;
(2) Local training: the local industrial device trains the spiking neural network branch, completing as much training as possible within each time window $T_i$; during training, the client records the following data: the training time of one round $t_i$, the training-set accuracy $acc_i$, and the current time window $T_i$ of the device;
(3) Upload: when communicating with the central server, the local industrial device uploads the round training time $t_i$, the training-set accuracy $acc_i$, the current time window $T_i$, and related parameters;
(4) Adjust the dynamic time window parameter: from the uploaded data, the central server uses a trade-off function to compute the time window parameter $T_{i+1}$ for the $(i+1)$-th communication round so as to balance training time against training accuracy. In the trade-off function, $\alpha$ and $\beta$ are weighting parameters that can be tuned to specific needs: increasing $\alpha$ places more emphasis on accuracy, while increasing $\beta$ places more emphasis on training time. The goal of the trade-off function is to adjust the time window adaptively according to the training performance of the local industrial device; the time window can only take positive integer values, so that devices with different performance and accuracy requirements all obtain a suitable training time;
(5) Reallocate the time window: the central server assigns the newly computed time window $T_{i+1}$ to the local industrial device, replacing the previous window $T_i$; the device then uses the new time window parameter in the next round of training;
(6) Repeat training and adjustment: the above steps are iterated so that the time window parameter is gradually adjusted and the training time and accuracy of the local industrial devices are balanced.
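Steps (1)–(6) can be sketched as below. The patent states only that the trade-off function weights accuracy by $\alpha$ and training time by $\beta$ and must yield a positive integer; the concrete functional form, the reference time `t_ref`, and all numeric values here are assumptions for illustration:

```python
def next_time_window(t_round, acc, T_cur, alpha=2.0, beta=1.0, t_ref=60.0):
    """Illustrative trade-off: grow the window when accuracy lags, shrink it
    when a round takes long relative to a reference time t_ref (form assumed)."""
    adjustment = alpha * (1.0 - acc) - beta * (t_round / t_ref)
    return max(1, round(T_cur + adjustment))  # window must be a positive integer

# Slow, already-accurate device: the window shrinks.
T_slow_accurate = next_time_window(t_round=120.0, acc=0.95, T_cur=8)
# Fast, inaccurate device: the window grows to spend more time steps per round.
T_fast_inaccurate = next_time_window(t_round=10.0, acc=0.60, T_cur=8)
```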
The embodiment of the present invention takes into account the spike noise arising in the spiking neural network branch as well as its limited precision; by fusing in the features output by the artificial neural network branch, the impact of spike noise on the spiking neural network branch is effectively mitigated, improving detection accuracy. The specific process is as follows:

Design of the feature fusion mechanism between the network branches:

The local industrial device trains two network branches whose model parameters are shared and which share the same data set collected by the device. For the spiking neural network branch, the input image data must first be encoded, with the time window allocated according to the global performance estimate, device performance, data scale, and other factors; for the artificial neural network branch, the enhanced image is used directly as its input.

Considering that the spiking neural network branch may activate neuron positions that are inactive in the artificial neural network branch, producing noise spikes, the embodiment of the present invention resolves the noise-spike problem of the spiking neural network by fusing feature-layer knowledge of the two networks:

$\tilde{o}^{n}(t) = \mathrm{sign}\bigl(a^{n}\bigr) \odot o^{n}(t)$

where $a^{n}$ denotes the output of the ReLU activation in layer $n$ of the artificial neural network branch, $o^{n}(t)$ denotes the output of layer $n$ of the spiking neural network branch at time step $t$, $\mathrm{sign}(\cdot)$ is the sign function, which maps $a^{n}$ to $\{0, 1\}$ and thus marks the activated positions of the artificial neural network branch on the feature map, and $\odot$ denotes pointwise multiplication. The pointwise product filters the noise of the spiking neural network, and the filtered output is then propagated further forward through the spiking neural network.
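The fusion rule above reduces to masking the spike map with the sign of the (non-negative) ReLU map; a minimal sketch with illustrative feature maps:

```python
import numpy as np

# Illustrative feature maps for one layer at one time step.
ann_act = np.array([[0.0, 0.7, 0.0],
                    [1.2, 0.0, 0.3]])      # ReLU outputs of the ANN branch
snn_spikes = np.array([[1.0, 1.0, 1.0],
                       [0.0, 1.0, 1.0]])   # spikes of the SNN branch

# sign() of a non-negative ReLU map yields a {0, 1} activation mask.
mask = np.sign(ann_act)
# Pointwise product suppresses spikes where the ANN branch is inactive.
fused = mask * snn_spikes
```

Spikes at positions where the ANN branch is silent (here, three of the six positions) are zeroed out as noise, while spikes at ANN-active positions pass through unchanged.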
Design of the back-propagation mechanism for training the local model:

First, the loss function of the overall network is defined. Since back-propagation through the artificial neural network branch is removed in the back-propagation design, the loss depends only on the spiking branch: it measures the error between the defect label $y$ and the spike output averaged over the time window,

$\mathcal{L} = \Bigl\lVert y - \frac{1}{T_k} \sum_{t=1}^{T_k} o^{N}(t) \Bigr\rVert^2$

where $N$ denotes the total number of network layers, $o^{N}(t)$ denotes the output of the output layer of the spiking neural network branch at time step $t$, $T_k$ denotes the time window size of the $k$-th local industrial device, and the number of neurons in the output layer of the spiking neural network branch equals the total number of defect classes.

Because the features of the two networks are fused as $\tilde{o}^{n}(t) = \mathrm{sign}(a^{n}) \odot o^{n}(t)$, the gradient with respect to the spiking branch output passes through the fusion mask:

$\frac{\partial \mathcal{L}}{\partial o^{n}(t)} = \frac{\partial \mathcal{L}}{\partial \tilde{o}^{n}(t)} \odot \mathrm{sign}\bigl(a^{n}\bigr)$

where $\mathcal{L}$ denotes the loss function and $w^{n}$ denotes the weights of layer $n$ of the spiking neural network branch. Since training takes place in the spiking neural network branch, the artificial neural network branch does not participate in the back-propagation update, i.e. its gradient contributions are set to zero. Combining this with the STBP back-propagation formula of the spiking neural network branch (using the rectangular surrogate gradient for the non-differentiable spike function) yields the final back-propagation rule, in which $o^{n}(t)$ denotes the output of layer $n$ of the spiking branch at time step $t$, the membrane potential follows the LIF dynamics defined above, $\mathrm{sign}(\cdot)$ marks the activated positions of the artificial neural network branch on the feature map, and $b^{n}$ denotes the bias term of the layer-$n$ neurons of the artificial neural network branch.
Design of the dynamic-feedback adaptive threshold mechanism of the spiking neural network branch:

Noise spikes are an important factor affecting the detection accuracy of the spiking neural network branch. To address possible noise spikes, the embodiment of the present invention fuses the feature-layer information of the artificial neural network branch with the loss value of the previous round to design a dynamic-feedback adaptive threshold mechanism:

First, the average membrane-potential strength of the $k$-th local industrial device at round $p$ is defined as:

$\bar{u}_{k}^{p} = \frac{1}{M_k\, N\, T_k} \sum_{m=1}^{M_k} \sum_{n=1}^{N} \sum_{t=1}^{T_k} u^{n}(t)$

where $M_k$ denotes the number of product images on the $k$-th local industrial device, $N$ denotes the total number of network layers, $T_k$ denotes the time window size of the $k$-th device, and $u^{n}(t)$ denotes the membrane potential of layer $n$ at time $t$.

Following the feature fusion described above, the embodiment of the present invention defines the effective spike count $S_{e}^{p}$ and the noise spike count $S_{n}^{p}$ of round $p$, measuring the average numbers of effective spikes and noise spikes respectively: noise spikes are those fired at positions in the set $\Omega_{noise}$ of noisy spike locations, while the whole feature-map location set is denoted $\Omega$. By incorporating the loss value $\mathcal{L}^{p}$ of round $p$, a noise activation factor $\rho^{p}$ of the round is defined.

The noise activation factor $\rho^{p}$ is jointly influenced by $S_{e}^{p}$, $S_{n}^{p}$, $\bar{u}^{p}$, and $\mathcal{L}^{p}$: the larger $S_{e}^{p}$ and $\bar{u}^{p}$, the smaller the noise factor, indicating that the threshold needs no large adjustment in round $p+1$; the larger $S_{n}^{p}$ and $\mathcal{L}^{p}$, the larger the noise activation factor, indicating that in round $p+1$ the threshold should be raised to reduce the number of noise spikes. The dynamic-feedback adaptive threshold update can therefore be defined as:

$V_{th}^{p+1} = \mu\, V_{th}^{p} + \kappa\, \rho^{p}, \qquad \rho^{p} = \frac{\mathcal{L}^{p}\, S_{n}^{p}}{S_{e}^{p}\, \bar{u}^{p} + \varepsilon}$

where $V_{th}^{p}$ denotes the threshold at round $p$, $\mu$ is an adjustment factor controlling the influence of the previous round's threshold on the current round's threshold, $\kappa$ adjusts the influence of the noise activation factor on the current round's threshold, $\rho^{p}$ denotes the noise activation factor of round $p$, $\mathcal{L}^{p}$ denotes the loss value of round $p$, $S_{n}^{p}$ denotes the noise spike count, $S_{e}^{p}$ denotes the effective spike count, $\bar{u}^{p}$ denotes the average membrane-potential strength, and $\varepsilon$ is the constant $10^{-6}$, used to prevent the denominator from becoming zero.
Finally, the central server aggregates the network parameters of the local federated spiking neural networks uploaded by the multiple local industrial devices to obtain the global model, and updates the time window assigned to each local industrial device to the time window of the current round of parameter aggregation; it then judges whether the global model satisfies the preset training condition; if so, training ends; otherwise, the global model is transmitted to the multiple local industrial devices as the initial federated spiking neural network of Step 2, together with the updated time windows, and execution returns to Step 3.
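The server-side aggregation step can be sketched as a plain parameter average across clients (FedAvg-style with equal weights; the parameter names and values are illustrative):

```python
import numpy as np

def fed_avg(client_params):
    """Average each named parameter across clients (equal-weight FedAvg)."""
    keys = client_params[0].keys()
    return {k: np.mean([p[k] for p in client_params], axis=0) for k in keys}

# Two clients, each uploading one weight matrix and one threshold (illustrative).
clients = [
    {"w1": np.array([[1.0, 2.0]]), "v_th": np.array(1.0)},
    {"w1": np.array([[3.0, 4.0]]), "v_th": np.array(2.0)},
]
global_model = fed_avg(clients)
```

The aggregated parameters are then redistributed, together with the updated time windows, for the next federated round.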
Specifically, once the global model satisfies the preset training condition, the product image to be inspected is input into the federated spiking neural network for detection, yielding the defect detection result. When the global model performs appearance-defect detection on steel images, the result indicates whether the steel has scratches, cracks, and the like; when it performs appearance-defect detection on cloth images, the result indicates whether the cloth has stains, pits, or patches.
In the embodiment of the present invention, multiple local industrial devices acquire product images and preprocess them to obtain receptive field matrix encodings and defect-type labels. The central server distributes the constructed initial federated spiking neural network, comprising an artificial neural network branch and a spiking neural network branch, to each local industrial device and assigns each device a time window. The output of the activation layer of the artificial neural network branch and the output of every time step of the spiking neural network branch are connected to the inputs of a multiplier in the spiking neural network branch; the multiplier fuses the features output by the artificial neural network branch with the features output at each time step of the spiking neural network branch, and the artificial neural network branch shares the network parameters of the spiking neural network branch. Each local industrial device feeds the product image into the artificial neural network branch for forward propagation, feeds the receptive field matrix encoding into the spiking neural network branch for forward propagation within the time window, and, during the forward propagation of the spiking neural network branch, fuses the features output by the artificial neural network branch with the features output at each time step of the spiking neural network branch. A loss function is constructed from the features output at all time steps of the last layer of the spiking neural network branch and the defect-type labels, and the spiking neural network branch is trained by back-propagation based on the back-propagation mechanism, the dynamic-feedback adaptive threshold mechanism, and the loss function, yielding a local federated spiking neural network. The central server aggregates the network parameters of the local federated spiking neural networks uploaded by the multiple devices to obtain a global model and updates the time windows assigned to the devices to those of the current round of aggregation; it judges whether the global model satisfies the preset training condition, ending training if so, and otherwise transmitting the global model as the initial federated spiking neural network of Step 2, together with the updated time windows, to the multiple local industrial devices and returning to Step 3.

Compared with the prior art, the present invention forwards the product image through the artificial neural network branch, forwards the receptive field matrix encoding through the spiking neural network branch within the time window, and fuses the features of the two branches at every time step during the forward propagation of the spiking neural network branch, effectively mitigating the influence of spike noise on the spiking branch. By constructing the loss function from the features output at all time steps of the last layer of the spiking neural network branch and the defect-type labels, and training the spiking branch by back-propagation with the back-propagation mechanism, the dynamic-feedback adaptive threshold mechanism, and the loss function, multiple trained local federated spiking neural networks are obtained, achieving lightweight distributed collaborative training while effectively overcoming the low accuracy of spiking neural networks in defect detection.
The above is a preferred embodiment of the present invention. It should be noted that a person of ordinary skill in the art may make several further improvements and refinements without departing from the principles of the present invention, and such improvements and refinements shall also be regarded as falling within the scope of protection of the present invention.
Claims (9)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202410285505.0A CN117875408B (en) | 2024-03-13 | 2024-03-13 | A Federated Learning Method of Spiking Neural Networks for Defect Detection |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN117875408A true CN117875408A (en) | 2024-04-12 |
| CN117875408B CN117875408B (en) | 2024-06-25 |
Family
ID=90592307
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202410285505.0A Active CN117875408B (en) | 2024-03-13 | 2024-03-13 | A Federated Learning Method of Spiking Neural Networks for Defect Detection |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN117875408B (en) |
Cited By (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN118101719A (en) * | 2024-04-23 | 2024-05-28 | 湘江实验室 | A multi-task vehicle-road collaborative intelligent perception method based on federated learning |
| CN119848604A (en) * | 2024-12-26 | 2025-04-18 | 华中科技大学 | Spike signal decoding model construction method applied to invasive brain-computer interface |
| CN120747255A (en) * | 2025-06-19 | 2025-10-03 | 北京中星微人工智能芯片技术有限公司 | Infrared image coding method, device and equipment for impulse neural network |
Citations (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US9189730B1 (en) * | 2012-09-20 | 2015-11-17 | Brain Corporation | Modulated stochasticity spiking neuron network controller apparatus and methods |
| CN108875555A (en) * | 2018-04-25 | 2018-11-23 | Academy of Military Medical Sciences, PLA Academy of Military Sciences | Neural-network-based extraction and localization system for video regions of interest and salient targets |
| US20190377998A1 (en) * | 2017-01-25 | 2019-12-12 | Tsinghua University | Neural network information receiving method, sending method, system, apparatus and readable storage medium |
| CN112633497A (en) * | 2020-12-21 | 2021-04-09 | Sun Yat-sen University | Convolutional spiking neural network training method based on reweighted membrane voltage |
| CN115271033A (en) * | 2022-07-05 | 2022-11-01 | Southwestern University of Finance and Economics | Medical image processing model construction and processing method based on federated knowledge distillation |
| CN115809700A (en) * | 2022-06-09 | 2023-03-17 | University of Electronic Science and Technology of China | A Learning Method for Spiking Neural Networks Based on Synapse-Threshold Synergy |
| CN116382267A (en) * | 2023-03-09 | 2023-07-04 | Dalian University of Technology | A dynamic obstacle avoidance method for robots based on multi-modal spiking neural networks |
| CN117372843A (en) * | 2023-10-31 | 2024-01-09 | Huazhong University of Science and Technology | Image classification model training method and image classification method based on first-spike coding |
- 2024-03-13: Application CN202410285505.0A filed in China (CN); subsequently granted as CN117875408B (status: Active)
Non-Patent Citations (2)
| Title |
|---|
| JIANGRONG SHEN et al.: "HybridSNN: Combining bio-machine strengths by boosting adaptive spiking neural networks", IEEE, 31 December 2021 (2021-12-31), pages 5841-5855 * |
| ZHUANG Zujiang; FANG Yu; LEI Jianchao; LIU Dongbo; WANG Haibin: "Research on Spiking Neural Networks Based on the STDP Rule", Computer Engineering, no. 09, 31 December 2020 (2020-12-31), pages 89-94 * |
Cited By (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN118101719A (en) * | 2024-04-23 | 2024-05-28 | Xiangjiang Laboratory | A multi-task vehicle-road collaborative intelligent perception method based on federated learning |
| CN118101719B (en) * | 2024-04-23 | 2024-07-23 | Xiangjiang Laboratory | A multi-task vehicle-road collaborative intelligent perception method based on federated learning |
| CN119848604A (en) * | 2024-12-26 | 2025-04-18 | Huazhong University of Science and Technology | Spike signal decoding model construction method applied to invasive brain-computer interfaces |
| CN120747255A (en) * | 2025-06-19 | 2025-10-03 | Beijing Vimicro AI Chip Technology Co., Ltd. | Infrared image encoding method, apparatus and device for spiking neural networks |
| CN120747255B (en) * | 2025-06-19 | 2026-01-02 | Beijing Vimicro AI Chip Technology Co., Ltd. | Infrared image encoding method, apparatus and device for spiking neural networks |
Also Published As
| Publication number | Publication date |
|---|---|
| CN117875408B (en) | 2024-06-25 |
Similar Documents
| Publication | Title |
|---|---|
| CN117875408B (en) | A Federated Learning Method of Spiking Neural Networks for Defect Detection |
| CN111192237B (en) | A glue detection system and method based on deep learning |
| CN112818969B (en) | Knowledge distillation-based face pose estimation method and system |
| CN106355151B (en) | Three-dimensional SAR image target recognition method based on a deep belief network |
| CN109523013B (en) | Estimation method of air particulate pollution degree based on a shallow convolutional neural network |
| CN113780242B (en) | A cross-scenario underwater acoustic target classification method based on model transfer learning |
| CN111242862A (en) | Multi-scale fusion parallel dense residual convolutional neural network image denoising method |
| CN111709888B (en) | An aerial image dehazing method based on an improved generative adversarial network |
| CN111340754A (en) | A method for detecting and classifying aircraft skin surface defects |
| CN107463966A (en) | Radar range profile target recognition method based on dual deep neural networks |
| CN108133188A (en) | Action recognition method based on motion history images and convolutional neural networks |
| CN114863348A (en) | A self-supervised video object segmentation method |
| CN106407903A (en) | Real-time human abnormal behavior recognition method based on multi-scale convolutional neural networks |
| CN111079847A (en) | Automatic remote sensing image labeling method based on deep learning |
| CN115471423A (en) | Point cloud denoising method based on a generative adversarial network and a self-attention mechanism |
| CN104156943B (en) | Multi-objective fuzzy clustering image change detection method based on a non-dominated neighborhood immune algorithm |
| CN111462191A (en) | Unsupervised optical flow estimation method based on deep learning with non-local filtering |
| CN107833241A (en) | Real-time visual object detection method robust to ambient lighting changes |
| CN118114734A (en) | Convolutional neural network optimization method and system based on sparse regularization theory |
| CN118298499A (en) | Human motion state detection method and system |
| CN112468230A (en) | Wireless ultraviolet light scattering channel estimation method based on deep learning |
| CN120125940A (en) | A method for detecting structural defects of lithium battery electrodes based on Fourier transform |
| CN112507826B (en) | End-to-end ecological variation monitoring method, terminal, computer equipment and medium |
| CN117095227B (en) | Convolutional neural network training method based on non-intersecting differential privacy federated learning |
| CN113313179A (en) | Noisy image classification method based on the l2p-norm robust least-squares method |
Legal Events
| Code | Title |
|---|---|
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |