
CN115082817A - Flame identification and detection method based on improved convolutional neural network - Google Patents

Flame identification and detection method based on improved convolutional neural network

Info

Publication number
CN115082817A
CN115082817A
Authority
CN
China
Prior art keywords
model
flame
image
layer
convolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110260288.6A
Other languages
Chinese (zh)
Inventor
栗婧
刘紫薇
张志珍
宋天宝
辛艳丽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China University of Mining and Technology Beijing CUMTB
Original Assignee
China University of Mining and Technology Beijing CUMTB
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China University of Mining and Technology Beijing CUMTB filed Critical China University of Mining and Technology Beijing CUMTB
Priority to CN202110260288.6A priority Critical patent/CN115082817A/en
Publication of CN115082817A publication Critical patent/CN115082817A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract



The invention relates to the field of fire protection technology, in particular to fire detection methods, and specifically to a flame identification and detection method based on a convolutional neural network. The aim is to optimize flame identification and detection by means of the convolutional neural network algorithm widely used in deep learning for pattern recognition and image processing. The invention constructs a flame image sample library by increasing data diversity and applying data enhancement; optimizes convolutional neural network performance by increasing the number of convolution kernels, adopting a convolution-convolution-pooling structure, and replacing large convolution kernels with multiple small ones, on which basis the flame recognition model FlameNet is designed; designs the flame detection model FRCNN-ZF based on the Faster-RCNN algorithm; and finally designs a flame detection system GUI. The resulting model and system realize accurate, intuitive and clear identification and detection of flames, possess a certain anti-interference capability, and are simple and easy to use, even for non-professionals.


Description

Flame identification and detection method based on improved convolutional neural network
Technical Field
The invention relates to the field of fire fighting, mainly relates to the technical problem of fire detection, and provides a flame identification and detection method based on a convolutional neural network.
Background
At present, fire detection technologies widely used on construction sites include smoke-sensitive, temperature-sensitive and photosensitive detectors as well as composite detectors. However, with rapid socio-economic development and accelerating urbanization, tall, large, novel and unusual buildings keep emerging, and the disadvantages of each conventional fire detection technique are gradually becoming apparent. In a large-space building, smoke may fail to reach a ceiling-mounted smoke detector because of a thermal barrier layer; or, under the influence of air flow, the smoke is blown away and the smoke concentration rising to the top of the building is greatly reduced, so the response threshold of the smoke detector is never reached and no alarm signal is generated. In addition, an excessive dust concentration can cause false alarms from smoke detectors. Temperature-sensing detectors have low sensitivity and long response times, are of little use in the initial smoldering stage, and cover only a very limited monitoring area. Photosensitive equipment is expensive to manufacture, and its reliability and effectiveness are unstable, which limits its use in practice. A composite fire detector integrates smoke, temperature and photosensitive detectors; its overall performance is improved, but the defects of the individual detectors are not completely eliminated, and it still cannot be applied to detection and alarm of fires in large spaces. Given the great uncertainty, suddenness and variability of fire, conventional fire detectors are not suitable for fire detection in large factories, warehouses and extended outdoor spaces such as forest parks.
In addition, the traditional fire detector cannot provide more detailed information of a fire scene, such as the fire position, the fire intensity and the like, and cannot well meet the requirements of modern fire detection.
In recent years, a large number of security monitoring systems have been installed in various buildings, and fire detection based on monitoring video has become a new research direction. With the development of image processing technology, researchers have found that flame images have distinctive visual characteristics in texture, color and other aspects. On the basis of preprocessing such as denoising, enhancement and gray-level transformation of real-time monitoring video images, static and dynamic visual features of flame and smoke are extracted, and technologies such as neural networks and pattern recognition are then applied for classification and recognition. Compared with traditional fire detection, fire detection based on visual characteristics offers fast response, high accuracy, and rich, intuitive information. However, the effectiveness of this technique depends largely on the manual selection and extraction of fire image features: if the manually selected features are reasonable and effective, the recognition effect is good, but such selection relies heavily on professional knowledge and extensive practice.
Terms such as artificial intelligence and deep learning have become household words, and a new generation of artificial intelligence technologies represented by deep learning, such as face recognition, voice recognition and image recognition, has been integrated into people's daily lives. Deep learning, a branch of artificial intelligence research, studies how to give a computer a learning ability similar to that of a human so that it can continuously acquire new knowledge. It is called deep learning because it can independently discover the essential characteristics of data from massive amounts of data, and it has brought an innovative revolution to the field of artificial intelligence.
The application provides a flame recognition and detection method based on a Convolutional Neural Network (CNN), which is widely applied in deep learning to pattern recognition and image processing. The method avoids the consumption of manpower and material resources caused by prior manual selection of image features, improves the accuracy of flame identification and detection, and provides a new method for fire detection.
Disclosure of Invention
The method mainly constructs a flame sample library by means of data enhancement technology; designs the flame identification model FlameNet based on an optimization approach of replacing large convolution kernels with small ones and using double convolution layers, comparing convolutional neural network models with different numbers and sizes of convolution kernels; and designs a flame detection model drawing on the Faster-RCNN target detection technique. A flame detection device is designed and realized on the Matlab GUI platform; it has a certain anti-interference capability while ensuring the recognition and detection effects, and provides a new idea for flame detection. The specific content is as follows:
1. A method for optimizing flame identification and monitoring based on a convolutional neural network comprises the following steps:
a) optimizing the performance of the convolutional neural network;
b) constructing a flame image sample library in a mode of increasing data diversity and data enhancement;
c) designing a flame identification model by optimizing the number of convolution kernels, the size of the convolution kernels and the number of model layers;
d) designing a flame detection model based on the Faster-RCNN algorithm.
2. The flame identification and monitoring method based on the convolutional neural network optimizes the performance of the convolutional neural network and comprises the following steps:
a) increasing the number of convolution kernels strengthens the convolutional neural network's ability to extract image features and further improves image recognition accuracy, but lengthens the training convergence time;
b) features of image samples are extracted through a convolution-convolution-pooling structure, in which one or more identical convolution layers are placed after a convolution layer to form multiple groups of convolution structures; adopting multiple groups of convolution structures improves test performance to a certain extent;
c) multiple small-size convolution kernels can replace a large-size convolution kernel, realizing the original convolution operation while greatly reducing the number of model parameters.
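As a rough illustration of the parameter saving described in c), the following sketch (not from the patent; the channel count is chosen for illustration) compares the weight count of one 5×5 convolution layer with that of two stacked 3×3 layers covering the same receptive field:

```python
C = 32  # input and output channels (illustrative; matches Conv1's kernel count)

# one 5x5 convolution layer: 5*5*C weights per output channel
params_5x5 = 5 * 5 * C * C

# two stacked 3x3 convolution layers covering the same 5x5 receptive field
params_two_3x3 = 2 * (3 * 3 * C * C)

print(params_5x5, params_two_3x3)  # 25600 18432
```

The two 3×3 layers need 18/25 of the weights of a single 5×5 layer while adding an extra nonlinearity between them.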
3. Establishing a flame image sample library: part of the image samples are downloaded from a large visual image data website, and the remainder are extracted frame by frame from videos shot in experiments.
The flame image sample library is constructed, and the sample library construction method comprises the following steps:
a) the data diversity is increased, and the test accuracy can be effectively improved;
b) by means of data enhancement, such as flipping, cropping, changing the contrast of, and adding noise to the images, the test accuracy is improved to a certain extent, but the convergence speed is reduced.
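The augmentation operations listed in b) can be sketched with plain NumPy as follows (an illustrative sketch, not the patent's code; array sizes chosen to match the 64×64 samples):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64, 3)).astype(np.uint8)  # stand-in flame image

flipped = img[:, ::-1, :]                    # horizontal flip
cropped = img[4:60, 4:60, :]                 # crop (would be resized back to 64x64)
contrast = np.clip(img.astype(np.float32) * 1.2, 0, 255).astype(np.uint8)  # raise contrast
noisy = np.clip(img + rng.normal(0, 10, img.shape), 0, 255).astype(np.uint8)  # Gaussian noise
```

Each transform yields an extra training sample per source image, multiplying the effective size of the sample library.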
4. A flame identification model, comprising:
Layer 1 is the input layer of the model, defined as an RGB image of size 64×64;
Layer 2 is the multiple convolution layer Conv1, which contains two small convolution layers: the convolution kernels of Conv1-1 and Conv1-2 are both 3×3 in size, 32 in number, with stride 1; the output of each small convolution layer passes through the ReLU activation function before entering the next stage;
Layer 3 is a pooling layer, using max pooling over a 2×2 region with stride 2;
Layer 4 is the multiple convolution layer Conv2, which contains two small convolution layers: the convolution kernels of Conv2-1 and Conv2-2 are both 3×3 in size, 64 in number, with stride 1; the output of each small convolution layer passes through the ReLU activation function before entering the next stage;
Layer 5 is a pooling layer, using max pooling over a 2×2 region with stride 2;
Layer 6 is the fully connected layer of the model, containing 500 hidden neural nodes;
Layer 7 is the output layer of the model, using a Softmax classifier to judge whether the input picture is a flame or a background image.
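Assuming "same" padding for the 3×3 convolutions (the text does not state the padding; the helper functions below are illustrations, not the patent's code), the spatial sizes flowing through the seven layers can be traced as follows:

```python
def conv_out(size, kernel=3, stride=1, pad=1):
    # output spatial size of a convolution; pad=1 keeps 3x3 convs size-preserving
    return (size + 2 * pad - kernel) // stride + 1

def pool_out(size, window=2, stride=2):
    # output spatial size of a max-pooling layer
    return (size - window) // stride + 1

s = 64                      # input: 64x64 RGB image
s = conv_out(conv_out(s))   # Conv1-1, Conv1-2 (3x3, stride 1)
s = pool_out(s)             # 2x2 max pooling, stride 2
s = conv_out(conv_out(s))   # Conv2-1, Conv2-2
s = pool_out(s)             # 2x2 max pooling, stride 2
flat = s * s * 64           # 64 feature maps after Conv2, flattened for the FC layer
print(s, flat)              # 16 16384
```

Under this assumption, the fully connected layer with 500 hidden nodes receives a 16×16×64 = 16384-dimensional vector.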
5. The flame identification model FlameNet is characterized in that the whole network is connected layer by layer, and its continuous convolution and pooling structure can extract more effective information from the flame sample library.
Two consecutive 3×3 convolution layers are equivalent, in terms of receptive field, to a single 5×5 convolution layer, but the parameters of the network model are greatly reduced.
The ReLU activation function enhances the expression of the model's nonlinearity and helps strengthen the abstraction capability of the local model.
The addition of a Dropout layer reduces the computation of the network and effectively controls overfitting.
6. A flame detection model is designed based on the Faster-RCNN target detection algorithm proposed by Ren et al., using the convolution-pooling part of the ZFNet model as the shared convolution layers, i.e. Layer1-Layer5.
The ZFNet model is a fine-tuned version of the AlexNet model: the convolution kernel of the first convolution layer is changed from 11×11 to 7×7, and the stride is changed from 4 to 2. The resulting flame detection model is named the FRCNN-ZF model.
The RPN network is designed mainly by considering the sizes of the anchor sliding windows, which are modified to match the flame sizes of the images in the sample library (64×64 and 224×224):
Anchor window | Original size (H×W) | Modified size (H×W)
Window 1      | 128×128             | 64×32
Window 2      | 128×256             | 32×64
Window 3      | 256×128             | 64×64
Window 4      | 256×256             | 128×64
Window 5      | 256×512             | 64×128
Window 6      | 512×256             | 128×128
Window 7      | 512×512             | 256×128
Window 8      | 512×1024            | 128×256
Window 9      | 1024×512            | 256×256
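The nine modified anchor sizes above can be generated at a given sliding-window centre as follows (an illustrative sketch; the function name and the corner-box convention are assumptions, not from the patent):

```python
# modified anchor sizes (height, width) from the table above
anchor_sizes = [(64, 32), (32, 64), (64, 64), (128, 64), (64, 128),
                (128, 128), (256, 128), (128, 256), (256, 256)]

def anchors_at(cx, cy, sizes):
    """Corner boxes (x1, y1, x2, y2) for all anchors centred at (cx, cy)."""
    boxes = []
    for h, w in sizes:
        boxes.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return boxes

boxes = anchors_at(112, 112, anchor_sizes)
print(len(boxes))  # 9
```

In the RPN, this set of k = 9 boxes is laid down at every sliding-window position on the shared feature map.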
7. After training, the flame detection model FRCNN-ZF is characterized as follows:
the P-R curve shows that both recall and precision are high;
analysis of the monitoring images shows that the flame detection rate and accuracy are good;
analysis of the monitoring images shows that the flame marking area has a certain generalization capability and strong detection capability, with some drift when the flame color is light or the flame is far away;
analysis of the monitoring images shows that the model has a certain anti-interference capability.
8. A flame detection system GUI is designed using Matlab GUI functionality. The Faster-RCNN model and the flame video are loaded in sequence; the system extracts the video frame by frame and inputs each frame into the Faster-RCNN model for detection. If the model judges the flame score to be greater than 0.8, it frames the flame area in the image and gives a score, and the computer sends an alarm instruction to the alarm to attract people's attention.
9. In the flame detection system GUI, the interface is mainly divided into an image area and an operation area: the image area presents the original image and the detected image; the operation area carries out a series of operations such as loading the detection model, loading images or video, and detection.
The detection function and part of the code of the GUI interface comprise:
(1) Detect Model: loads the Faster-RCNN model. The code for this function is as follows:
function LoadFRCNN_Callback(hObject, eventdata, handles)
global Predictor;
[filename, pathname] = uigetfile({'*.mat'}, 'Read Faster-RCNN model');
if isequal(filename, 0)
    msgbox('No model selected; the system will use the default settings');
else
    pathfile = fullfile(pathname, filename);
    Predictor.LoadFRCNN(pathfile);
    msgbox('Model loaded successfully');
end
(2) Load Image. The code for this function is as follows:
function LoadPicture_Callback(hObject, eventdata, handles)
global Predictor;
[filename, pathname] = uigetfile({'*.jpg'; '*.png'}, 'Read picture file');
if isequal(filename, 0)
    msgbox('No picture selected');
else
    pathfile = fullfile(pathname, filename);
    frame = imread(pathfile);
    Predictor.Mat = imresize(frame, [240 320]);
    axes(handles.axes1);
    imshow(Predictor.Mat);
end
(3) Detect Image. The code for this function is as follows:
function vid_detect_Callback(hObject, eventdata, handles)
global Predictor;
frame = Predictor.Mat;                 % image loaded by LoadPicture_Callback
frcnn = Predictor.Model;               % trained Faster-RCNN detector (field name assumed)
outputImage = frame;
[bboxes, scores, ~] = detect(frcnn, frame);
if ~isempty(bboxes)
    nBoxes = size(bboxes, 1);          % avoid shadowing the built-in length()
    for i = 1:nBoxes
        box = bboxes(i, :);
        annotation = sprintf('%s:%f', 'Flame', scores(i));   % per-box score
        outputImage = insertObjectAnnotation(outputImage, 'rectangle', box, annotation);
    end
end
imshow(outputImage);
end
The beneficial technical achievements of the invention are as follows:
(1) The flame identification model FlameNet is designed, and the sample libraries before and after data enhancement are trained separately; the identification accuracy of the model improves from 91.21% to 98.32%, showing that data enhancement can improve the model's identification accuracy.
(2) Comparative experiments show that when the number of Conv1 convolution kernels is 32 and the kernel size is 3×3, the model converges quickly, and the flame identification accuracy peaks at 98.54%.
(3) On the basis of the Faster-RCNN target detection method, the sizes of the anchor sliding windows are modified, and the FRCNN-ZF model is verified to have stronger flame detection, generalization and anti-interference capabilities.
(4) When the flame detection model is used to detect a real fire, the response time is 6 seconds; the flame can be detected and the alarm raised at an early stage, far faster than a smoke detector.
(5) A simple flame detection system is designed on the Matlab platform, showing the effect of flame detection more intuitively; the system is simple and easy to use, and allows non-professionals to see intuitively how flame detection with a convolutional neural network works.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without inventive exercise.
Fig. 1 is a flame identification model FlameNet structure diagram of a convolutional neural network according to an embodiment of the present disclosure.
Fig. 2 is an output characteristic diagram after processing by the FlameNet model Conv1-1 convolutional layer according to the embodiment of the present application.
Fig. 3 is a schematic structural diagram of a flame detection model ZFNet based on a convolutional neural network according to an embodiment of the present application.
Fig. 4 is a flowchart of the structure of the target detection algorithm Faster-RCNN provided in the embodiment of the present application.
FIG. 5 is a diagram of anchor points defined in the fast-RCNN structure flow provided by the embodiment of the present application.
FIG. 6 is a schematic view of a flame detection system provided in an embodiment of the present application.
Detailed Description
The invention is described in further detail below with reference to the figures and the detailed description.
1. Flame recognition model
The structure of the flame identification model FlameNet based on the convolutional neural network is shown in Fig. 1.
The flame identification model FlameNet designed in this method has 12 layers in total; the whole network is connected layer by layer, and the continuous convolution and pooling structure can extract more effective information from the flame sample library. Two consecutive 3×3 convolution layers are equivalent to a 5×5 convolution layer in terms of receptive field, but greatly reduce the parameters of the network model. The ReLU activation function enhances the expression of the model's nonlinearity and strengthens the abstraction capability of the local model. The addition of a Dropout layer reduces the computation of the network and effectively controls overfitting.
The convolutional neural network learns from the input image automatically, end to end; visualizing the data features it learns is therefore very helpful for a deep understanding of the network. The specific identification process is as follows:
(1) an image of a flame is input into the FlameNet model.
The output feature map obtained after the input flame image is processed by the Conv1-1 convolution layer of the FlameNet model is shown in Fig. 2. Each small image in the figure shows the outline of the flame; on this basis it can be inferred that the convolution kernels of the Conv1-1 layer mainly learn the edge contour information of objects in the input image.
(2) Further analysis shows that adjacent flame outline images are observed from similar visual angles, while those far apart differ slightly. This indicates that the larger the number of convolution kernels, the more different visual angles from which the convolutional neural network model can observe the object and the more feature information it can learn, which benefits model identification. A suitable number of convolution kernels is therefore selected for further analysis of the picture.
2. Flame detection model
Flame detection is the framing of the exact location of the flame image from the input image and the labeling of the flame.
The flame detection model based on the convolutional neural network uses the convolution-pooling part of the ZFNet model in the Faster-RCNN target detection algorithm as the shared convolution layers. The ZFNet model is fine-tuned on the basis of the AlexNet model: the convolution kernel of the first convolution layer is changed from 11×11 to 7×7, and the stride is changed from 4 to 2. The structure of the ZFNet model is shown in Fig. 3; its convolution-pooling part serves as the shared convolution layers, i.e. Layer1-Layer5. The specific detection process is as follows:
the fast-RCNN structure flow is shown in FIG. 4, and the flow is A, B, C three modules in sequence.
(1) In module A, the shared convolution layers extract features from the input image to obtain a global shared feature map. The Faster-RCNN model proposed by Ren et al. uses the convolution layers of the ZFNet model as the shared convolution layers.
(2) Module B mainly produces the candidate boxes. The global shared feature map output by module A is input into the shared convolution layer of the RPN to obtain the RPN shared feature map; the RPN shared feature map is input into the classification and regression convolution layers respectively to obtain M candidate boxes (ROIs), each with coordinate information and probability information. The first N candidate boxes with the highest foreground probability are then taken. Non-maximum suppression (NMS) is used to eliminate crossed, overlapped and target-free candidates: any lower-probability candidate box whose overlap with a higher-probability box exceeds a certain proportion is discarded, leaving K candidate boxes as the output of the RPN.
It should be noted that k anchor points (anchors) are defined in module B, as shown in Fig. 5. Each current sliding window is mapped to its corresponding receptive field in the original image, and k anchors are defined centred on the centre of that receptive field; each anchor corresponds to a box with one area and one aspect ratio, and after correction the anchors almost completely cover the positions of the ground-truth boxes in the image.
(3) In module C, the coordinates of the K candidate boxes output by module B are mapped onto the global shared feature map to obtain a shared feature map for each candidate box. Since the candidate boxes may differ in size, size normalization (ROI Pooling) is required to obtain candidate-box feature maps of the same size. Each candidate-box feature map passes through a fully connected layer to obtain a candidate-box feature vector, which then passes through the classification and correction fully connected layers respectively to obtain the classification vector and the correction vector. Finally, non-maximum suppression retains the candidate box with the highest probability, giving the final target detection result.
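The non-maximum suppression step used in modules B and C can be sketched as follows (a generic NMS implementation for illustration; the patent gives no code, and the 0.5 overlap threshold is an assumed value):

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, thresh=0.5):
    """Keep highest-scoring boxes; drop lower-scoring overlaps above thresh."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= thresh]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # [0, 2]
```

Here the second box overlaps the first heavily (IoU ≈ 0.68) and is suppressed, while the distant third box is kept.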
When the flame detection model is used for real fire detection, the response time is 6 seconds; the flame can be detected and the alarm raised at an early stage, far faster than a smoke detector.
3. Flame detection system GUI
The flame detection system is designed using the GUI functionality of Matlab; its schematic diagram is shown in Fig. 6. The Faster-RCNN model and the flame video are loaded in sequence; the system extracts the video frame by frame and inputs each frame into the Faster-RCNN model for detection. If the model judges the flame score to be greater than 0.8, it frames the flame area in the image and gives a score, and the computer sends an alarm instruction to the alarm to attract people's attention.
The flame detection system is mainly divided into an image area and an operation area: the image area presents the original image and the detected image; the operation area carries out a series of operations such as loading the detection model, loading images or video, and detection. The specific process is as follows:
(1) Click Detect Model: the Faster-RCNN model is loaded.
(2) Click Load Image: select the image to be loaded to load it. A video can also be loaded and played in a figure window of the GUI interface.
(3) Click Detect Image: the algorithm performs the detection task on the currently loaded image, with the program automatically calling the pre-loaded Faster-RCNN flame detection model. The detection result gives the coordinates of the flame in the image and the rectangular selection box they enclose, together with the probability value of the detected flame. In the GUI, a video is first decomposed into frames, each frame is then detected, and the detection results are displayed synchronously.
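The frame-by-frame detection loop with the 0.8 alarm threshold described above can be sketched as follows (illustrative Python, not the patent's Matlab code; `detect_flame` is a hypothetical stand-in for the Faster-RCNN detector):

```python
ALARM_THRESHOLD = 0.8  # flame score above which the alarm is triggered

def detect_flame(frame):
    # hypothetical stand-in for the Faster-RCNN detector; returns (boxes, scores)
    return [(10, 10, 50, 50)], [0.93]

def process_video(frames):
    """Frame-by-frame detection loop as described for the GUI."""
    alarms = []
    for idx, frame in enumerate(frames):
        boxes, scores = detect_flame(frame)
        for box, score in zip(boxes, scores):
            if score > ALARM_THRESHOLD:
                alarms.append((idx, box, score))  # would draw the box and raise the alarm
    return alarms

alarms = process_video(["frame0", "frame1"])
print(len(alarms))  # 2
```

With a real detector, frames whose best score stays at or below 0.8 would simply pass through without triggering the alarm.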

Claims (10)

1.一种基于卷积神经网络的火焰识别和监测的优化方法,其特征在于,包括:1. an optimization method based on the flame identification of convolutional neural network and monitoring, is characterized in that, comprises: a)优化卷积神经网络性能;a) Optimize the performance of convolutional neural network; b)通过增加数据多样性和数据增强的方式,构建火焰图像样本库;b) Build a flame image sample library by increasing data diversity and data enhancement; c)通过优化卷积核数量和卷积核尺寸以及模型层数,设计火焰识别模型FlameNet;c) Design the flame recognition model FlameNet by optimizing the number of convolution kernels, the size of the convolution kernels and the number of model layers; d)基于Faster-RCNN算法,设计火焰检测模型FRCNN-ZF模型。d) Based on Faster-RCNN algorithm, design the flame detection model FRCNN-ZF model. 2.根据专利要求1所述的优化卷积神经网络性能,其特征在于,包括:2. the optimized convolutional neural network performance according to patent claim 1, is characterized in that, comprises: 增加卷积核数量可以加强卷积神经网络对于图像的提取能力,进一步提升图像识别的准确度,但是训练收敛的时间会变长;Increasing the number of convolution kernels can enhance the ability of the convolutional neural network to extract images and further improve the accuracy of image recognition, but the training convergence time will be longer; 通过卷积-卷积-池化的结构来提取图像样本的特征,在卷积层后面设置一个或多个相同的卷积层构成多组卷积结构,采用多组卷积结构在一定程度上提升测试性能;The features of the image samples are extracted through the convolution-convolution-pooling structure, and one or more identical convolutional layers are set behind the convolutional layers to form multiple groups of convolutional structures. Improve test performance; 使用多个小尺寸卷积核替代大尺寸卷积核,既实现了原有的卷积运算,又大大降低了模型的参数量,替代前后卷积核的参数量。Using multiple small-sized convolution kernels to replace large-sized convolution kernels not only realizes the original convolution operation, but also greatly reduces the parameter amount of the model, replacing the parameter amount of the front and rear convolution kernels. 3.根据专利要求1所述构建火焰图像样本库,其特征在于,包括:3. 
The construction of the flame image sample library according to claim 1, characterized in that it comprises:
downloading part of the image samples from the large-scale visual image database ImageNet (http://image-net.org/);
extracting, frame by frame, the videos recorded in the applicant's own experiments;
increasing data diversity, which effectively improves test accuracy;
applying data augmentation, such as image flipping, cropping, contrast changes and added noise, which improves test accuracy but slows convergence.

4. A flame recognition model FlameNet, characterized in that it comprises:
layer 1, the input layer of the model, which must be an RGB image of size 64×64;
layer 2, the multiple convolution layer Conv1, which contains two small convolutional layers Conv1-1 and Conv1-2, each with 3×3 kernels, 32 kernels and stride 1; the output of each small convolutional layer passes through the ReLU activation function before entering the next stage;
layer 3, a pooling layer using max pooling with a 2×2 region and stride 2;
layer 4, the multiple convolution layer Conv2, which contains two small convolutional layers Conv2-1 and Conv2-2, each with kernel size 3×3;
the number of convolution kernels is 64 and the stride is 1; the output of each small convolutional layer passes through the ReLU activation function before entering the next stage;
layer 5, a pooling layer using max pooling with a 2×2 region and stride 2;
layer 6, the fully connected layer of the model, containing 500 hidden neurons;
layer 7, the output layer of the model, which uses a Softmax classifier to judge whether the input picture is a flame or a background image.

5. The flame recognition model FlameNet according to claim 4, characterized in that:
the network layers are connected in sequence, and the consecutive convolution and pooling structure extracts more effective information from the flame sample library;
two consecutive 3×3 convolutional layers are equivalent, in terms of receptive field, to a single 5×5 convolutional layer, while greatly reducing the number of network parameters;
the ReLU activation function strengthens the model's expression of nonlinearity, which helps enhance the abstraction ability of the local model;
the addition of a Dropout layer reduces the computational load of the network and effectively controls overfitting.

6.
A flame detection model FRCNN-ZF, characterized in that:
the flame detection model is designed on the basis of Ren Shaoqing's Faster-RCNN object detection algorithm, with the convolution and pooling stages of the ZFNet model serving as the shared convolutional layers, namely Layer1 to Layer5;
adjustments are made on the basis of the AlexNet model: the kernel of the first convolutional layer is changed from 11×11 to 7×7 and the stride from 4 to 2;
in the design of the RPN network, the anchor sliding-window sizes are modified as shown in Table 1, in combination with the two flame sizes, 64×64 and 224×224, present in the sample library images.
Table 1: Optimized anchor sliding-window sizes
(Table 1 appears in the original publication only as image FDA0002969679790000031; its contents are not reproduced in the text.)
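Since Table 1 survives only as an image, the anchor set of claim 6 can only be sketched under conventional Faster-RCNN assumptions. The sketch below takes the two base sizes named in the claim (64 and 224) and pairs them with three aspect ratios; the ratios 1:2, 1:1 and 2:1 are an assumption, not a statement of the table's actual contents:

```python
from itertools import product
import math

def make_anchors(base_sizes, aspect_ratios):
    """Generate (width, height) anchor windows in the conventional
    Faster-RCNN way: every base size at every aspect ratio, keeping
    each anchor's area equal to base_size ** 2."""
    anchors = []
    for size, ratio in product(base_sizes, aspect_ratios):
        w = size * math.sqrt(ratio)
        h = size / math.sqrt(ratio)
        anchors.append((round(w), round(h)))
    return anchors

# Base sizes matched to the 64x64 and 224x224 flames in the sample
# library (claim 6); the three aspect ratios are assumed.
anchors = make_anchors(base_sizes=(64, 224), aspect_ratios=(0.5, 1.0, 2.0))
for w, h in anchors:
    print(f"{w} x {h}")
```

With two base sizes and three ratios this yields six anchors per sliding-window position, mirroring the stock Faster-RCNN design of scales × ratios.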
7. The flame detection model FRCNN-ZF according to claim 6, characterized in that training shows the following properties:
the P-R curve shows that its recall and precision are relatively high;
analysis of the monitoring images shows that both the detection rate and the accuracy for flames are good;
analysis of the monitoring images shows that, with respect to the annotated flame region, the model has a certain generalization ability and strong detection capability, with some degree of drift when the flame color is light and the distance is large;
analysis of the monitoring images shows a certain resistance to interference.

8. A flame detection system GUI, characterized in that the flame detection system is designed with the GUI facilities of Matlab, loading the Faster-RCNN model and the flame video in turn.

9. The flame detection system GUI according to claim 8, characterized in that the interface is divided into an image area and an operation area; the image area presents the original image and the detected image, while the operation area carries out a series of operations such as loading the detection model, loading images and videos, and running detection.

10. The GUI interface according to claim 8, characterized in that the detection functions of the interface and part of their code comprise:
(1) loading the detection model, which loads the Faster-RCNN model; the code of this function is as follows:
(The code listing appears in the original publication only as image FDA0002969679790000032 and is not reproduced in the text.)
(2) loading an image; the code of this function is as follows:
(The code listing appears in the original publication only as image FDA0002969679790000041 and is not reproduced in the text.)
(3) image detection; the code of this function is as follows:
(The code listing appears in the original publication only as image FDA0002969679790000042 and is not reproduced in the text.)
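As a sanity check on the FlameNet architecture of claims 4 and 5, the 64×64 input can be propagated through the stated layers to confirm the feature-map sizes, along with claim 5's parameter comparison of two stacked 3×3 convolutions against one 5×5 convolution. The use of 'same' padding for the 3×3 convolutions is an assumption, since the claims do not specify padding:

```python
def conv_out(size, kernel, stride=1, pad="same"):
    """Spatial output size of a square convolution or pooling window."""
    p = (kernel - 1) // 2 if pad == "same" else 0
    return (size + 2 * p - kernel) // stride + 1

# Propagate a 64x64 input through the FlameNet layers of claim 4,
# assuming 'same' padding for the 3x3 convolutions.
s = 64
s = conv_out(s, 3)                          # Conv1-1: 3x3, stride 1
s = conv_out(s, 3)                          # Conv1-2: 3x3, stride 1
s = conv_out(s, 2, stride=2, pad="valid")   # Pool1: 2x2 max pool, stride 2
s = conv_out(s, 3)                          # Conv2-1: 3x3, stride 1
s = conv_out(s, 3)                          # Conv2-2: 3x3, stride 1
s = conv_out(s, 2, stride=2, pad="valid")   # Pool2: 2x2 max pool, stride 2
print(f"feature map entering the fully connected layer: {s}x{s}")  # 16x16

# Claim 5: two stacked 3x3 convolutions cover the receptive field of one
# 5x5 convolution with fewer weights per input/output channel pair.
print("weights, two 3x3 :", 2 * 3 * 3)   # 18
print("weights, one 5x5 :", 5 * 5)       # 25
```

Under the stated padding assumption, the 500-neuron fully connected layer of claim 4 would therefore receive a 16×16 map per channel, and the small-kernel substitution of claim 2 saves 7 weights per channel pair.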
CN202110260288.6A 2021-03-10 2021-03-10 Flame identification and detection method based on improved convolutional neural network Pending CN115082817A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110260288.6A CN115082817A (en) 2021-03-10 2021-03-10 Flame identification and detection method based on improved convolutional neural network

Publications (1)

Publication Number Publication Date
CN115082817A true CN115082817A (en) 2022-09-20

Family

ID=83241196

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110260288.6A Pending CN115082817A (en) 2021-03-10 2021-03-10 Flame identification and detection method based on improved convolutional neural network

Country Status (1)

Country Link
CN (1) CN115082817A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106934404A (en) * 2017-03-10 2017-07-07 深圳市瀚晖威视科技有限公司 A kind of image flame identifying system based on CNN convolutional neural networks
CN107480730A (en) * 2017-09-05 2017-12-15 广州供电局有限公司 Power equipment identification model construction method and system, the recognition methods of power equipment
CN108171112A (en) * 2017-12-01 2018-06-15 西安电子科技大学 Vehicle identification and tracking based on convolutional neural networks
CN109285139A (en) * 2018-07-23 2019-01-29 同济大学 A deep learning-based X-ray imaging weld inspection method
CN109815904A (en) * 2019-01-25 2019-05-28 吉林大学 Fire identification method based on convolutional neural network
CN110378421A (en) * 2019-07-19 2019-10-25 西安科技大学 A kind of coal-mine fire recognition methods based on convolutional neural networks
CN110751089A (en) * 2019-10-18 2020-02-04 南京林业大学 A flame target detection method based on digital images and convolutional features

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116977634A (en) * 2023-07-17 2023-10-31 应急管理部沈阳消防研究所 Fire smoke detection method based on lidar point cloud background subtraction
CN116977634B (en) * 2023-07-17 2024-01-23 应急管理部沈阳消防研究所 Fire smoke detection method based on lidar point cloud background subtraction

Similar Documents

Publication Publication Date Title
CN107609470B (en) Method for detecting early smoke of field fire by video
CN114202646B (en) A method and system for infrared image smoking detection based on deep learning
CN113807276A (en) Smoking behavior identification method based on optimized YOLOv4 model
CN112699801B (en) Fire identification method and system based on video image
CN113850242A (en) Storage abnormal target detection method and system based on deep learning algorithm
Wang et al. A deep learning-based experiment on forest wildfire detection in machine vision course
CN110427834A (en) A kind of Activity recognition system and method based on skeleton data
CN111814638B (en) Security scene flame detection method based on deep learning
CN109886153B (en) A real-time face detection method based on deep convolutional neural network
CN118609303B (en) Fire prevention early warning method and system for mountain photovoltaic power stations based on visual analysis
Zheng et al. A lightweight algorithm capable of accurately identifying forest fires from UAV remote sensing imagery
Xie et al. Early indoor occluded fire detection based on firelight reflection characteristics
CN115719463A (en) Smoke and fire detection method based on super-resolution reconstruction and adaptive extrusion excitation
CN110263654A (en) A kind of flame detecting method, device and embedded device
CN118397402B (en) Training method and system for lightweight small-target forest fire detection model
CN111539325A (en) Forest fire detection method based on deep learning
CN115049986A (en) Flame detection method and system based on improved YOLOv4
CN117789105A (en) Complex scene fire detection method based on multi-feature extraction
CN115775365A (en) Controlled smoke and fire interference identification method and device for historical relic and ancient building and computing equipment
CN118797563A (en) A fire prediction method integrating multi-source time series data suitable for edge computing
Peng et al. YOLO-HF: Early detection of home fires using YOLO
CN115082817A (en) Flame identification and detection method based on improved convolutional neural network
CN117037054A (en) Factory smoke detection method based on Gaussian smoke plume model and improved YOLOv4
CN115294647A (en) Smoking behavior detection method and system based on deep neural network
Lin et al. Smoking behavior detection based on hand trajectory tracking and mouth saturation changes

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20220920