CN118429817A - Intelligent analysis method of substation site map change area based on image comparison - Google Patents
- Publication number: CN118429817A (application CN202410888225.9A)
- Authority
- CN
- China
- Legal status: Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/176—Urban or other man-made structures
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/757—Matching configurations of points or features
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y04—INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
- Y04S—SYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
- Y04S10/00—Systems supporting electrical power generation, transmission or distribution
- Y04S10/50—Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications
Description
Technical Field
The present invention relates to the field of intelligent substation inspection, and in particular to a method for intelligent analysis of change regions in substation checkpoint images (point maps) based on image comparison.
Background
During intelligent substation inspection, long operation-and-maintenance intervals, power outages, and similar factors mean that, when an inspection task is executed, the equipment in the current image of a checkpoint may differ from the reference image of that checkpoint in the reference library, owing to changes in equipment position, foreign-object intrusion, and so on. Such changes are not typical substation defects, so existing object-detection and recognition algorithms cannot identify them, and failure to detect them promptly degrades actual operation and maintenance. The granted patent CN113724247B discloses an intelligent substation inspection method based on image discrimination, which combines heat-map generation from computer vision with semantic segmentation from deep learning to analyze live images and decide whether an anomaly is present. However, that approach requires training data to be prepared in advance and a lengthy training phase under a deep-learning framework before it can be deployed, and it consumes considerable GPU and host memory; moreover, because it relies on a single direct image difference followed only by simple binarization and denoising, its discrimination accuracy can be relatively low.
Summary of the Invention
To overcome these shortcomings of the prior art, the present invention provides a method for intelligent analysis of change regions in substation point maps based on image comparison. Through a series of change-region analysis algorithms, interference from background factors outside the equipment is excluded, and the regions in which a point map differs from its reference image are identified accurately.
To this end, the disclosed method comprises: determining a number of inspection checkpoints within the inspection area, assigning each checkpoint a code, and installing a camera at each checkpoint; building a reference image library for the checkpoints in the inspection area; when an inspection task is executed, capturing a point map of each checkpoint with the camera; and retrieving the reference image for the checkpoint code from the library and comparing it with the corresponding point map. The method specifically comprises the following steps:
S1: image preprocessing, comprising image downsampling, key-point screening, and point-map offset correction;
S2: screening by color features, comprising pairwise differencing of the RGB color channels, max-mean differencing of the RGB color channels, and a difference-map selection algorithm;
S3: change-region analysis, comprising erosion, removal of seasonal background changes, dilation, removal of light-induced shadows, connected-component filtering, and merging and restoring the change boxes;
S4: drawing a box around the change region on any point map in which a change occurred, and raising an alarm.
Further, the image downsampling comprises scaling the width and height of the reference image to 1/4 of their original values to obtain reference image img1, and likewise scaling the width and height of the point map to 1/4 to obtain point map img2.
Further, the key-point screening proceeds as follows:
S11: detect feature points in reference image img1 and point map img2 with a SIFT-based feature-point matching algorithm, and compute a feature descriptor for each feature point;
S12: identify the nearest-neighbor relationships between the descriptors of img1 and img2 by brute-force matching; to ensure matching accuracy, a K-nearest-neighbor strategy is introduced to determine the best and second-best matching feature points;
S13: set a distance-ratio filtering threshold and use the ratio of the distances of the best and second-best matches to the corresponding feature point to complete the screening of key-point pairs.
Further, the point-map offset correction comprises: if more than 4 key-point pairs survive the screening, computing the optimal homography matrix between those pairs and using it to re-align the point map in the X and Y directions, producing a registered image img_s spatially aligned with reference image img1; if fewer than 4 feature-point pairs remain, using point map img2 directly as the registered image img_s.
Further, the screening by color features comprises the following steps:
S21: directly subtract the registered image img_s from the reference image img1 to obtain the difference image img_diff;
S22: the pairwise differencing of the RGB channels comprises separating the R, G, and B channels of img_diff, computing the absolute differences diff_R_G, diff_G_B, and diff_B_R between the red-green, green-blue, and blue-red channel pairs, and creating a mask for each difference; threshold range one is set to 35 to 255, pixel values within this range are retained in each mask, and the three masks are merged to obtain the channel-difference binary map diff_RGB;
S23: the max-mean differencing of the RGB channels comprises: separating the R, G, and B channels of img_diff; first taking the channel means R_mean, G_mean, and B_mean and computing the absolute differences R-R_mean, G-G_mean, and B-B_mean of each channel from its mean; then taking the channel maxima R_max, G_max, and B_max and subtracting each mean from the corresponding maximum, with a scale factor of 0.4, to obtain the minimum pixel-retention threshold rgb_scale; finally, creating a mask for each of R-R_mean, G-G_mean, and B-B_mean with threshold range two set to rgb_scale to 255, retaining the pixels within this range in each mask, and merging the three masks to obtain the max-mean difference binary map thresh_RGB.
Further, a difference-map selection algorithm is designed according to the color characteristics of the equipment in the scene image to select the difference binary map img_new. The algorithm computes the number of white pixels in the channel-difference binary map diff_RGB and in the max-mean difference binary map thresh_RGB (denoted a1 and a2, respectively), as well as the proportion of white pixels relative to the whole image (denoted b1 and b2, respectively). The selection logic is as follows:
- If a1 = 0 and b2 > 0.5, select diff_RGB as the difference binary map img_new;
- If a2 = 0 and b1 > 0.5, select thresh_RGB as the difference binary map img_new;
- If a1 = 0 and a2 > 0, select thresh_RGB as the difference binary map img_new;
- If a2 = 0 and a1 > 0, select diff_RGB as the difference binary map img_new.
Further, the removal of seasonal background changes comprises: finding all point-like connected components in the difference binary map img_new and counting them; if there are more than 30 of them and each has an area below 200 pixels, filling them in as background to obtain the difference binary map img_new1; and then applying morphological erosion to img_new1 to obtain the difference binary map img_erode.
Further, the removal of light-induced shadows comprises: counting the connected components in the difference binary maps img_erode and img_new1 and, for each map, computing the ratio of the total number of pixels in all connected components to the total area of their bounding rectangles; if the ratio is below 0.4, img_new1 has more than 20 connected components, and img_erode has fewer than 3, filling the connected components of img_erode in as background to obtain the difference binary map img_new2; and then applying morphological dilation to img_new2 to obtain the difference binary map img_dilate.
Further, the connected-component filtering comprises filling in as background every connected component whose area is below 200 pixels, whose fill ratio is below 0.2, or which is a line-segment-like component with aspect ratio above 5, to obtain the difference binary map img_new3.
Further, the merging and restoring of change boxes comprises merging the bounding rectangles of all connected components remaining in img_new3 into a single change box by taking the four extreme coordinates over all bounding rectangles, x_min, x_max, y_min, and y_max, which yields the output change-box coordinates ((x_min, y_min), (x_max, y_max)); the output change box is then scaled up by a factor of 4 to change-box coordinates in the 1920×1080 image.
The present invention has at least the following beneficial effects:
By screening according to color features and applying a series of change-region analysis algorithms, the invention requires neither a deep-learning framework nor lengthy model training before deployment; it can be deployed directly and accurately analyzes the change regions of substation checkpoint images. The intelligent change-region recognition algorithm is fast in practical deployment, consumes few hardware resources such as GPU and host memory, and reduces cost. Based on the image characteristics of substation equipment, the invention screens change regions with two color-channel map algorithms and adds elaborate change-analysis algorithms for different situations, improving the accuracy of change-region recognition.
Brief Description of the Drawings
FIG. 1 is a flow chart of the technical solution of the present invention.
Detailed Description
The principles and features of the present invention are described below with reference to the accompanying drawing; the examples given serve only to explain the invention and do not limit its scope.
The method for intelligent analysis of change regions in substation point maps based on image comparison comprises: determining a number of inspection checkpoints within the inspection area, assigning each checkpoint a code, and installing a camera at each checkpoint; capturing a reference image of each checkpoint under normal conditions and building a reference image library for the checkpoints in the inspection area; when an inspection task is executed, capturing a point map of each checkpoint with the camera; and retrieving the reference image for the checkpoint code from the library and comparing it with the corresponding point map. The method specifically comprises the following steps:
S1: image preprocessing, comprising image downsampling, key-point screening, and point-map offset correction.
Image downsampling specifically comprises the following step:
S10: the downsampling algorithm reduces a 1920×1080 image by a factor of 4, i.e., the width and height of the reference image are scaled to 1/4 of the original to obtain reference image img1, and the width and height of the point map are scaled to 1/4 to obtain point map img2. This speeds up image analysis and prevents checkpoint tasks from piling up when analysis takes too long.
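A minimal sketch of step S10, using nearest-neighbor decimation in numpy (the function name `downsample_4x` is illustrative; in practice a resize with interpolation, e.g. `cv2.resize`, would give smoother results):

```python
import numpy as np

def downsample_4x(img):
    """Keep every 4th pixel along each axis, so a 1920x1080 frame
    becomes 480x270 (a 4x reduction of width and height)."""
    return img[::4, ::4]

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
small = downsample_4x(frame)
```
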
Key-point screening is based on feature-point matching and specifically comprises the following steps:
S11: detect feature points in reference image img1 and point map img2 with a SIFT-based feature-point matching algorithm, and compute a feature descriptor for each feature point.
Feature points are salient points in an image that remain stable under certain transformations (such as rotation, scaling, and partial occlusion) and can therefore be used to match objects or scenes across images. A feature descriptor is a quantitative description of the local region around a feature point; it captures the region's specific properties and uniquely identifies the point. Using feature points and descriptors together, matching feature-point pairs can be found across images.
S12: identify the nearest-neighbor relationships between the descriptors of img1 and img2 by brute-force matching; to ensure matching accuracy, a K-nearest-neighbor strategy is introduced to determine the best and second-best matching feature points.
The best match is found by nearest-neighbor search: for each feature point in img1, the feature point in img2 whose descriptor is closest is taken as the best match. Likewise, second-nearest-neighbor search finds, for each feature point in img1, the feature point in img2 whose descriptor is second closest, taken as the second-best match.
S13: set a distance-ratio filtering threshold and use the distance ratios of the best and second-best matches to the corresponding feature point to complete the screening of key-point pairs between point map img2 and reference image img1.
Specifically, when the ratio of the best-match distance to the second-best-match distance is below 0.65, the feature points are considered highly similar and relevant, so they are retained as key points, completing the screening of key-point pairs.
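Steps S12 and S13 can be sketched on raw descriptor arrays with plain numpy; the function name `ratio_test_pairs` is illustrative, and in practice `cv2.SIFT_create` plus `cv2.BFMatcher.knnMatch` would supply the descriptors and 2-nearest-neighbor matches:

```python
import numpy as np

def ratio_test_pairs(desc1, desc2, ratio=0.65):
    """Brute-force 2-nearest-neighbor matching with the 0.65 distance-ratio
    filter: keep (i, j) only when desc1[i]'s best match desc2[j] is much
    closer than its second-best match."""
    pairs = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)   # distance to every img2 descriptor
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:     # confident, unambiguous match
            pairs.append((i, int(best)))
    return pairs
```
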
The point-map offset correction specifically comprises the following step:
S14: if more than 4 key-point pairs survive the screening, compute the optimal homography matrix between them and use it to re-align the point map in the X and Y directions, producing a registered image img_s spatially aligned with reference image img1; if fewer than 4 feature-point pairs remain, use point map img2 directly as the registered image img_s.
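The branch in step S14 can be sketched as follows. The patent does not name an estimator, so a plain Direct Linear Transform (DLT) via SVD stands in here; the function names are illustrative, and in practice `cv2.findHomography` followed by `cv2.warpPerspective` would produce the registered image img_s:

```python
import numpy as np

def estimate_homography(src, dst):
    """DLT estimate of the 3x3 homography mapping src points onto dst
    points (needs at least 4 non-degenerate correspondences)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.array(rows, dtype=float))
    h = vt[-1].reshape(3, 3)
    return h / h[2, 2]

def align_point_map(pairs):
    """pairs: list of ((x, y) in img2, (x, y) in img1) correspondences.
    With more than 4 pairs, return the homography that would warp img2
    onto img1 (the warp itself is omitted); otherwise return None,
    meaning img2 is used directly as img_s."""
    if len(pairs) > 4:
        src = [p2 for p2, _ in pairs]
        dst = [p1 for _, p1 in pairs]
        return estimate_homography(src, dst)
    return None
```
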
S2: screening by color features, comprising pairwise differencing of the RGB channels and max-mean differencing of the RGB channels. In this step:
S21: directly subtract the registered image img_s from the reference image img1 to obtain the difference image img_diff.
The pairwise differencing of the RGB channels specifically comprises the following step:
S22: separate the R, G, and B channels of the difference image img_diff, compute the absolute differences diff_R_G, diff_G_B, and diff_B_R between the red-green, green-blue, and blue-red channel pairs, and create a mask for each difference; threshold range one is set to 35 to 255, pixel values within this range are retained in each mask, and the three masks are merged to obtain the channel-difference binary map diff_RGB.
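Step S22 can be sketched in numpy as follows (the function name is illustrative; the 35-to-255 range and the mask merge follow the text):

```python
import numpy as np

def channel_difference_map(img_diff, lo=35, hi=255):
    """Pairwise absolute differences between the R, G, B channels of the
    difference image, each masked to [lo, hi], then merged into one
    binary map where white (255) marks retained pixels."""
    r = img_diff[..., 0].astype(np.int16)
    g = img_diff[..., 1].astype(np.int16)
    b = img_diff[..., 2].astype(np.int16)
    diff_r_g = np.abs(r - g)
    diff_g_b = np.abs(g - b)
    diff_b_r = np.abs(b - r)
    masks = [(d >= lo) & (d <= hi) for d in (diff_r_g, diff_g_b, diff_b_r)]
    return np.where(masks[0] | masks[1] | masks[2], 255, 0).astype(np.uint8)
```
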
The max-mean differencing of the RGB channels specifically comprises the following step:
S23: separate the R, G, and B channels of the difference image img_diff; first take the channel means R_mean, G_mean, and B_mean, then compute the absolute differences R-R_mean, G-G_mean, and B-B_mean of each channel from its mean; then take the channel maxima R_max, G_max, and B_max and subtract each mean from the corresponding maximum, applying a scale factor of 0.4, to obtain the minimum pixel-retention threshold rgb_scale; finally, create a mask for each of R-R_mean, G-G_mean, and B-B_mean with threshold range two set to rgb_scale to 255, retain the pixels within this range in each mask, and merge the three masks to obtain the max-mean difference binary map thresh_RGB.
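A sketch of step S23. The text does not spell out how the three per-channel max-mean gaps combine into the single threshold rgb_scale, so taking the largest gap is an assumption made here (both the function name and that reduction are illustrative):

```python
import numpy as np

def max_mean_difference_map(img_diff, scale=0.4):
    """Threshold each channel's |value - channel mean| at
    rgb_scale = scale * (channel max - channel mean), then merge the
    three masks into one binary map (white = retained)."""
    chans = [img_diff[..., i].astype(float) for i in range(3)]
    gaps = [c.max() - c.mean() for c in chans]   # per-channel max - mean
    rgb_scale = scale * max(gaps)                # assumed cross-channel reduction
    merged = np.zeros(img_diff.shape[:2], dtype=bool)
    for c in chans:
        dev = np.abs(c - c.mean())               # deviation from channel mean
        merged |= (dev >= rgb_scale) & (dev <= 255)
    return np.where(merged, 255, 0).astype(np.uint8)
```
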
Finally, a difference-map selection algorithm, designed around the color characteristics of the equipment in the scene image, selects the difference binary map img_new. It specifically comprises the following step:
S24: compute the number of white pixels in the channel-difference binary map diff_RGB and in the max-mean difference binary map thresh_RGB (denoted a1 and a2, respectively), as well as the proportion of white pixels relative to the whole image (denoted b1 and b2, respectively), and retain one suitable difference binary map img_new for subsequent analysis. The selection logic is as follows:
- If a1 = 0 and b2 > 0.5, select the channel-difference binary map diff_RGB as the difference binary map img_new;
- If a2 = 0 and b1 > 0.5, select the max-mean difference binary map thresh_RGB as the difference binary map img_new;
- If a1 = 0 and a2 > 0, select the max-mean difference binary map thresh_RGB as the difference binary map img_new;
- If a2 = 0 and a1 > 0, select the channel-difference binary map diff_RGB as the difference binary map img_new.
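The selection rules above can be written directly as a small function; rule order is preserved from the text, and the fall-through return of None is an assumption, since the text does not say what happens when no rule fires:

```python
def select_difference_map(diff_rgb_stats, thresh_rgb_stats):
    """Each argument is a (white_pixel_count, white_pixel_ratio) pair for
    the corresponding binary map; returns which map to keep as img_new."""
    a1, b1 = diff_rgb_stats
    a2, b2 = thresh_rgb_stats
    if a1 == 0 and b2 > 0.5:
        return "diff_RGB"
    if a2 == 0 and b1 > 0.5:
        return "thresh_RGB"
    if a1 == 0 and a2 > 0:
        return "thresh_RGB"
    if a2 == 0 and a1 > 0:
        return "diff_RGB"
    return None  # no rule fired; unspecified in the source
```
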
S3: change-region analysis. Because the equipment occupies a small portion of the point map, more than half of the image is background, yet changes in the natural background do not count as equipment changes and must not raise an alarm. The selected difference binary map img_new is therefore subjected to change-region analysis, comprising erosion, removal of seasonal background changes, dilation, removal of light-induced shadows, connected-component filtering, and merging and restoring of the change boxes.
Removal of seasonal background changes handles the case where img_new contains many point-like change components caused by seasonal background variation. For example, grass around the equipment turning from yellow to green with the seasons, or snow covering the ground in winter, produces densely distributed point-like connected components in img_new that interfere with the analysis of equipment changes. It specifically comprises the following step:
S31: find all point-like connected components in the difference binary map img_new and count them; if there are more than 30 of them and each has an area below 200 pixels, fill them in as background and count them as unchanged regions, thereby excluding interference from non-equipment point-like components, obtaining the difference binary map img_new1; then apply morphological erosion to img_new1 to obtain the difference binary map img_erode. Erosion is a well-known technique in the art and is not described further here.
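A self-contained sketch of the filling rule in step S31. A simple BFS labeler stands in for the connected-component step (in practice `cv2.connectedComponentsWithStats` would be used); the function names and the 4-connectivity choice are assumptions:

```python
import numpy as np
from collections import deque

def label_components(binary):
    """4-connected component labeling on a boolean array; returns a list
    of components, each a list of (row, col) pixel coordinates."""
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=int)
    comps, nxt = [], 0
    for i in range(h):
        for j in range(w):
            if binary[i, j] and labels[i, j] == 0:
                nxt += 1
                labels[i, j] = nxt
                q, pix = deque([(i, j)]), []
                while q:
                    y, x = q.popleft()
                    pix.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx2 = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx2 < w and binary[ny, nx2] and labels[ny, nx2] == 0:
                            labels[ny, nx2] = nxt
                            q.append((ny, nx2))
                comps.append(pix)
    return comps

def remove_seasonal_noise(binary, max_area=200, min_count=30):
    """Step S31 rule: only when MORE than min_count components exist and
    EVERY one is smaller than max_area pixels are they all filled in as
    background (treated as unchanged)."""
    comps = label_components(binary)
    out = binary.copy()
    if len(comps) > min_count and all(len(c) < max_area for c in comps):
        for c in comps:
            for y, x in c:
                out[y, x] = False
    return out
```
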
Removal of light-induced shadows handles the case where img_new1 contains differences caused only by shadows shifting with the light while the equipment itself has not changed. For example, the changing position of the sun between morning and afternoon moves the equipment's shadow, producing many elongated horizontal and vertical connected components in img_new1. It specifically comprises the following step:
S32: count the connected components in the difference binary maps img_erode and img_new1 and, for each map, compute the ratio of the total number of pixels in all connected components to the total area of their bounding rectangles; if the ratio is below 0.4, img_new1 has more than 20 connected components, and img_erode has fewer than 3, the change regions contain too few pixels and consist mainly of thin lines, so the connected components of img_erode are filled in as background and counted as unchanged, obtaining the difference binary map img_new2; morphological dilation is then applied to img_new2 to obtain the difference binary map img_dilate. Dilation is a well-known technique in the art and is not described further here.
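The decision in step S32 can be sketched on precomputed per-component statistics. The text is ambiguous about whether the 0.4 fill-ratio test applies to one map or both; requiring it of both maps is the assumption made here, and the function name is illustrative:

```python
def shadow_cull_decision(comps_new1, comps_erode, max_ratio=0.4):
    """comps_* are lists of (pixel_count, bbox_area) pairs, one per
    connected component. Returns True when the components of img_erode
    should be filled in as background (shadow-only change)."""
    def fill_ratio(comps):
        total_pix = sum(p for p, _ in comps)
        total_box = sum(b for _, b in comps)
        return total_pix / total_box if total_box else 1.0
    return (fill_ratio(comps_new1) < max_ratio      # thin-line dominated
            and fill_ratio(comps_erode) < max_ratio  # assumed: both maps tested
            and len(comps_new1) > 20                 # many fragments survive in img_new1
            and len(comps_erode) < 3)                # almost nothing survives erosion
```
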
Connected-domain filtering is not aimed at any single type of interference; it removes every domain whose shape is inconsistent with substation equipment. The specific steps are as follows:
S33: Fill with the background value, and treat as unchanged, any connected domain whose area is below 200 pixels, whose duty ratio is below 0.2, or whose aspect ratio exceeds 5 (line-segment-like domains), yielding the difference binary image img_new3. A duty ratio below 0.2 means that the domain's pixels occupy less than 0.2 of its bounding rectangle; an aspect ratio above 5 means width/height > 5 or height/width > 5, since none of the equipment commonly monitored in substations has such an extreme aspect ratio.
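The three shape tests of S33 can be expressed as a single keep/drop predicate. This is a sketch with the thresholds from the text as defaults, assuming domains are lists of `(y, x)` pixels; the function name is illustrative:

```python
def keep_component(comp, min_area=200, min_duty=0.2, max_aspect=5.0):
    """S33 shape filter: drop domains that are too small, too sparse
    inside their bounding rectangle, or too elongated to be equipment."""
    ys = [p[0] for p in comp]
    xs = [p[1] for p in comp]
    w = max(xs) - min(xs) + 1
    h = max(ys) - min(ys) + 1
    area = len(comp)                  # pixel count of the domain
    duty = area / (w * h)             # share of the bounding rectangle filled
    aspect = max(w / h, h / w)        # covers both width/height and height/width
    return area >= min_area and duty >= min_duty and aspect <= max_aspect
```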
The change-frame merging and restoration specifically includes the following steps:
S34: Merge the bounding rectangles of all connected domains remaining in the difference binary image img_new3 into a single output change frame: take the four extreme values of all bounding-rectangle coordinates, xmin, xmax, ymin and ymax, giving the output change-frame coordinates ((xmin, ymin), (xmax, ymax)). Scale the output change frame up by a factor of 4 to obtain the change-frame coordinates in the 1920×1080-pixel image.
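S34 reduces to taking coordinate extrema over the remaining bounding rectangles and rescaling. A sketch assuming boxes in `((xmin, ymin), (xmax, ymax))` form and a 4× downscale between the processed image and the original 1920×1080 frame (so a 480×270 working resolution, which the text implies but does not state):

```python
def merge_and_scale(boxes, scale=4):
    """S34: merge per-domain bounding rectangles into one output change
    frame and scale its coordinates back up to the full-resolution frame."""
    xmin = min(b[0][0] for b in boxes)
    ymin = min(b[0][1] for b in boxes)
    xmax = max(b[1][0] for b in boxes)
    ymax = max(b[1][1] for b in boxes)
    return ((xmin * scale, ymin * scale), (xmax * scale, ymax * scale))
```

For example, merging `((10, 5), (20, 15))` and `((30, 2), (40, 8))` yields `((10, 2), (40, 15))` before scaling.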
S4: Frame the change region on the preset-point bitmap in which the change occurred and issue an alarm.
The method of the technical solution of the present invention was tested; the results are shown in Table 1:
Table 1: Accuracy of the method of the present invention in different scenarios
The results above show that the present invention is sensitive to equipment-change regions and can exclude interference from natural background changes, identifying abnormal change regions in 1920×1080-pixel images with high accuracy. The experiments covered a variety of equipment scenarios, which also demonstrates that the invention generalizes well: adding a new scenario does not require lengthy model training.
For comparison with the experimental results of the present invention, a comparative test was also carried out; the metrics are shown in Table 2:
Table 2: Comparative test results of different methods
The comparative experiments show that a deep-learning change-detection algorithm, owing to a model obtained through lengthy training, has a slight advantage in accuracy but a marked disadvantage in speed relative to traditional algorithms, whereas the algorithm of the present invention performs well when accuracy and speed are considered together.
The above is only a preferred embodiment of the present invention and is not intended to limit it; any modification, equivalent substitution, improvement, etc. made within the spirit and principles of the present invention shall fall within its scope of protection.
Claims (10)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202410888225.9A CN118429817B (en) | 2024-07-04 | 2024-07-04 | Intelligent analysis method for substation bitmap change area based on image comparison |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN118429817A true CN118429817A (en) | 2024-08-02 |
| CN118429817B CN118429817B (en) | 2024-09-10 |
Family
ID=92326320
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202410888225.9A Active CN118429817B (en) | 2024-07-04 | 2024-07-04 | Intelligent analysis method for substation bitmap change area based on image comparison |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN118429817B (en) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN118796042A (en) * | 2024-09-12 | 2024-10-18 | 长沙云邮通信科技有限责任公司 | Base station digital visualization management system based on AI+VR |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20200333767A1 (en) * | 2018-02-17 | 2020-10-22 | Electro Industries/Gauge Tech | Devices, systems and methods for predicting future consumption values of load(s) in power distribution systems |
| CN113724247A (en) * | 2021-09-15 | 2021-11-30 | 国网河北省电力有限公司衡水供电分公司 | Intelligent substation inspection method based on image discrimination technology |
| CN115941930A (en) * | 2022-10-10 | 2023-04-07 | 白银银珠电力(集团)有限责任公司 | A video preset point calibration method |
| CN116189192A (en) * | 2023-04-24 | 2023-05-30 | 东方电子股份有限公司 | Intelligent reading identification method and system for pointer instrument |
| CN116363573A (en) * | 2023-01-31 | 2023-06-30 | 智洋创新科技股份有限公司 | Transformer substation equipment state anomaly identification method and system |
Non-Patent Citations (1)
| Title |
|---|
| ZHANG, Ke: "Research on Key Technologies and Applications of Intelligent Operation and Inspection of Power Grid", China Doctoral Dissertations Full-text Database, Engineering Science and Technology II, 15 April 2024 (2024-04-15), pages 042-130 * |
Also Published As
| Publication number | Publication date |
|---|---|
| CN118429817B (en) | 2024-09-10 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN106157323B (en) | A kind of insulator division and extracting method of dynamic division threshold value and block search combination | |
| CN108573222B (en) | Pedestrian image occlusion detection method based on cyclic confrontation generation network | |
| CN112507865B (en) | Smoke identification method and device | |
| CN110346699B (en) | Method and device for insulator discharge information extraction based on ultraviolet image processing technology | |
| CN114241310B (en) | Improved YOLO model-based intelligent identification method for piping dangerous case of dike | |
| CN104318266B (en) | A kind of image intelligent analyzes and processes method for early warning | |
| CN110807396B (en) | Face changing video tampering detection method and system based on illumination direction consistency | |
| CN116597323B (en) | High-temperature abnormality diagnosis algorithm for overhead high-voltage wire | |
| CN118429817B (en) | Intelligent analysis method for substation bitmap change area based on image comparison | |
| CN114445661A (en) | Embedded image identification method based on edge calculation | |
| CN111047614A (en) | Feature extraction-based method for extracting target corner of complex scene image | |
| CN112418226B (en) | Method and device for identifying opening and closing states of fisheyes | |
| CN115239646A (en) | Defect detection method, device, electronic device and storage medium for transmission line | |
| Zhu et al. | Towards automatic wild animal detection in low quality camera-trap images using two-channeled perceiving residual pyramid networks | |
| CN117593499A (en) | A fault identification method for electromechanical equipment in hydropower stations based on distributed inspection strategy | |
| CN113096103B (en) | An intelligent image perception method for flue gas in flare venting | |
| CN107392887A (en) | A kind of heterogeneous method for detecting change of remote sensing image based on the conversion of homogeneity pixel | |
| CN107016360A (en) | The object detection method that electricity substation is merged based on behavioral characteristics and region | |
| CN118801578B (en) | Monitoring method of intelligent component system for monitoring arc light of low-voltage switch in intelligent substation | |
| CN111080562B (en) | Substation suspender identification method based on enhanced image contrast | |
| CN118675093A (en) | Method for identifying abnormal river bank behaviors based on video understanding | |
| CN119299617A (en) | Image data analysis system based on algorithm analysis | |
| CN115761588A (en) | Defafake video detection method based on image source abnormality | |
| CN117391918A (en) | Camera watermark generation and WIoU verification score algorithm based on intelligent substation inspection | |
| CN114494931B (en) | A method and system for intelligent classification and processing of video image faults |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||