
CN114612815A - Improved ViBe indoor real-time foreground detection method and storage medium - Google Patents


Info

Publication number
CN114612815A
CN114612815A (application CN202210056284.0A)
Authority
CN
China
Prior art keywords
frame
grayscale
grayscale image
pixel
foreground detection
Prior art date
Legal status
Granted
Application number
CN202210056284.0A
Other languages
Chinese (zh)
Other versions
CN114612815B (en)
Inventor
张文韬
杨林权
张婷婷
Current Assignee
China University of Geosciences Wuhan
Original Assignee
China University of Geosciences Wuhan
Priority date
Filing date
Publication date
Application filed by China University of Geosciences (Wuhan)
Priority to CN202210056284.0A
Publication of CN114612815A
Application granted
Publication of CN114612815B
Status: Active
Anticipated expiration


Classifications

    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods


Abstract

The invention provides an improved ViBe indoor real-time foreground detection method and a storage medium. Building on the ViBe algorithm, it adds an inter-frame change-rate calculation and a background-update strategy that adjusts the random factor according to that rate, suppressing the "ghosting" and hole artifacts caused by ViBe's spatio-temporal propagation mechanism. Illumination detection based on the inter-frame change rate and a perceptual hash algorithm determines whether a sudden illumination change has occurred; if so, the background model is rebuilt. These measures effectively overcome the main weaknesses of the ViBe algorithm. Beneficial effects: without sacrificing the real-time performance of the original ViBe algorithm, the method effectively suppresses ghosting and holes, improves detection ability to a large extent, and strengthens robustness to illumination changes, making it better suited than ViBe to foreground detection in indoor environments.

Description

Improved ViBe indoor real-time foreground detection method and storage medium

Technical Field

The invention relates to the field of moving-target detection, and in particular to an improved ViBe indoor real-time foreground detection method and a storage medium.

Background

In recent years, video surveillance technology has been applied ever more widely in fields important to national and public safety, such as aerospace, as well as in many civilian settings. Installing video surveillance in unattended areas can effectively eliminate safety hazards, give early warnings, and prevent accidents. Current intelligent video surveillance algorithms are dominated by foreground detection (also called moving-target detection or foreground extraction), a technique for extracting foreground objects from an image sequence.

The ViBe algorithm is a foreground detection algorithm that builds a pixel-level background model and differences it against the current frame. Exploiting the fact that pixels in the same region tend to have similar values, it initializes the background from the first video frame alone. ViBe consists of three steps: background modeling, foreground detection, and background-model updating. In practice it faces several challenges: (1) common defects arise during detection, such as "ghosting" and holes caused by the spatio-temporal propagation mechanism used when updating the background model; (2) owing to the particularities of indoor spaces, everyday actions such as switching lights on and off, opening windows, or drawing curtains cause large global changes in indoor illumination, so the algorithm must be robust to illumination changes. A more comprehensive solution is therefore needed.

Summary of the Invention

The main technical problem addressed by the invention is the "ghosting" and hole artifacts produced by the traditional ViBe algorithm, together with its poor robustness to illumination changes. To solve these problems, the invention provides an improved ViBe indoor real-time foreground detection method. Without sacrificing the real-time performance of the original ViBe algorithm, it uses a random-factor adjustment strategy based on the inter-frame change rate to effectively suppress the ghosting and holes caused by the spatio-temporal propagation mechanism of the background update, and it strengthens robustness to sudden illumination changes through an illumination-mutation detection method based on the inter-frame change rate and a perceptual hash algorithm, making it better suited than ViBe to foreground detection in indoor environments.

According to one aspect of the invention, an improved ViBe indoor real-time foreground detection method comprises the following steps:

S1: Receive the video stream, decode and preprocess it, and obtain a grayscale image for each video frame.

S2: Check the frame index of the current video frame and determine whether it is the first frame of the video. If so, build the background model for the foreground detection algorithm and return to S1; otherwise, go to S3.

S3: Compute the inter-frame change rate from the grayscale images of the current frame and the previous frame.

S4: Determine whether a sudden illumination change has occurred. If so, rebuild the background model of S2 and return to S1; otherwise, go to S5.

S5: Obtain the foreground detection result for the current video frame.

S6: Using the inter-frame change rate obtained in S3, update the background model according to the change-rate adjustment strategy.

S7: Determine whether there is a next video frame. If so, return to S1; otherwise, the detection process ends.
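The control flow of steps S1 through S7 can be sketched as a single loop. The following is a minimal, illustrative Python sketch; every helper is a deliberately simplified stand-in rather than the patent's exact procedure (the real steps are specified in the sections that follow), and images are plain nested lists of grayscale integers.

```python
# Minimal control-flow sketch of steps S1-S7. All helpers are simplified
# stand-ins for illustration only.

def preprocess(frame):
    # S1 stand-in: the method converts to grayscale and denoises here.
    return frame

def build_background_model(gray):
    # S2 stand-in: one single-sample "set" per pixel (the method uses N=20).
    return [[[v] for v in row] for row in gray]

def interframe_change_rate(cur, prev, T=5):
    # S3: fraction of pixels whose absolute difference exceeds T.
    changed = sum(1 for rc, rp in zip(cur, prev)
                  for a, b in zip(rc, rp) if abs(a - b) > T)
    return changed / (len(cur) * len(cur[0]))

def illumination_changed(p, t_light=0.30):
    # S4 stand-in: the full method additionally checks a perceptual hash.
    return p > t_light

def classify_pixels(gray, model, R=20, min_matches=1):
    # S5 stand-in, following the patent's stated convention: a pixel whose
    # gray value matches enough model samples is set to 255, else 0.
    return [[255 if sum(1 for s in model[y][x] if abs(v - s) < R)
             >= min_matches else 0
             for x, v in enumerate(row)] for y, row in enumerate(gray)]

def run(frames):
    model, prev, masks = None, None, []
    for frame in frames:
        gray = preprocess(frame)
        if model is None or illumination_changed(
                interframe_change_rate(gray, prev)):
            model = build_background_model(gray)   # S2, or S4 rebuild
        else:
            masks.append(classify_pixels(gray, model))  # S5 (S6 omitted)
        prev = gray
    return masks
```

A small driver such as `run([frame1, frame2, frame3])` returns one result mask per frame after the first, skipping frames on which the model was rebuilt.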

Preferably, in S1, receiving, decoding, and preprocessing the video stream comprises:

S11: Receive and decode the video stream to obtain a frame-by-frame queue of video images.

S12: Convert each single video frame in the queue to grayscale, obtaining its grayscale image.

S13: Denoise the grayscale image with Gaussian and median filtering, obtaining a denoised grayscale image.

Preferably, in S2, building the background model comprises:

S21: Obtain the grayscale image of the first video frame and construct, for each pixel, a sample set with a maximum capacity of N.

S22: For each pixel, randomly select gray values from the pixels in its eight-neighborhood and add them to the sample set until the set reaches capacity.

The sample set is expressed as:

Sample(x, y) = {V_i | i = 1, 2, ..., N}

where Sample(x, y) is the sample set of the pixel, (x, y) are the Cartesian coordinates of the pixel, V_i is a sample, and N is the number of samples.
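As a concrete illustration of S21 and S22, the sketch below builds the per-pixel sample sets from a first-frame grayscale image. Border handling (clamping out-of-range neighbor coordinates to the image edge) is an assumption made for this sketch; the text does not specify it.

```python
import random

def init_vibe_model(gray, N=20, seed=0):
    # For every pixel, draw N gray values at random from its eight
    # neighbors (S22). Out-of-image neighbors are clamped to the border,
    # which is an assumption of this sketch.
    rng = random.Random(seed)
    h, w = len(gray), len(gray[0])
    model = []
    for y in range(h):
        row = []
        for x in range(w):
            samples = []
            while len(samples) < N:
                dy, dx = rng.choice((-1, 0, 1)), rng.choice((-1, 0, 1))
                if dy == 0 and dx == 0:
                    continue  # the pixel itself is not a neighbor
                ny = min(max(y + dy, 0), h - 1)
                nx = min(max(x + dx, 0), w - 1)
                samples.append(gray[ny][nx])
            row.append(samples)  # Sample(x, y) = {V_i | i = 1..N}
        model.append(row)
    return model
```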

Preferably, in S3, computing the inter-frame change rate comprises:

S31: Take the grayscale images of the current and previous frames, subtract the pixels at identical Cartesian coordinates, and take the absolute value, obtaining a difference grayscale image:

D(x, y) = |f_k(x, y) - f_{k-1}(x, y)|

where D(x, y) is the resulting difference grayscale image, f_k(x, y) is the grayscale image of the current frame, and f_{k-1}(x, y) is that of the previous frame.

S32: For each pixel of the difference image, if its gray value exceeds the threshold T, set it to 255; otherwise, set it to 0.

S33: Count the number num1 of pixels in the difference image whose gray value is 255 and the total number of pixels num2, and compute the inter-frame change rate P = num1/num2 × 100%.
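Steps S31 through S33 amount to a thresholded frame difference followed by a ratio. A direct pure-Python sketch, with grayscale images as nested lists:

```python
def interframe_change_rate(cur, prev, T=5):
    # S31: D(x, y) = |f_k(x, y) - f_{k-1}(x, y)|
    # S32: binarize D at threshold T (255 if above, else 0)
    # S33: P = num1 / num2 (returned as a fraction; the text expresses
    #      P as a percentage)
    num1 = 0  # pixels whose absolute difference exceeds T
    num2 = 0  # total pixels
    for row_c, row_p in zip(cur, prev):
        for a, b in zip(row_c, row_p):
            num2 += 1
            if abs(a - b) > T:
                num1 += 1
    return num1 / num2
```

For example, if one pixel of a six-pixel image changes by more than T, P is 1/6.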

Preferably, in S4, determining whether a sudden illumination change has occurred comprises:

S41: If the inter-frame change rate P computed in S33 exceeds the threshold Tlight, go to S42; otherwise, conclude that no sudden illumination change has occurred.

S42: Use a perceptual hash algorithm to compare the grayscale image of the current frame with the difference image obtained in S31. If the Hamming distance between the two images exceeds the threshold Tenter, conclude that no sudden illumination change has occurred; otherwise, conclude that a sudden illumination change has occurred.
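The text does not say which perceptual-hash variant is used; the sketch below substitutes the simple average hash (aHash), downsampling by nearest-neighbor indexing, and compares two hashes by Hamming distance. Treat both the hash choice and the downsampling scheme as assumptions.

```python
def average_hash(img, size=8):
    # Downsample to size x size by nearest-neighbor indexing, then emit one
    # bit per cell: 1 if the cell is brighter than the mean, else 0.
    # aHash is an assumed stand-in for the unspecified perceptual hash.
    h, w = len(img), len(img[0])
    small = [[img[y * h // size][x * w // size] for x in range(size)]
             for y in range(size)]
    mean = sum(sum(row) for row in small) / (size * size)
    return [1 if v > mean else 0 for row in small for v in row]

def hamming(h1, h2):
    # Number of differing bits between two equal-length hashes.
    return sum(a != b for a, b in zip(h1, h2))
```

Under this sketch, the S42 decision would read: no illumination mutation when `hamming(average_hash(cur), average_hash(diff)) > Tenter`, otherwise a mutation.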

Preferably, in S5, obtaining the foreground detection result comprises:

S51: For the grayscale image of the current frame, compare the gray value of each pixel x with the N gray values in its sample set Sample(x, y). If at least #min of those samples lie within the two-dimensional Euclidean distance threshold R of the pixel's gray value, set the pixel's gray value to 255; otherwise, set it to 0. Here #min is the configured foreground detection threshold and N is the number of samples.
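The per-pixel test in S51 can be written directly. With scalar gray values the "two-dimensional Euclidean distance" reduces to an absolute difference. Note that S51 as written sets pixels that match the model to 255 (the opposite of the common ViBe convention, where matches are labeled background); this sketch follows the text as stated, with #min and R as parameters.

```python
def classify_pixel(value, samples, R=20, min_matches=2):
    # S51 as stated: 255 when at least #min samples lie within distance R
    # of the pixel's gray value, else 0.
    matches = sum(1 for s in samples if abs(value - s) < R)
    return 255 if matches >= min_matches else 0

def foreground_result(gray, model, R=20, min_matches=2):
    # Apply the per-pixel test to every pixel of the current frame.
    return [[classify_pixel(v, model[y][x], R, min_matches)
             for x, v in enumerate(row)] for y, row in enumerate(gray)]
```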

S52: Apply morphological processing to the grayscale image produced by S51, obtaining a morphologically processed grayscale image.

Preferably, in S52, the morphological processing comprises:

S521: Perform connectivity analysis on the pixels whose gray value is 255 and compute the size of each connected region.

S522: For connected regions whose area is smaller than 9 pixels, set the gray value of every pixel in the region to 0.

S523: Apply erosion and dilation to the grayscale image, obtaining the morphologically processed result.
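A pure-Python sketch of S521 and S522 (connected-component noise removal) follows; 8-connectivity is assumed, and the erosion and dilation of S523 are left to a library such as OpenCV.

```python
def remove_small_regions(mask, min_area=9):
    # Flood-fill each connected component of 255-valued pixels
    # (8-connectivity assumed) and zero out components below min_area.
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if mask[y][x] == 255 and not seen[y][x]:
                stack, comp = [(y, x)], []
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    comp.append((cy, cx))
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = cy + dy, cx + dx
                            if (0 <= ny < h and 0 <= nx < w and
                                    mask[ny][nx] == 255 and not seen[ny][nx]):
                                seen[ny][nx] = True
                                stack.append((ny, nx))
                if len(comp) < min_area:      # S522: area below 9 is noise
                    for cy, cx in comp:
                        mask[cy][cx] = 0
    return mask
```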

Preferably, in S6, updating the background model according to the change-rate adjustment strategy, based on the inter-frame change rate obtained in S3, comprises:

S61: Obtain the random factor θ from the inter-frame change rate of S3.

S62: Update the background model for each pixel according to the random factor θ.

Preferably, S62 comprises:

Every pixel whose gray value was set to 0 has a probability of 1/θ of updating the sample sets of itself and of its eight-neighborhood pixels, as follows:

replace one randomly chosen gray value in the current pixel's sample set with the current pixel's gray value;

replace one randomly chosen gray value in the sample set of one of the current pixel's eight neighboring pixels with the current pixel's gray value.
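The update rule above can be sketched as follows. Clamping out-of-image neighbor coordinates to the border is an assumption of this sketch, as is drawing a single random neighbor per update.

```python
import random

def update_model(model, gray, mask, theta, seed=0):
    # Every pixel whose mask value is 0 updates, with probability 1/theta,
    # one random sample in its own set and one random sample in a randomly
    # chosen 8-neighbor's set (border clamping is assumed).
    rng = random.Random(seed)
    h, w = len(gray), len(gray[0])
    for y in range(h):
        for x in range(w):
            if mask[y][x] != 0:
                continue  # only pixels whose value was set to 0 update
            if rng.randrange(theta) == 0:  # probability 1/theta
                own = model[y][x]
                own[rng.randrange(len(own))] = gray[y][x]
                dy, dx = rng.choice([(-1, -1), (-1, 0), (-1, 1), (0, -1),
                                     (0, 1), (1, -1), (1, 0), (1, 1)])
                ny = min(max(y + dy, 0), h - 1)
                nx = min(max(x + dx, 0), w - 1)
                nb = model[ny][nx]
                nb[rng.randrange(len(nb))] = gray[y][x]
    return model
```

A larger θ therefore slows model absorption, which is how the change-rate-driven adjustment suppresses ghosting and holes.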

According to another aspect, the invention also provides a storage medium, namely a computer-readable storage medium on which a computer program is stored; when run, the computer program implements the steps of the improved ViBe indoor real-time foreground detection method described above.

The technical scheme provided by the invention has the following beneficial effects:

Without losing the real-time performance of the original ViBe algorithm, it introduces the concept of the inter-frame change rate. The random-factor adjustment strategy based on this rate effectively suppresses the ghosting and holes generated by the spatio-temporal propagation mechanism, considerably enhancing the algorithm's detection ability. In addition, the illumination detection method based on the inter-frame change rate and a perceptual hash algorithm strengthens robustness to illumination changes, making the method better suited than the original ViBe algorithm to foreground detection in indoor environments.

Brief Description of the Drawings

The invention is further described below with reference to the accompanying drawings and embodiments, in which:

FIG. 1 is a flowchart of the improved ViBe indoor real-time foreground detection method provided by an embodiment of the invention.

Detailed Description

For a clearer understanding of the technical features, objects, and effects of the invention, specific embodiments are now described in detail with reference to the accompanying drawings.

Referring to FIG. 1, an embodiment of the invention provides an improved ViBe indoor real-time foreground detection method comprising the following steps:

S1: Receive the video stream, decode and preprocess it, and obtain a grayscale image for each video frame.

S1 specifically comprises:

S11: In this embodiment, the VideoCapture interface of the OpenCV library decodes the input video and retrieves each video frame in temporal order.

S12: Convert the acquired video frame to a grayscale image using the formula:

Grey = 0.299 × R + 0.587 × G + 0.114 × B

where R, G, and B are the three primary color channels of the color image. This embodiment uses the cvtColor function of the OpenCV library for the conversion.

S13: Denoise the grayscale image with Gaussian filtering followed by median filtering. In this embodiment, the Gaussian filter uses a 5×5 kernel and the median filter a template of size 5, implemented with the GaussianBlur and medianBlur functions of the OpenCV library.
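The luma formula of S12 can be checked in isolation; the sketch below reproduces it in pure Python (the OpenCV denoising calls of S13 are shown only as comments).

```python
def to_gray(r, g, b):
    # Grey = 0.299*R + 0.587*G + 0.114*B, the standard luma weighting that
    # the embodiment obtains via cv2.cvtColor.
    return 0.299 * r + 0.587 * g + 0.114 * b

# The denoising of S13 would then be, with OpenCV available:
#   blurred = cv2.GaussianBlur(gray, (5, 5), 0)
#   denoised = cv2.medianBlur(blurred, 5)
```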

S2: Check the frame index of the current video frame and determine whether it is the first frame of the video. If so, build the background model for the foreground detection algorithm and return to S1; otherwise, go to S3.

S2 specifically comprises:

S21: Obtain the grayscale image of the first video frame and construct, for each pixel, a sample set with a maximum capacity of N.

S22: For each pixel, randomly select gray values from the pixels in its eight-neighborhood and add them to the sample set until the set reaches capacity.

The sample set is expressed as:

Sample(x, y) = {V_i | i = 1, 2, ..., N}

where Sample(x, y) is the sample set of the pixel, (x, y) are the Cartesian coordinates of the pixel, V_i is a sample, and N is the number of samples; the capacity N in this embodiment is 20.

S3: Compute the inter-frame change rate from the grayscale images of the current frame and the previous frame.

S3 specifically comprises:

S31: Take the grayscale images of the current and previous frames and compute, for every pixel, the difference

D(x, y) = |f_k(x, y) - f_{k-1}(x, y)|

where D(x, y) is the resulting difference grayscale image, f_k(x, y) is the grayscale image of the current frame, and f_{k-1}(x, y) is that of the previous frame.

S32: For each pixel of the difference image, if its gray value exceeds the threshold T, set it to 255; otherwise, set it to 0. In this embodiment T is set to 5 and can be adjusted to the actual situation.

S33: Count the number num1 of pixels in the difference image whose gray value is 255 and the total number of pixels num2, and compute the inter-frame change rate P = num1/num2 × 100%.

S4: Determine whether a sudden illumination change has occurred. If so, rebuild the background model of S2 and return to S1; otherwise, go to S5.

S4 specifically comprises:

S41: If the inter-frame change rate P computed in S33 exceeds the threshold Tlight, go to S42; otherwise, conclude that no sudden illumination change has occurred. In this embodiment Tlight is set to 30% and can be adjusted to the actual situation.

S42: Use a perceptual hash algorithm to compare the grayscale image of the current frame with the difference image obtained in S31, and compute the Hamming distance between the two images. In this embodiment the Hamming-distance threshold Tenter is set to 10 and can be adjusted to the actual situation. If the distance exceeds the threshold, conclude that no sudden illumination change has occurred; otherwise, conclude that a sudden illumination change has occurred.

S5: Obtain the foreground detection result for the current video frame.

S5 specifically comprises:

S51: For the grayscale image of the current frame, compare the gray value of each pixel x with the N gray values in its sample set Sample(x, y). If at least 2 of those samples lie within a two-dimensional Euclidean distance of 20 of the pixel's gray value, set the pixel's gray value to 255; otherwise, set it to 0. The threshold here can be adjusted to the actual situation.

S52: Apply morphological processing to the grayscale image produced by S51.

In this embodiment, the findContours function of the OpenCV library locates all connected regions; iterating over them, the contourArea function computes each region's area, and every region whose area is below the threshold 9 (adjustable to the actual situation) is treated as noise, with the gray value of each of its pixels set to 0. The grayscale image is then eroded and dilated, in that order, using the erode and dilate functions of the OpenCV library with a 3×3 convolution kernel.

S6: Using the inter-frame change rate obtained in S3, update the background model according to the change-rate adjustment strategy.

S6 specifically comprises:

S61: Obtain the random factor θ from the inter-frame change rate computed in S33, as shown in Table 1:

Table 1: Correspondence between the inter-frame change rate and the random factor

[Table 1 appears only as an image in the original publication; its values are not reproduced here.]

The data in Table 1 were obtained experimentally and apply to most scenarios.

S62: Every pixel whose gray value is 0 has a probability of 1/θ of updating its own sample set, by replacing one randomly chosen gray value in that set with the current pixel's gray value.

S63: Every pixel whose gray value is 0 has a probability of 1/θ of updating the sample sets of its neighboring pixels, by replacing one randomly chosen gray value in the sample set of one of its eight neighbors with the current pixel's gray value.

S7: Determine whether there is a next video frame. If so, return to S1; otherwise, the detection process ends. In this embodiment, the isOpened function of the OpenCV library is used to check for a next frame.

To verify the beneficial effects of the invention, this embodiment was evaluated on the seven indoor subsets of the ChangeDetection2012 dataset against five commonly used foreground detection methods: the Gaussian mixture model (GMM), the three-frame difference method (TFD), the K-nearest-neighbors algorithm (KNN), the codebook algorithm, and the visual background extractor (ViBe). The evaluation metrics were the percentage of wrong classifications (PWC) and the F-measure. The experimental results are shown in Tables 2 and 3:

Table 2: PWC comparison between the method of the invention and the five common foreground detection methods

[Table 2 appears only as an image in the original publication.]

Table 3: F-measure comparison between the method of the invention and the five common foreground detection methods

[Table 3 appears only as an image in the original publication.]

In some embodiments, a storage medium is also provided, namely a computer-readable storage medium on which a computer program is stored; when run, the computer program implements the steps of the improved ViBe indoor real-time foreground detection method described above.

It should be noted that, herein, the terms "comprise", "include", or any variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or system comprising a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or system. Without further limitation, an element qualified by the phrase "comprising a ..." does not preclude the presence of additional identical elements in the process, method, article, or system that includes it.

The above serial numbers of the embodiments of the invention are for description only and do not indicate the merits of the embodiments. In a unit claim enumerating several means, several of those means may be embodied by one and the same item of hardware. The words first, second, third, and so on do not denote any order and may be construed as labels.

The above are only preferred embodiments of the invention and do not limit its patent scope. Any equivalent structure or equivalent process transformation made using the contents of the description and drawings, whether applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the invention.

Claims (10)

1. An improved ViBe indoor real-time foreground detection method, characterized by comprising the following steps:
S1: receiving a video stream, decoding and preprocessing it to obtain a grayscale image for each video frame;
S2: checking the frame number of the current video frame to determine whether it is the first frame of the video; if so, building the background model of the foreground detection algorithm and returning to S1; otherwise, proceeding to S3;
S3: computing the inter-frame change rate from the grayscale images of the current frame and the previous frame;
S4: determining whether a sudden illumination change has occurred; if so, rebuilding the background model of S2 and returning to S1; otherwise, proceeding to S5;
S5: obtaining the foreground detection result for the current video frame;
S6: updating the background model according to an adjustment strategy based on the inter-frame change rate obtained in S3;
S7: determining whether a next video frame exists; if so, returning to S1; otherwise, ending the detection process.

2. The improved ViBe indoor real-time foreground detection method according to claim 1, characterized in that in S1, the step of receiving, decoding, and preprocessing the video stream comprises:
S11: receiving and decoding the video stream to obtain a frame-by-frame queue of video images;
S12: converting each single video frame in the queue to grayscale to obtain its grayscale image;
S13: denoising the grayscale image with Gaussian and median filtering to obtain a denoised grayscale image.

3. The improved ViBe indoor real-time foreground detection method according to claim 1, characterized in that in S2, the step of building the background model comprises:
S21: obtaining the grayscale image of the first video frame and constructing, for each pixel, a sample set with a maximum capacity of N;
S22: for each pixel, repeatedly selecting at random the gray value of a pixel in its 8-neighborhood and adding it to the sample set until the set reaches its capacity;
the sample set is expressed as:
Sample(x,y) = {V_i | i = 1, 2, ..., N}
where Sample(x,y) is the sample set of the pixel, (x,y) is the Cartesian coordinate of the pixel, V_i is a sample, and N is the number of samples.

4. The improved ViBe indoor real-time foreground detection method according to claim 1, characterized in that in S3, the step of computing the inter-frame change rate comprises:
S31: taking the grayscale images of the current frame and the previous frame, computing the absolute difference of the pixels at the same position in the Cartesian coordinate system to obtain a difference grayscale image, by the formula:
D(x,y) = |f_k(x,y) − f_{k−1}(x,y)|
where D(x,y) is the resulting difference grayscale image, f_k(x,y) is the grayscale image of the current frame, and f_{k−1}(x,y) is the grayscale image of the previous frame;
S32: determining, for each pixel of the difference grayscale image, whether its gray value is greater than a threshold T; if so, setting its gray value to 255; otherwise, setting it to 0;
S33: counting the number num1 of pixels with gray value 255 in the difference grayscale image and the total number num2 of pixels in the image, and computing the inter-frame change rate P = num1 / num2 × 100%.

5. The improved ViBe indoor real-time foreground detection method according to claim 4, characterized in that in S4, the method of determining whether a sudden illumination change has occurred comprises:
S41: determining whether the inter-frame change rate P computed in S33 is greater than a threshold T_light; if so, proceeding to S42; otherwise, concluding that no sudden illumination change has occurred;
S42: comparing the current frame's grayscale image with the difference grayscale image obtained in S31 using a perceptual hash algorithm, and determining whether the Hamming distance between the two images is greater than a threshold T_enter; if so, concluding that no sudden illumination change has occurred; otherwise, concluding that a sudden illumination change has occurred.

6. The improved ViBe indoor real-time foreground detection method according to claim 1, characterized in that in S5, the step of obtaining the foreground detection result comprises:
S51: for the current frame's grayscale image, comparing the gray value of each pixel x with the N gray values in its corresponding sample set Sample(x,y), and determining whether at least #min of those gray values lie within a two-dimensional Euclidean distance less than the distance threshold R of the pixel's gray value; if so, setting the pixel's gray value to 255; otherwise, setting it to 0, where #min is the preset foreground detection threshold and N is the number of samples;
S52: applying morphological processing to the grayscale image produced by S51 to obtain a morphologically processed grayscale image.

7. The improved ViBe indoor real-time foreground detection method according to claim 6, characterized in that in S52, the morphological processing comprises:
S521: performing connectivity analysis on the pixels with gray value 255 and computing the size of each connected region;
S522: for each connected region whose area is less than 9, setting the gray value of every pixel in that region to 0;
S523: applying erosion and dilation operations to the grayscale image to obtain the morphologically processed grayscale image.

8. The improved ViBe indoor real-time foreground detection method according to claim 1, characterized in that in S6, the step of updating the background model according to the adjustment strategy based on the inter-frame change rate obtained in S3 comprises:
S61: deriving a random factor θ from the inter-frame change rate obtained in S3;
S62: updating the background model for each pixel according to the random factor θ.

9. The improved ViBe indoor real-time foreground detection method according to claim 8, characterized in that S62 comprises:
every pixel whose gray value is set to 0 has a probability of 1/θ of updating the sample sets corresponding to itself and its 8-neighborhood pixels, specifically:
randomly replacing one gray value in the sample set corresponding to the current pixel with the current pixel's gray value;
randomly replacing one gray value in the sample set corresponding to the eight neighboring pixels of the current pixel with the current pixel's gray value.

10. A storage medium, characterized in that the storage medium is a computer-readable storage medium storing a computer program which, when run, implements the steps of the improved ViBe indoor real-time foreground detection method according to any one of claims 1 to 9.
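As a concrete illustration of the background-model construction of claim 3 and the per-pixel test of claim 6, the following Python sketch operates on a grayscale frame stored as a list of lists. The parameter values (N=20, R=20, #min=2) are defaults commonly used in the ViBe literature, not values fixed by the claims. Note also that claim 6 as written assigns 255 to matched pixels while claim 9 updates pixels whose value is 0; this sketch follows the standard ViBe convention (enough matches = background = 0), which is consistent with claim 9's update rule.

```python
import random

# Offsets of the 8-neighborhood used in S22.
NEIGHBORHOOD = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                (0, 1), (1, -1), (1, 0), (1, 1)]

def build_background_model(frame, N=20):
    # Claim 3 (S21-S22): one sample set of capacity N per pixel, filled with
    # gray values drawn at random from the pixel's 8-neighborhood
    # (coordinates are clamped at the image border).
    h, w = len(frame), len(frame[0])
    model = [[[] for _ in range(w)] for _ in range(h)]
    for y in range(h):
        for x in range(w):
            while len(model[y][x]) < N:
                dy, dx = random.choice(NEIGHBORHOOD)
                ny = min(max(y + dy, 0), h - 1)
                nx = min(max(x + dx, 0), w - 1)
                model[y][x].append(frame[ny][nx])
    return model

def classify_pixel(value, samples, R=20, min_matches=2):
    # Claim 6 (S51): count samples within distance R of the pixel's gray
    # value; enough matches -> background (0), otherwise foreground (255).
    matches = sum(1 for v in samples if abs(value - v) < R)
    return 0 if matches >= min_matches else 255
```

On a uniform frame every sample equals the frame's gray value, so a pixel of the same value classifies as background and a very different value as foreground.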
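The inter-frame change rate of claim 4 (S31-S33) reduces to an absolute frame difference, a binary threshold, and a pixel count. A minimal sketch, with the threshold T left as a tunable parameter since the claims do not fix its value:

```python
def interframe_change_rate(prev, curr, T=20):
    # S31: absolute difference of co-located pixels -> difference image D.
    # S32: binarize D against threshold T (255 = changed, 0 = unchanged).
    # S33: P = num1 / num2, the fraction of changed pixels.
    h, w = len(curr), len(curr[0])
    D = [[255 if abs(curr[y][x] - prev[y][x]) > T else 0
          for x in range(w)] for y in range(h)]
    num1 = sum(row.count(255) for row in D)  # pixels with gray value 255
    num2 = h * w                             # total pixel count
    return D, num1 / num2
```

For a 2×2 pair of frames that differ in a single pixel, P comes out as 0.25.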
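The illumination-change test of claim 5 combines the change rate with a perceptual-hash comparison. The sketch below substitutes a simple average hash for the unspecified perceptual hash (production pHash implementations use a DCT), and the values of T_light and T_enter are illustrative assumptions, not values from the claims:

```python
def ahash_bits(img):
    # One bit per pixel: 1 if the pixel is above the image mean.
    # Assumes img has already been downscaled to a small fixed size.
    flat = [v for row in img for v in row]
    mean = sum(flat) / len(flat)
    return [1 if v > mean else 0 for v in flat]

def hamming(a, b):
    # Hamming distance between two equal-length bit lists.
    return sum(x != y for x, y in zip(a, b))

def illumination_changed(curr_small, diff_small, P, T_light=0.3, T_enter=10):
    # S41: a low change rate is never treated as an illumination change.
    if P <= T_light:
        return False
    # S42: a large hash distance means a real object entered the scene;
    # a small distance means the whole frame shifted, i.e. the lighting did.
    return hamming(ahash_bits(curr_small), ahash_bits(diff_small)) <= T_enter
```

The two-threshold structure is the point of claim 5: a high P alone (which an entering person can also cause) is not enough to reset the model.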
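Steps S521-S522 of claim 7 amount to a connected-component area filter. The sketch below uses a breadth-first flood fill with 4-connectivity (the claims do not state which connectivity is used); the erosion and dilation of S523, typically delegated to an image-processing library, are omitted:

```python
from collections import deque

def remove_small_regions(mask, min_area=9):
    # S521: connectivity analysis over the pixels with gray value 255.
    # S522: zero out every region whose area is below min_area (claim 7
    # uses 9). The mask is modified in place and returned.
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    for sy in range(h):
        for sx in range(w):
            if mask[sy][sx] != 255 or seen[sy][sx]:
                continue
            region, queue = [], deque([(sy, sx)])
            seen[sy][sx] = True
            while queue:
                y, x = queue.popleft()
                region.append((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w
                            and mask[ny][nx] == 255 and not seen[ny][nx]):
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            if len(region) < min_area:
                for y, x in region:
                    mask[y][x] = 0
    return mask
```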
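Claims 8-9 describe a conservative, stochastic model update. The mapping from the change rate P to the random factor θ (S61) is not spelled out in the claims, so θ is left to the caller here; each background pixel (mask value 0) fires with probability 1/θ and writes its gray value into its own sample set and into one randomly chosen 8-neighbour's set:

```python
import random

NEIGHBORS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
             (0, 1), (1, -1), (1, 0), (1, 1)]

def update_background_model(frame, mask, model, theta):
    h, w = len(frame), len(frame[0])
    for y in range(h):
        for x in range(w):
            if mask[y][x] != 0:              # only background pixels update
                continue
            if random.randrange(theta) != 0:  # fires with probability 1/theta
                continue
            # Claim 9: replace a random sample in the pixel's own set ...
            own = model[y][x]
            own[random.randrange(len(own))] = frame[y][x]
            # ... and in one randomly chosen 8-neighbour's set, in both
            # cases with the current pixel's gray value (borders clamped).
            dy, dx = random.choice(NEIGHBORS)
            ny = min(max(y + dy, 0), h - 1)
            nx = min(max(x + dx, 0), w - 1)
            neigh = model[ny][nx]
            neigh[random.randrange(len(neigh))] = frame[y][x]
```

With θ = 1 the update always fires, which makes the behaviour easy to check; in practice θ would grow with scene activity so that a busy frame updates the model more cautiously.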
CN202210056284.0A 2022-01-18 2022-01-18 An improved ViBe indoor real-time foreground detection method and storage medium Active CN114612815B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210056284.0A CN114612815B (en) 2022-01-18 2022-01-18 An improved ViBe indoor real-time foreground detection method and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210056284.0A CN114612815B (en) 2022-01-18 2022-01-18 An improved ViBe indoor real-time foreground detection method and storage medium

Publications (2)

Publication Number Publication Date
CN114612815A true CN114612815A (en) 2022-06-10
CN114612815B CN114612815B (en) 2024-12-31

Family

ID=81858081

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210056284.0A Active CN114612815B (en) 2022-01-18 2022-01-18 An improved ViBe indoor real-time foreground detection method and storage medium

Country Status (1)

Country Link
CN (1) CN114612815B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117968583A (en) * 2024-03-18 2024-05-03 埃睿迪信息技术(北京)有限公司 Waterlogging ponding monitoring and early warning system and method
CN119359692A (en) * 2024-11-13 2025-01-24 北京市眼科研究所 Method for automatically measuring meibomian gland quality

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106548488A (en) * 2016-10-25 2017-03-29 电子科技大学 It is a kind of based on background model and the foreground detection method of inter-frame difference
CN106651782A (en) * 2016-09-26 2017-05-10 江苏科海智能系统有限公司 ViBe-oriented foreground ghosting removal method
CN107085836A (en) * 2017-05-16 2017-08-22 合肥工业大学 A Universal Ghost Elimination Method in Moving Object Detection
US20180365845A1 (en) * 2016-08-19 2018-12-20 Soochow University Moving object detection method and system


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XIONG LING; CHEN YONG: "Indoor Target Detection Method Based on Improved ViBe", Modern Computer (Professional Edition), no. 32, 15 November 2018 (2018-11-15) *


Also Published As

Publication number Publication date
CN114612815B (en) 2024-12-31

Similar Documents

Publication Publication Date Title
Braham et al. Semantic background subtraction
CN110033471B (en) Frame line detection method based on connected domain analysis and morphological operation
CN110889813A (en) Low-light image enhancement method based on infrared information
JP2014089626A (en) Image detection device and control program and image detection method
CN110503613A (en) Single Image-Oriented Rain Removal Method Based on Cascaded Atrous Convolutional Neural Network
Sengar et al. Detection of moving objects based on enhancement of optical flow
CN111191535B (en) Pedestrian detection model construction method based on deep learning and pedestrian detection method
CN113066077B (en) Flame detection method and device
CN107248174A (en) A kind of method for tracking target based on TLD algorithms
CN103617632A (en) Moving target detection method with adjacent frame difference method and Gaussian mixture models combined
CN106251348B (en) An adaptive multi-cue fusion background subtraction method for depth cameras
CN114612815A (en) Improved ViBe indoor real-time foreground detection method and storage medium
Alajarmeh et al. Real-time framework for image dehazing based on linear transmission and constant-time airlight estimation
WO2017088479A1 (en) Method of identifying digital on-screen graphic and device
CN118862061A (en) A deep fake adversarial sample defense method based on mask conditional diffusion model
CN111460964A (en) Moving target detection method under low-illumination condition of radio and television transmission machine room
CN118229954A (en) An end-to-end approach to generate imperceptible adversarial patches
CN110084201A (en) A kind of human motion recognition method of convolutional neural networks based on specific objective tracking under monitoring scene
CN105894020A (en) Specific target candidate box generating method based on gauss model
Pham et al. Biseg: Simultaneous instance segmentation and semantic segmentation with fully convolutional networks
Chou et al. A noise-ranking switching filter for images with general fixed-value impulse noises
López-Rubio et al. Local color transformation analysis for sudden illumination change detection
CN111553931B (en) ViBe-ID foreground detection method for indoor real-time monitoring
CN115861349A (en) Color Image Edge Extraction Method Based on Reduced Conceptual Structural Elements and Matrix Order
CN107832732B (en) Lane line detection method based on ternary tree traversal

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant