CN106056579A - Saliency detection method based on background contrast - Google Patents
- Publication number
- CN106056579A CN106056579A CN201610339693.6A CN201610339693A CN106056579A CN 106056579 A CN106056579 A CN 106056579A CN 201610339693 A CN201610339693 A CN 201610339693A CN 106056579 A CN106056579 A CN 106056579A
- Authority
- CN
- China
- Prior art keywords
- image
- saliency
- contrast
- filter
- ccs
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/232—Non-hierarchical techniques
- G06F18/2321—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
- G06F18/23213—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
Abstract
The invention relates to a saliency detection method based on background contrast. It comprises six steps: inputting an image I; over-segmenting the image I; computing background saliency; computing image color contrast saliency; computing image compactness saliency; and obtaining the final saliency. The over-segmentation in step 2 comprises converting the input image I into the perceptually uniform CIELAB color space; convolving the result with a 17-dimensional filter bank composed of Gaussian filters, Gaussian derivative filters, and Laplacian-of-Gaussian filters; performing unsupervised clustering of the image with the Euclidean-distance K-means algorithm; and assigning each pixel to its nearest cluster center to produce the over-segmented image. Compared with methods that compute contrast over the whole image, the method of the invention is more robust; the generated saliency map is closest to human-annotated results, and the method is highly effective for natural scene images.
Description
Technical Field
The invention relates to image analysis technology in the field of image data processing, and in particular to a saliency detection method based on background contrast.
Background Art
Saliency detection is a challenging problem in computer vision research and an important task in many applications, such as object recognition, image coding, image editing, image segmentation, and video tracking. Through extensive effort, many successful saliency detection methods have been developed.
They fall roughly into two types: top-down (supervised) methods and bottom-up (unsupervised) methods. The former typically describe salient information with visual knowledge built during a training process and then use this knowledge to perform saliency detection on test images; the latter typically determine the saliency of a pixel by its degree of difference from the surrounding neighborhood, without any prior on salient regions or objects.
Recent studies have shown that computational models based on bottom-up approaches are quite successful and well suited to scaling to large datasets. Perceptual studies and the results of previous methods suggest that the most influential factor in bottom-up visual saliency is contrast. Niloy J. Mitra et al. proposed salient region detection methods based on global contrast, namely the histogram-based contrast method (HC) and the spatially enhanced region contrast method (RC). The HC method is efficient and produces results with fine detail; the RC method generates spatially enhanced, high-quality saliency maps but is less computationally efficient, and it handles highly textured images poorly. YANG et al. proposed a salient object detection method based on global color contrast: a global color contrast feature is first extracted, the saliency map is fed into a salient object detection framework based on global color contrast to obtain a binary saliency mask, and finally the minimum bounding rectangle containing the salient object is computed via region descriptors.
Both top-down and bottom-up methods often rely only on the center-surround contrast of a region or on global contrast. Algorithms relying on center-surround contrast tend to be imprecise, while global contrast saliency detection methods ignore the spatial relationships between parts of the image and detect objects at image edges incompletely.
Summary of the Invention
The technical problem to be solved by the present invention is that the above top-down and bottom-up methods, because they rely only on center-surround contrast or global contrast, yield saliency algorithms that are imprecise or detect objects at image edges incompletely.
In view of this, and since the contrast of the background region also plays an important role in this process, the present invention proposes a saliency detection method based on background contrast, comprising the following steps:
Step 1: input an image I;
Step 2: over-segment the image I;
Step 3: compute the background saliency SBS(ri), where the normalized spatial distributions of region ri are obtained by counting its pixels along the x-axis and y-axis, respectively;
Step 4: compute the image color contrast saliency SCCS(ri), where cR(·), cG(·), cB(·) are the average colors in the R, G and B channels, respectively;
Step 5: compute the image compactness saliency SCS(ri), where L is the number of pixels in ri, the pixel distributions of ri along the x-axis and y-axis are used, the parameter μi is the kernel function center, and σi is set to min{W, H};
Step 6: from SBS(ri), SCCS(ri) and SCS(ri) obtained in steps 3, 4 and 5, obtain the final saliency by the formula S(ri) = S'CCS(ri)·S'CS(ri), where S'CCS(ri) = SCCS(ri)·SFS(ri), S'CS(ri) = SCS(ri)·SFS(ri), and SFS(ri) = exp{−α·SBS(ri)}.
Further, the above step 2 comprises the following steps:
Step 2-1: convert the input image I into the perceptually uniform CIELAB color space;
Step 2-2: convolve the result of step 2-1 with a 17-dimensional filter bank composed of Gaussian filters, Gaussian derivative filters, and Laplacian-of-Gaussian filters;
Step 2-3: perform unsupervised clustering of the image with the Euclidean-distance K-means algorithm;
Step 2-4: assign each pixel to its nearest cluster center to produce the over-segmented image.
Beneficial effects:
1. In the present invention, candidate background regions are generated automatically from the spatial distribution of the background, and this spatial distribution is used to encode background saliency. Color contrast and compactness are further used to encode foreground saliency.
2. The present invention uses contrast relative to the background, which is more robust than computing contrast relative to the whole image, and the generated saliency map is closest to human-annotated results.
3. The method of the present invention was tested on the MSRA1000 dataset, and the results show that it is highly effective for natural scene images. Moreover, the precision-recall curves and F-measure obtained with the present invention show that it outperforms 12 other saliency detection methods in the field, and its results also compare closely with human annotations.
Description of the Drawings
Fig. 1 is a flow chart of the method of the present invention.
Fig. 2 shows the saliency maps obtained at each step of the present invention.
Fig. 3 shows the background saliency map of the present invention.
Fig. 4 shows the color contrast saliency map of the present invention.
Fig. 5 shows the compactness saliency map of the present invention.
Fig. 6 shows the precision-recall curves and F-measure chart.
Detailed Description
The present invention is described in further detail below with reference to the accompanying drawings.
The English abbreviations used in the present invention have the following meanings:
BS denotes Background Saliency;
CCS denotes Color Contrast Saliency;
CS denotes Compactness Saliency;
FS denotes Foreground Saliency.
The technical solution adopted by the present invention is to introduce a bottom-up saliency detection framework that uses contrast relative to candidate background regions. The input image is first over-segmented into many uniform segments. Since the background usually occupies a relatively large area of the image and extends to its edges, a background saliency map can be obtained, as shown in Fig. 2(c): regions far from the image center receive high background saliency values. Besides encoding background saliency from the spatial distribution of background regions, the present invention also uses color contrast and compactness to encode foreground saliency. After the candidate background regions are abstracted from the background saliency map, as shown in Fig. 2(d), the color contrast saliency value is obtained by computing the contrast of each segment relative to the background regions. The compactness saliency value follows from the Gestalt principle that regions with regular shapes tend to be more salient, and vice versa. Combining these three saliency maps yields the final saliency map, as shown in Fig. 2(f).
The method flow comprises the following steps:
Step 1: input an image I;
Step 2: over-segment the image I;
Step 3: compute the background saliency SBS(ri), where the normalized spatial distributions of region ri are obtained by counting its pixels along the x-axis and y-axis, respectively;
Step 4: compute the image color contrast saliency SCCS(ri), where cR(·), cG(·), cB(·) are the average colors in the R, G and B channels, respectively;
Step 5: compute the image compactness saliency SCS(ri), where L is the number of pixels in ri, the pixel distributions of ri along the x-axis and y-axis are used, the parameter μi is the kernel function center, and σi is set to min{W, H};
Step 6: from SBS(ri), SCCS(ri) and SCS(ri) obtained in steps 3, 4 and 5, obtain the final saliency by the formula S(ri) = S'CCS(ri)·S'CS(ri), where S'CCS(ri) = SCCS(ri)·SFS(ri), S'CS(ri) = SCS(ri)·SFS(ri), and SFS(ri) = exp{−α·SBS(ri)}.
Further, the above step 2 comprises the following steps:
Step 2-1: convert the input image I into the perceptually uniform CIELAB color space;
Step 2-2: convolve the result of step 2-1 with a 17-dimensional filter bank composed of Gaussian filters, Gaussian derivative filters, and Laplacian-of-Gaussian filters;
Step 2-3: perform unsupervised clustering of the image with the Euclidean-distance K-means algorithm;
Step 2-4: assign each pixel to its nearest cluster center to produce the over-segmented image.
As shown in Fig. 2(b), the first step is to generate a series of segments from the raw pixel intensities of the original image. Smaller, uniform segments in regions of the image with clear boundaries are useful for detecting salient objects.
The input image is first converted into the perceptually uniform CIELAB color space and then convolved with a 17-dimensional filter bank composed of Gaussian filters, Gaussian derivative filters, and Laplacian-of-Gaussian filters. Unsupervised clustering is then performed with the Euclidean-distance K-means algorithm. Finally, each pixel is assigned to its nearest cluster center to produce the over-segmented image. Although the resulting segments are often highly irregular in size and shape, an advantage of the present invention is that it aggregates large, uniform regions of similar appearance while subdividing heterogeneous regions into many smaller blocks.
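As a rough illustration of this over-segmentation pipeline, the sketch below builds per-pixel feature vectors from a small Gaussian / Gaussian-derivative / Laplacian-of-Gaussian filter bank and clusters them with Euclidean-distance K-means. It operates on a single grayscale channel with an 8-dimensional bank for brevity; the patent's 17-dimensional bank over the three CIELAB channels is not reproduced here, and the filter scales and cluster count are assumptions:

```python
import numpy as np
from scipy import ndimage

def filter_bank_features(img):
    """Per-pixel features from a small Gaussian / derivative-of-Gaussian /
    Laplacian-of-Gaussian bank (illustrative; the patent uses a 17-D bank
    over CIELAB channels, not reproduced here)."""
    feats = []
    for sigma in (1.0, 2.0):
        feats.append(ndimage.gaussian_filter(img, sigma))                # smoothing
        feats.append(ndimage.gaussian_filter(img, sigma, order=(0, 1)))  # d/dx
        feats.append(ndimage.gaussian_filter(img, sigma, order=(1, 0)))  # d/dy
        feats.append(ndimage.gaussian_laplace(img, sigma))               # LoG
    return np.stack(feats, axis=-1)  # H x W x D

def kmeans_oversegment(feats, k=8, iters=10, seed=0):
    """Euclidean-distance K-means over pixel features; each pixel is
    assigned to its nearest cluster center (steps 2-3 and 2-4)."""
    h, w, d = feats.shape
    X = feats.reshape(-1, d)
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        dist = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dist.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return labels.reshape(h, w)
```

In practice the label map would then be split into connected components to form the segments ri used in the later steps.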
The specific process of obtaining the image saliency values is as follows:
(1) Obtaining the background saliency map
a. As shown in Fig. 3(a), given an input image I of size W×H, where W and H are its width and height, a coordinate system is first established with the top-left corner of the image as the origin and with the horizontal and vertical directions as the x-axis and y-axis. A band-stop filter Fh(x, W) is defined along the x-axis, where x is the x-coordinate, the parameter τ controls the upper bound, and η controls the shape of the band-stop filter.
b. Similarly, Fv(y, H) is defined along the y-axis.
c. As shown in Fig. 3(c), region ri contains L pixels at coordinates (x, y). The normalized spatial distributions of ri are computed by counting its pixels along the x-axis and y-axis, respectively.
d. The background saliency is defined as the weighted average filter response over all pixels in ri.
Therefore, large, uniform regions far from the image center are assigned higher saliency values than central regions, as shown in Fig. 3(d).
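The exact band-stop filter formula is not reproduced in the text above, so the sketch below assumes a simple profile that is low near the image center, rises toward the borders, and is capped at τ; the shape |2x/(W−1) − 1|^η is purely illustrative. A region is then scored by the mean filter response over its pixels, as in step d:

```python
import numpy as np

def bandstop_1d(coords, size, tau=1.0, eta=2.0):
    """Illustrative band-stop profile: low response near the image center,
    rising toward the borders, capped at tau. The patent's exact F_h(x, W)
    is not reproduced here; this shape is an assumption."""
    t = np.abs(2.0 * coords / (size - 1) - 1.0) ** eta  # 0 at center, 1 at edges
    return np.minimum(tau, t)

def background_saliency(region_mask, tau=1.0, eta=2.0):
    """Mean filter response over a region's pixels (step 3): large uniform
    regions far from the image center score high."""
    h, w = region_mask.shape
    ys, xs = np.nonzero(region_mask)
    fh = bandstop_1d(xs, w, tau, eta)  # horizontal response
    fv = bandstop_1d(ys, h, tau, eta)  # vertical response
    return float((fh + fv).mean() / 2.0)
```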
(2) Obtaining the color contrast saliency map
It often happens that the salient object is not exactly at the center of the image, yet the color of the target region still differs greatly from the rest of the scene. The present invention therefore computes saliency values from contrast against candidate background regions rather than against the whole image. Based on the BS obtained in step 3 above, regions with higher saliency and larger size are selected as candidate background regions.
{B1, B2, …, BN} are selected as the candidate background regions, and the color contrast saliency (CCS) of segment ri is then computed in the RGB color space, where cR(·), cG(·), cB(·) are the average colors in the R, G and B channels, respectively. Fig. 4 shows the CCS results.
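A minimal sketch of this color contrast computation, assuming unweighted Euclidean distances between mean RGB colors (the patent's exact formula, including any weighting terms, is not reproduced above):

```python
import numpy as np

def color_contrast_saliency(img, region_mask, bg_masks):
    """Color contrast of a segment against candidate background regions
    (step 4): Euclidean distance between mean RGB colors (c_R, c_G, c_B),
    summed over the backgrounds. A sketch of the idea only."""
    def mean_rgb(mask):
        return img[mask].mean(axis=0)  # average color over the masked pixels
    c = mean_rgb(region_mask)
    return float(sum(np.linalg.norm(c - mean_rgb(b)) for b in bg_masks))
```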
(3) Obtaining the compactness saliency map
Background regions are distributed across the whole image and exhibit a high degree of spatial heterogeneity, whereas foreground objects are usually more compact and regular in shape. Based on this property, the present invention defines the compactness saliency (CS), where L is the number of pixels in ri, the pixel distributions of ri along the x-axis and y-axis are used, N(·) denotes the Gaussian kernel function, the parameter μi is the kernel center, and σi is set to min{W, H}. Fig. 5 shows the CS results.
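The compactness formula itself is not reproduced above; the sketch below assumes a Gaussian kernel N(·) centered on the region centroid μi with σi = min{W, H} as stated, so that regions whose pixels cluster tightly around their center score high:

```python
import numpy as np

def compactness_saliency(region_mask, sigma=None):
    """Compactness of a region (step 5): mean Gaussian-kernel weight of the
    region's pixels around its centroid mu_i, with sigma_i = min(W, H) as in
    the patent. The exact weighting formula is an assumption."""
    h, w = region_mask.shape
    if sigma is None:
        sigma = float(min(w, h))  # sigma_i = min{W, H}
    ys, xs = np.nonzero(region_mask)
    mu_y, mu_x = ys.mean(), xs.mean()      # kernel center mu_i
    d2 = (xs - mu_x) ** 2 + (ys - mu_y) ** 2
    return float(np.exp(-d2 / (2.0 * sigma ** 2)).mean())
```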
(4) Obtaining the combined saliency map
In the present invention, the three measurements are assumed to be independent. BS, CCS and CS are first normalized to [0, 1]. In practice, BS is found to have higher discriminative power in representing the background, so the present invention uses an exponential function to emphasize the foreground saliency (FS):
SFS(ri) = exp{−α·SBS(ri)}, where α is a scaling factor.
Using only CCS and CS may highlight some background regions (as shown in Fig. 2), producing erroneous saliency assignments. To remedy this, the present invention multiplies them by FS to suppress the interference of the background:
S'CCS(ri) = SCCS(ri)·SFS(ri)
S'CS(ri) = SCS(ri)·SFS(ri)
The final saliency map is defined as S(ri) = S'CCS(ri)·S'CS(ri).
The saliency map S(ri) is normalized to the fixed range [0, 255], and each image pixel of ri is assigned the saliency value S(ri).
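The combination of the three cues described above can be sketched as follows; the value of the scaling factor α is an assumption, as the patent does not state it:

```python
import numpy as np

def combine_saliency(s_bs, s_ccs, s_cs, alpha=2.0):
    """Final per-region saliency (step 6): the foreground weight
    S_FS = exp(-alpha * S_BS) suppresses likely-background regions and
    modulates the color contrast and compactness cues; the result is
    scaled to [0, 255]. alpha = 2.0 is an assumed value."""
    def norm(a):
        a = np.asarray(a, dtype=float)
        rng = a.max() - a.min()
        return (a - a.min()) / rng if rng > 0 else np.zeros_like(a)
    # normalize each cue to [0, 1] as in the patent
    s_bs, s_ccs, s_cs = norm(s_bs), norm(s_ccs), norm(s_cs)
    s_fs = np.exp(-alpha * s_bs)            # S_FS(r_i)
    s = (s_ccs * s_fs) * (s_cs * s_fs)      # S = S'_CCS * S'_CS
    return norm(s) * 255.0                  # final map in [0, 255]
```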
Recall and precision are two metrics used to evaluate the quality of the results. Recall is the ratio of the number of relevant documents retrieved to the number of relevant documents in the corpus, measuring the completeness of a retrieval system; precision is the ratio of the number of relevant documents retrieved to the total number of documents retrieved, measuring its exactness. The two can be fused into a single metric, the F-measure. The method of the present invention was tested extensively on the MSRA database to obtain saliency maps. Plotting the precision-recall curves and F-measure of the present method against 12 other methods, as shown in Fig. 6, the present method is higher in both recall and precision, demonstrating that it is more effective than the other 12 methods.
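The precision, recall and F-measure described above can be computed for a binarized saliency map as sketched below; the weighting β² = 0.3 is the convention commonly used in saliency evaluation and is an assumption here, since the patent does not state it:

```python
def precision_recall_f(pred_mask, gt_mask, beta2=0.3):
    """Precision, recall and F-measure of a binarized saliency map against
    a ground-truth mask (flat sequences of 0/1). beta2 = 0.3 is an assumed
    convention, not stated in the patent."""
    tp = sum(1 for p, g in zip(pred_mask, gt_mask) if p and g)  # true positives
    pred_pos = sum(1 for p in pred_mask if p)
    gt_pos = sum(1 for g in gt_mask if g)
    precision = tp / pred_pos if pred_pos else 0.0
    recall = tp / gt_pos if gt_pos else 0.0
    denom = beta2 * precision + recall
    f = (1 + beta2) * precision * recall / denom if denom else 0.0
    return precision, recall, f
```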
Claims (2)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201610339693.6A CN106056579A (en) | 2016-05-20 | 2016-05-20 | Saliency detection method based on background contrast |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201610339693.6A CN106056579A (en) | 2016-05-20 | 2016-05-20 | Saliency detection method based on background contrast |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN106056579A true CN106056579A (en) | 2016-10-26 |
Family
ID=57176650
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201610339693.6A Pending CN106056579A (en) | 2016-05-20 | 2016-05-20 | Saliency detection method based on background contrast |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN106056579A (en) |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108198172A (en) * | 2017-12-28 | 2018-06-22 | 北京大学深圳研究生院 | Image significance detection method and device |
| CN117292102A (en) * | 2023-04-17 | 2023-12-26 | 国网安徽省电力有限公司电力科学研究院 | Optimization method and system for extracting seal wrinkles based on fusion features |
Citations (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101520894A (en) * | 2009-02-18 | 2009-09-02 | 上海大学 | Method for extracting significant object based on region significance |
| EP2339533A1 (en) * | 2009-11-20 | 2011-06-29 | Vestel Elektronik Sanayi ve Ticaret A.S. | Saliency based video contrast enhancement method |
| CN102129693A (en) * | 2011-03-15 | 2011-07-20 | 清华大学 | Image vision significance calculation method based on color histogram and global contrast |
| CN103020993A (en) * | 2012-11-28 | 2013-04-03 | 杭州电子科技大学 | Visual saliency detection method by fusing dual-channel color contrasts |
| CN103136766A (en) * | 2012-12-28 | 2013-06-05 | 上海交通大学 | Object significance detecting method based on color contrast and color distribution |
| CN103914834A (en) * | 2014-03-17 | 2014-07-09 | 上海交通大学 | Significant object detection method based on foreground priori and background priori |
| AU2012268887A1 (en) * | 2012-12-24 | 2014-07-10 | Canon Kabushiki Kaisha | Saliency prediction method |
- 2016-05-20: application CN201610339693.6A filed in CN (published as CN106056579A, status pending)
Patent Citations (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101520894A (en) * | 2009-02-18 | 2009-09-02 | 上海大学 | Method for extracting significant object based on region significance |
| EP2339533A1 (en) * | 2009-11-20 | 2011-06-29 | Vestel Elektronik Sanayi ve Ticaret A.S. | Saliency based video contrast enhancement method |
| CN102129693A (en) * | 2011-03-15 | 2011-07-20 | 清华大学 | Image vision significance calculation method based on color histogram and global contrast |
| CN103020993A (en) * | 2012-11-28 | 2013-04-03 | 杭州电子科技大学 | Visual saliency detection method by fusing dual-channel color contrasts |
| AU2012268887A1 (en) * | 2012-12-24 | 2014-07-10 | Canon Kabushiki Kaisha | Saliency prediction method |
| CN103136766A (en) * | 2012-12-28 | 2013-06-05 | 上海交通大学 | Object significance detecting method based on color contrast and color distribution |
| CN103914834A (en) * | 2014-03-17 | 2014-07-09 | 上海交通大学 | Significant object detection method based on foreground priori and background priori |
Non-Patent Citations (3)
| Title |
|---|
| MING-MING CHENG 等: "Global Contrast Based Salient Region Detection", 《IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE》 * |
| QUAN ZHOU 等: "SALIENT OBJECT DETECTION VIA BACKGROUND CONTRAST", 《2015 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP)》 * |
| 周全 等: "对比度融合的视觉显著性检测算法", 《信号处理》 * |
Cited By (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108198172A (en) * | 2017-12-28 | 2018-06-22 | 北京大学深圳研究生院 | Image significance detection method and device |
| CN108198172B (en) * | 2017-12-28 | 2022-01-28 | 北京大学深圳研究生院 | Image significance detection method and device |
| CN117292102A (en) * | 2023-04-17 | 2023-12-26 | 国网安徽省电力有限公司电力科学研究院 | Optimization method and system for extracting seal wrinkles based on fusion features |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN115861135B (en) | Image enhancement and recognition method applied to panoramic detection of box body | |
| CN101971190B (en) | Real-time body segmentation system | |
| CN110929593B (en) | Real-time significance pedestrian detection method based on detail discrimination | |
| CN103020965B (en) | A kind of foreground segmentation method based on significance detection | |
| CN102915544B (en) | Video image motion target extracting method based on pattern detection and color segmentation | |
| CN107977661B (en) | Region-of-interest detection method based on FCN and low-rank sparse decomposition | |
| CN109409384A (en) | Image-recognizing method, device, medium and equipment based on fine granularity image | |
| WO2018145470A1 (en) | Image detection method and device | |
| JP2017531883A (en) | Method and system for extracting main subject of image | |
| CN104408711A (en) | Multi-scale region fusion-based salient region detection method | |
| CN106228544A (en) | A kind of significance detection method propagated based on rarefaction representation and label | |
| CN108829711B (en) | Image retrieval method based on multi-feature fusion | |
| CN110598030A (en) | A Classification Method of Oracle Bone Rubbings Based on Local CNN Framework | |
| CN109034136A (en) | Image processing method, device, picture pick-up device and storage medium | |
| CN110706235A (en) | Far infrared pedestrian detection method based on two-stage cascade segmentation | |
| CN104778466B (en) | A kind of image attention method for detecting area for combining a variety of context cues | |
| Wu et al. | Salient region detection improved by principle component analysis and boundary information | |
| CN106447695A (en) | Method and device for judging same object in multi-object tracking | |
| CN113744241A (en) | Cell Image Segmentation Method Based on Improved SLIC Algorithm | |
| CN113850792A (en) | Cell classification counting method and system based on computer vision | |
| CN108647605B (en) | Human eye gaze point extraction method combining global color and local structural features | |
| CN106548195A (en) | A kind of object detection method based on modified model HOG ULBP feature operators | |
| CN106056579A (en) | Saliency detection method based on background contrast | |
| Sathiya et al. | Pattern recognition based detection recognition of traffic sign using SVM | |
| CN119274004A (en) | Identification method, device, equipment and medium based on biological image recognition model |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| C06 | Publication | ||
| PB01 | Publication | ||
| C10 | Entry into substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| RJ01 | Rejection of invention patent application after publication | ||
| RJ01 | Rejection of invention patent application after publication |
Application publication date: 20161026 |