CN108846404B - A method and device for image saliency detection based on correlation constraint graph ranking - Google Patents
- Classification: G06V10/462 — Salient features, e.g. scale invariant feature transforms [SIFT]
Abstract
The invention discloses an image saliency detection method and device based on correlation-constrained graph ranking. The method includes: performing superpixel segmentation on an image to be detected, building a closed-loop graph model, and computing the prior information of each superpixel node; extracting the color, texture, and position information of the input image; obtaining the foreground probability value of each superpixel node; taking the set of nodes whose foreground probability value is greater than a first preset threshold as the foreground seed point set ind_fore, and the set of nodes whose foreground probability value is less than a second preset threshold as the background seed point set ind_back, where the first preset threshold is greater than the second preset threshold; and computing the foreground probability S_f of each superpixel node with the correlation-constrained graph ranking model, using S_f as the final saliency estimate S_final. Applying the embodiments of the present invention makes the saliency detection result more accurate.
Description
Technical Field
The present invention relates to a saliency detection method and device, and more particularly to an image saliency detection method and device based on correlation-constrained graph ranking.
Background
With the rapid development of computer and network communication technology, image data is growing rapidly. Massive multimedia image data poses great challenges for information processing, and how to efficiently store, analyze, and process this image information has been a major research focus in recent years. Saliency detection is an important preprocessing step used in computer vision to reduce computational complexity; the task of salient object detection is to locate and segment the most salient foreground objects in a scene. The technology has a particularly wide range of applications, such as object detection and recognition, content-based image retrieval, context-aware image resizing, and video object detection. How to quickly and accurately find the salient regions of an image has not yet been formalized into a complete theory, is closely tied to the specific application, and remains a challenging topic for researchers.
Bottom-up methods are currently the common approach to visual information processing. Because they are usually based on low-level visual information, bottom-up methods can effectively detect detailed image information rather than global shape information; the detected salient regions may therefore contain only part of the object, or blend easily into the background. Many bottom-up saliency detection models have emerged in recent years. Itti et al. first proposed a neural-network-based saliency detection model that combines three feature channels at multiple scales for fast scene analysis; although it can identify some salient pixels, its results also contain many false detections. Harel et al. proposed a graph-based, bottom-up saliency detection method that obtains the final saliency result by computing dissimilarity. Chang et al. constructed a graph model that combines objectness and regional saliency to obtain better saliency estimates. Wang et al. combined a local graph structure with background priors and proposed an optimization framework for saliency detection; their experimental results perform well in most scenarios. Jiang et al. proposed an absorbing Markov chain model for image saliency detection. Tu et al. proposed a minimum-spanning-tree model for image saliency detection. Li et al. proposed a ranking model using regularized random walks to estimate saliency values. Yang et al. proposed a graph-based manifold ranking saliency detection algorithm (hereinafter the MR algorithm), which selects foreground and background seed points and then uses a manifold ranking model to compute the correlation between the remaining nodes and these seed points, yielding the final saliency values.
However, the MR algorithm proceeds in two stages. First, the correlation between the remaining nodes and the obtained background seed points is computed and inverted to obtain a preliminary saliency result; then, building on the first stage, foreground seed points are obtained, and the correlation between the remaining nodes and these foreground seed points is computed to obtain the final result. Because the two ranking passes are performed completely independently, the accuracy of the resulting image saliency detection is limited.
Summary of the Invention
The technical problem to be solved by the present invention is to provide an image saliency detection method and device based on correlation-constrained graph ranking, so as to remedy the deficiencies of existing graph-based manifold ranking models.
The present invention solves the above technical problem through the following technical solution:
An embodiment of the present invention provides an image saliency detection method based on correlation-constrained graph ranking, the method including:
A: For each image to be detected, use the simple linear iterative clustering (SLIC) algorithm to perform superpixel segmentation on the image to obtain non-overlapping superpixel blocks, then build a closed-loop graph model with each non-overlapping superpixel block as a node, and compute the center prior information of each node;
B: Extract the color, texture, and position information of the input image;
C: Use the MR algorithm to obtain the foreground probability value of each node;
D: Take the set of nodes whose foreground probability value is greater than a first preset threshold as the foreground seed point set ind_fore, and the set of nodes whose foreground probability value is less than a second preset threshold as the background seed point set ind_back, where the first preset threshold is greater than the second preset threshold;
E: Use the correlation-constrained graph ranking model to compute the foreground probability S_f of each superpixel node, and use S_f as the final saliency estimate S_final.
Optionally, step A includes:
A1: For each image to be detected, use the SLIC algorithm to segment the image into N superpixel blocks, with each superpixel serving as a node in the set V; then obtain the undirected edges corresponding to each node, and construct the undirected graph model G1 = (V, E);
A2: Using the formula c_i = exp(-((x_i - x_0)^2 + (y_i - y_0)^2) / σ_1^2), compute the center prior information of each node, where c_i is the center prior information of the i-th node; x_i is the abscissa of the center position of the i-th node; y_i is the ordinate of the center position of the i-th node; (x_0, y_0) is the coordinate of the center of the whole image; σ_1 is a balance parameter controlling the spread of the computed position distance; exp() is the exponential function with the natural base; and i is the node index.
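As a minimal, non-authoritative sketch of step A2 (the Gaussian form of the center prior is assumed from the symbol definitions, since the patent's formula image is not reproduced here; the function name, NumPy usage, and σ_1 value are illustrative):

```python
import numpy as np

def center_prior(centers, image_center, sigma1=0.25):
    """Gaussian center prior c_i for each superpixel node.

    centers: (N, 2) array of node center coordinates (x_i, y_i);
    image_center: (x_0, y_0); sigma1 controls the spread.
    Assumed form: c_i = exp(-((x_i-x_0)^2 + (y_i-y_0)^2) / sigma1^2).
    """
    centers = np.asarray(centers, dtype=float)
    x0, y0 = image_center
    d2 = (centers[:, 0] - x0) ** 2 + (centers[:, 1] - y0) ** 2
    return np.exp(-d2 / sigma1 ** 2)
```

A node at the image center receives prior 1, and the prior decays smoothly toward the image borders, which matches the role of the center prior described above.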
Optionally, the undirected edges are obtained through the following steps:
For each node l adjacent to a two-hop neighbor k of node i, use the formula dist(k, l) = ||x_k - x_l||_2 to compute its color Euclidean distance to k. If this color Euclidean distance is less than a threshold θ, connect an undirected edge between node l and node i; after finding the connected nodes, continue searching until all nodes are connected. Here dist(k, l) is the color Euclidean distance between the l-th node and the k-th node; x_k is the color value of the k-th node; x_l is the color value of the l-th node; and || · ||_2 denotes the Euclidean norm.
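The edge rules above (direct neighbors, two-hop neighbors, and the color-gated three-hop connections; the boundary ring is omitted for brevity) can be sketched as follows. This is an illustrative reading, not the patented implementation; the function name and data layout are assumptions:

```python
import numpy as np

def build_edges(adjacency, colors, theta=0.1):
    """Edge set of the graph model (boundary ring omitted).

    adjacency: dict node -> set of directly adjacent nodes (from the
    SLIC layout); colors: (N, 3) LAB color means. Rules from the text:
      1) i -- each direct neighbour j;
      2) i -- each two-hop neighbour k;
      3) i -- l, where l is adjacent to a two-hop neighbour k of i and
         dist(k, l) = ||x_k - x_l||_2 < theta.
    """
    colors = np.asarray(colors, dtype=float)
    edges = set()
    for i, nbrs in adjacency.items():
        for j in nbrs:                       # rule 1: direct neighbours
            edges.add(frozenset((i, j)))
        two_hop = set().union(*(adjacency[j] for j in nbrs)) - {i}
        for k in two_hop:                    # rule 2: two-hop neighbours
            edges.add(frozenset((i, k)))
            for l in adjacency[k] - {i}:     # rule 3: color-gated links
                if np.linalg.norm(colors[k] - colors[l]) < theta:
                    edges.add(frozenset((i, l)))
    return edges
```

On a chain 0-1-2-3 with identical colors, rule 3 links node 0 to node 3 through the intermediate node 2; with dissimilar colors the link is suppressed, which is exactly the local-smoothness enlargement the text attributes to this rule.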
Optionally, step B includes: according to the extracted color, texture, and position information of the image, compute the weight of each undirected edge using the formula w_ij^1 = exp(-||v_i - v_j|| / σ^2), and construct the first association matrix W_1 = [w_ij^1], where w_ij^1 is the weight of the undirected edge between the i-th node and the j-th node; i and j are node indices, with 0 ≤ i, j ≤ N; v_i is the feature descriptor of the i-th node, with v_i ∈ R^65 and v_i = [x_i, y_i, L_i, a_i, b_i, c_i, ω_i]; (x_i, y_i) is the center position coordinate of each superpixel node; (L_i, a_i, b_i) is the mean color, in CIE LAB color space, of all pixels contained in each superpixel node; c_i is the center prior information of the i-th node; ω_i is the LBP value of the i-th node; v_j is the feature descriptor of the j-th node; σ is a preset constant controlling the weight balance; and n is the number of superpixel blocks.
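A sketch of building the first association matrix over the 65-dimensional descriptors. The Gaussian weight form exp(-||v_i - v_j|| / σ²) is an assumption borrowed from the MR algorithm's weighting, since the patent's formula image is not reproduced; names and σ are illustrative:

```python
import numpy as np

def affinity_matrix(features, edges, sigma=0.1):
    """First association matrix W1 over node descriptors v_i.

    Assumed weight (not the patent's verbatim formula):
      w_ij = exp(-||v_i - v_j|| / sigma^2) for connected (i, j), else 0.
    """
    v = np.asarray(features, dtype=float)
    n = len(v)
    W = np.zeros((n, n))
    for e in edges:
        i, j = tuple(e)
        w = np.exp(-np.linalg.norm(v[i] - v[j]) / sigma ** 2)
        W[i, j] = W[j, i] = w   # undirected edge -> symmetric matrix
    return W
```

The matrix is symmetric with zeros for unconnected pairs, matching the undirected closed-loop graph described above.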
Optionally, step C includes:
C1: Obtain the weight of each undirected edge of each node in the MR algorithm;
C2: According to the weight of each undirected edge, construct the second association matrix of the MR algorithm, W_2 = [w_ij^2], with w_ij^2 = exp(-||c_i - c_j|| / σ^2), where w_ij^2 is the weight of the edge between the i-th superpixel and the j-th superpixel; i, j ∈ V; i is the index of the i-th node; j is the index of the j-th node; c_i is the mean color of all pixels of the i-th node in CIE LAB color space; c_j is the mean color of all pixels of the j-th node in CIE LAB color space; and σ is a constant controlling the weight balance;
C3: According to the formula D = diag{d_11, ..., d_nn}, compute the degree matrix, where D is the degree matrix; diag{} constructs a diagonal matrix; d_ii is a degree matrix element, with d_ii = Σ_j w_ij^2; and w_ij^2 is the weight of the undirected edge corresponding to the second association matrix;
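The degree matrix of step C3 follows directly from the association matrix's row sums; a minimal sketch (function name is illustrative):

```python
import numpy as np

def degree_matrix(W):
    """Degree matrix D = diag{d_11, ..., d_nn} with d_ii = sum_j w_ij."""
    return np.diag(np.asarray(W, dtype=float).sum(axis=1))
```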
C4: For each node on the boundary, mark the node's label value according to the boundary prior;
C5: Using the ranking function f: X → R^m, compute the ranking weights corresponding to the image to be detected, where f is the ranking function, with f = [f_1, ..., f_n]^T; f_1 is the ranking value of the first node; f_n is the ranking value of the n-th node; n is the number of nodes; y = [y_1, y_2, ..., y_n]^T denotes the label vector, in which seed points have label value 1 and the remaining nodes have label value 0; X is the feature matrix corresponding to the input image; R is the real number space; R^m is the m-dimensional real space; m is the space dimension; and y is the vector composed of the label values of all seed nodes;
C6: Using the ranking function formula f* = argmin_f (1/2) ( Σ_{i,j=1}^n w_ij^2 || f_i/√(d_ii) − f_j/√(d_jj) ||^2 + μ Σ_{i=1}^n || f_i − y_i ||^2 ), compute the closed-form solution, where f* is the ranking function; argmin returns the minimizing argument; Σ is the summation function; f_i is the ranking value of the i-th node; f_j is the ranking value of the j-th node; y_i is the label value of the i-th node; w_ij^2 is the weight of the undirected edge; d_ii is the element in row i, column i of the degree matrix; d_jj is the element in row j, column j of the degree matrix; and μ is a balance parameter;
C7: From the closed-form solution, obtain the non-normalized solution using the formula f* = (D − λW_2)^{-1} y, where D is the degree matrix; W_2 is the second association matrix; and S = D^{-1/2} W_2 D^{-1/2} is the normalized matrix of W_2;
C8: Using the formula f = (D − λW_2)^{-1} y, compute the correlation between each node and the background seed points on each of the four boundaries, obtaining the background probability value f of each node in the four cases, where λ is a preset parameter;
C9: Normalize the correlation values between each node and the background seed points on the four boundaries, then take the complement to obtain the saliency value of each node; multiply the saliency values obtained in the four cases pointwise to obtain the initial result S_MR, which is used as the foreground probability value of each node.
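Steps C5–C9 follow the MR algorithm of Yang et al.; the sketch below is a non-authoritative reading that assumes the non-normalized closed-form solution f = (D − λW_2)^{-1} y, min–max normalization, complement, and a pointwise product over the four boundary passes. Function name, λ, and the seed encoding are illustrative:

```python
import numpy as np

def mr_foreground_probability(W, boundary_sets, lam=0.99):
    """Stage-one MR result S_MR (assumed reading of steps C5-C9).

    W: (n, n) association matrix; boundary_sets: four index lists of
    top / bottom / left / right boundary seed nodes.
    """
    W = np.asarray(W, dtype=float)
    n = W.shape[0]
    D = np.diag(W.sum(axis=1))                       # degree matrix (C3)
    A = np.linalg.inv(D - lam * W)                   # closed-form solve (C7)
    s = np.ones(n)
    for seeds in boundary_sets:                      # four boundary passes (C8)
        y = np.zeros(n)
        y[list(seeds)] = 1.0                         # boundary seeds labeled 1
        f = A @ y                                    # background probability
        f = (f - f.min()) / (f.max() - f.min() + 1e-12)  # normalize (C9)
        s *= 1.0 - f                                 # complement, then product
    return s
```

Nodes similar to all four boundaries end up with low S_MR (background), and dissimilar interior nodes with high S_MR, which is the foreground probability the next step thresholds.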
Optionally, step D includes:
D1: Using the formulas, obtain the first preset threshold and the second preset threshold, where h_1 is the first preset threshold; h_2 is the second preset threshold; mean is the averaging function; and max is the maximum-value function;
D2: Using the formulas, obtain the foreground seed point set ind_fore and the background seed point set ind_back, where ind_fore is the set of foreground seed points; ind_back is the set of background seed points; and θ is a preset parameter.
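The seed split of step D can be sketched as follows. The threshold formulas themselves are carried by the patent's figures and are therefore taken as inputs here rather than reproduced; the function name is illustrative:

```python
import numpy as np

def split_seeds(s_mr, h1, h2):
    """Step D: ind_fore = nodes with S_MR > h1, ind_back = nodes with
    S_MR < h2; requires h1 > h2 so the two sets cannot overlap."""
    s = np.asarray(s_mr, dtype=float)
    ind_fore = np.flatnonzero(s > h1)   # confident foreground seeds
    ind_back = np.flatnonzero(s < h2)   # confident background seeds
    return ind_fore, ind_back
```

Nodes with S_MR between the two thresholds belong to neither seed set; their labels are left to be resolved by the correlation-constrained ranking of step E.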
Optionally, step E includes:
E1: Using the ranking function F: X → R^n, compute the ranking weights corresponding to the image to be detected, where F is the ranking function; F_i denotes the ranking value of the i-th node, with F = (f, g); f is the probability that each node belongs to the foreground, and g is the probability that each node belongs to the background;
E2: Using the formula, obtain the label value of each node; then, using the formula Y = (y_1, y_2) ∈ R^{m×2}, obtain the label vector of each node, where Y is the label vector of each node; y_1 is the label value for the node belonging to the foreground; y_2 is the label value for the node belonging to the background; and R^{m×2} is the space of m×2 real matrices;
E3: The constructed correlation-constrained graph ranking model formula combines a graph smoothness term, a label fitting term, a correlation term between the foreground and background probabilities, and linear feature constraints, where F* is the closed-form solution; W_ij is the element of the first association matrix corresponding to the weight of the undirected edge between the i-th node and the j-th node; F_i is the ranking value of the i-th node; F_j is the ranking value of the j-th node; d_i is a degree matrix element; f_i is the foreground probability of the i-th node; g_i is the background probability of the i-th node; w_i is the feature weight of the i-th node; x_i is the feature of the i-th node; b_i is a bias parameter; β_1 is the linear constraint coefficient on the foreground probability; and β_2 is the linear constraint coefficient on the background probability;
E4: Take the partial derivative of the ranking model of step E3 with respect to the foreground probability to obtain the saliency value.
An embodiment of the present invention further provides an image saliency detection device based on correlation-constrained graph ranking, the device including:
a first calculation module, configured to, for each image to be detected, perform superpixel segmentation on the image using the simple linear iterative clustering (SLIC) algorithm to obtain non-overlapping superpixel blocks, build a closed-loop graph model with each non-overlapping superpixel block as a node, and compute the center prior information of each node;
an input module, configured to extract the color, texture, and position information of the input image;
a second calculation module, configured to obtain the foreground probability value of each node using the MR algorithm;
a first setting module, configured to take the set of nodes whose foreground probability value is greater than a first preset threshold as the foreground seed point set ind_fore, and the set of nodes whose foreground probability value is less than a second preset threshold as the background seed point set ind_back, the first preset threshold being greater than the second preset threshold; and
a second setting module, configured to compute the foreground probability S_f and the background probability S_g of each superpixel node using the correlation-constrained graph ranking model, and use the foreground probability value S_f as the final saliency estimate S_final.
Optionally, the first calculation module is further configured to:
A1: For each image to be detected, use the SLIC algorithm to segment the image into N superpixel blocks, with each superpixel serving as a node in the set V; then obtain the undirected edges corresponding to each node, and construct the undirected graph model G1 = (V, E);
A2: Using the formula c_i = exp(-((x_i - x_0)^2 + (y_i - y_0)^2) / σ_1^2), compute the center prior information of each node, where c_i is the center prior information of the i-th node; x_i is the abscissa of the center position of the i-th node; y_i is the ordinate of the center position of the i-th node; (x_0, y_0) is the coordinate of the center of the whole image; σ_1 is a balance parameter controlling the spread of the computed position distance; exp() is the exponential function with the natural base; and i is the node index.
Optionally, the second calculation module is further configured to:
C1: Obtain the weight of each undirected edge of each node in the MR algorithm;
C2: According to the weight of each undirected edge, construct the second association matrix of the MR algorithm, W_2 = [w_ij^2], with w_ij^2 = exp(-||c_i - c_j|| / σ^2), where w_ij^2 is the weight of the edge between the i-th superpixel and the j-th superpixel; i, j ∈ V; i is the index of the i-th node; j is the index of the j-th node; c_i is the mean color of all pixels of the i-th node in CIE LAB color space; c_j is the mean color of all pixels of the j-th node in CIE LAB color space; and σ is a constant controlling the weight balance;
C3: According to the formula D = diag{d_11, ..., d_nn}, compute the degree matrix, where D is the degree matrix; diag{} constructs a diagonal matrix; d_ii is a degree matrix element, with d_ii = Σ_j w_ij^2; and w_ij^2 is the weight of the undirected edge corresponding to the association matrix;
C4: For each node on the boundary, mark the node's label value according to the boundary prior;
C5: Using the ranking function f: X → R^m, compute the ranking weights corresponding to the image to be detected, where f is the ranking function, with f = [f_1, ..., f_n]^T; f_1 is the ranking value of the first node; f_n is the ranking value of the n-th node; n is the number of nodes; y = [y_1, y_2, ..., y_n]^T denotes the label vector, in which seed points have label value 1 and the remaining nodes have label value 0; X is the feature matrix corresponding to the input image; R is the real number space; R^m is the m-dimensional real space; m is the space dimension; and y is the vector composed of the label values of all seed nodes;
C6: Using the ranking function formula f* = argmin_f (1/2) ( Σ_{i,j=1}^n w_ij^2 || f_i/√(d_ii) − f_j/√(d_jj) ||^2 + μ Σ_{i=1}^n || f_i − y_i ||^2 ), compute the closed-form solution, where f* is the ranking function; argmin returns the minimizing argument; Σ is the summation function; f_i is the ranking value of the i-th node; f_j is the ranking value of the j-th node; y_i is the label value of the i-th node; w_ij^2 is the weight of the undirected edge; d_ii is the element in row i, column i of the degree matrix; d_jj is the element in row j, column j of the degree matrix; and μ is a balance parameter;
C7: From the closed-form solution, obtain the non-normalized solution using the formula f* = (D − λW_2)^{-1} y, where D is the degree matrix; W_2 is the second association matrix; and S = D^{-1/2} W_2 D^{-1/2} is the normalized matrix of W_2;
C8: Using the formula f = (D − λW_2)^{-1} y, compute the correlation between each node and the background seed points on each of the four boundaries, obtaining the background probability value f of each node in the four cases, where λ is a preset parameter;
C9: Normalize the correlation values between each node and the background seed points on the four boundaries, then take the complement to obtain the saliency value of each node; multiply the saliency values obtained in the four cases pointwise to obtain the initial result S_MR, which is used as the foreground probability value of each node.
Optionally, the second calculation module is further configured to:
according to the extracted color, texture, and position information of the image,
compute the weight of each undirected edge using the formula w_ij^1 = exp(-||v_i - v_j|| / σ^2), and construct the first association matrix W_1 = [w_ij^1], where w_ij^1 is the weight of the undirected edge between the i-th node and the j-th node; i and j are node indices, with 0 ≤ i, j ≤ N; v_i is the feature descriptor of the i-th node, with v_i ∈ R^65 and v_i = [x_i, y_i, L_i, a_i, b_i, c_i, ω_i]; (x_i, y_i) is the center position coordinate of each superpixel node; (L_i, a_i, b_i) is the mean color, in CIE LAB color space, of all pixels contained in each superpixel node; c_i is the center prior information of the i-th node; ω_i is the LBP value of the i-th node; v_j is the feature descriptor of the j-th node; and σ is a preset constant controlling the weight balance.
Compared with the prior art, the present invention has the following advantages:
By applying the embodiments of the present invention, an association parameter between foreground cues and background cues is introduced when constructing the graph ranking function. Compared with the prior art, which does not consider the association between foreground cues and background cues when constructing the graph ranking function, more influencing factors are taken into account, making the saliency detection result more accurate. In addition, because traditional graph-construction-based approaches ignore the role of image features when computing saliency, linear learning of image features is used to constrain the final saliency value, making full use of both graph-structure information and feature information and further improving the final detection result.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of an image saliency detection method based on correlation-constrained graph ranking provided by an embodiment of the present invention;
FIG. 2 is a schematic diagram of the principle of an image saliency detection method based on correlation-constrained graph ranking provided by an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a constructed closed-loop graph model provided by an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of an image saliency detection device based on correlation-constrained graph ranking provided by an embodiment of the present invention.
Detailed Description of the Embodiments
The embodiments of the present invention are described in detail below. The embodiments are implemented on the premise of the technical solution of the present invention, and detailed implementations and specific operating procedures are given, but the protection scope of the present invention is not limited to the following embodiments.
To solve the problems of the prior art, embodiments of the present invention provide an image saliency detection method and device based on correlation-constrained graph ranking. The image saliency detection method provided by an embodiment of the present invention is introduced first.
FIG. 1 is a schematic flowchart of an image saliency detection method based on correlation-constrained graph ranking provided by an embodiment of the present invention, and FIG. 2 is a schematic diagram of its principle. As shown in FIG. 1 and FIG. 2, the method includes:
S101: For each image to be detected, use the simple linear iterative clustering (SLIC) algorithm to perform superpixel segmentation on the image to obtain non-overlapping superpixel blocks, then build a closed-loop graph model with each non-overlapping superpixel block as a node, and compute the prior information of each node.
Specifically, step S101 may include: A1: For each image to be detected, use the SLIC algorithm to segment the image into N superpixel blocks, with each superpixel serving as a node in the set V; then obtain the undirected edges corresponding to each node. Each node has edges to its directly adjacent nodes and to its two-hop (adjacent-of-adjacent) neighbors; for the nodes adjacent to each node's two-hop neighbors, the connected edges are determined by the color-distance rule, and the nodes on the four boundaries are connected to each other, thereby constructing the undirected graph model G1 = (V, E). A2: Using the formula c_i = exp(-((x_i - x_0)^2 + (y_i - y_0)^2) / σ_1^2), compute the center prior information of each node, where c_i is the center prior information of the i-th node; x_i is the abscissa of the center position of the i-th node; y_i is the ordinate of the center position of the i-th node; (x_0, y_0) is the coordinate of the center of the whole image; σ_1 is a balance parameter controlling the spread of the computed position distance; exp() is the exponential function with the natural base; and i is the node index.
FIG. 3 is a schematic structural diagram of the constructed closed-loop graph model provided by an embodiment of the present invention. As shown in FIG. 3, the construction process is: after the image to be detected is segmented into N superpixel blocks, take each superpixel block as a node, connect each superpixel to the superpixels adjacent to it within its local region to construct the edges, and finally obtain the closed-loop graph model G1.
In practical applications, the edge connections between nodes fall into the following four cases:
1. There is an edge between each node i and each of its directly adjacent nodes j.
2. There is an edge between each node i and each of its two-hop neighbours (neighbours of neighbours) k.
3. For each adjacent node l of a two-hop neighbour k of node i, compute the colour Euclidean distance dist(k, l) = ||x_k − x_l||; if this distance is smaller than a threshold θ, node l is also considered to be connected to node i by an edge. After a connected node is found, the search continues until all nodes have been connected. Here dist(k, l) is the colour Euclidean distance between the l-th and k-th nodes; x_k and x_l are the colour values of the k-th and l-th nodes; and || · || denotes the Euclidean norm.
4. The nodes on the four image boundaries are all connected to each other, forming a closed loop around the image.
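Rules 1, 2 and 4 above can be sketched as a small adjacency-matrix builder over an integer superpixel label map. This is an illustrative sketch only: rule 3 (the colour-gated extension to the neighbours of two-hop neighbours) is omitted for brevity, and the label map is assumed to come from a prior SLIC segmentation.

```python
import numpy as np

def build_edges(labels):
    """Edge rules 1, 2 and 4 of the closed-loop graph: connect each superpixel
    to its direct neighbours and to its two-hop neighbours (neighbours of
    neighbours), and connect all boundary superpixels to each other."""
    n = labels.max() + 1
    adj = np.zeros((n, n), dtype=bool)
    # rule 1: superpixels whose pixels touch horizontally or vertically
    adj[labels[:, :-1], labels[:, 1:]] = True
    adj[labels[:-1, :], labels[1:, :]] = True
    adj |= adj.T
    np.fill_diagonal(adj, False)
    # rule 2: two-hop neighbours -- any pair joined by a length-2 path
    two_hop = (adj.astype(int) @ adj.astype(int)) > 0
    adj |= two_hop
    # rule 4: superpixels on the four image borders form a closed loop
    border = np.unique(np.concatenate(
        [labels[0], labels[-1], labels[:, 0], labels[:, -1]]))
    adj[np.ix_(border, border)] = True
    np.fill_diagonal(adj, False)
    return adj
```

The result is a symmetric boolean adjacency matrix with an empty diagonal, which is the shape expected by the weight computations that follow.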
Compared with the closed-loop graph models of the prior art, the above embodiment of the present invention adds rule 3 to the constructed closed-loop graph model, which enlarges the local smoothing range of each superpixel region and better reflects the fact that superpixel regions within a certain neighbourhood share consistent features, thereby improving the accuracy of image saliency detection.
S102: Extract the colour, texture, position and other information of the input image.
Specifically, based on the extracted colour, texture and position information of the image, the weight of each undirected edge is computed as w_ij = exp(−||v_i − v_j||/σ²), building the first correlation matrix W1, where:
w_ij is the weight of the undirected edge between the i-th node and the j-th node; W1 is the first correlation matrix; i and j are node indices with 0 ≤ i, j ≤ N; v_i is the feature descriptor of the i-th node, with v_i ∈ R^65 and v_i = [x_i, y_i, L_i, a_i, b_i, c_i, ω_i]; (x_i, y_i) are the centroid coordinates of each superpixel node; (L_i, a_i, b_i) is the mean colour in CIE LAB colour space of all pixels contained in each superpixel node; c_i is the center prior of the i-th node; ω_i is the LBP value of the i-th node; v_j is the feature descriptor of the j-th node; σ is a preset constant controlling the weight balance; and n is the number of superpixel blocks.
As shown in FIG. 2, in practical applications the colour feature of the image is the CIE LAB (Commission Internationale de l'Eclairage LAB) colour mean of the pixels contained in each superpixel region, and the texture feature is the LBP (Local Binary Patterns) feature. The weight W1 of the edge between two connected superpixel nodes is computed from the difference of their combined features.
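The feature-based affinity can be sketched as below. The exponential form w_ij = exp(−||v_i − v_j||/σ²) is a hedged reconstruction of the formula lost in extraction (it mirrors the colour-only weight of step C2), and σ = 0.1 is an assumed value.

```python
import numpy as np

def affinity_matrix(features, adj, sigma=0.1):
    """First correlation matrix W1 (hedged reconstruction of the lost formula):
    w_ij = exp(-||v_i - v_j|| / sigma^2) when nodes i and j share an edge,
    and 0 otherwise.  `features` holds per-node descriptors such as the 65-d
    v_i = [x_i, y_i, L_i, a_i, b_i, c_i, omega_i]; sigma is assumed."""
    dist = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    return np.where(adj, np.exp(-dist / sigma ** 2), 0.0)
```

Identical descriptors on an edge give weight 1, dissimilar descriptors decay toward 0, and non-edges stay exactly 0, so the matrix stays as sparse as the graph.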
S103: Obtain the foreground probability value of each node with the MR (manifold ranking) algorithm.
Specifically, step S103 may include the following.
C1: Perform superpixel segmentation of the image to be detected with the SLIC algorithm, obtaining a set of n non-overlapping superpixel blocks X = {x1, …, xq, xq+1, …, xn}, where the first q superpixel blocks are the labeled query seed points and the remaining ones are unlabeled superpixel nodes. Then construct a closed-loop graph model G2 = (V, E), where V is the set of all nodes and E is the set of all undirected edges; each node has edges to its directly adjacent nodes and to its two-hop neighbours, and the nodes on the four boundaries are connected to each other.
C2: Construct the second correlation matrix W2 with entries w_ij = exp(−||c_i − c_j||/σ²), where w_ij is the weight of the edge between the i-th and j-th superpixels; i, j ∈ V, with i the index of the i-th node and j the index of the j-th node; c_i and c_j are the mean colours in CIE LAB colour space of all pixels of the i-th and j-th nodes; and σ is a constant that controls the weight balance with respect to the Euclidean distance between the colour means of the two nodes.
C3: Compute the degree matrix D = diag{d11, …, dnn}, where D is the degree matrix; diag{} builds a diagonal matrix; and d_ii = Σ_j w_ij is the sum of the weights of the undirected edges incident to node i in the correlation matrix.
C4: For each node on a boundary, assign the node's label value according to the boundary prior.
C5: Learn the ranking function f: X → R^m for the image to be detected, where f is the ranking function with f = [f1, …, fn]^T; f1 is the ranking value of the first node; fn is the ranking value of the n-th node; n is the number of nodes. Let y = [y1, y2, …, yn]^T denote the label vector, where the label value of a seed point is 1 and the label value of every other node is 0; X is the feature matrix of the input image; R is the real space; R^m is the m-dimensional real space; m is the space dimension; and y is the vector of the label values of all seed nodes.
C6: Compute the closed-form solution of the ranking objective
f* = argmin_f (1/2)(Σ_{i,j} w_ij ||f_i/√d_ii − f_j/√d_jj||² + μ Σ_i ||f_i − y_i||²),
where f* is the optimal ranking function; argmin returns the minimizing argument; Σ is the summation; f_i and f_j are the ranking values of the i-th and j-th nodes; y_i is the label value of the i-th node; w_ij is the weight of the undirected edge; d_ii and d_jj are the elements in row i, column i and row j, column j of the degree matrix; and μ is a balance parameter.
C7: From the closed-form solution, obtain the non-normalized solution, where D is the degree matrix, W2 is the second correlation matrix, and S = D^{−1/2} W2 D^{−1/2} is the normalized matrix of W2.
The closed-form solution computed in step C6 is
f* = (I − λS)^{−1} y,
where λ is a preset parameter, f* is the closed-form solution, and I is the identity matrix.
From the closed-form solution and the degree matrix, the non-normalized solution f* = (D − λW2)^{−1} y can be obtained.
C8: Using the formula f = (D − λW2)^{−1} y, compute the relevance between every node and the background seed points on each of the four boundaries, obtaining the background probability value f of each node for the four cases, where λ is a preset parameter.
In practical applications of image saliency detection, the identity matrix in the closed-form solution is usually replaced with the degree matrix, which yields the formula above. It gives, for each of the four cases, the relevance value between each node and the background seed points on the corresponding boundary, i.e. the background probability value f of each node.
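Steps C7–C8 reduce to a single linear solve. A minimal numpy sketch follows, assuming the standard unnormalized manifold-ranking solution f* = (D − λW2)^{−1} y; λ = 0.99 is an assumed value, not the patent's.

```python
import numpy as np

def manifold_rank(W2, y, lam=0.99):
    """Unnormalized closed-form ranking of steps C7-C8,
    f* = (D - lam * W2)^{-1} y, where D is the degree matrix of W2 and y
    is the query indicator vector (1 on seed nodes, 0 elsewhere).
    lam = 0.99 is an assumed value for the preset parameter."""
    D = np.diag(W2.sum(axis=1))
    return np.linalg.solve(D - lam * W2, y)
```

On a small chain graph with node 0 as the only query, the ranking scores decay monotonically with graph distance from the query, which is the behaviour the boundary-query ranking relies on.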
C9: Normalize the relevance values between each node and the background seed points on each of the four boundaries and invert them to obtain the saliency value of each node; point-multiply the saliency values obtained in the four cases to obtain the initial result S_MR, which is used as the foreground probability value.
In practical applications, the inverted value of the normalized background probability is S(i) = 1 − f̄(i), where f̄(i) is the normalized background probability of node i.
Point-multiplying the saliency values obtained in the four cases gives the initial result:
S_bq(i) = S_t(i) × S_b(i) × S_l(i) × S_r(i).
It should be emphasized that step S103 obtains the foreground probability value of each node with the MR algorithm, using the S_MR (foreground probability value) of the image to be detected produced by the first stage of the classical graph-based manifold ranking algorithm.
Each superpixel is treated as a node, each superpixel is connected to the superpixels adjacent to it within its local region to construct the edges, and a closed-loop graph model G2 is built. The LAB colour feature of each superpixel region is then extracted, and the edge weight W2 is computed from the colour-feature difference between two connected superpixel nodes. The algorithm consists of two stages. In the first stage, according to the prior information computed in step S101, the superpixels on the four boundaries of the image are selected as background query points, and the manifold ranking algorithm computes the relevance between each superpixel node and the background queries, yielding the probability that each superpixel belongs to the background; this is normalized and then inverted to obtain the initial saliency result. In the second stage, the initial result of the first stage is binarized to select the foreground query points, and the manifold ranking algorithm then computes the relevance between each superpixel node and the foreground queries, which gives the S_MR (foreground probability value) of the image to be detected.
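The two-stage pipeline above can be sketched end to end on an abstract graph. This is an illustrative sketch under assumptions: the binarization threshold (the mean of the stage-1 map) and the parameter value are assumed, and the four boundary query vectors are supplied by the caller.

```python
import numpy as np

def two_stage_mr(W, border_masks, alpha=0.99):
    """Sketch of the two-stage manifold-ranking baseline described above.
    Stage 1: rank all nodes against each boundary's background queries and
    point-multiply the four inverted, normalized maps.  Stage 2: binarize the
    stage-1 map to pick foreground queries and re-rank against them.
    `border_masks` are four 0/1 query vectors (top, bottom, left, right);
    alpha plays the role of the preset parameter lambda."""
    D = np.diag(W.sum(axis=1))
    A = np.linalg.inv(D - alpha * W)          # (D - alpha*W)^{-1}

    def rank(y):                              # normalized ranking scores
        f = A @ y
        return (f - f.min()) / (f.max() - f.min() + 1e-12)

    s = np.ones(W.shape[0])
    for mask in border_masks:
        s = s * (1.0 - rank(np.asarray(mask, dtype=float)))
    fore = (s > s.mean()).astype(float)       # simple binarization (threshold assumed)
    return rank(fore)                         # S_MR, the foreground probability
```

On a small path graph whose endpoints stand in for the image border, interior nodes end up with higher S_MR than the boundary nodes, matching the intent of the boundary-background prior.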
S104: Take the set of nodes whose foreground probability value is greater than a first preset threshold as the foreground seed point set ind_fore, and the set of nodes whose foreground probability value is smaller than a second preset threshold as the background seed point set ind_back; the first preset threshold is greater than the second preset threshold.
Specifically, step S104 may include the following.
D1: Compute the first preset threshold h1 and the second preset threshold h2, where mean is the averaging function and max is the maximum function.
D2: Obtain the foreground seed point set ind_fore and the background seed point set ind_back, where ind_fore is the foreground seed point set, ind_back is the background seed point set, and θ is a preset parameter.
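Steps D1–D2 can be sketched as below. The patent's exact threshold formulas (built from mean() and max()) did not survive extraction, so h1 and h2 here are assumed illustrative choices, not the patented ones.

```python
import numpy as np

def pick_seeds(s_mr, theta=0.5):
    """Step S104 sketch.  h1 and h2 are assumed illustrative thresholds:
    h1 interpolates between the mean and the maximum of S_MR, and h2
    scales the mean down by theta, so that h1 > h2 always holds."""
    h1 = s_mr.mean() + theta * (s_mr.max() - s_mr.mean())   # first threshold
    h2 = theta * s_mr.mean()                                # second threshold
    ind_fore = np.flatnonzero(s_mr > h1)   # foreground seed point set
    ind_back = np.flatnonzero(s_mr < h2)   # background seed point set
    return ind_fore, ind_back
```

Nodes between the two thresholds are deliberately left out of both sets, so only confident foreground and background nodes seed the next ranking stage.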
S105: Compute the foreground probability S_f of each superpixel node with the correlation constraint graph ranking model, and use the foreground probability value S_f as the final saliency estimate S_final.
Specifically, step S105 may include the following. E1: Learn the ranking function F: X → R^n for the image to be detected, where F is the ranking function; F_i is the ranking value of the i-th node; F = (f, g), with f the probability that each node belongs to the foreground and g the probability that each node belongs to the background. E2: Obtain the label value of each node, and then obtain the label vectors as Y = (y1, y2) ∈ R^{m×2}, where
Y is the label matrix of the nodes; y1 is the vector of labels for belonging to the foreground; y2 is the vector of labels for belonging to the background; and R^{m×2} is the space of m×2 real matrices.
E3: The constructed correlation constraint graph ranking model is
F* = argmin_F { (1/2) Σ_{i,j} W_ij (F_i − F_j)² + μ Σ_i d_i ||F_i − Y_i||² + λ Σ_i f_i g_i + β1 Σ_i ||w_fᵀx_i + b_f − f_i||² + β2 Σ_i ||w_gᵀx_i + b_g − g_i||² },
where F* is the closed-form solution; W_ij is the entry of the first correlation matrix giving the weight of the undirected edge between the i-th and j-th nodes; F_i and F_j are the ranking values of the i-th and j-th nodes; d_i is the degree matrix element; f_i is the foreground probability of the i-th node; g_i is the background probability of the i-th node; w_f and w_g are the feature weights and b_f and b_g the bias parameters of the linear terms; x_i is the feature vector of the i-th node; β1 is the linear constraint coefficient on the foreground probability; and β2 is the linear constraint coefficient on the background probability. E4: Take the partial derivative of the ranking model of step E3 with respect to the foreground probability to obtain the saliency value.
In the formula of step E3, the first term is a smoothing term: since the regions surrounding an image region with a given characteristic also share similar features, the ranking scores of nodes within an adjacent local region should be as similar as possible, which motivates the smoothing term. The second term is a fitting term, which makes the difference between the final computed ranking values and the given initial label values as small as possible. The third term is a constraint on f and g, whose purpose is to make the correlation between the computed f and g as small as possible. The fourth and fifth terms are linear constraints on f and g respectively, which use linear learning on image features to constrain the final saliency values.
In practical applications, step E4 may include the following.
1) Simplify the model formula of the constructed correlation constraint graph ranking to obtain the optimized formula.
2) With f fixed, the optimal solutions of b and W can be obtained in closed form, where 1 is the all-ones vector and I is the identity matrix.
3) The solution of the correlation constraint graph ranking model can then be written as the following formula:
J = Tr[FᵀA*F − μFᵀY] + λfᵀg + β1||XᵀW_f + b_f·1 − f||² + β2||XᵀW_g + b_g·1 − g||², where
A* = (1 + μ)D − W.
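The matrix A* can be checked against a summation form of the objective. Assuming the smoothness term is the unnormalized Σ_ij W_ij (F_i − F_j)² and the fitting term is degree-weighted (a hedged reconstruction, since the intermediate formulas did not survive extraction), the quadratic part expands as:

```latex
\tfrac{1}{2}\sum_{i,j} W_{ij}\,(F_i - F_j)^2 + \mu \sum_i d_i F_i^2
  \;=\; F^{\top}(D - W)F + \mu\, F^{\top} D F
  \;=\; F^{\top}\big((1+\mu)D - W\big)F \;=\; F^{\top} A^{*} F .
```

This matches the Tr[FᵀA*F] term of J above with A* = (1 + μ)D − W.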
4) Substitute F = (f, g) and Y = (y1, y2) into the above formula and simplify.
5) Differentiate the result of step 4) with respect to f.
6) Differentiate the result of step 4) with respect to g.
7) Combine the results of 5) and 6).
8) From the equations of 7), the solution can be computed as:
f* = μ(λ²I − 4(A*)² − 2A*β1B − 2β2BA* − β2β1B²)⁻¹(λy2 − 2A*y1 − β2By1).
Take f* as the final saliency estimate S_final.
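The closed form of step 8) can be evaluated directly once the matrices are in hand. A hedged numpy sketch: the matrix B produced by eliminating the linear-constraint parameters has a defining formula in the patent that was lost in this extraction, so it is passed in, and all scalar values below are illustrative assumptions.

```python
import numpy as np

def correlated_rank(W, y1, y2, B, mu=0.99, lam=1.0, beta1=0.1, beta2=0.1):
    """Direct evaluation of the closed form in step 8):
        f* = mu * (lam^2 I - 4 A*^2 - 2 A* beta1 B - 2 beta2 B A*
                   - beta2 beta1 B^2)^{-1} (lam y2 - 2 A* y1 - beta2 B y1),
    with A* = (1 + mu) D - W.  B comes from eliminating the linear-constraint
    parameters W_f, b_f (formula lost in extraction), so it is an input."""
    D = np.diag(W.sum(axis=1))
    A = (1 + mu) * D - W
    n = W.shape[0]
    M = (lam ** 2 * np.eye(n) - 4 * (A @ A)
         - 2 * beta1 * (A @ B) - 2 * beta2 * (B @ A) - beta1 * beta2 * (B @ B))
    return mu * np.linalg.solve(M, lam * y2 - 2 * (A @ y1) - beta2 * (B @ y1))
```

Using `linalg.solve` instead of forming the inverse explicitly is the usual numerically safer choice for a one-off closed-form evaluation like this.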
By applying the embodiment shown in FIG. 1 of the present invention, a correlation parameter between foreground cues and background cues is introduced when constructing the graph ranking function: while simultaneously computing the relevance of each superpixel node to the given foreground and background query points, a correlation constraint is added that reduces the correlation between the obtained foreground and background probability values. Compared with the prior art, which does not consider the association between foreground and background cues when constructing the graph ranking function, more influencing factors are taken into account, making the saliency detection result more accurate. Moreover, since traditional graph-construction-based approaches ignore the role of image features when computing saliency, linear learning on image features is used to constrain the final saliency values, so that both the graph structure information and the feature information are fully exploited, further improving the final detection result.
Corresponding to the image saliency detection method based on correlation constraint graph ranking provided by the embodiments of the present invention, an embodiment of the present invention further provides an image saliency detection device based on correlation constraint graph ranking.
FIG. 4 is a schematic structural diagram of an image saliency detection device based on correlation constraint graph ranking provided by an embodiment of the present invention. As shown in FIG. 4, the device includes:
a first computing module 401, configured to, for each image to be detected, perform superpixel segmentation of the image with the simple linear iterative clustering (SLIC) algorithm to obtain non-overlapping superpixel blocks, take each non-overlapping superpixel block as a node to build a closed-loop graph model, and then compute the prior information of each node;
an input module 402, configured to extract the colour, texture, position and other information of the input image;
a second computing module 403, configured to obtain the foreground probability value of each node with the MR algorithm;
a first setting module 404, configured to take the set of nodes whose foreground probability value is greater than a first preset threshold as the foreground seed point set ind_fore, and the set of nodes whose foreground probability value is smaller than a second preset threshold as the background seed point set ind_back, the first preset threshold being greater than the second preset threshold;
a second setting module 405, configured to compute the foreground probability S_f and the background probability S_g of each superpixel node with the correlation constraint graph ranking model, and use the foreground probability value S_f as the final saliency estimate S_final.
By applying the embodiment shown in FIG. 4 of the present invention, a correlation parameter between foreground cues and background cues is introduced when constructing the graph ranking function: while simultaneously computing the relevance of each superpixel node to the given foreground and background query points, a correlation constraint is added that reduces the correlation between the obtained foreground and background probability values. Compared with the prior art, which does not consider the association between foreground and background cues when constructing the graph ranking function, more influencing factors are taken into account, making the saliency detection result more accurate. Moreover, since traditional graph-construction-based approaches ignore the role of image features when computing saliency, linear learning on image features is used to constrain the final saliency values, so that both the graph structure information and the feature information are fully exploited, further improving the final detection result.
In a specific implementation of the embodiments of the present invention, the first computing module 401 is further configured to:
A1: for each image to be detected, segment the image into N superpixel blocks with the SLIC algorithm, each superpixel serving as a node of the set V; then obtain the undirected edges corresponding to each node, thereby constructing the undirected graph model G1 = (V, E);
A2: compute the center prior of each node as c_i = exp(−((x_i − x_0)² + (y_i − y_0)²)/σ1²), where
c_i is the center prior of the i-th node; x_i and y_i are the horizontal and vertical coordinates of the i-th node's centroid; (x_0, y_0) are the coordinates of the center of the whole image; σ1 is a balance parameter that controls the spread of the position distances; exp() is the natural exponential function; and i is the node index.
In a specific implementation of the embodiments of the present invention, the second computing module 403 is further configured to:
C1: obtain the weight of each undirected edge of every node in the MR algorithm;
C2: according to the weight of each undirected edge, construct the second correlation matrix W2 of the MR algorithm, where w_ij = exp(−||c_i − c_j||/σ²) is the weight of the edge between the i-th and j-th superpixels; i, j ∈ V, with i the index of the i-th node and j the index of the j-th node; c_i and c_j are the mean colours in CIE LAB colour space of all pixels of the i-th and j-th nodes; and σ is a constant controlling the weight balance;
C3: compute the degree matrix D = diag{d11, …, dnn}, where D is the degree matrix; diag{} builds a diagonal matrix; and d_ii = Σ_j w_ij is the sum of the weights of the undirected edges of node i in the correlation matrix;
C4: for each node on a boundary, assign the node's label value according to the boundary prior;
C5: learn the ranking function f: X → R^m for the image to be detected, where f is the ranking function with f = [f1, …, fn]^T; f1 is the ranking value of the first node; fn is the ranking value of the n-th node; n is the number of nodes; y = [y1, y2, …, yn]^T denotes the label vector, in which the label value of a seed point is 1 and the label value of every other node is 0; X is the feature matrix of the input image; R is the real space; R^m is the m-dimensional real space; m is the space dimension; and y is the vector of the label values of all seed nodes;
C6: compute the closed-form solution of the ranking objective f* = argmin_f (1/2)(Σ_{i,j} w_ij ||f_i/√d_ii − f_j/√d_jj||² + μ Σ_i ||f_i − y_i||²), where f* is the optimal ranking function; f_i and f_j are the ranking values of the i-th and j-th nodes; y_i is the label value of the i-th node; w_ij is the weight of the undirected edge; d_ii and d_jj are the corresponding degree matrix elements; and μ is a balance parameter;
C7: from the closed-form solution, obtain the non-normalized solution, where D is the degree matrix, W2 is the second correlation matrix, and S = D^{−1/2} W2 D^{−1/2} is the normalized matrix of W2;
C8: using the formula f = (D − λW2)^{−1} y, compute the relevance between every node and the background seed points on each of the four boundaries, obtaining the background probability value f of each node for the four cases, where λ is a preset parameter;
C9: normalize the relevance values between each node and the background seed points on the four boundaries and invert them to obtain the saliency value of each node; point-multiply the saliency values obtained in the four cases to obtain the initial result S_MR as the foreground probability value of the node.
In a specific implementation of the embodiments of the present invention, the second computing module 403 is further configured to:
compute the weight of each undirected edge as w_ij = exp(−||v_i − v_j||/σ²) and construct the first correlation matrix W1, where
w_ij is the weight of the undirected edge between the i-th and j-th nodes; i and j are node indices with 0 ≤ i, j ≤ N; v_i is the feature descriptor of the i-th node, with v_i ∈ R^65 and v_i = [x_i, y_i, L_i, a_i, b_i, c_i, ω_i]; (x_i, y_i) are the centroid coordinates of each superpixel node; (L_i, a_i, b_i) is the mean colour in CIE LAB colour space of all pixels contained in each superpixel node; c_i is the center prior of the i-th node; ω_i is the LBP value of the i-th node; v_j is the feature descriptor of the j-th node; and σ is a preset constant controlling the weight balance.
The above descriptions are only preferred embodiments of the present invention and are not intended to limit the present invention. Any modifications, equivalent replacements and improvements made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.
Claims (8)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201810658629.3A CN108846404B (en) | 2018-06-25 | 2018-06-25 | A method and device for image saliency detection based on correlation constraint graph ranking |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN108846404A CN108846404A (en) | 2018-11-20 |
| CN108846404B true CN108846404B (en) | 2021-10-01 |
Family
ID=64203559
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201810658629.3A Active CN108846404B (en) | 2018-06-25 | 2018-06-25 | A method and device for image saliency detection based on correlation constraint graph ranking |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN108846404B (en) |
Families Citing this family (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109993173B (en) * | 2019-03-28 | 2023-07-21 | 华南理工大学 | A Weakly Supervised Image Semantic Segmentation Method Based on Seed Growth and Boundary Constraints |
| CN110111353B (en) * | 2019-04-29 | 2020-01-24 | 河海大学 | Image saliency detection method based on Markov background and foreground absorption chain |
| CN110188763B (en) * | 2019-05-28 | 2021-04-30 | 江南大学 | Image significance detection method based on improved graph model |
| CN110287802B (en) * | 2019-05-29 | 2022-08-12 | 南京邮电大学 | Human eye gaze point prediction method based on optimized image foreground and background seeds |
| CN110298842A (en) * | 2019-06-10 | 2019-10-01 | 上海工程技术大学 | A kind of rail clip image position method based on super-pixel node sequencing |
| CN110533593B (en) * | 2019-09-27 | 2023-04-11 | 山东工商学院 | Method for quickly creating accurate trimap |
| CN117372431B (en) * | 2023-12-07 | 2024-02-20 | 青岛天仁微纳科技有限责任公司 | An image detection method for nanoimprint molds |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6038344A (en) * | 1996-07-12 | 2000-03-14 | The United States Of America As Represented By The Secretary Of The Navy | Intelligent hypersensor processing system (IHPS) |
| CN104123734A (en) * | 2014-07-22 | 2014-10-29 | 西北工业大学 | Visible light and infrared detection result integration based moving target detection method |
| CN104715251A (en) * | 2015-02-13 | 2015-06-17 | 河南科技大学 | Salient object detection method based on histogram linear fitting |
Also Published As
| Publication number | Publication date |
|---|---|
| CN108846404A (en) | 2018-11-20 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN108846404B (en) | 2018-11-20 | A method and device for image saliency detection based on correlation constraint graph ranking |
| Liu et al. | | Deep convolutional neural networks for thermal infrared object tracking |
| CN112184752A (en) | | Video target tracking method based on pyramid convolution |
| CN106127197B (en) | | Image saliency target detection method and device based on saliency label ranking |
| Xia et al. | | Loop closure detection for visual SLAM using PCANet features |
| CN109086777B (en) | | Saliency map refinement method based on global pixel features |
| CN107301644B (en) | | Natural image non-formaldehyde finishing method based on mean shift and fuzzy clustering |
| CN111709317B (en) | | Pedestrian re-identification method based on multi-scale features under a saliency model |
| CN108629783A (en) | | Image segmentation method, system and medium based on image feature density peak search |
| CN105701467A (en) | | Multi-person abnormal behavior recognition method based on human body shape features |
| CN109034035A (en) | | Pedestrian re-identification method based on saliency detection and feature fusion |
| CN110490913A (en) | | Image matching method based on feature description operator grouped by corner points and single line segments |
| CN105809672A (en) | | Simultaneous multi-target image segmentation method based on superpixels and structure constraints |
| CN104680546A (en) | | Image salient object detection method |
| CN111091129B (en) | | Image salient region extraction method based on manifold ranking of multiple color features |
| CN105046714A (en) | | Unsupervised image segmentation method based on superpixels and an object discovery mechanism |
| CN119152193B (en) | | A YOLO target detection method and system based on differentiable architecture search |
| CN113763474A (en) | | Indoor monocular depth estimation method based on scene geometric constraints |
| CN110135435B (en) | | Saliency detection method and device based on a broad learning system |
| CN109242854A (en) | | An image saliency detection method based on FLIC superpixel segmentation |
| Dalara et al. | | Entity recognition in Indian sculpture using CLAHE and machine learning |
| CN115661754B (en) | | A person re-identification method based on dimensional fusion attention |
| CN120411539B (en) | | A method and system for feature extraction from side-scan sonar images |
| CN104778683A (en) | | Multi-modal image segmentation method based on functional mapping |
| CN119723028A (en) | | A remote sensing image target detection method integrating multi-scene random fields |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |