CN105303567A - Image registration method integrating image scale invariant feature transformation and individual entropy correlation coefficient
- Publication number: CN105303567A
- Application number: CN201510672223.7A
- Authority: CN (China)
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Abstract
An image registration method that fuses the image scale-invariant feature transform (SIFT) and the individual entropy correlation coefficient (IECC), comprising the following steps: 1) extracting SIFT feature points from the reference image and the floating image; 2) matching the SIFT feature points of the reference image and the floating image; 3) building an affine transformation model from the matched SIFT feature points by minimizing the root-mean-square error; 4) transforming the floating image with the affine transformation model to obtain a coarsely registered image; 5) building a new affine transformation model and initializing its parameters, while initializing the initial search point and initial search directions of the Powell optimization algorithm; 6) using the IECC as the similarity-measure objective function and optimizing it to obtain the optimal affine transformation parameters; 7) transforming the new floating image with the optimized affine transformation to obtain the final finely registered image. The invention achieves both high accuracy and good robustness.
Description
Technical Field
The invention relates to the field of image registration, and in particular to an image registration method.
Background Art
There are many image registration methods; by the features they use, they fall into two categories: methods based on image gray-level features and methods based on image geometric features. Generally speaking, when an image contains abundant, easily detected geometric features, a geometry-based registration method can be used; when the geometric features are hard to detect or are highly similar to one another, a gray-level-based registration method can be used. However, any single method has drawbacks. For example, in geometry-based registration the extraction of many geometric features is easily affected by noise, which makes the detected features unstable; this introduces large errors when matching the feature points of the reference image with those of the floating image, and ultimately lowers the registration accuracy. Gray-level-based registration uses the information of the whole image, so it is very time-consuming and the registration is inefficient; in addition, nonlinear illumination changes degrade its robustness.
Summary of the Invention
To overcome the inability of existing image registration methods to provide both registration accuracy and robustness, the present invention provides an image registration method fusing the image scale-invariant feature transform and the individual entropy correlation coefficient that achieves both high accuracy and good robustness.
The technical solution adopted by the present invention to solve its technical problem is as follows:
An image registration method fusing the image scale-invariant feature transform and the individual entropy correlation coefficient, the registration method comprising the following steps:
1) Extract the scale-invariant feature transform (SIFT) feature points of the reference image and the floating image.
2) Match the SIFT feature points of the reference image and the floating image.
3) Build an affine transformation model from the matched SIFT feature points by minimizing the root-mean-square error.
4) Transform the floating image with the affine transformation model to obtain a coarsely registered image.
5) Take the coarsely registered image as the new floating image and the original reference image as the new reference image; build a new affine transformation model and initialize its parameters, and initialize the initial search point and initial search directions of the Powell optimization algorithm.
6) Use the IECC as the similarity-measure objective function and optimize it to obtain the optimal affine transformation parameters.
7) Transform the new floating image with the optimized optimal affine transformation to obtain the final finely registered image.
Further, in step 2), the matching process is as follows: for a key point in the reference image, find the key point in the floating image with the smallest Euclidean distance to it, with distance d1, and the key point with the second-smallest Euclidean distance, with distance d2. If the ratio d1/d2 is smaller than a set threshold, the match is considered correct; otherwise the points are considered unmatched.
Still further, in step 3), the matched feature point set is used to build the affine transformation model between the reference image and the floating image, giving N pairs of feature points:

{(xr,i, yr,i), (xf,i, yf,i)}, i = 1, 2, 3, ..., N (6)

where (xr,i, yr,i) are the coordinates of the i-th feature point in the reference image and (xf,i, yf,i) are those of the i-th feature point in the floating image.

The affine transformation model between the reference image and the floating image is expressed as:

[xr, yr, 1]^T = [[s·cosα, -s·sinα, tx], [s·sinα, s·cosα, ty], [0, 0, 1]] · [xf, yf, 1]^T (7)

The minimum mean square error (MMSE) is used to estimate the parameters of the affine transformation model [80]:

(s, α, tx, ty) = argmin Σ(i=1..N) [(xr,i - (s·cosα·xf,i - s·sinα·yf,i + tx))^2 + (yr,i - (s·sinα·xf,i + s·cosα·yf,i + ty))^2] (8)

where s is the scale factor, α is the rotation angle, tx and ty are the translations along the X and Y axes respectively, and (xr, yr, 1) and (xf, yf, 1) are the homogeneous coordinates of feature points in the reference image and the floating image.
Furthermore, in step 4), the floating image is transformed by the affine transformation and bicubic interpolation is performed to obtain the coarsely registered image.
In step 5), the coarsely registered image, obtained by transforming the floating image with the affine transformation and performing bicubic interpolation, is used as the new floating image; a new affine transformation model is built and its parameters are initialized, while the initial search point and initial search directions of the Powell optimization algorithm are initialized.
In step 6), for the given reference image R and floating image F, the marginal probability distributions p(ri) and p(fj) and the joint probability distribution p(ri, fj) of the two images are computed. The joint probability distribution of R and F is obtained by normalizing the joint histogram of the two images:

p(ri, fj) = h(ri, fj) / (Σri Σfj h(ri, fj)) (9)

where ri is the i-th gray level of the reference image R, fj is the j-th gray level of the floating image F, h(ri, fj) is the joint histogram of R and F, and bin denotes the number of gray levels (the sums run over ri, fj = 1, ..., bin).

The marginal probability density functions p(ri) and p(fj) are obtained by summing the joint probability density function p(ri, fj):

p(ri) = Σfj p(ri, fj),  p(fj) = Σri p(ri, fj) (10)

The individual entropy correlation coefficient IECC(R, F) is defined as:
The beneficial effect of the present invention is mainly that it achieves both high accuracy and good robustness.
Brief Description of the Drawings
Fig. 1 is a flow chart of the image registration method fusing the image scale-invariant feature transform and the individual entropy correlation coefficient.
Detailed Description
The present invention is further described below with reference to the accompanying drawings.
Referring to Fig. 1, an image registration method fusing the image scale-invariant feature transform and the individual entropy correlation coefficient comprises the following steps:
1) Extract the scale-invariant feature transform (SIFT) feature points of the reference image and the floating image.
2) Match the SIFT feature points of the reference image and the floating image.
3) Build an affine transformation model from the matched SIFT feature points by minimizing the root-mean-square error.
4) Transform the floating image with the affine transformation model to obtain a coarsely registered image.
5) Take the coarsely registered image as the new floating image and the original reference image as the new reference image; build a new affine transformation model and initialize its parameters, and initialize the initial search point and initial search directions of the Powell optimization algorithm.
6) Use the IECC as the similarity-measure objective function and optimize it to obtain the optimal affine transformation parameters.
7) Transform the new floating image with the optimized optimal affine transformation to obtain the final finely registered image.
In this embodiment, SIFT feature extraction: SIFT is an algorithm for detecting local feature points in an image, proposed by Lowe in 1999 [54] and developed into a mature algorithm by 2004. The SIFT algorithm has many applications, such as gesture recognition, object recognition, image stitching, video tracking, 3D modeling and motion matching. Here it is used to extract feature points, and the matched feature points of the reference image and the floating image are then used to build the affine transformation model.
Scale-space extremum detection finds the points of maximum or minimum intensity in the scale space of an image. The Gaussian kernel has been shown to be the only kernel capable of generating a scale space. The scale space of an image is obtained by convolving the input image with a Gaussian kernel:

L(x, y, σ) = G(x, y, σ) * I(x, y) (1)

where L(x, y, σ) is the scale space, I(x, y) is the input image, σ is the spatial scale factor, * denotes convolution, and G(x, y, σ) is the Gaussian kernel:

G(x, y, σ) = (1 / (2πσ^2)) exp(-(x^2 + y^2) / (2σ^2)) (2)
To obtain stable extremum points, Lowe proposed an effective solution: the difference-of-Gaussians function.

D(x, y, σ) = (G(x, y, kσ) - G(x, y, σ)) * I(x, y) = L(x, y, kσ) - L(x, y, σ) (3)

where D(x, y, σ) is the difference-of-Gaussians function and k is a constant multiplicative factor.
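As a concrete illustration of the scale space in equation (1) and the difference-of-Gaussians stack, the short sketch below builds both with NumPy/SciPy. The function names, the toy image and the base σ = 1.6 are illustrative assumptions, not taken from the patent.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def scale_space(image, sigmas):
    """L(x, y, sigma) = G(x, y, sigma) * I(x, y): one blurred copy per scale."""
    return [gaussian_filter(image.astype(float), s) for s in sigmas]

def difference_of_gaussians(levels):
    """D = L(x, y, k*sigma) - L(x, y, sigma) for adjacent scale levels."""
    return [b - a for a, b in zip(levels, levels[1:])]

# Toy example: a single bright spot on a dark background.
img = np.zeros((32, 32))
img[16, 16] = 1.0
k = 2 ** 0.5                          # constant multiplicative factor between scales
sigmas = [1.6 * k ** i for i in range(4)]
levels = scale_space(img, sigmas)
dogs = difference_of_gaussians(levels)
```

At the bright spot the larger-σ blur is flatter than the smaller-σ blur, so the difference-of-Gaussians response there is negative, which is exactly the kind of extremum the detector looks for.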
A sample point in a difference image is judged to be a maximum or minimum by comparing its gray value with those of the eight surrounding points in the same difference image and the nine points in each of the two adjacent difference images above and below it. The extremum points obtained in this way are robust to scale.
Because the difference-of-Gaussians function is sensitive to noise (low-contrast points) and also has a strong edge response, some of the local extremum points it detects are unstable and must be removed. Low-contrast points are removed by applying a Taylor expansion to the difference-of-Gaussians function and setting an appropriate threshold. Whether an extremum point is kept is decided by the ratio of the squared trace of the Hessian matrix to its determinant: if an extremum point has a large principal curvature along the edge direction but a relatively small principal curvature in the direction perpendicular to the edge, it is discarded.
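The 26-neighbor extremum test and the Hessian-based edge rejection described above can be sketched as follows. This is a minimal NumPy illustration; the helper names, the toy blob/edge responses and the curvature threshold r = 10 (a value commonly attributed to Lowe) are assumptions, not taken from the patent.

```python
import numpy as np

def is_local_extremum(dogs, s, y, x):
    """Compare a sample with its 26 neighbours: 8 in its own DoG image and
    9 in each of the DoG images directly above and below in scale.
    (The sample itself is inside the cube, so equality is allowed here.)"""
    v = dogs[s][y, x]
    cube = np.stack([d[y-1:y+2, x-1:x+2] for d in dogs[s-1:s+2]])
    return v == cube.max() or v == cube.min()

def passes_edge_test(dog, y, x, r=10.0):
    """Keep a point only if tr(H)^2 / det(H) < (r+1)^2 / r, i.e. the two
    principal curvatures are not too different (not an edge response)."""
    dxx = dog[y, x+1] + dog[y, x-1] - 2 * dog[y, x]
    dyy = dog[y+1, x] + dog[y-1, x] - 2 * dog[y, x]
    dxy = (dog[y+1, x+1] - dog[y+1, x-1] - dog[y-1, x+1] + dog[y-1, x-1]) / 4.0
    tr, det = dxx + dyy, dxx * dyy - dxy ** 2
    return det > 0 and tr ** 2 / det < (r + 1) ** 2 / r

# Toy DoG responses: a symmetric blob (corner-like) and a straight edge.
yy, xx = np.mgrid[0:9, 0:9]
blob = -np.exp(-((yy - 4) ** 2 + (xx - 4) ** 2) / 4.0)
edge = -np.exp(-((xx - 4) ** 2) / 4.0)   # curvature in one direction only
dogs = [0.5 * blob, blob, 0.5 * blob]
```

The blob's center is an extremum with balanced curvatures, so it survives both tests; the edge has zero curvature along its length, so its determinant test fails and it is discarded.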
By computing the gradients of the pixels in the neighborhood of each extremum point, the distribution of the gradient directions is used to assign an orientation to every extremum point, which keeps the points robust to rotation. The gradient magnitude and orientation of each extremum point are computed as:

m(x, y) = sqrt((L(x+1, y) - L(x-1, y))^2 + (L(x, y+1) - L(x, y-1))^2) (4)

θ(x, y) = arctan((L(x, y+1) - L(x, y-1)) / (L(x+1, y) - L(x-1, y))) (5)

where m(x, y) is the gradient magnitude at the extremum point and θ(x, y) is its orientation.
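The gradient magnitude m(x, y) and orientation θ(x, y) described above amount to central finite differences on the blurred image. A minimal NumPy sketch (the function name and test image are illustrative):

```python
import numpy as np

def gradient_mag_ori(L, y, x):
    """Central-difference gradient magnitude and orientation at one pixel
    of a blurred image L; arctan2 resolves the full angle range."""
    dx = L[y, x+1] - L[y, x-1]
    dy = L[y+1, x] - L[y-1, x]
    return np.hypot(dx, dy), np.arctan2(dy, dx)

# On a horizontal ramp the gradient points along +x with magnitude 2.
ramp = np.tile(np.arange(8, dtype=float), (8, 1))
m, theta = gradient_mag_ori(ramp, 4, 4)
```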
After the extremum points are detected, these key points must be described so that the key points in the reference image and those in the floating image can be matched one to one. Each key point contains not only its own information but also that of the pixels in its neighborhood. A histogram of the gradient orientations in the key point's neighborhood is built to determine its feature vector. Here the information of the 16*16 pixels around the key point is used to compute the descriptor, with every 4*4 pixel sub-region contributing an 8-bin orientation histogram, so each key point is a 128-dimensional feature vector. To reduce the influence of illumination changes, the descriptor is also normalized.
After the feature points have been extracted, the transformation from the floating image to the reference image must be established to register the two images. The first step is to match the feature points of the reference image and the floating image one to one; the matched feature point pairs are then used to estimate the affine transformation model from the floating image to the reference image by the minimum mean square error. Transforming the floating image with this affine transformation model yields the coarsely registered image.
Feature matching: before the affine transformation model is built, the feature points detected in the reference image must be matched with those detected in the floating image. An effective method is to compare the nearest-neighbor distance of a key point with its second-nearest distance. Specifically, for a key point in the reference image, find the key point in the floating image with the smallest Euclidean distance to it, with distance d1, and the key point with the second-smallest Euclidean distance, with distance d2. If the ratio d1/d2 is smaller than a set threshold, the match is considered correct; otherwise the points are considered unmatched. The threshold differs for different kinds of images and can be determined experimentally.
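The ratio test above can be sketched in a few lines of NumPy. The function name, the ratio 0.8 and the tiny 2-D descriptors are illustrative assumptions (real SIFT descriptors are 128-dimensional, and the threshold is tuned per image type as the text notes):

```python
import numpy as np

def ratio_test_match(desc_ref, desc_flt, ratio=0.8):
    """Nearest / second-nearest ratio test: accept a match when d1/d2 < ratio."""
    matches = []
    for i, d in enumerate(desc_ref):
        dists = np.linalg.norm(desc_flt - d, axis=1)
        j1, j2 = np.argsort(dists)[:2]
        if dists[j1] < ratio * dists[j2]:   # multiplied form avoids dividing by d2 == 0
            matches.append((i, int(j1)))
    return matches

# Two reference descriptors, three floating descriptors.
desc_ref = np.array([[1.0, 0.0], [0.0, 1.0]])
desc_flt = np.array([[1.0, 0.1], [0.0, 1.0], [10.0, 10.0]])
matches = ratio_test_match(desc_ref, desc_flt)
```

Both reference descriptors have one clearly closest floating descriptor, so both pass the ratio test; an ambiguous point whose two nearest candidates are nearly equidistant would be rejected.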
Building the transformation model: once the matching points of the reference image and the floating image have been obtained, the matched feature point sets can be used to build the affine transformation model between the two images. Suppose N pairs of feature points have been obtained:

{(xr,i, yr,i), (xf,i, yf,i)}, i = 1, 2, 3, ..., N (6)

where (xr,i, yr,i) are the feature points in the reference image and (xf,i, yf,i) are those in the floating image.
The affine transformation model between the reference image and the floating image can be expressed as:

[xr, yr, 1]^T = [[s·cosα, -s·sinα, tx], [s·sinα, s·cosα, ty], [0, 0, 1]] · [xf, yf, 1]^T (7)

where s is the scale factor, α is the rotation angle, and tx and ty are the translations along the X and Y axes respectively.

The minimum mean square error (MMSE) is used to estimate the parameters of the affine transformation model [80]:

(s, α, tx, ty) = argmin Σ(i=1..N) [(xr,i - (s·cosα·xf,i - s·sinα·yf,i + tx))^2 + (yr,i - (s·sinα·xf,i + s·cosα·yf,i + ty))^2] (8)
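The MMSE estimate of (s, α, tx, ty) is linear once rewritten in terms of a = s·cosα and b = s·sinα, so it reduces to ordinary least squares on the matched pairs. A minimal sketch (function name and synthetic points are illustrative, not from the patent):

```python
import numpy as np

def fit_similarity(pts_f, pts_r):
    """Least-squares fit of x_r = a*x_f - b*y_f + tx, y_r = b*x_f + a*y_f + ty,
    where a = s*cos(alpha), b = s*sin(alpha): linear in (a, b, tx, ty)."""
    rows, rhs = [], []
    for (xf, yf), (xr, yr) in zip(pts_f, pts_r):
        rows.append([xf, -yf, 1.0, 0.0]); rhs.append(xr)
        rows.append([yf,  xf, 0.0, 1.0]); rhs.append(yr)
    (a, b, tx, ty), *_ = np.linalg.lstsq(np.asarray(rows, float),
                                         np.asarray(rhs, float), rcond=None)
    return np.hypot(a, b), np.arctan2(b, a), tx, ty   # s, alpha, tx, ty

# Recover a known transform: scale 2, rotation 30 degrees, shift (5, -3).
rng = np.random.default_rng(0)
pts_f = rng.random((10, 2)) * 100
c, s_ = np.cos(np.pi / 6), np.sin(np.pi / 6)
R = np.array([[c, -s_], [s_, c]])
pts_r = 2.0 * pts_f @ R.T + np.array([5.0, -3.0])
s, alpha, tx, ty = fit_similarity(pts_f, pts_r)
```

With noiseless points the fit recovers the generating parameters; with real, noisy SIFT matches the same least-squares solve gives the minimum mean square error estimate.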
Once the affine transformation parameters are determined, the affine transformation is used to transform the floating image, and bicubic interpolation is performed to obtain the coarsely registered image. This registered image is then taken as the floating image, and the original reference image as the new reference image, for the subsequent fine registration.
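The warp-plus-interpolation step can be sketched with `scipy.ndimage.affine_transform`, whose cubic spline interpolation (order=3) stands in for the bicubic interpolation mentioned in the text; the helper name and toy image are illustrative. Note that `affine_transform` maps output coordinates back to input coordinates, so the forward similarity transform is inverted inside the helper.

```python
import numpy as np
from scipy.ndimage import affine_transform

def warp_similarity(image, s, alpha, tx, ty):
    """Resample a floating image under x' = s*R(alpha)*x + t with cubic
    spline interpolation (order=3, close to bicubic)."""
    c, sn = np.cos(alpha), np.sin(alpha)
    # Forward matrix written in (row, col) = (y, x) ordering.
    A = s * np.array([[c, sn], [-sn, c]])
    t = np.array([ty, tx], float)
    A_inv = np.linalg.inv(A)
    # affine_transform maps output -> input, hence the inverse and offset.
    return affine_transform(image, A_inv, offset=-A_inv @ t, order=3)

# Pure translation (tx, ty) = (3, 2): the bright pixel should move from
# (row 10, col 10) to (row 12, col 13).
img = np.zeros((32, 32))
img[10, 10] = 1.0
out = warp_similarity(img, 1.0, 0.0, 3.0, 2.0)
```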
The IECC similarity measure is the individual entropy correlation coefficient, proposed by Itou et al. in 2011. For a given reference image R and floating image F, the marginal probability distributions p(ri) and p(fj) and the joint probability distribution p(ri, fj) of the two images are computed. The joint probability distribution of R and F is obtained by normalizing the joint histogram of the two images:

p(ri, fj) = h(ri, fj) / (Σri Σfj h(ri, fj)) (9)

where ri is the i-th gray level of the reference image R, fj is the j-th gray level of the floating image F, h(ri, fj) is the joint histogram of R and F, and bin denotes the number of gray levels (the sums run over ri, fj = 1, ..., bin).

The marginal probability density functions p(ri) and p(fj) are obtained by summing the joint probability density function p(ri, fj):

p(ri) = Σfj p(ri, fj),  p(fj) = Σri p(ri, fj) (10)
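The joint distribution p(ri, fj) and its marginals can be computed directly from a normalized 2-D histogram, as sketched below with NumPy (function name, bin count and random test images are illustrative):

```python
import numpy as np

def joint_and_marginals(R, F, bins=8):
    """Normalized joint histogram p(r_i, f_j) of two equally sized images,
    with the marginals obtained by summing over one index."""
    h, _, _ = np.histogram2d(R.ravel(), F.ravel(), bins=bins)
    p_rf = h / h.sum()
    p_r = p_rf.sum(axis=1)          # p(r_i) = sum_j p(r_i, f_j)
    p_f = p_rf.sum(axis=0)          # p(f_j) = sum_i p(r_i, f_j)
    return p_rf, p_r, p_f

rng = np.random.default_rng(1)
R = rng.random((16, 16))
F = rng.random((16, 16))
p_rf, p_r, p_f = joint_and_marginals(R, F)
```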
The individual entropy correlation coefficient IECC(R, F) is defined as:
The individual entropy correlation coefficient is used as the objective function to measure whether the reference image and the floating image are registered; it reaches its maximum when the two images achieve the best registration.
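The fine-registration loop of steps 5) and 6) maximizes an entropy-based similarity over the transformation parameters with Powell's method. The sketch below is a stand-in, not the patent's implementation: since the exact IECC formula of Itou et al. is not reproduced here, it substitutes the classic symmetric entropy correlation coefficient ECC(R, F) = 2·I(R, F) / (H(R) + H(F)), which, like the IECC, peaks at best alignment; it optimizes only a 2-D translation via `scipy.optimize.minimize(method='Powell')`, and all names are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift as nd_shift
from scipy.optimize import minimize

def entropy_cc(R, F, bins=16):
    """Entropy correlation coefficient ECC = 2*I(R, F) / (H(R) + H(F)),
    a stand-in objective for the IECC; maximal at best alignment."""
    h, _, _ = np.histogram2d(R.ravel(), F.ravel(), bins=bins)
    p = h / h.sum()
    p_r, p_f = p.sum(axis=1), p.sum(axis=0)
    def H(q):
        q = q[q > 0]
        return -np.sum(q * np.log(q))
    mi = H(p_r) + H(p_f) - H(p)      # mutual information I(R, F)
    return 2.0 * mi / (H(p_r) + H(p_f))

# Toy fine registration: the floating image is the reference shifted 3 px
# along x; Powell searches the 2-D translation that maximizes the metric.
rng = np.random.default_rng(2)
ref = gaussian_filter(rng.random((64, 64)), 2.0)   # smooth synthetic image
flt = nd_shift(ref, (0.0, 3.0), order=1)

def objective(t):
    return -entropy_cc(ref, nd_shift(flt, (t[0], t[1]), order=1))

res = minimize(objective, x0=[0.0, 0.0], method='Powell')
```

A full implementation would parameterize the similarity transform (s, α, tx, ty) instead of a pure translation and would plug in the actual IECC; the structure of the Powell search stays the same.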
Claims (6)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201510672223.7A CN105303567A (en) | 2015-10-16 | 2015-10-16 | Image registration method integrating image scale invariant feature transformation and individual entropy correlation coefficient |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN105303567A true CN105303567A (en) | 2016-02-03 |
Family
ID=55200789
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2007072451A2 (en) * | 2005-12-22 | 2007-06-28 | Philips Intellectual Property & Standards Gmbh | Adaptive point-based elastic image registration |
| CN102194133A (en) * | 2011-07-05 | 2011-09-21 | 北京航空航天大学 | Data-clustering-based adaptive image SIFT (Scale Invariant Feature Transform) feature matching method |
Non-Patent Citations (1)
| Title |
|---|
| GAN LIU ET AL: "Combining SIFT and Individual Entropy Correlation Coefficient for Image Registration", 《6TH CHINESE CONFERENCE ON PATTERN RECOGNITION, CCPR 2014》 * |
Cited By (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN106023187A (en) * | 2016-05-17 | 2016-10-12 | 西北工业大学 | Image registration method based on SIFT feature and angle relative distance |
| CN106651756A (en) * | 2016-11-16 | 2017-05-10 | 浙江工业大学 | Image registration method based on SIFT and authentication mechanism |
| CN106651756B (en) * | 2016-11-16 | 2020-05-01 | 浙江工业大学 | An Image Registration Method Based on SIFT and Verification Mechanism |
| CN106780309A (en) * | 2016-12-21 | 2017-05-31 | 中国航空工业集团公司雷华电子技术研究所 | A kind of diameter radar image joining method |
| CN106971404A (en) * | 2017-03-20 | 2017-07-21 | 西北工业大学 | A kind of robust SURF unmanned planes Color Remote Sensing Image method for registering |
| CN108416735A (en) * | 2018-03-19 | 2018-08-17 | 深圳市深图医学影像设备有限公司 | The joining method and device of digital X-ray image based on geometric properties |
| CN108416735B (en) * | 2018-03-19 | 2022-02-01 | 深圳市深图医学影像设备有限公司 | Method and device for splicing digital X-ray images based on geometric features |
| CN114216485A (en) * | 2022-02-23 | 2022-03-22 | 广州骏天科技有限公司 | Image calibration method for aerial surveying and mapping of unmanned aerial vehicle |
| CN114216485B (en) * | 2022-02-23 | 2022-04-29 | 广州骏天科技有限公司 | Image calibration method for aerial surveying and mapping of unmanned aerial vehicle |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | C06 | Publication | |
| | PB01 | Publication | |
| | C10 | Entry into substantive examination | |
| | SE01 | Entry into force of request for substantive examination | |
| | RJ01 | Rejection of invention patent application after publication | Application publication date: 20160203 |