
CN110717936A - Image stitching method based on camera attitude estimation - Google Patents

Image stitching method based on camera attitude estimation

Info

Publication number
CN110717936A
CN110717936A (application CN201910978286.3A)
Authority
CN
China
Prior art keywords
image
matrix
camera
updated
focal length
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910978286.3A
Other languages
Chinese (zh)
Other versions
CN110717936B (en)
Inventor
Zhang Zhihao (张智浩)
Yang Xianqiang (杨宪强)
Gao Huijun (高会军)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Institute of Technology Shenzhen
Original Assignee
Harbin Institute of Technology Shenzhen
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Institute of Technology Shenzhen filed Critical Harbin Institute of Technology Shenzhen
Priority to CN201910978286.3A
Publication of CN110717936A
Application granted
Publication of CN110717936B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F 17/10 Complex mathematical operations
    • G06F 17/16 Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2200/00 Indexing scheme for image data processing or generation, in general
    • G06T 2200/32 Indexing scheme for image data processing or generation, in general involving image mosaicing
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Analysis (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Algebra (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

An image stitching method based on camera pose estimation, belonging to the technical field of computer vision and image processing. The invention addresses the severe distortion and low efficiency of existing image stitching methods. It proceeds through the following steps: 1. extract and match feature points between the images; 2. classify and screen the matched feature point pairs; 3. estimate the camera focal length, translation matrix, and rotation matrix; 4. perform a compromise update of the camera poses; 5. compute the normal vector of the plane on which each class of point pairs lies; 6. transform and stitch the images. The invention can be applied to stitching images of the same scene taken from different viewpoints.

Description

An Image Stitching Method Based on Camera Pose Estimation

Technical Field

The invention belongs to the technical field of computer vision and image processing, and in particular relates to an image stitching method based on camera pose estimation.

Background

Feature-based image stitching solves for the transformation between images by establishing correspondences between their feature points. Compared with intensity-based stitching, it requires less computation and yields more stable results, and is the commonly used mainstream approach. However, the methods in most current stitching software estimate a single global transformation between images, which gives good results only when the camera motion is a pure rotation or the scene lies in one plane; in the more general real-world case, misalignment artifacts such as ghosting appear. In recent years, local transformation models and mesh optimization have been proposed to address this, but they in turn suffer from severe distortion in the stitched result and low stitching efficiency. It is therefore important to develop an efficient stitching method that produces more accurate and more natural panoramas.

Summary of the Invention

The purpose of the present invention is to solve the problems of severe distortion and low efficiency in existing image stitching methods by proposing an image stitching method based on camera pose estimation.

The technical solution adopted by the present invention to solve the above problems is an image stitching method based on camera pose estimation, comprising the following steps:

Step 1: Capture two images I_p and I_q of the same scene with a camera at different viewing angles, and extract feature points from I_p and I_q respectively;

Then match the feature points extracted from I_p and I_q to obtain the initial set of feature point pairs

S = {(p_i, q_i) | i = 1, ..., N}

where p_i is a feature point of image I_p, q_i is a feature point of image I_q, and N is the number of feature point pairs in S;

Step 2: Screen the feature point pairs contained in S and assign each retained pair a category. The set of screened feature point pairs is

S_1 = {(p_{i'}, q_{i'}, c_{i'}) | i' = 1, ..., N_1}

where N_1 is the number of screened pairs and c_{i'} is the category number of the i'-th pair, c_{i'} = 1, ..., n, with n the number of categories;

Then obtain the homography transformation matrix between the feature point pairs within each category of S_1, where the homography for the k-th category is H_k, k = 1, ..., n;

Step 3: From each homography H_k obtained in Step 2, estimate the camera focal length f_k corresponding to that category of feature point pairs, and select an initial camera focal length f_0 from the values f_k;

Step 4: Estimate the camera focal length f and the essential matrix E from the set S_1 obtained in Step 2 and the initial focal length f_0 obtained in Step 3;

Decompose the essential matrix E to obtain the rotation matrix R and translation matrix t between the cameras at the two viewing angles;

Step 5: Let the rotation matrix R_p of the camera relative to the world coordinate system at the viewing angle of image I_p be the identity matrix, and the translation matrix t_p be the zero vector; then at the viewing angle of image I_q the rotation matrix is R_q = R R_p = R and the translation matrix is t_q = R t_p + t = t;

Convert the rotation matrices R_p and R_q into rotation vectors r_p and r_q, and compute their average

r̄ = (r_p + r_q) / 2

Then convert the average r̄ back into a rotation matrix R̄; the matrix R̄ is the compromise rotation matrix;

The compromise translation matrix t̄ is likewise the average of the two translations:

t̄ = (t_p + t_q) / 2

Update the rotation matrices R_p and R_q to R̂_p and R̂_q, and the translation matrices t_p and t_q to t̂_p and t̂_q (the explicit update formulas are rendered as images in the original document; see Embodiment 6), obtaining the updated camera poses;

Step 6: From the homography H_k of Step 2, the inter-camera rotation matrix R and translation matrix t of Step 4, and the updated rotation matrices of Step 5, compute the normal vector of the plane on which each category of feature point pairs lies;

Transform each normal vector into the updated camera pose obtained in Step 5 to obtain the updated normal vector of the plane of each category;

Step 7: Using the updated camera poses obtained in Step 5 and the updated normal vectors computed in Step 6, transform images I_p and I_q into the transformed images I'_p and I'_q;

Then stitch and fuse I'_p and I'_q: in the region where I'_p and I'_q overlap, compute the mean pixel value at each pixel of the overlapping region and take it as the value of the corresponding pixel; in regions where I'_p and I'_q do not overlap, keep the original pixel values. The result is the stitched image I_pq.

The beneficial effects of the invention are as follows. The invention proposes an image stitching method based on camera pose estimation: feature points are first extracted and matched between two images of the same scene taken from different viewpoints, and the resulting point pairs are classified and screened; the screened pairs are then used to estimate the camera focal length, translation matrix, and rotation matrix, and the camera poses are updated by a compromise; finally, the images are transformed and stitched using the normal vector of the plane of each class of point pairs. Compared with existing stitching methods, the overlapping regions are aligned more accurately, i.e. there is less ghosting; the stitched panorama exhibits less distortion, and in particular planar regions of the scene stay unbent; and the computational cost is lower, significantly improving stitching efficiency.

Brief Description of the Drawings

Fig. 1 is a flowchart of the method of the invention;

Fig. 2 is the first image to be stitched;

Fig. 3 is the second image to be stitched;

Fig. 4 shows the result of stitching the images of Fig. 2 and Fig. 3 with the invention.

Detailed Description of the Embodiments

Embodiment 1: This embodiment is described with reference to Fig. 1. An image stitching method based on camera pose estimation according to this embodiment comprises the following steps:

Step 1: Capture two images I_p and I_q of the same scene with a camera at different viewing angles, and extract feature points from I_p and I_q respectively using the SIFT (Scale-Invariant Feature Transform) method;

Then match the feature points extracted from I_p and I_q using FLANN (Fast Library for Approximate Nearest Neighbors), obtaining the initial set of feature point pairs

S = {(p_i, q_i) | i = 1, ..., N}

where p_i is a feature point of image I_p, q_i is a feature point of image I_q, and N is the number of feature point pairs in S;

Images I_p and I_q are two images of the same scene taken from different viewing angles;
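For illustration, the following is a minimal Python/OpenCV sketch of Step 1; the FLANN index parameters and the ratio-test filter are assumptions of the sketch, since the patent only names SIFT and FLANN without fixing their settings.

```python
import cv2

def extract_and_match(img_p, img_q, ratio=0.75):
    # Step 1: SIFT feature extraction on both images
    sift = cv2.SIFT_create()
    kp_p, des_p = sift.detectAndCompute(img_p, None)
    kp_q, des_q = sift.detectAndCompute(img_q, None)
    # FLANN matching with a KD-tree index (typical for SIFT descriptors)
    flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))
    knn = flann.knnMatch(des_p, des_q, k=2)
    # Lowe's ratio test to keep distinctive matches (an assumed filter)
    good = [m for m, n in (pair for pair in knn if len(pair) == 2)
            if m.distance < ratio * n.distance]
    # Initial set S = {(p_i, q_i) | i = 1, ..., N}
    return [(kp_p[m.queryIdx].pt, kp_q[m.trainIdx].pt) for m in good]
```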

Step 2: Screen the feature point pairs contained in S using the RANSAC (Random Sample Consensus) method and assign each retained pair a category. The set of screened feature point pairs is S_1 = {(p_{i'}, q_{i'}, c_{i'}) | i' = 1, ..., N_1}, where N_1 is the number of screened pairs and c_{i'} is the category number of the i'-th pair, c_{i'} = 1, ..., n, with n the number of categories;

Then obtain the homography transformation matrix between the feature point pairs within each category of S_1, where the homography for the k-th category is H_k, k = 1, ..., n;

Step 3: From each homography H_k obtained in Step 2, estimate the camera focal length f_k corresponding to that category of feature point pairs, and select an initial camera focal length f_0 from the values f_k;

Step 4: Estimate the camera focal length f and the essential matrix E from the set S_1 obtained in Step 2 and the initial focal length f_0 obtained in Step 3;

Decompose the essential matrix E by SVD (Singular Value Decomposition) to obtain the rotation matrix R and translation matrix t between the cameras at the two viewing angles;

Step 5: Let the rotation matrix R_p of the camera relative to the world coordinate system at the viewing angle of image I_p be the identity matrix, and the translation matrix t_p be the zero vector; then at the viewing angle of image I_q the rotation matrix is R_q = R R_p = R and the translation matrix is t_q = R t_p + t = t;

Using the Rodrigues formula, convert the rotation matrices R_p and R_q into rotation vectors r_p and r_q, and compute their average r̄ = (r_p + r_q) / 2; then use the Rodrigues formula again to convert r̄ into a rotation matrix R̄, the compromise rotation matrix;

The compromise translation matrix is t̄ = (t_p + t_q) / 2;

Update the rotation matrices R_p and R_q to R̂_p and R̂_q, and the translation matrices t_p and t_q to t̂_p and t̂_q (the explicit update formulas are rendered as images in the original document; see Embodiment 6), obtaining the updated camera poses;
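A sketch of this compromise update using OpenCV's Rodrigues conversion is shown below. The patent's explicit update equations are lost to the image rendering; the symmetric form used here, which re-expresses each camera relative to the halfway pose, is an assumption consistent with the text, not the patent's exact equations.

```python
import cv2
import numpy as np

def compromise_pose(R, t):
    Rp, tp = np.eye(3), np.zeros((3, 1))   # pose at view I_p: identity, zero
    Rq, tq = R, t.reshape(3, 1)            # pose at view I_q: R, t
    rp, _ = cv2.Rodrigues(Rp)              # rotation matrix -> rotation vector
    rq, _ = cv2.Rodrigues(Rq)
    r_bar = 0.5 * (rp + rq)                # average rotation vector
    R_bar, _ = cv2.Rodrigues(r_bar)        # compromise rotation matrix
    t_bar = 0.5 * (tp + tq)                # compromise translation (= t/2 here)
    # Assumed update: re-express both cameras relative to the compromise frame.
    Rp_new, Rq_new = Rp @ R_bar.T, Rq @ R_bar.T
    tp_new, tq_new = tp - Rp_new @ t_bar, tq - Rq_new @ t_bar
    return (Rp_new, tp_new), (Rq_new, tq_new), (R_bar, t_bar)
```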

Step 6: From the homography H_k of Step 2, the inter-camera rotation matrix R and translation matrix t of Step 4, and the updated rotation matrices of Step 5, compute the normal vector of the plane on which each category of feature point pairs lies;

Transform each normal vector into the updated camera pose obtained in Step 5 to obtain the updated normal vector of the plane of each category;

Step 7: Using the updated camera poses obtained in Step 5 and the updated normal vectors computed in Step 6, transform images I_p and I_q into the transformed images I'_p and I'_q;

Then stitch and fuse I'_p and I'_q: in the region where I'_p and I'_q overlap, compute the mean pixel value at each pixel of the overlapping region and take it as the value of the corresponding pixel; in regions where I'_p and I'_q do not overlap, keep the original pixel values. The result is the stitched image I_pq.
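A minimal sketch of this fusion rule follows: pixels are averaged in the overlap and kept unchanged elsewhere. The validity masks marking where each warped image has content are an implementation assumption.

```python
import numpy as np

def fuse(Ip_w, Iq_w, mask_p, mask_q):
    out = np.zeros_like(Ip_w, dtype=np.float32)
    both = mask_p & mask_q                 # overlap region: average the two images
    out[both] = 0.5 * (Ip_w[both].astype(np.float32) + Iq_w[both])
    out[mask_p & ~mask_q] = Ip_w[mask_p & ~mask_q]   # I'_p only: keep original
    out[mask_q & ~mask_p] = Iq_w[mask_q & ~mask_p]   # I'_q only: keep original
    return out.astype(Ip_w.dtype)
```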

During feature extraction and matching, the combination of the SIFT feature extractor and the FLANN fast nearest-neighbor search library establishes feature correspondences quickly.

This embodiment applies to the common real-world case in which the camera pose change between shots involves both rotation and translation.

Embodiment 2: This embodiment differs from Embodiment 1 in that, in Step 2, the feature point pairs contained in S are screened and the categories of the screened pairs are obtained by the following specific process:

Step 2-1: Let S' be the set of feature point pairs remaining after screening, initialized to the initial set S of Step 1; initialize the set S_1 of screened pairs to the empty set, and initialize the number n of categories of pairs contained in S_1 to 0;

Step 2-2: Extract inliers from S' using the RANSAC method; the set of extracted inlier feature point pairs is s_{n+1}, and the homography transformation matrix between the inlier pairs is H_{n+1};

Remove the extracted inlier pairs from S' to obtain the set of remaining feature point pairs, i.e. the updated S';

The model fitted by RANSAC is a homography transformation, with an inlier distance threshold of 3 and 500 iterations;

Step 2-3: If the number of feature point pairs in s_{n+1} is greater than or equal to 15, set the category number of the pairs contained in s_{n+1} to n+1, add the pairs contained in s_{n+1} to S_1 to obtain the updated S_1, empty s_{n+1}, and increase the category count n by 1;

Step 2-4: Repeat Steps 2-2 and 2-3 on the updated S' until the number of pairs contained in s_{n+1} falls below 15, yielding the feature point pairs contained in S_1 together with their category numbers.
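A minimal sketch of this iterative screening, assuming OpenCV's RANSAC homography fit stands in for the RANSAC step (threshold 3, 500 iterations, stop below 15 inliers, per Steps 2-2 to 2-4):

```python
import cv2
import numpy as np

def classify_pairs(S, thresh=3.0, iters=500, min_class=15):
    S1, Hs = [], []           # screened pairs with labels; per-class homographies
    remaining = list(S)       # S' initialized to S (Step 2-1)
    while len(remaining) >= 4:                 # 4 pairs needed to fit a homography
        src = np.float32([p for p, q in remaining])
        dst = np.float32([q for p, q in remaining])
        H, mask = cv2.findHomography(src, dst, cv2.RANSAC,
                                     ransacReprojThreshold=thresh, maxIters=iters)
        if H is None:
            break
        inlier = mask.ravel().astype(bool)
        if inlier.sum() < min_class:           # stopping rule of Step 2-4
            break
        Hs.append(H)                           # H_{n+1} for the new class
        label = len(Hs)                        # class number c = n + 1
        S1 += [(p, q, label) for (p, q), keep in zip(remaining, inlier) if keep]
        remaining = [pq for pq, keep in zip(remaining, inlier) if not keep]
    return S1, Hs
```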

Embodiment 3: This embodiment differs from Embodiment 2 in that the specific process of Step 3 is:

The camera focal length f_k corresponding to the feature point pairs of the k-th category is computed from the elements of H_k (the defining expressions are rendered as images in the original document and are not reproduced here). The median of the per-category focal length values f_1, f_2, ..., f_k, ..., f_n is taken as the initial camera focal length f_0.
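Because the patent's own focal-length expressions are lost to the image rendering, the sketch below substitutes the classic focal-from-homography estimate (Szeliski and Shum, as used in OpenCV's stitching autocalibration) as an assumed stand-in, not the patent's exact formula. It expects the homography expressed in coordinates centered on the principal point, and the constraint-selection rule is a simplification.

```python
import numpy as np

def focal_from_homography(H):
    # Focal estimate for the destination image of H (rotation-only model)
    h = (H / H[2, 2]).ravel()     # row-major h[0..8], normalized so h[8] = 1
    d1 = h[6] * h[7]
    d2 = (h[7] - h[6]) * (h[7] + h[6])
    v1 = -(h[0] * h[1] + h[3] * h[4]) / d1 if d1 != 0 else -1.0
    v2 = (h[0] ** 2 + h[3] ** 2 - h[1] ** 2 - h[4] ** 2) / d2 if d2 != 0 else -1.0
    if v1 > 0 and v2 > 0:
        v = v1 if abs(d1) > abs(d2) else v2   # prefer the better-conditioned constraint
    elif v1 > 0 or v2 > 0:
        v = v1 if v1 > 0 else v2
    else:
        return None
    return float(np.sqrt(v))

def initial_focal(Hs):
    # Step 3: f_0 is the median of the per-class focal estimates f_1, ..., f_n
    fs = [f for f in (focal_from_homography(H) for H in Hs) if f is not None]
    return float(np.median(fs)) if fs else None
```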

Embodiment 4: This embodiment differs from Embodiment 3 in that, in Step 4, the camera focal length f and the essential matrix E are estimated from the set S_1 of Step 2 and the initial focal length f_0 of Step 3 by the following specific process:

Step 4-1: Sample the range [0.5 f_0, 2 f_0] around the initial focal length f_0 once every 0.01 f_0, obtaining the set of candidate focal lengths F = {f_m = 0.5 f_0 + m × 0.01 f_0, m = 0, 1, ..., 150}, where f_m is the focal length value of the m-th sample;

Step 4-2: For each candidate focal length f_m in F, estimate an essential matrix E_m from S_1 using the five-point algorithm together with the RANSAC algorithm, and obtain the number of inliers n_m corresponding to f_m;

The epipolar constraint is

(x'_{i'}, y'_{i'}, 1) (K^{-1})^T E_m K^{-1} (x_{i'}, y_{i'}, 1)^T = 0

where E_m is the essential matrix corresponding to f_m and K is the intermediate variable (intrinsic) matrix

K = [[f_m, 0, c_x], [0, f_m, c_y], [0, 0, 1]]

with c_x and c_y half the width and height of image I_p respectively (images I_p and I_q have the same width and height); the superscript T denotes the matrix transpose and the superscript -1 the matrix inverse. A Cartesian coordinate system xOy is established with the bottom-left corner of I_p as origin O, the width of I_p as the x-axis, and the height of I_p as the y-axis, and (x_{i'}, y_{i'}) are the coordinates of the feature point p_{i'} in xOy. Likewise, a Cartesian coordinate system x'O'y' is established on I_q with origin O' at its bottom-left corner, the width of I_q as the x'-axis, and the height of I_q as the y'-axis, and (x'_{i'}, y'_{i'}) are the coordinates of the feature point q_{i'} in x'O'y';

Step 4-3: Select the candidate focal length f_m with the largest number of inliers as the camera focal length f, and take the corresponding E_m as the camera's essential matrix E.
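A sketch of Steps 4-1 to 4-3 using OpenCV's five-point essential-matrix estimator with RANSAC; the 3-pixel threshold follows Embodiment 5, while the use of recoverPose for the SVD-based decomposition is an assumption (it additionally resolves the four-fold ambiguity by a cheirality check).

```python
import cv2
import numpy as np

def estimate_focal_and_pose(pts_p, pts_q, f0, cx, cy):
    best_f, best_E, best_inl = None, None, -1
    for m in range(151):                       # F = {f_m = 0.5 f0 + m * 0.01 f0}
        fm = 0.5 * f0 + m * 0.01 * f0
        K = np.array([[fm, 0, cx], [0, fm, cy], [0, 0, 1.0]])
        E, mask = cv2.findEssentialMat(pts_p, pts_q, K,
                                       method=cv2.RANSAC, threshold=3.0)
        if E is None or E.shape != (3, 3):
            continue
        n_inl = int(mask.sum())                # n_m, the inlier count for f_m
        if n_inl > best_inl:
            best_f, best_E, best_inl = fm, E, n_inl
    K = np.array([[best_f, 0, cx], [0, best_f, cy], [0, 0, 1.0]])
    _, R, t, _ = cv2.recoverPose(best_E, pts_p, pts_q, K)   # decompose E into R, t
    return best_f, best_E, R, t
```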

Embodiment 5: This embodiment differs from Embodiment 4 in that, in Step 4-2, the number of inliers n_m corresponding to f_m is obtained as follows:

Traverse all feature point pairs in S_1; if, for a pair (p_{i'}, q_{i'}) contained in S_1, the straight-line distance from the point q_{i'} to the epipolar line is less than 3 pixel values, then (p_{i'}, q_{i'}) is an inlier of f_m; otherwise, (p_{i'}, q_{i'}) is not an inlier of f_m;

The epipolar line equation is

(x, y, 1) (K^{-1})^T E_m K^{-1} (x_{i'}, y_{i'}, 1)^T = 0

where x and y are the variables of the epipolar line equation.
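A minimal sketch of this inlier test: the epipolar line of p_{i'} in image I_q is l = (K^{-1})^T E_m K^{-1} p̃_{i'}, and the pair counts as an inlier when q_{i'} lies within 3 pixels of it.

```python
import numpy as np

def count_inliers(pairs, Em, K, thresh=3.0):
    Kinv = np.linalg.inv(K)
    F = Kinv.T @ Em @ Kinv                 # fundamental-matrix form of the constraint
    n_m = 0
    for (px, py), (qx, qy) in pairs:
        a, b, c = F @ np.array([px, py, 1.0])    # epipolar line in image I_q
        if abs(a * qx + b * qy + c) / np.hypot(a, b) < thresh:
            n_m += 1
    return n_m
```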

Embodiment 6: This embodiment differs from Embodiment 5 in the explicit formulas for the pose update of Step 5 (the four formulas are rendered as images in the original document and are not reproduced here), where R̂_p denotes the updated rotation matrix corresponding to R_p, R̂_q the updated rotation matrix corresponding to R_q, t̂_p the updated translation matrix corresponding to t_p, and t̂_q the updated translation matrix corresponding to t_q.

Embodiment 7: This embodiment differs from Embodiment 6 in that the specific process of Step 6 is:

Let n_k be the normal vector of the plane on which the feature point pairs of the k-th category lie, k = 1, ..., n. It satisfies

H_k = K (R + t n_k^T) K^{-1}

where K is the intermediate variable matrix K = [[f, 0, c_x], [0, f, c_y], [0, 0, 1]].

Transform the normal vector n_k into the updated camera pose obtained in Step 5 to obtain the updated normal vector n̂_k of the plane of the k-th category (the transformation formula is rendered as an image in the original document);

The updated normal vectors of the planes of the feature point pairs of the other categories are obtained in the same way.
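A sketch of the normal-vector recovery, solving H_k = K (R + t n_k^T) K^{-1} for n_k by a rank-1 least-squares fit; normalizing the homography by its middle singular value (so that a Euclidean homography has unit scale) is an assumption of the sketch.

```python
import numpy as np

def plane_normal(Hk, K, R, t):
    A = np.linalg.inv(K) @ Hk @ K                    # back to the Euclidean homography
    A = A / np.linalg.svd(A, compute_uv=False)[1]    # fix the arbitrary scale
    M = A - R                                        # M should equal t n_k^T (rank 1)
    t = t.reshape(3, 1)
    nk = (t.T @ M) / float(t.T @ t)                  # least-squares fit of n_k^T
    return nk.ravel()
```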

Embodiment 8: This embodiment differs from Embodiment 7 in that, in Step 7, images I_p and I_q are transformed into I'_p and I'_q according to the updated camera poses of Step 5 and the updated normal vectors of Step 6 by the following specific process:

Step 7-1: Select grid points in image I_p with an interval of 40 pixels between adjacent grid points, obtaining the grid point set V = {p'_{i''}, i'' = 1, ..., N_2}, where N_2 is the number of grid points in V and p'_{i''} is the i''-th grid point in V;

Step 7-2: For each grid point p'_{i''} in V, compute the Euclidean distance from p'_{i''} to each feature point of image I_p in the set S_1, i.e. to each point of P_1 = {p_{i'}, i' = 1, ..., N_1}; select from P_1 the 5 points closest to p'_{i''}, compute the mean of the normal vectors of the planes of the selected 5 points, and take the mean as the normal vector n̄_{i''} of the grid point p'_{i''};

Step 7-3: From the normal vector n̄_{i''} of grid point p'_{i''}, compute the transformation matrix at p'_{i''} (the formula is rendered as an image in the original document);

Step 7-4: Let pixel p of image I_p map to pixel p' of the transformed image I'_p; the transformation matrix from p to p' is the transformation matrix at the grid point of I_p nearest to pixel p;

Using the transformation matrices so obtained, map every pixel of I_p into I'_p, yielding the transformed image I'_p;

Step 7-5: In the same way, map every pixel of I_q into I'_q, yielding the transformed image I'_q.
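A sketch of the per-grid-point transforms of Steps 7-1 to 7-3 follows. The formula for the transform at a grid point is lost to the image rendering; the plane-induced homography under the updated pose used below is an assumed form.

```python
import numpy as np

def grid_transforms(grid_pts, feat_pts, feat_normals, K, R_new, t_new, k_nn=5):
    Kinv = np.linalg.inv(K)
    Ws = []
    for g in grid_pts:
        d = np.linalg.norm(feat_pts - g, axis=1)   # Euclidean distances (Step 7-2)
        nearest = np.argsort(d)[:k_nn]             # 5 closest feature points
        n_bar = feat_normals[nearest].mean(axis=0) # mean normal at the grid point
        # Step 7-3: transform at the grid point (assumed plane-induced homography)
        W = K @ (R_new + t_new.reshape(3, 1) @ n_bar.reshape(1, 3)) @ Kinv
        Ws.append(W)
    return Ws
```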

Fig. 4 shows the result of stitching Fig. 2 and Fig. 3 with the method of the invention.

The above examples merely illustrate the computational model and workflow of the invention in detail and do not limit its embodiments. For those of ordinary skill in the art, other variations or modifications may be made on the basis of the above description; not all embodiments can be enumerated here, and any obvious variation or modification derived from the technical solution of the invention remains within the protection scope of the invention.

Claims (8)

1. An image stitching method based on camera pose estimation, characterized by comprising the following steps:

step one, capturing two images I_p and I_q of the same scene with a camera at different viewing angles, and extracting feature points from I_p and I_q respectively;

matching the feature points extracted from I_p and I_q to obtain an initial feature point pair set S = {(p_i, q_i) | i = 1, ..., N}, wherein p_i is a feature point of image I_p, q_i is a feature point of image I_q, and N is the number of feature point pairs in S;

step two, screening the feature point pairs contained in S and obtaining the categories of the screened pairs, the set of screened feature point pairs being S_1 = {(p_{i'}, q_{i'}, c_{i'}) | i' = 1, ..., N_1}, wherein N_1 is the number of screened pairs, the category number of the i'-th pair is c_{i'}, c_{i'} = 1, ..., n, and n is the number of categories;

and obtaining the homography transformation matrix between the feature point pairs within each category of S_1, wherein the homography for the k-th category is H_k, k = 1, ..., n;

step three, estimating from the homography H_k obtained in step two the camera focal length f_k corresponding to each category of feature point pairs, and selecting an initial camera focal length f_0 from the values f_k;

step four, estimating the camera focal length f and the essential matrix E from the set S_1 obtained in step two and the initial focal length f_0 obtained in step three;

decomposing the essential matrix E to obtain the rotation matrix R and translation matrix t between the cameras at the two viewing angles;

step five, letting the rotation matrix R_p of the camera relative to the world coordinate system at the viewing angle of image I_p be the identity matrix and the translation matrix t_p be the zero vector, whereby at the viewing angle of image I_q the rotation matrix is R_q = R R_p = R and the translation matrix is t_q = R t_p + t = t;

converting R_p and R_q into rotation vectors r_p and r_q, computing their average r̄ = (r_p + r_q) / 2, and converting r̄ into a rotation matrix R̄, the compromise rotation matrix;

the compromise translation matrix being t̄ = (t_p + t_q) / 2;

updating the rotation matrices R_p and R_q to R̂_p and R̂_q and the translation matrices t_p and t_q to t̂_p and t̂_q, thereby obtaining the updated camera poses;

step six, computing, from the homography H_k of step two, the inter-camera rotation matrix R and translation matrix t of step four, and the updated rotation matrices of step five, the normal vector of the plane on which each category of feature point pairs lies;

transforming each normal vector into the updated camera pose of step five to obtain the updated normal vector of the plane of each category;

step seven, transforming images I_p and I_q according to the updated camera poses of step five and the updated normal vectors of step six to obtain transformed images I'_p and I'_q;

and stitching and fusing I'_p and I'_q: for the region where I'_p and I'_q overlap, computing the mean pixel value at each pixel of the overlapping region and taking it as the pixel value of the corresponding pixel; for regions where I'_p and I'_q do not overlap, keeping the original pixel values; thereby obtaining the stitched image I_pq.
2. The image stitching method based on camera pose estimation according to claim 1, wherein in step two the feature point pairs contained in the set S are screened and the categories of the screened pairs are obtained by the following specific process:

step 2-1, letting the set of feature point pairs remaining after screening be S', initializing S' to the initial feature point pair set S of step one, initializing the set S_1 of screened pairs to the empty set, and initializing the number n of categories of pairs contained in S_1 to 0;

step 2-2, extracting inliers from S' by the RANSAC method, the set of extracted inlier feature point pairs being s_{n+1} and the homography transformation matrix between the inlier pairs being H_{n+1};

removing the extracted inlier pairs from S' to obtain the set of remaining pairs, i.e. the updated S';

step 2-3, if the number of pairs in s_{n+1} is greater than or equal to 15, setting the category number of the pairs contained in s_{n+1} to n+1, adding the pairs contained in s_{n+1} to S_1 to obtain the updated S_1, emptying s_{n+1}, and adding 1 to the number n of categories of pairs contained in S_1;

step 2-4, repeating steps 2-2 to 2-3 on the updated S' until the number of pairs contained in s_{n+1} is less than 15, thereby obtaining the feature point pairs contained in S_1 and their category numbers.
3. The image stitching method based on camera pose estimation according to claim 2, wherein the specific process of step three is:

computing the camera focal length f_k corresponding to the feature point pairs of the k-th category from the elements of H_k (the defining expressions are rendered as images in the original document), and taking the median of the per-category focal length values f_1, f_2, ..., f_k, ..., f_n as the initial camera focal length f_0.
4. The image stitching method based on camera pose estimation according to claim 3, wherein in step four the camera focal length f and the essential matrix E are estimated from the set S_1 of step two and the initial focal length f_0 of step three by the following specific process:

step 4-1, sampling the range [0.5 f_0, 2 f_0] around the initial focal length f_0 once every 0.01 f_0 to obtain the set of camera focal length values F = {f_m = 0.5 f_0 + m × 0.01 f_0, m = 0, 1, ..., 150}, wherein f_m represents the camera focal length value of the m-th sample;

step 4-2, estimating, for each focal length value f_m in F, an essential matrix E_m from the set S_1, and obtaining the number n_m of inliers corresponding to f_m;

the epipolar geometry equation being (x'_{i'}, y'_{i'}, 1) (K^{-1})^T E_m K^{-1} (x_{i'}, y_{i'}, 1)^T = 0, wherein E_m is the essential matrix corresponding to f_m, K is the intermediate variable matrix K = [[f_m, 0, c_x], [0, f_m, c_y], [0, 0, 1]], c_x and c_y are respectively half the width and height of image I_p, images I_p and I_q having the same width and height, the superscript T denoting the transpose of a matrix and the superscript -1 the inverse of a matrix; a rectangular coordinate system xOy being established with the bottom-left vertex of image I_p as the coordinate origin O, the width of I_p as the x-axis, and the height of I_p as the y-axis, (x_{i'}, y_{i'}) being the coordinates of the feature point p_{i'} in xOy; a rectangular coordinate system x'O'y' being established with the bottom-left vertex of image I_q as the origin O', the width of I_q as the x'-axis, and the height of I_q as the y'-axis, (x'_{i'}, y'_{i'}) being the coordinates of the feature point q_{i'} in x'O'y';

step 4-3, selecting the focal length value f_m with the largest corresponding number of inliers as the camera focal length f, and taking the E_m corresponding to that f_m as the essential matrix E of the camera.
5. The image stitching method based on camera pose estimation according to claim 4, wherein in step 4-2 the number n_m of inliers corresponding to f_m is obtained by the following specific process:

traversing all feature point pairs in the set S_1: if the straight-line distance from the point q_{i'} of a pair (p_{i'}, q_{i'}) contained in S_1 to the epipolar line is less than 3 pixel values, the pair (p_{i'}, q_{i'}) is an inlier of f_m; otherwise, the pair (p_{i'}, q_{i'}) is not an inlier of f_m;

the epipolar line equation being (x, y, 1) (K^{-1})^T E_m K^{-1} (x_{i'}, y_{i'}, 1)^T = 0, wherein x and y are the variables of the epipolar line equation.
6. The image stitching method based on camera pose estimation according to claim 5, wherein the updated matrices R̂_p, R̂_q, t̂_p and t̂_q are computed by explicit formulas (rendered as images in the original document), wherein R̂_p represents the updated rotation matrix corresponding to R_p, R̂_q represents the updated rotation matrix corresponding to R_q, t̂_p represents the updated translation matrix corresponding to t_p, and t̂_q represents the updated translation matrix corresponding to t_q.
7. The image stitching method based on camera pose estimation according to claim 6, wherein the specific process of step six is:

the normal vector of the plane on which the feature point pairs of the k-th category lie being n_k, k = 1, ..., n, with H_k = K (R + t n_k^T) K^{-1}, wherein K is the intermediate variable matrix K = [[f, 0, c_x], [0, f, c_y], [0, 0, 1]];

transforming the normal vector n_k into the updated camera pose obtained in step five to obtain the updated normal vector n̂_k of the plane of the k-th category;

and similarly obtaining the updated normal vectors of the planes of the feature point pairs of the other categories.
8. The image stitching method based on camera pose estimation according to claim 7, wherein in step seven images I_p and I_q are transformed into I'_p and I'_q according to the updated camera poses of step five and the updated normal vectors of step six by the following specific process:

step 7-1, selecting grid points in image I_p with an interval of 40 pixels between adjacent grid points, obtaining the grid point set V = {p'_{i''}, i'' = 1, ..., N_2}, wherein N_2 is the number of grid points in V and p'_{i''} is the i''-th grid point in V;

step 7-2, for any grid point p'_{i''} in V, computing the Euclidean distance between p'_{i''} and each feature point of image I_p in the set S_1, i.e. each point of P_1 = {p_{i'}, i' = 1, ..., N_1}; selecting from P_1 the 5 points closest to p'_{i''}, computing the mean of the normal vectors of the planes of the selected 5 points, and taking the mean as the normal vector n̄_{i''} of the grid point p'_{i''};

step 7-3, computing from the normal vector n̄_{i''} the transformation matrix at grid point p'_{i''};

step 7-4, a pixel p of image I_p becoming pixel p' in the transformed image I'_p, the transformation matrix from pixel p to pixel p' being the transformation matrix at the grid point of I_p nearest to the pixel p; transforming each pixel of I_p into image I'_p according to the obtained transformation matrices to obtain the transformed image I'_p;

step 7-5, similarly transforming each pixel of image I_q into image I'_q to obtain the transformed image I'_q.
CN201910978286.3A 2019-10-15 2019-10-15 An Image Stitching Method Based on Camera Pose Estimation Active CN110717936B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910978286.3A CN110717936B (en) 2019-10-15 2019-10-15 An Image Stitching Method Based on Camera Pose Estimation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910978286.3A CN110717936B (en) 2019-10-15 2019-10-15 An Image Stitching Method Based on Camera Pose Estimation

Publications (2)

Publication Number Publication Date
CN110717936A true CN110717936A (en) 2020-01-21
CN110717936B CN110717936B (en) 2023-04-28

Family

ID=69211693

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910978286.3A Active CN110717936B (en) 2019-10-15 2019-10-15 An Image Stitching Method Based on Camera Pose Estimation

Country Status (1)

Country Link
CN (1) CN110717936B (en)


Patent Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6018349A (en) * 1997-08-01 2000-01-25 Microsoft Corporation Patch-based alignment method and apparatus for construction of image mosaics
US6009190A (en) * 1997-08-01 1999-12-28 Microsoft Corporation Texture map construction method and apparatus for displaying panoramic image mosaics
EP2518688A1 (en) * 2009-12-21 2012-10-31 Huawei Device Co., Ltd. Method and device for splicing images
WO2012058902A1 (en) * 2010-11-02 2012-05-10 中兴通讯股份有限公司 Method and apparatus for combining panoramic image
WO2012175029A1 (en) * 2011-06-22 2012-12-27 华为终端有限公司 Multi-projection splicing geometric calibration method and calibration device
US20130329072A1 (en) * 2012-06-06 2013-12-12 Apple Inc. Motion-Based Image Stitching
JP2015046040A (en) * 2013-08-28 2015-03-12 Kddi株式会社 Image converter
US20170124680A1 (en) * 2014-10-31 2017-05-04 Fyusion, Inc. Stabilizing image sequences based on camera rotation and focal length parameters
US20170018086A1 (en) * 2015-07-16 2017-01-19 Google Inc. Camera pose estimation for mobile devices
CN105069750A (en) * 2015-08-11 2015-11-18 电子科技大学 Determination method for optimal projection cylindrical surface radius based on image feature points
CN106355550A (en) * 2016-10-31 2017-01-25 微景天下(北京)科技有限公司 Image stitching system and image stitching method
CN108122191A (en) * 2016-11-29 2018-06-05 成都观界创宇科技有限公司 Fish eye images are spliced into the method and device of panoramic picture and panoramic video
CN106600592A (en) * 2016-12-14 2017-04-26 中南大学 Track long chord measurement method based on the splicing of continuous frame images
WO2018232518A1 (en) * 2017-06-21 2018-12-27 Vancouver Computer Vision Ltd. Determining positions and orientations of objects
CN109840884A (en) * 2017-11-29 2019-06-04 杭州海康威视数字技术股份有限公司 A kind of image split-joint method, device and electronic equipment
CN108317953A (en) * 2018-01-19 2018-07-24 东北电力大学 A kind of binocular vision target surface 3D detection methods and system based on unmanned plane
CN109005334A (en) * 2018-06-15 2018-12-14 清华-伯克利深圳学院筹备办公室 A kind of imaging method, device, terminal and storage medium
CN109064404A (en) * 2018-08-10 2018-12-21 西安电子科技大学 It is a kind of based on polyphaser calibration panorama mosaic method, panoramic mosaic system
CN109767388A (en) * 2018-12-28 2019-05-17 西安电子科技大学 Method, mobile terminal, and camera for improving image stitching quality based on superpixels
CN110120010A (en) * 2019-04-12 2019-08-13 嘉兴恒创电力集团有限公司博创物资分公司 A kind of stereo storage rack vision checking method and system based on camera image splicing

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhang Fugui (张富贵) et al., "基于SIFT算法的无人机烟株图像快速拼接方法" (Fast stitching method for UAV tobacco plant images based on the SIFT algorithm)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111325792A (en) * 2020-01-23 2020-06-23 北京字节跳动网络技术有限公司 Method, apparatus, device, and medium for determining camera pose
CN111325792B (en) * 2020-01-23 2023-09-26 抖音视界有限公司 Method, apparatus, device and medium for determining camera pose
CN111429358A (en) * 2020-05-09 2020-07-17 南京大学 Image splicing method based on planar area consistency
CN111899174A (en) * 2020-07-29 2020-11-06 北京天睿空间科技股份有限公司 Single-camera rotation splicing method based on deep learning
CN113034362A (en) * 2021-03-08 2021-06-25 桂林电子科技大学 Expressway tunnel monitoring panoramic image splicing method
CN113327198A (en) * 2021-06-04 2021-08-31 武汉卓目科技有限公司 Remote binocular video splicing method and system
CN114648586A (en) * 2022-03-24 2022-06-21 重庆长安汽车股份有限公司 Method for estimating vehicle absolute attitude based on visual line characteristics and storage medium

Also Published As

Publication number Publication date
CN110717936B (en) 2023-04-28

Similar Documents

Publication Publication Date Title
CN110717936A (en) Image stitching method based on camera attitude estimation
CN108564617B (en) Three-dimensional reconstruction method and device for multi-view camera, VR camera and panoramic camera
Wang et al. 360sd-net: 360 stereo depth estimation with learnable cost volume
CN108073857B (en) Dynamic visual sensor DVS event processing method and device
CN107767339B (en) Binocular stereo image splicing method
CN110111250B (en) A robust automatic panoramic UAV image stitching method and device
CN109064404A (en) It is a kind of based on polyphaser calibration panorama mosaic method, panoramic mosaic system
CN111553845B (en) A Fast Image Stitching Method Based on Optimized 3D Reconstruction
CN112862683B (en) A Neighborhood Image Stitching Method Based on Elastic Registration and Grid Optimization
CN107204010A (en) A kind of monocular image depth estimation method and system
Mistry et al. Image stitching using Harris feature detection
CN110853151A (en) Three-dimensional point set recovery method based on video
CN111553939A (en) An Image Registration Algorithm for Multi-camera Cameras
CN113538569A (en) Weak texture object pose estimation method and system
CN116579920B (en) Image stitching method and system based on heterogeneous multimode panoramic stereoscopic imaging system
WO2018133119A1 (en) Method and system for three-dimensional reconstruction of complete indoor scene based on depth camera
CN106530239A (en) Large-visual field bionic fish eye-based small unmanned aerial rotorcraft moving target low-altitude tracking method
CN110544203A (en) A Parallax Image Mosaic Method Combining Motion Least Squares and Line Constraints
CN115456870A (en) Multi-image splicing method based on external parameter estimation
Gai et al. Blind separation of superimposed images with unknown motions
Li et al. Scalable mav indoor reconstruction with neural implicit surfaces
Sun et al. Image stitching with weighted elastic registration
CN116912147B (en) Panoramic video real-time splicing method based on embedded platform
CN112200756A (en) Intelligent bullet special effect short video generation method
CN119693559B (en) A method and device for reconstructing ocean wave fields based on binocular vision

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Yang Xianqiang

Inventor after: Zhang Zhihao

Inventor after: Gao Huijun

Inventor before: Zhang Zhihao

Inventor before: Yang Xianqiang

Inventor before: Gao Huijun

CB03 Change of inventor or designer information
GR01 Patent grant
GR01 Patent grant