
CN109815966A - An implementation method of mobile robot visual odometry based on improved SIFT algorithm - Google Patents


Info

Publication number
CN109815966A
CN109815966A (application CN201910139665.3A)
Authority
CN
China
Prior art keywords
feature
mobile robot
matching
points
feature points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910139665.3A
Other languages
Chinese (zh)
Inventor
郑恩辉
王谈谈
刘政
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Jiliang University
Original Assignee
China Jiliang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Jiliang University filed Critical China Jiliang University
Priority to CN201910139665.3A priority Critical patent/CN109815966A/en
Publication of CN109815966A publication Critical patent/CN109815966A/en
Pending legal-status Critical Current


Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a method for implementing visual odometry for a mobile robot based on an improved SIFT algorithm. A depth camera mounted on the mobile robot collects information on the environment in the robot's forward field of view, yielding the two-dimensional image information and three-dimensional coordinate information of spatial points in that environment. An improved SIFT feature matching algorithm is used to obtain preliminarily matched point pairs; a random sample consensus (RANSAC) step eliminates mismatched points among the candidate feature points to obtain precisely matched point pairs; and the precisely matched point pairs are used to solve the motion parameters of the mobile robot. Because the method acquires information with a depth camera, it can obtain the three-dimensional information of spatial points directly, and by performing feature point matching with the improved SIFT algorithm it significantly improves the efficiency and accuracy of mobile robot localization.

Description

Method for implementing visual odometry for a mobile robot based on an improved SIFT algorithm
Technical field
The present invention relates to the technical field of autonomous navigation for mobile robots, and in particular to a method for implementing visual odometry for a mobile robot based on an improved SIFT algorithm.
Background technique
In recent years, mobile robots have advanced rapidly and penetrated every field of life. Among traditional self-localization methods, approaches based on wheel odometry drift because of wheel slip; methods based on sonar and ultrasonic sensing interfere with each other because both are active sensors; and GPS-based methods cannot localize in enclosed areas where the signal is weak. Owing to the limitations of the related technologies, there is as yet no relatively mature and stable solution to mobile robot self-localization.
Visual odometry addresses the self-localization problem of a mobile robot by acquiring and analyzing image sequences, compensating for the shortcomings of traditional self-localization methods and improving the efficiency and precision of mobile robot localization. With the release of low-cost depth cameras, depth cameras are increasingly used for mobile robot localization and navigation. Unlike traditional visual odometry, a depth camera provides both the two-dimensional image information and the three-dimensional coordinate information of spatial points, avoiding the monocular odometer's need for repeated coordinate transformations and heavy computation to recover the three-dimensional coordinates of spatial points, and thereby improving computational speed and precision.
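The depth camera's advantage noted above — obtaining the three-dimensional coordinates of a spatial point directly — corresponds to the standard pinhole back-projection of a pixel with a measured depth. A minimal sketch follows; the intrinsic parameters `fx`, `fy`, `cx`, `cy` and the `backproject` helper name are illustrative assumptions, not values from the patent:

```python
import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with measured depth into camera coordinates
    using the standard pinhole model."""
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

# Example with assumed intrinsics (not from the patent):
p = backproject(320.0, 240.0, 1.5, fx=600.0, fy=600.0, cx=320.0, cy=240.0)
```

A pixel at the principal point maps to a point on the optical axis at the measured depth.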
Summary of the invention
To overcome the shortcomings and deficiencies of the prior art, the present invention provides a method for implementing visual odometry for a mobile robot based on an improved SIFT algorithm.
The purpose of the present invention is achieved by the technical solution of the following steps:
1) A depth camera mounted on the mobile robot collects information on the environment in the robot's forward field of view, obtaining the two-dimensional image information and three-dimensional coordinate information of spatial points in that environment;
2) The two-dimensional image information obtained in step 1) is processed with the improved SIFT feature matching algorithm to obtain preliminarily matched point pairs;
3) The random sample consensus algorithm (RANSAC) is applied to the preliminarily matched feature point pairs from step 2) to eliminate mismatched points among the candidate feature points, yielding precisely matched point pairs;
4) The motion parameters of the mobile robot are solved using the precisely matched point pairs.
Step 2) specifically comprises: first performing feature point detection; then producing a binarized description of the detected feature points; and coarsely matching the feature points of two adjacent images using the binarized descriptions to obtain preliminarily matched point pairs.
The improved SIFT feature matching algorithm of step 2) specifically comprises the following steps:
2.1) Feature point detection: a two-dimensional image space is built from the two-dimensional image. The image is downsampled and then convolved with a Gaussian kernel to obtain sampled images of different sizes, which together form a Gaussian scale space. Each pair of adjacent layers in the Gaussian scale space is differenced to obtain the DoG scale images, which form the DoG scale space, and extreme points (blobs) detected in the DoG scale space serve as feature points. When each pixel of the DoG scale space is tested for an extremum, it is compared with its 8 neighboring pixels in the DoG image at the same scale and with the 9 × 2 pixels in the DoG images at the adjacent scales above and below, 26 pixels in total, which ensures that extrema can be detected in both scale space and the two-dimensional image space;
2.2) Describe each detected feature point to obtain its gradient feature vector, then binarize the gradient feature vector to obtain a binarized gradient feature vector, with the formula:
b_i = 1 if f_i >= a; b_i = 0 if f_i < a
where a is the binarization threshold, f denotes the gradient feature vector of the feature point, f = [f_1, f_2, ..., f_128], f_i denotes the i-th gradient component of f (f_i ∈ f), and b_i denotes the i-th gradient component after binarization;
2.3) Using the feature points and binarized descriptions of each image obtained in steps 2.1)–2.2), coarsely match the feature points between two adjacent frames to obtain candidate feature points: for two adjacent frames, the Euclidean distance between the binarized gradient feature vectors of feature points serves as the measure of feature point similarity across the two frames. Take the pair of feature points whose binarized gradient feature vectors have the smallest Euclidean distance and the pair with the second-smallest distance; if the smallest distance divided by the second-smallest distance is less than a preset ratio threshold, the closest pair is judged similar and kept as a matching point pair, and the second-closest pair is discarded;
2.4) Repeat the process of step 2.2) above until all matching point pairs in the two frames that satisfy the condition are obtained.
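The binarization of step 2.2) and the ratio-test matching of step 2.3) can be sketched as follows. This is a minimal NumPy sketch: the threshold, the ratio value, and the helper names `binarize` and `ratio_match` are illustrative assumptions rather than values from the patent.

```python
import numpy as np

def binarize(desc, a):
    """Step 2.2: b_i = 1 if f_i >= a else 0, applied to a gradient
    feature vector (128-dimensional in the patent)."""
    return (desc >= a).astype(np.uint8)

def ratio_match(desc1, desc2, ratio=0.8):
    """Step 2.3: for each descriptor in frame 1, find the nearest and
    second-nearest descriptors in frame 2 by Euclidean distance; keep the
    nearest pair only if its distance is below ratio * second-nearest,
    discarding the runner-up pair."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        j1, j2 = np.argsort(dists)[:2]
        if dists[j1] < ratio * dists[j2]:
            matches.append((i, int(j1)))
    return matches
```

Binarized vectors make the per-pair distance computation much cheaper than on float descriptors, which is the source of the speedup the patent claims.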
Processing with the improved SIFT feature matching algorithm in this way greatly reduces the computational cost of the data representation while preserving the accuracy of the obtained matches, and shortens the time spent on matching calculations.
In step 4), the motion parameters of the mobile robot are calculated as follows:
4.1) First, establish the following motion parameter equation of the mobile robot:
P_qj = R · P_pj + T
4.2) Then construct the residual sum-of-squares objective and solve it by least squares for the rotation matrix R and translation vector T that minimize it:
min{R, T} = || P_qj − (R · P_pj + T) ||²
where P_pj and P_qj are the three-dimensional coordinates corresponding to the two feature points of the j-th matched pair in two adjacent frames of the image sequence, obtained by combining the precisely matched feature points from step 3) with the three-dimensional coordinate information from step 1); the subscript p denotes the previous frame, the subscript q the subsequent frame, j is the ordinal of the precisely matched pair, and x, y, z denote the three-dimensional coordinates.
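The least-squares problem of step 4.2) has a well-known closed-form solution via point centroids and an SVD (the Kabsch method). The sketch below shows that classical solver; the patent does not state which least-squares procedure it uses, so treating it as Kabsch is an assumption, and the function name is illustrative.

```python
import numpy as np

def solve_rigid_transform(P, Q):
    """Find R, T minimising sum_j ||Q_j - (R @ P_j + T)||^2 (Kabsch/SVD).
    P, Q: (N, 3) arrays of matched 3-D points from frames p and q."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)               # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    T = cq - R @ cp
    return R, T
```

Given exact correspondences, the solver recovers the rotation and translation that generated the second point set from the first.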
The depth camera is a RealSense D435 depth camera.
The present invention uses only the RealSense D435 depth camera for information acquisition and can directly obtain the three-dimensional coordinate information of spatial points.
Unlike the SIFT algorithm used in traditional visual odometry, which consumes a large amount of time in feature point description and therefore suffers from poor real-time performance, the present invention performs feature point matching with the improved SIFT feature matching algorithm, binarizing the SIFT feature vectors and thereby significantly improving the efficiency of mobile robot localization.
Compared with the prior art, the invention has the following advantages and beneficial effects:
1. The present invention uses the RealSense D435 depth camera to obtain the two-dimensional image information and three-dimensional coordinate information of spatial points, avoiding the monocular odometer's need for repeated coordinate transformations and heavy computation to recover the three-dimensional coordinates of spatial points, and improving computational speed and precision.
2. The present invention binarizes the SIFT feature vectors, maintaining detection accuracy while solving the problem that the SIFT feature point description in a conventional visual odometer consumes a large amount of time and thus causes poor real-time performance.
Detailed description of the invention
Fig. 1 is the logical flow chart of the method for the present invention.
Specific embodiment
The invention is further described below with reference to the accompanying drawing and a specific embodiment.
As shown in Figure 1, specific embodiments of the present invention are as follows:
1) A RealSense D435 depth camera mounted on the mobile robot collects information on the environment in the robot's forward field of view, obtaining the two-dimensional image information and three-dimensional coordinate information of spatial points in that environment;
2) The two-dimensional image information obtained in step 1) is processed with the improved SIFT feature matching algorithm to obtain preliminarily matched point pairs:
2.1) Feature point detection: a two-dimensional image space is built from the two-dimensional image. The image is downsampled and then convolved with a Gaussian kernel to obtain sampled images of different sizes, which together form a Gaussian scale space. Each pair of adjacent layers in the Gaussian scale space is differenced to obtain the DoG scale images, which form the DoG scale space, and extreme points (blobs) detected in the DoG scale space serve as feature points. When each pixel of the DoG scale space is tested for an extremum, it is compared with its 8 neighboring pixels in the DoG image at the same scale and with the 9 × 2 pixels in the DoG images at the adjacent scales above and below, 26 pixels in total, which ensures that extrema can be detected in both scale space and the two-dimensional image space;
2.2) Describe each detected feature point to obtain its gradient feature vector, then binarize the gradient feature vector to obtain a binarized gradient feature vector, with the formula:
b_i = 1 if f_i >= a; b_i = 0 if f_i < a
where a is the binarization threshold, f denotes the gradient feature vector of the feature point, f = [f_1, f_2, ..., f_128], f_i denotes the i-th gradient component of f (f_i ∈ f), and b_i denotes the i-th gradient component after binarization;
2.3) Using the feature points and binarized descriptions of each image obtained in steps 2.1)–2.2), coarsely match the feature points between two adjacent frames to obtain candidate feature points: for two adjacent frames, the Euclidean distance between the binarized gradient feature vectors of feature points serves as the measure of feature point similarity across the two frames. Take the pair of feature points whose binarized gradient feature vectors have the smallest Euclidean distance and the pair with the second-smallest distance; if the smallest distance divided by the second-smallest distance is less than a preset ratio threshold, the closest pair is judged similar and kept as a matching point pair, and the second-closest pair is discarded;
2.4) Repeat the process of step 2.2) above until all matching point pairs in the two frames that satisfy the condition are obtained.
3) The random sample consensus algorithm (RANSAC) is applied to the preliminarily matched feature point pairs from step 2) to eliminate mismatched points among the candidate feature points, yielding precisely matched point pairs;
4) The motion parameters of the mobile robot are solved using the precisely matched point pairs:
4.1) First, establish the following equation:
P_qj = R · P_pj + T
4.2) Then construct the residual sum-of-squares objective and solve it by least squares for the rotation matrix R and translation vector T that minimize it:
min{R, T} = || P_qj − (R · P_pj + T) ||²
where P_pj and P_qj are the three-dimensional coordinates corresponding to the two feature points of the j-th matched pair in two adjacent frames of the image sequence, obtained by combining the precisely matched feature points from step 3) with the three-dimensional coordinate information from step 1); the subscript p denotes the previous frame, the subscript q the subsequent frame, j is the ordinal of the precisely matched pair, and x, y, z denote the three-dimensional coordinates.
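Step 3)'s RANSAC rejection of mismatches can be sketched as follows. The iteration count, inlier threshold, and function names are illustrative assumptions, and a compact Kabsch-style rigid-fit helper is included so the sketch is self-contained:

```python
import numpy as np

def fit_rigid(P, Q):
    """Closed-form R, T minimising ||Q - (R P + T)||^2 (Kabsch/SVD)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

def ransac_rigid(P, Q, iters=200, thresh=0.05, seed=0):
    """Keep the largest subset of candidate 3-D point pairs consistent
    with a single rigid motion; pairs outside it are mismatches."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(P), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(P), size=3, replace=False)  # minimal sample
        R, T = fit_rigid(P[idx], Q[idx])
        err = np.linalg.norm(Q - (P @ R.T + T), axis=1)
        inliers = err < thresh
        if inliers.sum() > best.sum():
            best = inliers
    return best
```

The surviving inlier pairs would then be passed to the step 4) least-squares solve; in practice a library RANSAC estimator could be used instead of this hand-rolled loop.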
Using two adjacent frames of the sequence acquired with the depth camera, this embodiment first ran three groups of comparison experiments on feature extraction and feature matching; the times required by the original SIFT feature matching algorithm and by the improved algorithm are compared in Table 1.
Table 1
As Table 1 shows, the improved SIFT algorithm substantially reduces the time consumed by matching while maintaining detection accuracy. It solves the problem that a conventional visual odometer consumes a large amount of time in SIFT matching and therefore has poor real-time performance, improving the efficiency of mobile robot localization.
This embodiment then solved the motion parameters of the mobile robot for the two adjacent frames of the sequence acquired by the depth camera; the solved transformation result is as follows (unit: m):
T = [−0.0351, 0.0423, 0.282]^T
According to the odometer record, the camera actually rotated 6° and moved x = 0.04 m, y = 0.04 m, z = 0.3 m. From the above calculation, the maximum absolute error is 0.018 m and the relative error is 5%; the error is within the allowed range, indicating that the method of the invention is fairly accurate for mobile robot localization.
The above are merely preferred embodiments of the present invention, but the scope of protection of the present invention is not limited thereto. Any equivalent substitution or modification made by those familiar with the art within the technical scope disclosed by the present invention, according to the technical solution of the present invention and its inventive concept, shall be covered by the protection of the present invention.

Claims (5)

1. A method for implementing visual odometry for a mobile robot based on an improved SIFT algorithm, characterized by comprising the following steps:
1) collecting information on the environment in the forward field of view of the mobile robot with a depth camera mounted on the mobile robot, and obtaining the two-dimensional image information and three-dimensional coordinate information of spatial points in that environment;
2) processing the two-dimensional image information obtained in step 1) with the improved SIFT feature matching algorithm to obtain preliminarily matched point pairs;
3) applying the random sample consensus algorithm (RANSAC) to the preliminarily matched feature point pairs obtained in step 2) to eliminate mismatched points among the candidate feature points, obtaining precisely matched point pairs;
4) solving the motion parameters of the mobile robot using the precisely matched point pairs.

2. The method according to claim 1, characterized in that step 2) specifically comprises: first performing feature point detection; then producing a binarized feature point description of the detected feature points; and coarsely matching the feature points between two adjacent images using the binarized descriptions to obtain preliminarily matched point pairs.

3. The method according to claim 1 or 2, characterized in that the improved SIFT feature matching algorithm of step 2) specifically comprises the following steps:
2.1) feature point detection: a two-dimensional image space is built from the two-dimensional image; the image is downsampled and then convolved with a Gaussian kernel to obtain sampled images of different sizes, which form a Gaussian scale space; each pair of adjacent layers in the Gaussian scale space is differenced to obtain the DoG scale images, which form the DoG scale space; extreme points (blobs) detected in the DoG scale space serve as feature points; and when each pixel of the DoG scale space is tested for an extremum, it is compared with its 8 neighboring pixels in the DoG image at the same scale and with the 9 × 2 pixels in the DoG images at the adjacent scales above and below, 26 pixels in total;
2.2) describing each detected feature point to obtain its gradient feature vector, and binarizing the gradient feature vector to obtain a binarized gradient feature vector, with the formula: b_i = 1 if f_i >= a, b_i = 0 if f_i < a, where a is the binarization threshold, f denotes the gradient feature vector of the feature point, f = [f_1, f_2, ..., f_128], f_i denotes the i-th gradient component of f, and b_i denotes the i-th gradient component after binarization;
2.3) using the feature points and binarized descriptions of each image obtained in steps 2.1)–2.2), coarsely matching the feature points between two adjacent frames to obtain candidate feature points: for two adjacent frames, the Euclidean distance between binarized gradient feature vectors serves as the measure of feature point similarity; the pair of feature points with the smallest Euclidean distance and the pair with the second-smallest distance are taken, and if the smallest distance divided by the second-smallest distance is less than a preset ratio threshold, the closest pair is judged similar and kept as a matching point pair while the second-closest pair is discarded;
2.4) repeating the process of step 2.2) above until all matching point pairs in the two frames that satisfy the condition are obtained.

4. The method according to claim 1, characterized in that the motion parameters of the mobile robot are calculated as follows:
4.1) first establishing the motion parameter equation of the mobile robot: P_qj = R · P_pj + T;
4.2) then constructing the residual sum-of-squares objective min{R, T} = || P_qj − (R · P_pj + T) ||² and solving it by least squares for the rotation matrix R and translation vector T that minimize it, where P_pj and P_qj are the three-dimensional coordinates corresponding to the two feature points of the j-th matched pair in two adjacent frames of the sequence, obtained by combining the precisely matched feature points from step 3) with the three-dimensional coordinate information from step 1); the subscript p denotes the previous frame, the subscript q the subsequent frame, j is the ordinal of the precisely matched pair, and x, y, z denote the three-dimensional coordinates.

5. The method according to claim 1, characterized in that the depth camera is a RealSense D435 depth camera.
CN201910139665.3A 2019-02-26 2019-02-26 An implementation method of mobile robot visual odometry based on improved SIFT algorithm Pending CN109815966A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910139665.3A CN109815966A (en) 2019-02-26 2019-02-26 An implementation method of mobile robot visual odometry based on improved SIFT algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910139665.3A CN109815966A (en) 2019-02-26 2019-02-26 An implementation method of mobile robot visual odometry based on improved SIFT algorithm

Publications (1)

Publication Number Publication Date
CN109815966A true CN109815966A (en) 2019-05-28

Family

ID=66607529

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910139665.3A Pending CN109815966A (en) 2019-02-26 2019-02-26 An implementation method of mobile robot visual odometry based on improved SIFT algorithm

Country Status (1)

Country Link
CN (1) CN109815966A (en)


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111079786A (en) * 2019-11-15 2020-04-28 北京理工大学 ROS and Gazebo-based rotating camera feature matching algorithm
CN114111787A (en) * 2021-11-05 2022-03-01 上海大学 Visual positioning method and system based on three-dimensional road sign
CN114111787B (en) * 2021-11-05 2023-11-21 上海大学 Visual positioning method and system based on three-dimensional road sign

Similar Documents

Publication Publication Date Title
CN110332887B (en) A monocular vision pose measurement system and method based on characteristic cursor points
CN101839692B (en) Method for measuring three-dimensional position and stance of object with single camera
CN107301654B (en) A multi-sensor high-precision real-time localization and mapping method
CN105740899B (en) A kind of detection of machine vision image characteristic point and match compound optimization method
CN115482195B (en) A method for detecting deformation of train components based on 3D point cloud
CN105046271B (en) The positioning of MELF elements and detection method based on template matches
CN110223355B (en) Feature mark point matching method based on dual epipolar constraint
CN107067415B (en) A target localization method based on image matching
CN110044374B (en) Image feature-based monocular vision mileage measurement method and odometer
CN105335973B (en) Apply to the visual processing method of strip machining production line
CN104809738B (en) A binocular vision-based airbag contour size detection method
CN110211180A (en) A kind of autonomous grasping means of mechanical arm based on deep learning
CN111415376A (en) Automobile glass sub-pixel contour extraction method and automobile glass detection method
CN106826815A (en) Target object method of the identification with positioning based on coloured image and depth image
CN111640158A (en) End-to-end camera based on corresponding mask and laser radar external reference calibration method
CN104121902B (en) Implementation method of indoor robot visual odometer based on Xtion camera
CN106969706A (en) Workpiece sensing and three-dimension measuring system and detection method based on binocular stereo vision
CN111637851B (en) Aruco code-based visual measurement method and device for plane rotation angle
CN103727930A (en) Edge-matching-based relative pose calibration method of laser range finder and camera
CN107588723B (en) Circular mark leak source detection method on a kind of High-speed target based on two-step method
CN109579825A (en) Robot positioning system and method based on binocular vision and convolutional neural networks
CN107677274A (en) Unmanned plane independent landing navigation information real-time resolving method based on binocular vision
CN107452030A (en) Method for registering images based on contour detecting and characteristic matching
CN110030979B (en) A method for measuring relative pose of non-cooperative targets in space based on sequence images
CN116091603A (en) Box workpiece pose measurement method based on point characteristics

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
WD01 Invention patent application deemed withdrawn after publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20190528