A mobile robot visual odometry implementation method based on an improved SIFT algorithm
Technical field
The present invention relates to the technical field of autonomous navigation for mobile robots, and in particular to a mobile robot visual odometry implementation method based on an improved SIFT algorithm.
Background art
In recent years, mobile robots have developed rapidly and come into wide use, penetrating every field of life. Among traditional self-localization methods, those based on wheel odometry suffer from drift in their results owing to wheel slip; methods based on sonar and ultrasonic sensing interfere with each other because both are active sensors; and GPS-based localization methods cannot operate in enclosed areas where the signal is weak. Owing to the limitations of the related technologies, there is as yet no relatively mature and stable solution for mobile robot self-localization.
Visual odometry determines the self-localization of a mobile robot by acquiring and analyzing image sequences; it can compensate for the problems of traditional self-localization methods and enhance the efficiency and precision of mobile robot localization. With the release of low-cost depth cameras, depth cameras are used more and more for mobile robot localization and navigation. Unlike traditional visual odometry, a depth camera can obtain both the two-dimensional image information and the three-dimensional coordinate information of spatial points, avoiding the monocular odometer's need for multiple coordinate-system transformations and a large amount of computation to solve the three-dimensional coordinates of spatial points, thereby improving the speed and precision of the calculation.
Summary of the invention
To overcome the shortcomings and deficiencies of the prior art, the present invention provides a mobile robot visual odometry implementation method based on an improved SIFT algorithm.
The purpose of the present invention is achieved by a technical solution comprising the following steps:
1) A depth camera mounted on the mobile robot performs information collection on the field-of-view environment in front of the mobile robot, obtaining the two-dimensional image information and three-dimensional coordinate information of the spatial points in the field-of-view environment in front of the mobile robot;
2) The two-dimensional image information obtained in step 1) is processed with the improved SIFT feature matching algorithm to obtain preliminarily matched point pairs;
3) For the preliminarily matched feature point pairs obtained in step 2), the random sample consensus (RANSAC) algorithm is used to reject the mismatched points among the candidate feature points, obtaining accurately matched point pairs;
4) The motion parameters of the mobile robot are solved using the accurately matched point pairs.
Step 2) specifically comprises: first performing feature point detection; then computing a binarized feature point descriptor for each detected feature point; and coarsely matching the feature points of adjacent images by means of the binarized descriptors to obtain preliminarily matched point pairs.
The improved SIFT feature matching algorithm of step 2) specifically includes the following steps:
2.1) Feature point detection: a two-dimensional image space is established from the two-dimensional image; the two-dimensional image is then down-sampled and convolved with Gaussian kernel functions to obtain sampled images of different sizes, which together constitute the Gaussian scale space; the difference of every two adjacent layers of sampled images in the Gaussian scale space is then taken to obtain the different DoG scale images, which constitute the DoG scale space; and the extreme points (blobs) detected in the DoG scale space are taken as the feature points. When each pixel of the DoG scale space is tested for being an extreme point, the pixel is compared with its 8 neighboring pixels in the DoG scale image of the same scale and with the 9 pixels each in the DoG scale images of the adjacent scales above and below, 26 pixels in total; this ensures that extreme points are detected in both the scale space and the two-dimensional image space;
2.2) Feature point description is performed on the detected feature points to obtain the gradient feature vector of each feature point, and the gradient feature vector of the feature point is binarized to obtain a binarized gradient feature vector, according to the formula:

b_i = 1 if f_i ≥ a, and b_i = 0 if f_i < a

where a is the binarization threshold, f denotes the gradient feature vector of the feature point, f = [f_1, f_2, ..., f_128], f_i ∈ f denotes the i-th gradient component of the vector, and b_i denotes the i-th gradient component after binarization;
2.3) After the feature points and binarized descriptors of each image have been obtained through steps 2.1)–2.2), the feature points of two adjacent frame images are coarsely matched to obtain candidate feature points: for the two adjacent frames, the Euclidean distance between the binarized gradient feature vectors of feature points is used as the decision metric of feature point similarity between the two frames; the pair of feature points whose binarized gradient feature vectors have the nearest Euclidean distance and the pair with the second-nearest Euclidean distance are taken, and if the nearest Euclidean distance divided by the second-nearest Euclidean distance is less than a preset ratio threshold, the pair of feature points with the nearest Euclidean distance is judged to be similar and taken as a matched point pair, while the pair with the second-nearest Euclidean distance is discarded;
2.4) The matching process of step 2.3) is repeated until all matched point pairs in the two frame images that satisfy the condition have been obtained.
Processing with the above improved SIFT feature matching algorithm greatly reduces the amount of computation needed to represent the data, guarantees the accuracy of the obtained matching points, and shortens the matching computation time.
In step 4), the motion parameters of the mobile robot are calculated as follows:
4.1) The following motion parameter equation of the mobile robot is first established:

P_qj = R · P_pj + T
4.2) The residual-sum-of-squares objective function min{R, T} is then constructed, and the rotation matrix R and the translation vector T at which the objective function reaches its minimum are solved by the least-squares method:

min{R, T} = Σ_j || P_qj − (R · P_pj + T) ||²
where P_pj and P_qj are the three-dimensional coordinates corresponding to the two feature points of a matched point pair in two adjacent frames of the image sequence, obtained by combining the accurately matched feature points from step 3) with the three-dimensional coordinate information from step 1); the subscript p denotes the former frame image, the subscript q denotes the latter frame image, j is the index of the accurately matched point pair, and x, y, z denote the three-dimensional coordinates;
The depth camera is a RealSense D435 depth camera.
The present invention performs information collection using only the RealSense D435 depth camera, which can directly acquire the three-dimensional coordinate information of spatial points.
Unlike the SIFT algorithm used by traditional visual odometry, which consumes a large amount of time in feature point description and matching and therefore suffers from poor real-time performance, the present invention performs feature point matching with the improved SIFT feature matching algorithm, binarizing the SIFT feature vectors and thereby significantly improving the efficiency of mobile robot localization.
Compared with the prior art, the present invention has the following advantages and beneficial effects:
1. The present invention uses the RealSense D435 depth camera to obtain the two-dimensional image information and three-dimensional coordinate information of spatial points, avoiding the monocular odometer's need for multiple coordinate-system transformations and a large amount of computation to solve the three-dimensional coordinates of spatial points, and improving the speed and precision of the calculation.
2. The present invention proposes binarizing the SIFT feature vectors, which guarantees detection accuracy while solving the problem that the traditional visual odometer's use of the SIFT algorithm consumes a large amount of time in feature point description and matching and therefore leads to poor real-time performance.
Detailed description of the invention
Fig. 1 is the logic flow chart of the method of the present invention.
Specific embodiment
The invention will be further described below with reference to the accompanying drawing and a specific embodiment.
As shown in Fig. 1, a specific embodiment of the present invention is as follows:
1) A RealSense D435 depth camera mounted on the mobile robot performs information collection on the field-of-view environment in front of the mobile robot, obtaining the two-dimensional image information and three-dimensional coordinate information of the spatial points in the field-of-view environment in front of the mobile robot;
2) The two-dimensional image information obtained in step 1) is processed with the improved SIFT feature matching algorithm to obtain preliminarily matched point pairs:
2.1) Feature point detection: a two-dimensional image space is established from the two-dimensional image; the two-dimensional image is then down-sampled and convolved with Gaussian kernel functions to obtain sampled images of different sizes, which together constitute the Gaussian scale space; the difference of every two adjacent layers of sampled images in the Gaussian scale space is then taken to obtain the different DoG scale images, which constitute the DoG scale space; and the extreme points (blobs) detected in the DoG scale space are taken as the feature points. When each pixel of the DoG scale space is tested for being an extreme point, the pixel is compared with its 8 neighboring pixels in the DoG scale image of the same scale and with the 9 pixels each in the DoG scale images of the adjacent scales above and below, 26 pixels in total; this ensures that extreme points are detected in both the scale space and the two-dimensional image space;
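The scale-space construction and 26-neighbour comparison of step 2.1) can be sketched as follows. This is a simplified, single-octave Python sketch: the layer count, base scale σ0 = 1.6, scale factor k = √2, and contrast threshold are illustrative assumptions, and the octave-wise down-sampling of the full detector is omitted for brevity.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def build_dog_stack(image, num_layers=5, sigma0=1.6, k=2 ** 0.5):
    """Blur the image with Gaussian kernels of increasing scale and take the
    difference of every two adjacent layers to form the DoG stack."""
    gauss = [gaussian_filter(image.astype(float), sigma0 * k ** i)
             for i in range(num_layers)]
    return np.stack([gauss[i + 1] - gauss[i] for i in range(num_layers - 1)])

def is_extremum(dog, s, y, x, contrast=1e-3):
    """True if the pixel at (scale s, row y, col x) is the maximum or minimum
    of the 3x3x3 cube formed by its 8 neighbours in the same DoG layer and the
    9 pixels each in the layers above and below (26 neighbours in total)."""
    centre = dog[s, y, x]
    if abs(centre) <= contrast:          # reject flat / low-contrast responses
        return False
    cube = dog[s - 1:s + 2, y - 1:y + 2, x - 1:x + 2]
    return centre == cube.max() or centre == cube.min()

def detect_keypoints(image):
    """Scan every interior pixel of every interior DoG layer."""
    dog = build_dog_stack(image)
    return [(s, y, x)
            for s in range(1, dog.shape[0] - 1)
            for y in range(1, dog.shape[1] - 1)
            for x in range(1, dog.shape[2] - 1)
            if is_extremum(dog, s, y, x)]
```

Sub-pixel refinement and edge-response rejection of the full SIFT detector are likewise omitted; the sketch only illustrates how each candidate pixel is compared against its 26 neighbours across scale and image space.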
2.2) Feature point description is performed on the detected feature points to obtain the gradient feature vector of each feature point, and the gradient feature vector of the feature point is binarized to obtain a binarized gradient feature vector, according to the formula:

b_i = 1 if f_i ≥ a, and b_i = 0 if f_i < a

where a is the binarization threshold, f denotes the gradient feature vector of the feature point, f = [f_1, f_2, ..., f_128], f_i ∈ f denotes the i-th gradient component of the vector, and b_i denotes the i-th gradient component after binarization;
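The binarization of step 2.2) can be sketched as follows. The text does not specify how the threshold a is chosen, so defaulting to the mean of the vector is an illustrative assumption.

```python
import numpy as np

def binarize_descriptor(f, a=None):
    """Binarize a gradient feature vector f into b, with b_i = 1 if
    f_i >= a and b_i = 0 otherwise.  The choice of the threshold a is left
    open by the text; the vector mean is assumed here as a default."""
    f = np.asarray(f, dtype=float)
    if a is None:
        a = f.mean()
    return (f >= a).astype(np.uint8)
```

Because each of the 128 components collapses to a single bit, the binarized descriptor is far cheaper to store and to compare than the original floating-point vector, which is what shortens the distance computations during matching.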
2.3) After the feature points and binarized descriptors of each image have been obtained through steps 2.1)–2.2), the feature points of two adjacent frame images are coarsely matched to obtain candidate feature points: for the two adjacent frames, the Euclidean distance between the binarized gradient feature vectors of feature points is used as the decision metric of feature point similarity between the two frames; the pair of feature points whose binarized gradient feature vectors have the nearest Euclidean distance and the pair with the second-nearest Euclidean distance are taken, and if the nearest Euclidean distance divided by the second-nearest Euclidean distance is less than a preset ratio threshold, the pair of feature points with the nearest Euclidean distance is judged to be similar and taken as a matched point pair, while the pair with the second-nearest Euclidean distance is discarded;
2.4) The matching process of step 2.3) is repeated until all matched point pairs in the two frame images that satisfy the condition have been obtained.
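The coarse matching of steps 2.3)–2.4) can be sketched as follows; the ratio threshold of 0.8 is an illustrative value, not one fixed by the text, and at least two candidate descriptors per frame are assumed.

```python
import numpy as np

def coarse_match(desc_a, desc_b, ratio=0.8):
    """For every binarized descriptor in frame A, find the nearest and the
    second-nearest descriptors in frame B by Euclidean distance, and accept
    the nearest pair only when d_nearest / d_second < ratio."""
    desc_a = np.asarray(desc_a, dtype=float)
    desc_b = np.asarray(desc_b, dtype=float)
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        nearest, second = np.argsort(dists)[:2]
        if dists[second] > 0 and dists[nearest] / dists[second] < ratio:
            matches.append((i, int(nearest)))
    return matches
```

The ratio test discards ambiguous correspondences: when the best and second-best candidates are almost equally close, the match is considered unreliable and is cast out.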
3) For the preliminarily matched feature point pairs obtained in step 2), the random sample consensus (RANSAC) algorithm is used to reject the mismatched points among the candidate feature points, obtaining accurately matched point pairs;
4) The motion parameters of the mobile robot are solved using the accurately matched point pairs:
4.1) The following equation system is first established:

P_qj = R · P_pj + T
4.2) The residual-sum-of-squares objective function min{R, T} is then constructed, and the rotation matrix R and the translation vector T at which the objective function reaches its minimum are solved by the least-squares method:

min{R, T} = Σ_j || P_qj − (R · P_pj + T) ||²
where P_pj and P_qj are the three-dimensional coordinates corresponding to the two feature points of a matched point pair in two adjacent frames of the image sequence, obtained by combining the accurately matched feature points from step 3) with the three-dimensional coordinate information from step 1); the subscript p denotes the former frame image, the subscript q denotes the latter frame image, j is the index of the accurately matched point pair, and x, y, z denote the three-dimensional coordinates.
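The least-squares problem of step 4.2) has a well-known closed-form solution: centering both point sets on their centroids removes T from the objective, the optimal rotation R is then obtained from the SVD of the 3×3 cross-covariance matrix (the Kabsch method), and T follows from the centroids. A sketch:

```python
import numpy as np

def solve_motion(P_p, P_q):
    """Minimize sum_j ||P_qj - (R @ P_pj + T)||^2 over a rotation R and a
    translation T; P_p, P_q are (n, 3) arrays of matched 3-D coordinates."""
    cp, cq = P_p.mean(axis=0), P_q.mean(axis=0)
    H = (P_p - cp).T @ (P_q - cq)        # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:             # keep a proper rotation, not a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    T = cq - R @ cp
    return R, T
```

The rotation angle can then be read off from the solved matrix as θ = arccos((tr R − 1)/2), which allows a result such as the camera rotation reported in this embodiment to be checked against the odometer record.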
In this embodiment, two adjacent frames of the image sequence acquired by the depth camera are first used to carry out three groups of comparative experiments on feature extraction and feature matching; the comparison between the time required by the original SIFT feature matching algorithm and the matching time of the improved algorithm is shown in Table 1.
Table 1
As shown by Table 1, the improved SIFT algorithm greatly reduces the time consumed by matching while guaranteeing detection accuracy. It solves the problem that the traditional visual odometer consumes a large amount of time with the SIFT matching algorithm and therefore has poor real-time performance, and improves the efficiency of mobile robot localization.
This embodiment then solves the motion parameters of the mobile robot for the two adjacent frames of the image sequence acquired by the depth camera; the solved transformation result is as follows (unit: m):

T = [−0.0351 0.0423 0.282]^T
According to the odometer record, the actual rotation angle of the camera is 6°, and the movement is x = 0.04 m, y = 0.04 m, z = 0.3 m. From the above calculation result, the maximum absolute error is 0.018 m and the relative error is 5%; the error is within the allowed range, which shows that the method of the present invention is comparatively accurate for mobile robot localization.
The above is merely a preferred embodiment of the present invention, but the scope of protection of the present invention is not limited thereto. Any equivalent substitution or change made by a person familiar with the art, within the technical scope disclosed by the present invention and in accordance with the technical solution of the present invention and its inventive concept, shall be covered by the scope of protection of the present invention.