CN106408601A - GPS-based binocular fusion positioning method and device - Google Patents
- Publication number
- CN106408601A (application CN201610849257.3A)
- Authority
- CN
- China
- Prior art keywords
- dimensional
- image
- dimensional map
- region
- gps
- Prior art date
- Legal status (an assumption, not a legal conclusion): Granted
Landscapes
- Position Fixing By Use Of Radio Waves (AREA)
Abstract
The invention discloses a GPS-based binocular fusion positioning method and device, relating to computer vision and in particular to real-time object positioning; it is suitable for the self-positioning of various objects, such as unmanned aerial vehicles or robots. The method comprises: step 1, acquiring a three-dimensional map of the space in which the object to be positioned is located; step 2, acquiring the longitude and latitude of the object to be positioned, and deriving from them a coarse positioning range of the object within the three-dimensional map; step 3, using a binocular measurement vision system mounted on the object to capture images of reference objects around it, performing three-dimensional reconstruction of the reference objects from the captured images to obtain a three-dimensional graphic of the reference objects, searching the coarse positioning range for a region matching the three-dimensional graphic, and thereby determining the precise positioning region of the reference objects in the three-dimensional map.
Description
Technical field
The present invention relates to computer vision, and in particular to a real-time object positioning method, especially a method for precisely positioning an object in a large-scale environment.
Background technology
Realize the technical barrier that vision localization always is unmanned plane or robot automtion.Unmanned plane to be solved is autonomous
The problem of flight, must confirm the positional information of itself first.But there is too high calculating in existing frequently-used vision localization algorithm
Measure and cannot be applied in large-scale outdoor environment, and it has a certain degree of error.Unmanned plane or robot are in room
It is capable of outward being accurately positioned having relied on GPS navigation.But positioned using GPS, object can only be passively by itself institute
The information that the GPS device carrying sends is positioned, because GPS location precision only has 2.5 meters in ideal conditions, and more
New frequency is relatively low.For indoor object, gps signal can be lost because blocking it is impossible to realize indoor positioning, and it is fixed to lead to
Position error increases, therefore object is not aware that the accurate location itself being located it is impossible to realize being accurately positioned.
Summary of the invention
The technical problem to be solved by the present invention is, in view of the above problems, to provide a GPS-based binocular fusion localization method and device. The present invention is applicable to the self-positioning of various objects, such as UAVs or robots.
The GPS-based binocular fusion localization method provided by the present invention comprises:
Step 1: Obtain a three-dimensional map of the space in which the object to be positioned is located; the three-dimensional map contains the image features of the spatial environment and the geodetic coordinates of each point in the environment.
Step 2: Obtain the longitude and latitude of the object to be positioned, and from them derive the object's coarse positioning range in the three-dimensional map.
Step 3: Using a binocular measurement vision system mounted on the object, capture images of reference objects around it; perform three-dimensional reconstruction of the reference objects from the captured images to obtain a three-dimensional graphic of the reference objects; search the coarse positioning range for a region matching the three-dimensional graphic, and thereby determine the precise positioning region of the reference objects in the three-dimensional map.
Step 3 further comprises:
Step 31: The binocular measurement vision system acquires a left image and a right image of the reference objects.
Step 32: Extract feature values from the left and right images respectively.
Step 33: Determine matching feature points between the left and right images from their feature values.
Step 34: Perform three-dimensional reconstruction of the matching feature points using stereo vision principles to obtain their three-dimensional coordinates, then apply surface fitting to these points to obtain a spatial three-dimensional graphic of the reference objects.
Step 35: Perform image matching within the coarse positioning range in the three-dimensional map; the region found to match the spatial three-dimensional graphic is the object's precise positioning region in the three-dimensional map.
Further, in step 2, the longitude and latitude are obtained by a GPS positioning device mounted on the object to be positioned.
Further, in step 33, the matching feature points in the left and right images are determined using an epipolar constraint algorithm.
Further, in step 35, feature values of the spatial three-dimensional graphic are extracted, and the coarse positioning region of the three-dimensional map is searched for a region whose feature values match them; that region is the object's precise positioning region in the three-dimensional map. The extracted feature values of the spatial three-dimensional graphic are texture feature values or geometric features, and the image features contained in the three-dimensional map include texture feature values and geometric features.
The present invention also provides a GPS-based binocular fusion positioning device, comprising:
A three-dimensional map acquisition module, for obtaining a three-dimensional map of the space in which the object to be positioned is located; the three-dimensional map contains the image features of the spatial environment and the geodetic coordinates of each point in the environment;
A coarse positioning module, for obtaining the longitude and latitude of the object to be positioned and deriving from them the object's coarse positioning range in the three-dimensional map;
A precise positioning module, for obtaining images of the surrounding reference objects captured by the binocular measurement vision system mounted on the object, performing three-dimensional reconstruction of the reference objects from the captured images to obtain a three-dimensional graphic of the reference objects, searching the coarse positioning range for a region matching the three-dimensional graphic, and thereby determining the precise positioning region of the reference objects in the three-dimensional map.
The precise positioning module further comprises:
An image acquisition unit, for obtaining the left and right images of the reference objects captured by the binocular measurement vision system;
A feature extraction unit, for extracting feature values from the left and right images respectively;
A matching point acquisition unit, for determining matching feature points between the left and right images from their feature values;
A three-dimensional reconstruction unit, for reconstructing the matching feature points in three dimensions using stereo vision principles to obtain their three-dimensional coordinates, then applying surface fitting to these points to obtain the spatial three-dimensional graphic of the reference objects;
A matching image acquisition unit, for performing image matching within the coarse positioning range in the three-dimensional map and finding the region matching the spatial three-dimensional graphic, thereby obtaining the object's precise positioning region in the three-dimensional map.
Further, the matching point acquisition unit determines the matching feature points in the left and right images using an epipolar constraint algorithm.
The matching image acquisition unit is further used to extract feature values of the spatial three-dimensional graphic and to search the coarse positioning region of the three-dimensional map for a region whose feature values match them; that region is the object's precise positioning region in the three-dimensional map. The extracted feature values of the spatial three-dimensional graphic are texture feature values or geometric features, and the image features contained in the three-dimensional map include texture feature values and geometric features.
By adopting the above technical scheme, the present invention has the following beneficial effects:
1. Initial positioning via GPS enables object positioning over a large outdoor range;
2. Further positioning with the binocular measurement vision system greatly improves positioning accuracy;
3. Positioning the object within the three-dimensional map by image matching yields still higher accuracy;
4. Precise positioning against a three-dimensional feature map achieves positioning of the object's three-dimensional spatial position.
In summary, the present invention first obtains the longitude and latitude of the object to be positioned with a GPS positioning device, then determines the object's spatial position with the binocular measurement vision system, so that the three-dimensional spatial position of the object can be determined over a large range with high accuracy.
Brief description of the drawings
Embodiments of the present invention are described with reference to the accompanying drawings, in which:
Fig. 1 is a flow chart of the method of the invention.
Fig. 2 is a flow chart of precise positioning in the invention.
Detailed description
All features disclosed in this specification, and all steps of any method or process disclosed, may be combined in any way, except for mutually exclusive features and/or steps.
Unless expressly stated otherwise, any feature disclosed in this specification may be replaced by an alternative feature serving an equivalent or similar purpose; that is, unless expressly stated otherwise, each feature is merely one example of a series of equivalent or similar features.
As shown in Fig. 1, the method of the invention comprises the following three steps:
1) Obtain a three-dimensional map of the space (e.g., a room or building) in which the object to be positioned is located. The three-dimensional map contains at least the image features of the spatial environment, such as brightness values, texture feature values, geometric features, and other image-related feature values. It should also contain the geodetic coordinates of each point in the space; a geodetic coordinate comprises longitude, latitude, and height above ground, three dimensions of information that accurately describe the position of any point on the earth.
The three-dimensional map may be built from environmental information acquired by radar or ambient light sensors. Building such a map is prior art, and the process is not the concern of the present invention; the three-dimensional map in the present invention is obtained by directly copying existing data and serves as the base coordinate system for object positioning.
2) Coarse positioning of the object to be positioned
Obtain the longitude and latitude of the object to be positioned. In this embodiment, a GPS positioning device mounted on the object acquires its own longitude-latitude coordinates. With these coordinates, a coarse positioning region can be delineated in the three-dimensional map: every region of the map sharing the object's longitude-latitude coordinates belongs to the object's coarse positioning region. Coarse positioning greatly narrows the range in which the object may lie.
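As a concrete illustration of this delineation (not part of the patent; the point-list map format, the `(lat, lon, feature)` tuples, and the simple radius test are all assumptions), the coarse region can be sketched as a distance filter over map points around the GPS fix:

```python
import math

def coarse_region(map_points, fix_lat, fix_lon, radius_m=2.5):
    """Keep map points within radius_m meters of the GPS fix.

    map_points: list of (lat, lon, feature) tuples -- a hypothetical map format.
    radius_m: GPS uncertainty; 2.5 m is the ideal-case accuracy cited above.
    Distances use the haversine formula on a spherical earth.
    """
    R = 6371000.0  # mean earth radius, meters
    out = []
    for lat, lon, feat in map_points:
        dphi = math.radians(lat - fix_lat)
        dlmb = math.radians(lon - fix_lon)
        a = (math.sin(dphi / 2) ** 2
             + math.cos(math.radians(fix_lat)) * math.cos(math.radians(lat))
             * math.sin(dlmb / 2) ** 2)
        d = 2 * R * math.asin(math.sqrt(a))  # great-circle distance
        if d <= radius_m:
            out.append((lat, lon, feat))
    return out
```

The surviving points are exactly the candidates that the precise positioning of step 3) must then discriminate among.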
3) Precise positioning of the object to be positioned
On the basis of the coarse positioning, the binocular measurement vision system acquires images of the environment around the object. For convenience of description, the present invention calls the surroundings reference objects, and the spatial position of the reference objects is taken as the spatial position of the object to be positioned. It should be understood that a certain spatial distance exists between the object and the reference objects, but for positioning over a larger range this distance is negligible.
The binocular measurement vision system is existing equipment comprising left and right cameras. After the two cameras are calibrated, the correspondence between pixels in the two images (called the left image and the right image in the present invention) that a physical point in space produces in the left and right cameras can be determined.
In this embodiment the intrinsic parameters of the left and right cameras are calibrated using Zhang's calibration method, and the extrinsic parameters are extracted directly from the set baseline distance. The calibration process, described for the left camera by way of example, is as follows:
1) Draw a chessboard template with a drawing tool, print it on A4 paper, and attach it to a smooth flat board;
2) Rotate and translate the planar template at arbitrary angles in front of the lens and capture 20 images in different poses, keeping the illumination intensity and direction unchanged during shooting and not adjusting the lens, so as not to change its parameters;
3) The program reads the 20 images and displays them in the same plane;
4) Extract all corner points in the 20 images, and set the upper-left corner of the image as the temporary camera coordinate origin during calibration;
5) After all corner points in the 20 images have been extracted, compute the planar projection matrix of each image and determine the camera parameters, thereby obtaining the camera's intrinsic parameters.
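The planar projection matrix of step 5) is a homography from the chessboard plane to the image. A minimal sketch of estimating one such matrix from point correspondences by the direct linear transform (a standard technique assumed here; the patent gives no code) could look like:

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 planar projection matrix H with dst ~ H @ src
    from >= 4 point correspondences, via the direct linear transform (DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on H.
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # H (flattened) is the null vector of A: the last right-singular vector.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so H[2, 2] == 1
```

In Zhang's method, the homographies of the 20 views together constrain the intrinsic matrix; that final step is omitted here.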
The calibrated binocular camera measurement system can then be used to capture images of the reference objects. After the images of the reference objects are obtained, three-dimensional reconstruction is performed on the reference objects from those images to obtain their three-dimensional graphic; the region matching the three-dimensional graphic is found within the coarse positioning range, thereby determining the precise positioning region of the reference objects in the three-dimensional map.
Referring to Fig. 2, the precise positioning method is described in detail below:
1. Image acquisition. First capture the scene information around the object with the left and right cameras, obtaining left and right images;
2. Feature extraction on the left and right images. Different feature extraction methods are selected for different situations. For example, small-range object positioning may use feature extraction methods such as corner extraction, edge extraction, invariant moments, or the Hough transform; large-range object positioning may use methods such as optical flow, background subtraction, or frame differencing.
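One of the corner-extraction options named above can be sketched as a minimal Harris corner detector (an illustrative stand-in; the patent does not specify which detector is used, and the window size and thresholds here are assumptions):

```python
import numpy as np

def harris_corners(img, k=0.04, thresh=0.01):
    """Minimal Harris corner response on a 2-D grayscale float array.

    Returns (row, col) coordinates whose response exceeds thresh * max.
    """
    # Image gradients via central differences.
    Iy, Ix = np.gradient(img.astype(float))
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    # 3x3 box-sum of the structure tensor entries.
    def box3(a):
        p = np.pad(a, 1)
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3))

    Sxx, Syy, Sxy = box3(Ixx), box3(Iyy), box3(Ixy)
    # Harris response R = det(M) - k * trace(M)^2.
    R = (Sxx * Syy - Sxy ** 2) - k * (Sxx + Syy) ** 2
    ys, xs = np.where(R > thresh * R.max())
    return list(zip(ys, xs))
```

The returned candidates would then feed the left-right matching of step 3 below.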
3. Find the matching feature points in the two images. This embodiment uses matching principles such as the epipolar constraint to find, for a feature point in the left image, its matching point in the right image, completing the feature point matching between the left and right images. Taking corner feature matching as an example, the method is as follows:
1) Perform corner screening on the left image A to obtain the coordinates of a target point in the left image;
2) Detect corners in the right image B;
3) Compute, from the epipolar constraint, the epipolar line in the right image B corresponding to the target point of the left image A;
4) Search along that epipolar line among the corners detected in step 2), determine which corners lie near the line, and apply the corner screening method to these candidate corners to obtain sub-pixel target point coordinates.
Multiple matching feature point pairs can be obtained in this way.
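The "near this epipolar line" test of step 4) can be sketched as a distance filter against the line induced by the fundamental matrix F (F is assumed known from the calibration above; the pixel format and distance threshold are illustrative assumptions):

```python
import numpy as np

def epipolar_candidates(x_left, F, corners_right, max_dist=1.5):
    """Filter right-image corners by distance to the epipolar line of a
    left-image point. F is the 3x3 fundamental matrix; x_left is (u, v)."""
    p = np.array([x_left[0], x_left[1], 1.0])
    a, b, c = F @ p  # epipolar line a*u + b*v + c = 0 in the right image
    norm = np.hypot(a, b)
    keep = []
    for (u, v) in corners_right:
        # Perpendicular point-to-line distance in pixels.
        if abs(a * u + b * v + c) / norm <= max_dist:
            keep.append((u, v))
    return keep
```

For a rectified stereo pair the epipolar line is simply the same image row, which makes the filter easy to check.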
4. Perform three-dimensional reconstruction of these matching feature points according to stereo vision principles, obtaining their three-dimensional coordinates; surface fitting over these points then yields the spatial three-dimensional graphic, i.e., a three-dimensional point cloud. These spatial points are all candidate points for precise positioning of the object.
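The reconstruction of one matched pair can be sketched as standard linear (DLT) triangulation from the two calibrated projection matrices (a common formulation assumed here; the patent itself only invokes "stereo vision principles"):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear triangulation of one matched point pair.

    P1, P2: 3x4 projection matrices of the left and right cameras
    (known after calibration). x1, x2: matched pixel coordinates (u, v).
    Returns the 3-D point in the world frame.
    """
    # Each view contributes two rows of the homogeneous system A @ X = 0.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]          # null vector = homogeneous 3-D point
    return X[:3] / X[3]  # dehomogenize
```

Applying this to every matched pair produces the point cloud that the surface fitting then smooths into the spatial three-dimensional graphic.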
5. Search for the matching image region. The best match for the spatial three-dimensional graphic is searched within the coarse positioning region of the three-dimensional map: if the features formed by the points obtained in step 4, such as the texture feature values or geometric features of the spatial three-dimensional graphic, match the texture feature values or geometric features of some region within the coarse positioning region of the map, then that region is the precise position of the object to be positioned in the three-dimensional map.
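This region search can be sketched as nearest-neighbor matching between the reconstructed graphic's feature vector and per-region feature vectors of the map (the fixed-length descriptor and the region dictionary are hypothetical; the patent leaves the texture/geometry descriptor abstract):

```python
import numpy as np

def best_matching_region(query_feat, regions):
    """Pick the map region whose feature vector best matches the
    reconstructed graphic's features.

    regions: dict mapping a region id to its feature vector.
    Returns the id of the region at minimum Euclidean distance.
    """
    q = np.asarray(query_feat, dtype=float)
    best_id, best_d = None, float("inf")
    for rid, feat in regions.items():
        d = np.linalg.norm(q - np.asarray(feat, dtype=float))
        if d < best_d:
            best_id, best_d = rid, d
    return best_id
```

The winning region id is then reported as the object's precise position in the three-dimensional map.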
The present invention also provides a system whose components correspond one-to-one to the steps of the above method.
The invention is not limited to the foregoing specific embodiments. The present invention extends to any new feature or any new combination disclosed in this specification, and to the steps of any new method or process, or any new combination, disclosed herein.
Claims (10)
1. A GPS-based binocular fusion localization method, characterized in that it comprises:
Step 1: obtaining a three-dimensional map of the space in which the object to be positioned is located, the three-dimensional map containing the image features of the spatial environment and the geodetic coordinates of each point in the environment;
Step 2: obtaining the longitude and latitude of the object to be positioned, and determining from them the object's coarse positioning range in the three-dimensional map;
Step 3: capturing images of reference objects around the object using a binocular measurement vision system mounted on the object, performing three-dimensional reconstruction of the reference objects from the captured images to obtain a three-dimensional graphic of the reference objects, searching the coarse positioning range for a region matching the three-dimensional graphic, and thereby determining the precise positioning region of the reference objects in the three-dimensional map.
2. The GPS-based binocular fusion localization method according to claim 1, characterized in that step 3 further comprises:
Step 31: acquiring, with the binocular measurement vision system, a left image and a right image of the reference objects;
Step 32: extracting feature values from the left and right images respectively;
Step 33: determining matching feature points between the left and right images from their feature values;
Step 34: performing three-dimensional reconstruction of the matching feature points using stereo vision principles to obtain their three-dimensional coordinates, and applying surface fitting to these points to obtain a spatial three-dimensional graphic of the reference objects;
Step 35: performing image matching within the coarse positioning range in the three-dimensional map, and finding the region matching the spatial three-dimensional graphic to obtain the object's precise positioning region in the three-dimensional map.
3. The GPS-based binocular fusion localization method according to claim 1, characterized in that, in step 2, the longitude and latitude are obtained by a GPS positioning device mounted on the object to be positioned.
4. The GPS-based binocular fusion localization method according to claim 2, characterized in that, in step 33, the matching feature points in the left and right images are determined using an epipolar constraint algorithm.
5. The GPS-based binocular fusion localization method according to claim 2, characterized in that, in step 35, feature values of the spatial three-dimensional graphic are extracted, and the coarse positioning region of the three-dimensional map is searched for a region whose feature values match them, that region being the object's precise positioning region in the three-dimensional map; the extracted feature values of the spatial three-dimensional graphic are texture feature values or geometric features, and the image features contained in the three-dimensional map include texture feature values and geometric features.
6. A GPS-based binocular fusion positioning device, characterized in that it comprises:
a three-dimensional map acquisition module, for obtaining a three-dimensional map of the space in which the object to be positioned is located, the three-dimensional map containing the image features of the spatial environment and the geodetic coordinates of each point in the environment;
a coarse positioning module, for obtaining the longitude and latitude of the object to be positioned and deriving from them the object's coarse positioning range in the three-dimensional map;
a precise positioning module, for obtaining images of the surrounding reference objects captured by the binocular measurement vision system mounted on the object, performing three-dimensional reconstruction of the reference objects from the captured images to obtain a three-dimensional graphic of the reference objects, searching the coarse positioning range for a region matching the three-dimensional graphic, and thereby determining the precise positioning region of the reference objects in the three-dimensional map.
7. The GPS-based binocular fusion positioning device according to claim 6, characterized in that the precise positioning module further comprises:
an image acquisition unit, for obtaining the left and right images of the reference objects captured by the binocular measurement vision system;
a feature extraction unit, for extracting feature values from the left and right images respectively;
a matching point acquisition unit, for determining matching feature points between the left and right images from their feature values;
a three-dimensional reconstruction unit, for reconstructing the matching feature points in three dimensions using stereo vision principles to obtain their three-dimensional coordinates, then applying surface fitting to these points to obtain the spatial three-dimensional graphic of the reference objects;
a matching image acquisition unit, for performing image matching within the coarse positioning range in the three-dimensional map and finding the region matching the spatial three-dimensional graphic, thereby obtaining the object's precise positioning region in the three-dimensional map.
8. The GPS-based binocular fusion positioning device according to claim 6, characterized in that the longitude and latitude are obtained by a GPS positioning device mounted on the object to be positioned.
9. The GPS-based binocular fusion positioning device according to claim 7, characterized in that the matching point acquisition unit determines the matching feature points in the left and right images using an epipolar constraint algorithm.
10. The GPS-based binocular fusion positioning device according to claim 7, characterized in that the matching image acquisition unit is further used to extract feature values of the spatial three-dimensional graphic and to search the coarse positioning region of the three-dimensional map for a region whose feature values match them, that region being the object's precise positioning region in the three-dimensional map; the extracted feature values of the spatial three-dimensional graphic are texture feature values or geometric features, and the image features contained in the three-dimensional map include texture feature values and geometric features.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201610849257.3A CN106408601B (en) | 2016-09-26 | 2016-09-26 | A kind of binocular fusion localization method and device based on GPS |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN106408601A true CN106408601A (en) | 2017-02-15 |
| CN106408601B CN106408601B (en) | 2018-12-14 |
Family
ID=57996633
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201610849257.3A Active CN106408601B (en) | 2016-09-26 | 2016-09-26 | A kind of binocular fusion localization method and device based on GPS |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN106408601B (en) |
Cited By (12)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN107015193A (en) * | 2017-04-18 | 2017-08-04 | 中国矿业大学(北京) | A kind of binocular CCD vision mine movable object localization methods and system |
| CN108024201A (en) * | 2017-11-30 | 2018-05-11 | 深圳市沃特沃德股份有限公司 | Sweeping robot and its method relocated |
| CN108168522A (en) * | 2017-12-11 | 2018-06-15 | 宁波亿拍客网络科技有限公司 | A kind of unmanned plane observed object method for searching and correlation technique again |
| CN108198217A (en) * | 2017-12-29 | 2018-06-22 | 百度在线网络技术(北京)有限公司 | Indoor orientation method, device, equipment and computer-readable medium |
| CN108225334A (en) * | 2018-01-17 | 2018-06-29 | 泰瑞天际科技(北京)有限公司 | A kind of localization method and device based on three-dimensional live-action data |
| CN108897324A (en) * | 2018-07-25 | 2018-11-27 | 吉林大学 | A kind of control method, device, equipment and storage medium that unmanned vehicle is stopped |
| CN109471447A (en) * | 2018-12-14 | 2019-03-15 | 国网冀北电力有限公司检修分公司 | UAV navigation method, device, UAV and data readable storage device |
| CN110187348A (en) * | 2019-05-09 | 2019-08-30 | 盈科视控(北京)科技有限公司 | A kind of method of laser radar positioning |
| WO2019196478A1 (en) * | 2018-04-13 | 2019-10-17 | 北京三快在线科技有限公司 | Robot positioning |
| CN110780665A (en) * | 2018-07-26 | 2020-02-11 | 比亚迪股份有限公司 | Vehicle unmanned control method and device |
| CN112837343A (en) * | 2021-04-01 | 2021-05-25 | 中国船舶重工集团公司第七0九研究所 | Low-altitude unmanned-machine prevention and control photoelectric early warning identification method and system based on camera array |
| CN115760985A (en) * | 2022-11-28 | 2023-03-07 | 北京河图联合创新科技有限公司 | Indoor space positioning method and device based on AR application |
Citations (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6198431B1 (en) * | 1998-08-27 | 2001-03-06 | Maptrek Llc | Compact GPS tracker and customized mapping system |
| WO2002063243A1 (en) * | 2001-02-08 | 2002-08-15 | Neil Huckle | Navigation system |
| CN1604147A (en) * | 2004-11-02 | 2005-04-06 | 北京航空航天大学 | Electronic map position matching method for intelligent vehicle positioning and navigation system |
| CN104392007A (en) * | 2014-12-18 | 2015-03-04 | 西安电子科技大学宁波信息技术研究院 | Streetscape retrieval and identification method of intelligent mobile terminal |
| CN104751531A (en) * | 2013-12-25 | 2015-07-01 | 鸿富锦精密工业(深圳)有限公司 | Patrol control apparatus, system and method thereof |
| CN105143907A (en) * | 2013-04-22 | 2015-12-09 | 阿尔卡特朗讯 | Localization systems and methods |
| CN105674993A (en) * | 2016-01-15 | 2016-06-15 | 武汉光庭科技有限公司 | Binocular camera-based high-precision visual sense positioning map generation system and method |
| CN105739365A (en) * | 2014-12-10 | 2016-07-06 | 联想(北京)有限公司 | Information processing method and electronic device |
Family events
- 2016-09-26: CN application CN201610849257.3A (patent CN106408601B, status: active)
Patent Citations (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6198431B1 (en) * | 1998-08-27 | 2001-03-06 | Maptrek Llc | Compact GPS tracker and customized mapping system |
| WO2002063243A1 (en) * | 2001-02-08 | 2002-08-15 | Neil Huckle | Navigation system |
| CN1604147A (en) * | 2004-11-02 | 2005-04-06 | 北京航空航天大学 | Electronic map position matching method for intelligent vehicle positioning and navigation system |
| CN105143907A (en) * | 2013-04-22 | 2015-12-09 | 阿尔卡特朗讯 | Localization systems and methods |
| CN104751531A (en) * | 2013-12-25 | 2015-07-01 | 鸿富锦精密工业(深圳)有限公司 | Patrol control apparatus, system and method thereof |
| CN105739365A (en) * | 2014-12-10 | 2016-07-06 | 联想(北京)有限公司 | Information processing method and electronic device |
| CN104392007A (en) * | 2014-12-18 | 2015-03-04 | 西安电子科技大学宁波信息技术研究院 | Streetscape retrieval and identification method of intelligent mobile terminal |
| CN105674993A (en) * | 2016-01-15 | 2016-06-15 | 武汉光庭科技有限公司 | Binocular camera-based high-precision visual positioning map generation system and method |
Non-Patent Citations (3)
| Title |
|---|
| MOTILAL AGRAWAL ET AL: "Real-time Localization in Outdoor Environments using Stereo Vision and Inexpensive GPS", The 18th International Conference on Pattern Recognition * |
| MENG Xiangli et al.: "GPS-based multi-sensor localization information fusion for mobile robots", Journal of Tianjin University of Technology * |
| LAI Lei et al.: "Moving vehicle localization method in traffic wireless sensor networks", Journal of Traffic and Transportation Engineering * |
Cited By (15)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN107015193A (en) * | 2017-04-18 | 2017-08-04 | 中国矿业大学(北京) | Binocular CCD vision based moving target localization method and system for mines |
| CN108024201A (en) * | 2017-11-30 | 2018-05-11 | 深圳市沃特沃德股份有限公司 | Sweeping robot and repositioning method thereof |
| CN108168522A (en) * | 2017-12-11 | 2018-06-15 | 宁波亿拍客网络科技有限公司 | Method for re-searching an observed target by an unmanned aerial vehicle, and related methods |
| CN108198217A (en) * | 2017-12-29 | 2018-06-22 | 百度在线网络技术(北京)有限公司 | Indoor orientation method, device, equipment and computer-readable medium |
| CN108225334A (en) * | 2018-01-17 | 2018-06-29 | 泰瑞天际科技(北京)有限公司 | Localization method and device based on three-dimensional live-action data |
| CN108225334B (en) * | 2018-01-17 | 2020-10-16 | 泰瑞天际科技(北京)有限公司 | Positioning method and device based on three-dimensional live-action data |
| WO2019196478A1 (en) * | 2018-04-13 | 2019-10-17 | 北京三快在线科技有限公司 | Robot positioning |
| CN108897324A (en) * | 2018-07-25 | 2018-11-27 | 吉林大学 | Control method, device, equipment and storage medium for parking an unmanned vehicle |
| CN110780665A (en) * | 2018-07-26 | 2020-02-11 | 比亚迪股份有限公司 | Vehicle unmanned control method and device |
| CN110780665B (en) * | 2018-07-26 | 2022-02-08 | 比亚迪股份有限公司 | Vehicle unmanned control method and device |
| CN109471447A (en) * | 2018-12-14 | 2019-03-15 | 国网冀北电力有限公司检修分公司 | UAV navigation method, device, UAV and data readable storage device |
| CN110187348A (en) * | 2019-05-09 | 2019-08-30 | 盈科视控(北京)科技有限公司 | Laser radar positioning method |
| CN112837343A (en) * | 2021-04-01 | 2021-05-25 | 中国船舶重工集团公司第七0九研究所 | Camera-array-based photoelectric early-warning and identification method and system for low-altitude UAV prevention and control |
| CN112837343B (en) * | 2021-04-01 | 2022-12-09 | 中国船舶重工集团公司第七0九研究所 | Camera-array-based photoelectric early-warning and identification method and system for low-altitude UAV prevention and control |
| CN115760985A (en) * | 2022-11-28 | 2023-03-07 | 北京河图联合创新科技有限公司 | Indoor space positioning method and device based on AR application |
Also Published As
| Publication number | Publication date |
|---|---|
| CN106408601B (en) | 2018-12-14 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN106408601A (en) | GPS-based binocular fusion positioning method and device | |
| CN110926474B (en) | Satellite/vision/laser combined urban canyon environment UAV positioning and navigation method | |
| US11032527B2 (en) | Unmanned aerial vehicle surface projection | |
| CN102967305B (en) | Multi-rotor unmanned aerial vehicle pose acquisition method based on markers in shape of large and small square | |
| EP3276374B1 (en) | Aircraft and obstacle avoidance method and system thereof | |
| US10060739B2 (en) | Method for determining a position and orientation offset of a geodetic surveying device and such a surveying device | |
| KR100912715B1 (en) | Digital photogrammetry method and device by heterogeneous sensor integrated modeling | |
| CN104200086B (en) | Wide-baseline visible light camera pose estimation method | |
| CN103175524B (en) | Vision-based position and attitude determination method for aircraft in unmarked environments |
| CN110009682B (en) | A Target Recognition and Positioning Method Based on Monocular Vision | |
| US20200357141A1 (en) | Systems and methods for calibrating an optical system of a movable object | |
| CN109146958B (en) | Traffic sign space position measuring method based on two-dimensional image | |
| CN109242918A (en) | Helicopter-mounted binocular stereo vision calibration method |
| CN114415700B (en) | Autonomous visual landing method for UAV based on deep hybrid camera array | |
| Strecha et al. | Quality assessment of 3D reconstruction using fisheye and perspective sensors | |
| CN115950435B (en) | Real-time positioning method for unmanned aerial vehicle inspection image | |
| CN110986888A (en) | Aerial photography integrated method | |
| CN112461204B (en) | Method for jointly estimating flight altitude from multi-view satellite imaging of a dynamic flying target |
| RU2571300C2 (en) | Method for remote determination of absolute azimuth of target point | |
| Jeong et al. | O³ LiDAR–camera calibration: One-shot, one-target and overcoming LiDAR limitations |
| Jiang et al. | Determination of construction site elevations using drone technology | |
| US11514597B1 (en) | Single-camera stereoaerophotogrammetry using UAV sensors | |
| Wu | Photogrammetry: 3-D from imagery | |
| Navarro et al. | Accuracy analysis of a mobile mapping system for close range photogrammetric projects | |
| CN109146936A (en) | Image matching method and device, and localization method and system |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| C06 | Publication | ||
| PB01 | Publication | ||
| C10 | Entry into substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||