CN111256701A - Equipment positioning method and system - Google Patents
- Publication number: CN111256701A (application CN202010336312.5A)
- Authority: CN (China)
- Legal status: Pending (the status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Classifications
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY (G—PHYSICS; G01—MEASURING; TESTING)
- G01C21/20—Instruments for performing navigational calculations
- G01C21/26—Navigation specially adapted for navigation in a road network
- G01C21/28—Navigation in a road network with correlation of data from several navigational instruments
Abstract
A device positioning method and system are provided. The method includes: acquiring, by a device, identification information of a visual marker; obtaining scene information of the real scene in which the device is located based at least in part on the identification information; acquiring, by the device, an image of the real scene; and determining position information of the device in the real scene based on the obtained scene information and the image of the real scene.
Description
Technical Field
The present invention relates to the field of information technology, and in particular to a method and a system for positioning a device.
Background
The statements in this section merely provide background information related to the present disclosure and do not necessarily constitute prior art.
Various positioning methods exist in the prior art, such as positioning by GPS signal, by inertial navigation, and by computer vision, but each of these methods has drawbacks, such as low accuracy, large error, or complex implementation.
A relative positioning function may be provided by visual markers. For example, an image containing a visual marker may be taken using a device (e.g., a mobile phone) having an image capture device (e.g., a camera). When the device has different positions and/or poses relative to the visual marker, the imaged position, size, perspective deformation, etc. of the visual marker in the captured image differ accordingly. By analyzing the imaged position, size, perspective distortion, etc. of the visual marker, the position information and pose information (collectively, pose information) of the device relative to the visual marker can be determined.
Visual markers may take a variety of forms. Fig. 1 shows a schematic view of an optical communication device serving as a visual marker. As shown in Fig. 1, the optical communication device 100 includes five marker lights 101. By capturing an image containing the visual marker with an image capture device (e.g., a camera) of a device (e.g., a mobile phone) and identifying the five marker lights 101 in it, the imaging positions of the five marker lights 101 in the image can be determined. By analyzing these imaging positions and the relative positional relationships between them (e.g., relative distance, relative direction, perspective distortion, etc.), the position and pose of the image capture device, and hence of the device, relative to the visual marker can be determined.
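The full five-light pose solve is not spelled out here, but the geometry it relies on can be illustrated with a minimal pinhole-camera sketch. All numbers, and the single-axis simplification, are illustrative assumptions rather than details from the patent:

```python
import math

def estimate_distance(focal_px, real_width_m, imaged_width_px):
    # Pinhole model: an object of known physical width images smaller
    # the farther it is from the camera (distance = f * W / w).
    return focal_px * real_width_m / imaged_width_px

def estimate_bearing(focal_px, principal_x, u):
    # Horizontal bearing (radians) of a point imaged at pixel column u,
    # measured from the optical axis through the principal point.
    return math.atan2(u - principal_x, focal_px)

# Hypothetical values: 800 px focal length, a 0.5 m wide marker imaged 100 px wide.
distance_m = estimate_distance(800.0, 0.5, 100.0)    # 4.0 m
bearing_rad = estimate_bearing(800.0, 320.0, 320.0)  # 0.0 (marker centred)
```

A real implementation would recover the full six-degree-of-freedom pose from the imaging positions of all five marker lights (e.g., a perspective-n-point solve); the distance/size relationship above is what makes the error behaviour discussed next intuitive.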
However, when images containing visual markers are used for relative positioning, the obtained positioning result usually has some error due to limited imaging accuracy or resolution. The error grows when the visual marker is small or the device is far from it, and may degrade the user experience in positioning, navigation, virtual reality, augmented reality, and other applications. Although a visual marker can be manufactured at a larger size to provide higher positioning accuracy, this raises manufacturing cost, and in some cases the installation environment does not allow a larger visual marker to be placed.
Therefore, a positioning method capable of providing higher accuracy is needed.
Disclosure of Invention
One aspect of the present invention relates to a device positioning method, comprising: acquiring, by a device, identification information of a visual marker; obtaining scene information of the real scene in which the device is located based at least in part on the identification information; acquiring, by the device, an image of the real scene; and determining position information of the device in the real scene based on the obtained scene information and the image of the real scene.
Optionally, the method further includes: determining pose information of the device in the real scene based on the scene information and the image of the real scene.
Optionally, wherein the obtaining scene information of the real scene in which the device is located based at least in part on the identification information comprises: determining scene information of a real scene in which the device is located based at least in part on the identification information and the time information.
Optionally, the method further includes: determining location information of the device relative to the visual marker; and wherein said obtaining scene information of the real scene in which the device is located based at least in part on the identification information comprises: determining scene information of a real scene in which the device is located based at least in part on the identification information and the location information of the device relative to the visual marker.
Optionally, wherein the device acquires an image containing the visual marker and analyzes the image to determine position information of the device relative to the visual marker.
Optionally, wherein the obtaining scene information of the real scene in which the device is located based at least in part on the identification information comprises: determining position and pose information of the visual marker in a spatial coordinate system through the identification information; determining actual position information of the device based on the position information of the device relative to the visual marker and the position and pose information of the visual marker in the spatial coordinate system; and determining scene information of the real scene in which the device is located based at least in part on the actual position information of the device.
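As a rough sketch of the coordinate transform this optional step describes, the following assumes, purely for illustration, that the marker's pose is a yaw rotation about the vertical axis plus a translation; a full implementation would use the marker's complete rotation matrix:

```python
import math

def marker_to_world(pos_rel, marker_pos, marker_yaw):
    # Rotate the device's marker-relative position by the marker's yaw,
    # then translate by the marker's position in the spatial coordinate system.
    c, s = math.cos(marker_yaw), math.sin(marker_yaw)
    x, y, z = pos_rel
    return (marker_pos[0] + c * x - s * y,
            marker_pos[1] + s * x + c * y,
            marker_pos[2] + z)

# Device 2 m in front of a marker at (10, 5, 3) that faces 90 degrees (pi/2):
device_world = marker_to_world((2.0, 0.0, 0.0), (10.0, 5.0, 3.0), math.pi / 2)
# device_world is approximately (10.0, 7.0, 3.0)
```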
Optionally, the method further includes: determining pose information of the device relative to the visual marker; and wherein said obtaining scene information of the real scene in which the device is located based at least in part on the identification information comprises: determining scene information of a real scene in which the device is located based at least in part on the identification information and the position information and pose information of the device relative to the visual marker.
Optionally, the method further includes: obtaining position and pose information of the device relative to the visual marker; and determining or correcting the position and/or pose information of the visual marker in the real scene based on the position and pose information of the device in the real scene and the position and pose information of the device relative to the visual marker.
Optionally, wherein the position and pose information of the device relative to the visual marker and the position and pose information of the device in the real scene are determined at the same or different times.
Optionally, the scene information includes information about several auxiliary markers in the scene.
Optionally, the information related to the auxiliary markers includes spatial position information and feature information of the auxiliary markers.
Another aspect of the invention relates to a device positioning system comprising: one or more visual markers installed in a scene; a device on which an image capture device is mounted, said image capture device being capable of capturing an image containing said visual markers; and an apparatus configured to implement any of the above methods.
Optionally, wherein the apparatus is a server capable of communicating with the device.
Optionally, wherein the apparatus is integrated in the device.
Optionally, the system further includes a plurality of auxiliary markers arranged in the scene.
Another aspect of the invention relates to a storage medium in which a computer program is stored which, when executed by a processor, can be used to carry out the above method.
Yet another aspect of the invention relates to an electronic device comprising a processor and a memory, the memory storing a computer program which, when executed by the processor, is operative to carry out the method described above.
By adopting the scheme of the invention, the device can be accurately positioned in a scene. In some embodiments of the invention, by narrowing the range of candidate scene information, the device can be positioned more quickly using local scene information, and the pose information of the device or of the visual marker can be corrected, so that the method and system have good applicability and flexibility.
Drawings
Embodiments of the invention are further described below with reference to the accompanying drawings, in which:
fig. 1 shows a schematic view of an optical communication device as a visual marker.
FIG. 2A illustrates an exemplary optical label;
FIG. 2B illustrates an exemplary optical label network;
FIG. 3 illustrates a system for localization by an optical tag and its surrounding scene according to one embodiment;
FIG. 4 illustrates a system for localization by an optical tag and its surrounding scene in accordance with another embodiment;
FIG. 5 illustrates a method of localization by an optical label and its surrounding scene according to one embodiment;
FIG. 6 illustrates a method of localization by an optical label and its surrounding scene according to another embodiment;
FIG. 7 illustrates a method of localization by an optical label and its surrounding scene according to another embodiment;
FIG. 8 illustrates a method of localization by an optical label and its surrounding scene according to another embodiment;
FIG. 9 illustrates a method of positioning a device, or of correcting the pose of an optical label, by means of the optical label and its surrounding scene according to another embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be further described in detail by embodiments with reference to the accompanying drawings. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The invention provides a method for positioning by means of a visual marker and the scene information around it. The visual marker may have corresponding identification information, which may be any identifier corresponding to the visual marker, for example a number of the visual marker or a web address corresponding to it. In some embodiments, the information related to the visual marker may also include, for example, one or more of the following: size information, shape information, structure information, or color information of the visual marker. Alternatively, the visual marker may have a default or uniform size, shape, structure, or color, in which case such information need not be stored separately. The information related to the visual marker may be stored on the device or on another apparatus accessible to the device, such as a server.
The following description uses an optical communication device as an exemplary visual marker. Optical communication devices are also referred to as optical labels, and the two terms are used interchangeably herein. An optical label can transmit information through different light-emitting modes, has the advantages of a long identification distance and loose requirements on visible-light conditions, and the information it transmits can change over time, so it can provide large information capacity and flexible configuration capability.
An optical label may typically include a controller and at least one light source, the controller may drive the light source through different driving modes to communicate different information to the outside. Fig. 2A shows an exemplary optical label 200 that includes three light sources, a first light source 201, a second light source 202, and a third light source 203. Optical label 200 also includes a controller (not shown in FIG. 2A) for selecting a respective drive mode for each light source based on the information to be communicated. For example, in different driving modes, the controller may control the manner in which the light source emits light using different driving signals, such that when the optical label 200 is photographed using an image capture device (e.g., a camera), the image of the light source may take on different appearances (e.g., different colors, patterns, brightness, etc.). By analyzing the images of the light sources in the optical label 200, the driving mode of each light source at the moment can be analyzed, so that the information transmitted by the optical label 200 at the moment can be analyzed. It is to be understood that fig. 2A is merely used as an example, and that an optical label may have a different shape than the example shown in fig. 2A, and may have a different number and/or different shape of light sources than the example shown in fig. 2A.
In order to provide corresponding services to users based on optical labels, each optical label may be configured to transmit identification information (ID). Typically, the controller in the optical label drives the light source to transmit the identification information outwards; the image capture device captures one or more images containing the optical label; the identification information transmitted by the optical label is recognized by analyzing the imaging of the optical label (or of each light source in it) in those images; and other information associated with the identification information, for example the position information of the optical label corresponding to the identification information, can then be acquired.
Information associated with each optical label may be stored in a server. In practice, a large number of optical labels can be organized into an optical label network. Fig. 2B illustrates an exemplary optical label network including a plurality of optical labels and at least one server. The server may maintain the identification information (ID) of each optical label together with other information, such as service information related to the optical label and description or attribute information such as its position information, model information, physical size information, physical shape information, and pose or orientation information. Optical labels may also have uniform or default physical size and shape information. A device may use the identification information of a recognized optical label to query the server for further information related to that label. The position information of an optical label may refer to its actual position in the physical world, which may be indicated by geographic coordinate information. A server may be a software program running on a computing device, or a cluster of computing devices. The optical label may be offline, i.e., it need not communicate with the server; of course, an online optical label capable of communicating with the server is also possible.
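A minimal sketch of such a server-side store might look like the following. The class name, label ID, and record fields are invented for illustration and are not part of the patent:

```python
class OpticalLabelRegistry:
    """Maps an optical label's identification information (ID) to the
    information maintained for it on the server."""

    def __init__(self):
        self._records = {}

    def register(self, label_id, **info):
        self._records[label_id] = info

    def query(self, label_id):
        # Returns None for an unknown ID.
        return self._records.get(label_id)

registry = OpticalLabelRegistry()
registry.register(
    "label-042",                         # hypothetical ID
    position=(31.2304, 121.4737, 12.0),  # geographic coordinates + height
    physical_size_m=(0.5, 0.5),
    orientation="north-facing",
)
info = registry.query("label-042")
```

A device that has decoded `"label-042"` from the light sources would issue exactly this kind of query to obtain the label's position and attributes.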
The devices mentioned in the present application may be devices capable of autonomous movement, such as autonomous cars, robots, drones, etc.; or a device operated by a person, such as a mobile phone, smart glasses, a car, etc. The apparatus may include an image capture device and one or more sensors for capturing images or other signals. The device may also comprise a data processing system for storage, calculation, output or display of data, etc., for example comprising volatile or non-volatile memory, one or more processors. The apparatus mentioned in the present application may further include a communication means for wired or wireless communication with an external system or other devices (e.g., a server) for transmission and reception of data, as needed.
Embodiments of the present invention are described below with an optical label as the exemplary visual marker, a mobile phone as the exemplary device, and a shopping mall as the exemplary real scene, but it should be understood that the aspects of the present invention are equally applicable to any other visual marker, device, and scene.
Fig. 3 shows a system for positioning by means of an optical label and its surrounding scene according to an embodiment, comprising an optical label 300, a device 301, and a server 302. The optical label 300 may be installed on the walls or columns on both sides of a pedestrian passageway of a shopping mall, or may be placed elsewhere, such as near a doorway, an elevator entrance, or a stairway entrance of the mall. The installation position and pose of the optical label 300 are generally fixed. Stores "A01", "A02", "A03", "A04" and "B01", "B02", "B03", "B04" are located in the mall. A user walks through the mall with the device 301, which is equipped with one or more cameras and may also be equipped with one or more sensors, such as odometers, acceleration sensors, magnetic sensors, orientation sensors, gravity sensors, gyroscopes, and compasses, for measuring or tracking the position and pose changes of the device in space. The device 301 may communicate with the server 302 to transfer data. The server 302 may store information related to the optical label, scene information around the optical label, and so on. In one embodiment, all or part of the functionality of the server 302 may be integrated into the device 301, in which case the system need not include the server 302.
Fig. 4 shows a system for positioning by means of an optical label and its surrounding scene according to another embodiment. In addition to an optical label 400, a device 401, and a server 402, the system further comprises auxiliary markers 403, 404, 405 (labeled "marker" in the figure) arranged around the optical label 400, where the auxiliary markers 403, 404, 405 may be three points in space that are not collinear. An auxiliary marker may be, for example, a light source, a sign, or a part of an object (e.g., a corner). Information related to the auxiliary markers (e.g., spatial position information and/or feature information) may be stored, in association with the identification information of the optical label 400, in the device 401 or the server 402 as scene information around the optical label 400. Compared with using three-dimensional model information or point cloud information of the scene as the scene information, using auxiliary markers can simplify calculation, save storage space, and improve positioning efficiency.
Fig. 5 shows a method for localization by means of an optical label and its surrounding scene, according to an embodiment, comprising the following steps:
S510, the optical label is scanned using the device to obtain the identification information of the optical label.
The device may capture one or more images containing the optical label via its camera and obtain the identification information of the optical label conveyed by the optical label by analyzing the images of the optical label (or the respective light sources in the optical label) in these images. In one embodiment, the device may also send an image containing the optical label to a server, which analyzes and identifies the identification information of the optical label.
S520, obtaining scene information of the real scene where the device is located based on the identification information of the optical label.
Scene information may be established in advance for a real scene around the optical label and stored in association with identification information of the optical label in the device or other means (e.g., a server) accessible by the device. The scene information around the optical label may be, for example, three-dimensional model information of the scene, point cloud information of the scene, information of an auxiliary marker around the optical label, and other information. The device may query and acquire scene information around the optical label according to the identification information of the optical label.
In some cases, the scene around the optical label may change over time (e.g., new objects may be placed around the optical label), and thus the corresponding scene information may be updated according to the scene change around the optical label.
In some cases, there may be large differences in the scene around the light label at different times. For example, there may be some logos or feature points around a certain light label that are more visible during the day, and there may also be some light sources that are turned on at night. As such, during the day, the scene information around the optical label may include information about these identifications or feature points; at night, the above marks or feature points may be difficult to observe, and therefore the scene information around the optical label may include information about the light sources around it. Thus, in one embodiment, a corresponding applicable time period may be set for the scene information, and the device may determine current scene information of the real scene in which the device is located based at least in part on the identification information of the optical tag and the time information. The time information may be, for example, the current time or the time at which the image was taken, or the time selected or input by the user. In other embodiments, the server may also determine scene information of the real scene where the device is located according to the identification information and/or the time information of the optical tag.
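One way to sketch the "applicable time period" idea above is a store keyed by label ID whose entries each carry an hour range; the data layout and the example IDs are hypothetical:

```python
def scene_info_for(label_id, hour, scene_store):
    # Return the first scene-information record whose applicable
    # time period [start_hour, end_hour) contains the given hour.
    for start_hour, end_hour, info in scene_store.get(label_id, []):
        if start_hour <= hour < end_hour:
            return info
    return None

scene_store = {
    "label-042": [
        (7, 19, {"markers": ["store-logo", "door-corner"]}),  # daytime features
        (19, 24, {"markers": ["lamp-1", "lamp-2"]}),          # night light sources
    ]
}
night_info = scene_info_for("label-042", 21, scene_store)
# night_info selects the light-source markers usable after dark
```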
S530, acquiring an image of the real scene where the device is located through the device.
The device can take one or more images of its surrounding real scene through a camera. Taking the above mall as an example, the device can take an image of a store, an elevator, a staircase, or a sign around the device. In one embodiment, an image including the optical label acquired when the device scans the optical label to obtain the identification information of the optical label may also be used as the image of the real scene where the device is located.
S540, determining the position information of the device in the real scene based on the obtained scene information of the real scene where the device is located and the image of the real scene captured by the device.
In one embodiment, the position of the device in the real scene may be determined by comparing the scene information of the real scene around the device with the image of the real scene captured by the device. In one embodiment, a scene coordinate system may be established for the real scene where the optical label is located, and the position information of the device in the real scene may be represented as coordinates of the device in that scene coordinate system. In one embodiment, the pose information of the device in the real scene can further be determined based on the obtained scene information of the real scene where the device is located and the image of the real scene captured by the device.
In one embodiment, to determine the position and/or pose of the device more accurately, the device may capture multiple images or an image sequence (i.e., a video segment) and use at least two of the images to determine the position and/or pose, after which the multiple position and/or pose determinations may be analyzed jointly to eliminate error. In one embodiment, the images may be captured separately by at least two cameras on the device.
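The error-elimination step is left unspecified; one simple realization, assumed here for illustration, is to average the per-image position estimates (outlier rejection or filtering would be equally valid choices):

```python
def fuse_positions(estimates):
    # Average several per-image (x, y, z) position estimates to damp
    # the error of any single image.
    n = len(estimates)
    return tuple(sum(p[i] for p in estimates) / n for i in range(3))

fused = fuse_positions([(1.0, 2.0, 0.0), (1.2, 1.8, 0.0), (0.8, 2.2, 0.0)])
# fused is approximately (1.0, 2.0, 0.0)
```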
In the above embodiment, the specific scene where the device is located is first determined from the identification information of the optical label scanned by the device, which greatly narrows the range of candidate scene information. For example, for an optical label installed on the third floor of the mall, the identification information alone confines the device to a spatial range of roughly several meters to several tens of meters around that label on the third floor. The scene image captured by the device is then compared with the pre-stored scene information for that scene, so that the device can be positioned quickly and accurately.
In one embodiment, several auxiliary markers may be set in the real scene around the optical label, and the related information of the auxiliary markers includes the spatial position information and the feature information of the auxiliary markers. The spatial position information of the auxiliary marker may be, for example, spatial position information of the auxiliary marker in a scene coordinate system, an optical label coordinate system, or a world coordinate system; the characteristic information of the auxiliary mark may include, for example, shape information, color information, size information, and the like of the auxiliary mark. In this case, the scene information of the real scene stored in the device or the server may be the relevant information of several auxiliary markers around the optical label. After the device captures an image containing a plurality of auxiliary markers around the optical label, the position and/or posture information of the device in the real scene can be determined by comparing the scene information containing the auxiliary marker related information with the image containing the auxiliary markers captured by the device.
When the device captures an image containing the auxiliary markers around the optical label, it may capture all of them or only some of them. For example, some auxiliary markers may not be imaged because they lie outside the field of view of the device's image capture device, or some may not be identifiable at the current time (e.g., light sources used as auxiliary markers that are not currently turned on); in such cases the device may capture an image containing only a portion of the auxiliary markers. In order to determine the device's position information and pose information (hereinafter collectively referred to as pose information) from the auxiliary markers, it is generally necessary to arrange at least three non-collinear points in the scene space as auxiliary markers. In one embodiment, if the device also uses a gravity sensor to assist pose determination, at least two points may suffice as auxiliary markers.
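The non-collinearity requirement on the three auxiliary-marker points can be checked directly; this helper is an illustrative sketch, not something specified in the patent:

```python
def non_collinear(p1, p2, p3, tol=1e-12):
    # Three points are non-collinear iff the cross product of the two
    # edge vectors p1->p2 and p1->p3 is (numerically) non-zero.
    ax, ay, az = (p2[i] - p1[i] for i in range(3))
    bx, by, bz = (p3[i] - p1[i] for i in range(3))
    cross = (ay * bz - az * by, az * bx - ax * bz, ax * by - ay * bx)
    return sum(c * c for c in cross) > tol

ok = non_collinear((0, 0, 0), (1, 0, 0), (0, 1, 0))   # True: points span a plane
bad = non_collinear((0, 0, 0), (1, 1, 1), (2, 2, 2))  # False: all on one line
```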
As described above, by arranging several auxiliary markers around the optical label and comparing the information related to the auxiliary markers with the image containing them captured by the device, the scene information can be greatly simplified (no complete three-dimensional model data or point cloud data of the scene is needed), and the position information of the device can be determined more accurately and quickly.
In some cases, in determining the location information of the device, not only the identification information conveyed by the optical label, but also the location information of the device relative to the optical label may be further considered.
Fig. 6 shows a method for localization by means of an optical label and its surrounding scene according to another embodiment, comprising the following steps:
S610, scanning the optical label with the device to obtain identification information of the optical label and position information of the device relative to the optical label.
The step of scanning the optical label by the device to obtain the identification information of the optical label is similar to the step S510, and is not described herein again.
The device may obtain its position information relative to the optical label by analyzing an image containing the optical label (e.g., analyzing the size and perspective distortion of the optical label's imaging in the image); this position information may include distance information and orientation information of the device relative to the optical label. In one embodiment, the device may instead send the captured image containing the optical label to a server, which analyzes the image to determine the position information of the device relative to the optical label.
And S620, determining scene information of the real scene where the equipment is located based on the identification information of the optical label and the position information of the equipment relative to the optical label.
As described in step S520, the scene information of the real scene around the optical label is stored on the device or the server in association with the identification information of the optical label and can be queried by that identification information. The position information of the device relative to the optical label can then be used to further narrow the scene information down to the region around the device, so that only local scene information needs to be compared with the captured image.
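One way to sketch this narrowing is a radius query over scene features stored per optical label. The data layout, identifiers, and radius below are hypothetical:

```python
import math

# Hypothetical store: scene information per optical label, kept as named
# feature points (a real system might hold point clouds or 3D models).
scene_store = {
    "label-42": [
        {"name": "door",   "pos": (1.0, 0.0, 0.0)},
        {"name": "pillar", "pos": (8.0, 2.0, 0.0)},
        {"name": "shelf",  "pos": (2.0, 1.5, 0.0)},
    ],
}

def local_scene_info(label_id, device_pos, radius):
    """Return only the scene features within `radius` of the device's
    coarse position (estimated relative to the optical label)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return [f for f in scene_store[label_id]
            if dist(f["pos"], device_pos) <= radius]

nearby = local_scene_info("label-42", (1.5, 1.0, 0.0), radius=3.0)
print([f["name"] for f in nearby])  # ['door', 'shelf']
```

Only the returned local features would then be compared against the captured image, which is the speed-up the paragraph above describes.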
And S630, acquiring an image of the real scene where the equipment is located through the equipment.
And S640, determining the position information of the equipment in the real scene based on the scene information of the real scene and the image of the real scene acquired by the equipment.
After the required local scene information is determined by using the position information of the device relative to the optical label, the fast positioning of the device can be realized by comparing the local scene information with the image acquired by the device.
In an embodiment, the pose information of the device relative to the optical label may also be obtained, and scene information of a real scene where the device is located may be determined based on the identification information of the optical label and the pose information of the device relative to the optical label, thereby implementing positioning of the device.
FIG. 7 illustrates a method for localization by an optical tag and its surrounding scene according to another embodiment, which includes the steps of:
S710, the optical label is scanned using the device to obtain identification information of the optical label and position and pose information of the device relative to the optical label.
The step of scanning the optical label with the device to obtain the identification information of the optical label and the position information of the device relative to the optical label is similar to step S610 described above and is not repeated here.
The device may also obtain its pose information relative to the optical label by analyzing the image containing the optical label, which may be used to determine the extent or boundaries of the real scene captured by the device. For example, when the imaging location or imaging area of the optical label is located at the center of the imaging field of view of the device, the device may be considered to be currently facing the optical label.
And S720, determining scene information of the real scene where the device is located based on the identification information of the optical label and the position and pose information of the device relative to the optical label.
The range of the scene information for comparison with the image may be further narrowed according to the position and orientation information of the device relative to the optical label, for example, the field of view of the image capturing device of the device may be determined according to the position and orientation information of the device relative to the optical label, and the device may be positioned based on the scene information (e.g., point cloud information) of the real scene within the field of view.
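Determining which scene information falls within the device's field of view can be sketched as a pinhole projection test: a scene point is considered relevant only if it projects inside the image bounds. The intrinsics and poses below are assumed values:

```python
import numpy as np

def in_field_of_view(p_world, R_wc, t_wc, fx, fy, cx, cy, width, height):
    """Project a world point into the camera and test image bounds.
    (R_wc, t_wc): camera pose such that p_cam = R_wc @ p_world + t_wc."""
    p_cam = R_wc @ p_world + t_wc
    if p_cam[2] <= 0:          # point is behind the camera
        return False
    u = fx * p_cam[0] / p_cam[2] + cx
    v = fy * p_cam[1] / p_cam[2] + cy
    return 0 <= u < width and 0 <= v < height

# Camera at the origin looking down +z, with assumed VGA-ish intrinsics.
R = np.eye(3)
t = np.zeros(3)
vis_front  = in_field_of_view(np.array([0.0, 0.0,  5.0]), R, t, 500, 500, 320, 240, 640, 480)
vis_behind = in_field_of_view(np.array([0.0, 0.0, -5.0]), R, t, 500, 500, 320, 240, 640, 480)
print(vis_front, vis_behind)
```

Filtering stored scene points (or point cloud cells) through such a test yields the reduced comparison set described above.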
And S730, acquiring an image of the real scene where the equipment is located through the equipment.
And S740, determining or correcting the position and/or pose information of the device in the real scene based on the scene information of the real scene where the device is located and the image of the real scene captured by the device.
In some cases, the position information of the optical label in a spatial coordinate system may also be considered; the scene information of the real scene where the device is located can then be determined from the position information of the device relative to the optical label together with the position information of the optical label in the spatial coordinate system, thereby positioning the device.
Fig. 8 shows a method for localization by means of an optical label and its surrounding scene according to another embodiment, comprising the following steps:
S810, scanning the optical label with the device to obtain identification information of the optical label and position information of the device relative to the optical label. This portion is similar to step S610 and will not be described herein.
And S820, determining the position and pose information of the optical label in the spatial coordinate system from the identification information of the optical label.
The identification information of the optical label, the position and pose information of the optical label in the spatial coordinate system, and the scene information of the real scene where the optical label is located may be stored in association on the device or the server in advance; the latter two can then be queried and obtained using the identification information of the optical label.
S830, actual position information of the device is determined based on the position information of the device relative to the optical label and the position and pose information of the optical label in the spatial coordinate system. In one embodiment, pose information of the device relative to the optical label may also be obtained, and the actual pose information of the device may be determined based on the pose information of the device relative to the optical label and the pose information of the optical label in the spatial coordinate system. The actual pose information of the device is its pose information in a spatial coordinate system (e.g., a scene coordinate system or a world coordinate system).
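Step S830 amounts to composing rigid transforms: if the optical label's pose in the scene coordinate system is (R_tag, t_tag) and the device's pose relative to the label is (R_rel, t_rel) expressed in the label frame, the device's scene pose is (R_tag · R_rel, R_tag · t_rel + t_tag). A minimal sketch under these (assumed) frame conventions:

```python
import numpy as np

def device_world_pose(R_tag, t_tag, R_rel, t_rel):
    """Compose T_world_device = T_world_tag @ T_tag_device.
    (R_tag, t_tag): label pose in the scene coordinate system.
    (R_rel, t_rel): device pose expressed in the label's frame."""
    R_dev = R_tag @ R_rel
    t_dev = R_tag @ t_rel + t_tag
    return R_dev, t_dev

# Label mounted at (10, 5, 3) in the scene, rotated 90 degrees about z;
# the device sits 2 m from the label along the label's own x axis.
Rz90 = np.array([[0.0, -1.0, 0.0],
                 [1.0,  0.0, 0.0],
                 [0.0,  0.0, 1.0]])
R_dev, t_dev = device_world_pose(Rz90, np.array([10.0, 5.0, 3.0]),
                                 np.eye(3), np.array([2.0, 0.0, 0.0]))
print(t_dev)  # [10. 7. 3.]
```

The label's rotation carries the device's label-frame offset into the scene frame, which is why the label's pose (not just its position) matters here.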
And S840, determining scene information of the real scene where the device is located based on the actual position information of the device.
The actual position information of the device allows the scene information of the real scene where the device is located to be limited to a smaller range, for example, to the local scene information around the actual position of the device.
In one embodiment, if the actual pose information of the device is determined, scene information of the real scene in which the device is located may be determined based on the actual pose information of the device.
S850, the device collects images of the real scene.
And S860, determining the position information of the equipment in the real scene based on the scene information of the real scene and the image of the real scene acquired by the equipment.
In some cases, the position and/or pose of an optical label deployed in a real scene may change due to changes in the site environment or other factors. For example, the mounting position and/or pose of the optical label may be changed deliberately (e.g., an adjustment of its mounting position or orientation) or may change naturally (e.g., tilting under gravity). In such cases, after the pose information of the device has been determined, the pose information of the optical label can be re-determined or corrected from the pose information of the device and the relative pose information between the device and the optical label, without having to measure the pose of the optical label manually.
Fig. 9 shows a method for positioning or calibrating an optical label with the optical label and its surrounding scene according to another embodiment, comprising the steps of:
S910, the optical label is scanned using the device to obtain identification information of the optical label and position and pose information of the device relative to the optical label.
And S920, determining scene information of the real scene where the device is located based on the identification information of the optical label.
S930, the device acquires an image of a real scene.
And S940, determining the position and pose information of the device in the real scene based on the scene information of the real scene where the device is located and the image of the real scene captured by the device.
S950, determining or correcting the position and pose information of the optical label in the real scene based on the position and pose information of the device in the real scene and the position and pose information of the device relative to the optical label.
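Step S950 is the inverse of the composition used for positioning: given the device's scene pose and its pose relative to the optical label, the label's scene pose follows by inverting the relative transform. A sketch, with the frame conventions stated in the comments (these conventions are assumptions, not specified by the patent):

```python
import numpy as np

def corrected_tag_pose(R_dev, t_dev, R_rel, t_rel):
    """Recover the label pose: T_world_tag = T_world_dev @ inverse(T_tag_dev).
    (R_dev, t_dev): device pose in the scene coordinate system.
    (R_rel, t_rel): device pose expressed in the label's frame."""
    R_rel_inv = R_rel.T                 # rotation inverse is its transpose
    t_rel_inv = -R_rel.T @ t_rel
    R_tag = R_dev @ R_rel_inv
    t_tag = R_dev @ t_rel_inv + t_dev
    return R_tag, t_tag

# Round trip: place a label, derive the device pose, then recover the label.
Rz90 = np.array([[0.0, -1.0, 0.0],
                 [1.0,  0.0, 0.0],
                 [0.0,  0.0, 1.0]])
t_tag_true = np.array([10.0, 5.0, 3.0])
R_rel, t_rel = np.eye(3), np.array([2.0, 0.0, 0.0])   # device in label frame
R_dev = Rz90 @ R_rel
t_dev = Rz90 @ t_rel + t_tag_true
R_tag, t_tag = corrected_tag_pose(R_dev, t_dev, R_rel, t_rel)
print(np.allclose(t_tag, t_tag_true))  # True
```

This is what allows a device that has localized itself against the surrounding scene to detect and correct a label that has tilted or been remounted.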
In one embodiment, since an image containing the optical label is itself also an image of the real scene, the device may use that single image to determine both its pose information relative to the optical label and its pose information in the scene. In some cases, the device may determine its pose information relative to the optical label from an image containing the optical label captured at a first time, and determine its pose information in the scene from an image of the real scene captured at a second time different from the first time. The device may be displaced between the first time and the second time, and its pose information relative to the optical label can be tracked across this interval. For example, the device may use its built-in sensors (e.g., an acceleration sensor, magnetic sensor, orientation sensor, gravity sensor, gyroscope, or camera) or methods known in the art (e.g., inertial navigation, visual odometry, SLAM, VSLAM, SfM) to obtain real-time pose information or to track changes in its pose.
In one embodiment, the apparatus for performing the methods described herein may be a server. In one embodiment, the means for performing the methods described herein may be a device or be integrated in a device. In one embodiment, a part of the apparatus for performing the method described herein may be located in a server, and another part may be located in a device, that is, the method described herein may be performed by the cooperative operation of the server and the device.
In one embodiment of the invention, the invention may be implemented in the form of a computer program. The computer program may be stored in various storage media (e.g., hard disk, optical disk, flash memory, etc.), which when executed by a processor, can be used to implement the methods of the present invention.
In another embodiment of the invention, the invention may be implemented in the form of an electronic device. The electronic device comprises a processor and a memory in which a computer program is stored which, when being executed by the processor, can be used for carrying out the method of the invention.
References herein to "various embodiments," "some embodiments," "one embodiment," or "an embodiment," etc., indicate that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, appearances of the phrases "in various embodiments," "in some embodiments," "in one embodiment," or "in an embodiment," or the like, in various places throughout this document are not necessarily referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. Thus, a particular feature, structure, or characteristic illustrated or described in connection with one embodiment may be combined, in whole or in part, with a feature, structure, or characteristic of one or more other embodiments without limitation, as long as the combination is not logically inconsistent or unworkable. Expressions appearing herein similar to "according to A", "based on A", "by A" or "using A" mean non-exclusive, i.e., "according to A" may cover "according to A only" as well as "according to A and B", unless it is specifically stated that the meaning is "according to A only". In the present application, for clarity of explanation, some illustrative operational steps are described in a certain order, but one skilled in the art will appreciate that not every one of these operational steps is essential, and some of them may be omitted or replaced by others. Nor is it necessary that these operations be performed sequentially in the manner shown; some of them may be performed in a different order, or in parallel, as desired, provided that the new arrangement is not logically or operationally infeasible.
For example, in some embodiments, the distance or depth of the virtual object relative to the electronic device may be set prior to determining the orientation of the virtual object relative to the electronic device.
Having thus described several aspects of at least one embodiment of this invention, it is to be appreciated that various alterations, modifications, and improvements will readily occur to those skilled in the art. Such alterations, modifications, and improvements are intended to be within the spirit and scope of the invention. Although the present invention has been described by way of preferred embodiments, the present invention is not limited to the embodiments described herein, and various changes and modifications may be made without departing from the scope of the present invention.
Claims (17)
1. A device location method, comprising:
acquiring identification information of the visual mark through equipment;
obtaining scene information of a real scene in which the device is located based at least in part on the identification information;
acquiring, by the device, an image of the real scene;
determining location information of the device in the real scene based on the obtained scene information and the image of the real scene.
2. The method of claim 1, further comprising: determining pose information of the device in the real scene based on the scene information and the image of the real scene.
3. The method of claim 1, wherein the obtaining scene information of a real scene in which the device is located based at least in part on the identification information comprises:
determining scene information of a real scene in which the device is located based at least in part on the identification information and time information.
4. The method of claim 1, further comprising:
determining location information of the device relative to the visual marker;
and wherein said obtaining scene information of the real scene in which the device is located based at least in part on the identification information comprises:
determining scene information of a real scene in which the device is located based at least in part on the identification information and the location information of the device relative to the visual marker.
5. The method of claim 4, wherein the position information of the device relative to the visual marker is determined by the device acquiring an image containing the visual marker and analyzing the image.
6. The method of claim 4, further comprising:
determining pose information of the device relative to the visual marker;
and wherein said obtaining scene information of the real scene in which the device is located based at least in part on the identification information comprises:
determining scene information of a real scene in which the device is located based at least in part on the identification information and the position information and pose information of the device relative to the visual marker.
7. The method of claim 4, wherein the obtaining scene information of a real scene in which the device is located based at least in part on the identification information comprises:
determining the position and pose information of the visual marker in a spatial coordinate system from the identification information;
determining actual position information of the device based on the position information of the device relative to the visual marker and the position and pose information of the visual marker in the spatial coordinate system;
determining scene information of a real scene in which the device is located based at least in part on actual location information of the device.
8. The method of claim 2, further comprising:
obtaining position and pose information of the device relative to the visual marker; and
determining or correcting the position and/or pose information of the visual marker in the real scene based on the position and pose information of the device in the real scene and the position and pose information of the device relative to the visual marker.
9. The method of claim 8, wherein the position and pose information of the device relative to the visual marker and the position and pose information of the device in the real scene are determined at the same or different times.
10. The method according to any of claims 1-9, wherein the scene information comprises information related to a number of auxiliary markers in the scene.
11. The method of claim 10, wherein the information related to the auxiliary markers comprises spatial position information and feature information of the auxiliary markers.
12. A device location system comprising:
one or more visual markers installed in a scene;
a device having an image capture device mounted thereon, said image capture device being capable of capturing an image containing said visual marker; and
an apparatus configured to implement the method of any one of claims 1-11.
13. The system of claim 12, wherein the apparatus is a server capable of communicating with the device.
14. The system of claim 12, wherein the apparatus is integrated in the device.
15. The system of claim 12, further comprising a plurality of auxiliary markers disposed in the scene.
16. A storage medium in which a computer program is stored which, when being executed by a processor, is operative to carry out the method of any one of claims 1-11.
17. An electronic device comprising a processor and a memory, the memory having stored therein a computer program which, when executed by the processor, is operable to carry out the method of any of claims 1-11.
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010336312.5A CN111256701A (en) | 2020-04-26 | 2020-04-26 | Equipment positioning method and system |
| PCT/CN2021/084371 WO2021218546A1 (en) | 2020-04-26 | 2021-03-31 | Device positioning method and system |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010336312.5A CN111256701A (en) | 2020-04-26 | 2020-04-26 | Equipment positioning method and system |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN111256701A true CN111256701A (en) | 2020-06-09 |
Family
ID=70950019
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202010336312.5A Pending CN111256701A (en) | 2020-04-26 | 2020-04-26 | Equipment positioning method and system |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN111256701A (en) |
| WO (1) | WO2021218546A1 (en) |
Cited By (15)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN112102407A (en) * | 2020-09-09 | 2020-12-18 | 北京市商汤科技开发有限公司 | Display equipment positioning method and device, display equipment and computer storage medium |
| CN112484713A (en) * | 2020-10-15 | 2021-03-12 | 珊口(深圳)智能科技有限公司 | Map construction method, navigation method and control system of mobile robot |
| CN112556701A (en) * | 2020-12-23 | 2021-03-26 | 北京嘀嘀无限科技发展有限公司 | Method, device, equipment and storage medium for positioning vehicle |
| CN112581630A (en) * | 2020-12-08 | 2021-03-30 | 北京外号信息技术有限公司 | User interaction method and system |
| CN112712559A (en) * | 2020-12-28 | 2021-04-27 | 长安大学 | SfM point cloud correction method based on NED coordinate system vector rotation |
| WO2021218546A1 (en) * | 2020-04-26 | 2021-11-04 | 北京外号信息技术有限公司 | Device positioning method and system |
| TWI747333B (en) * | 2020-06-17 | 2021-11-21 | 光時代科技有限公司 | Interaction method based on optical communictation device, electric apparatus, and computer readable storage medium |
| CN114071003A (en) * | 2020-08-06 | 2022-02-18 | 北京外号信息技术有限公司 | Shooting method and system based on optical communication device |
| CN114066990A (en) * | 2020-07-31 | 2022-02-18 | 北京外号信息技术有限公司 | Two-dimensional image-based scene reconstruction method, electronic device and medium |
| CN114140520A (en) * | 2021-12-01 | 2022-03-04 | 上海擎朗智能科技有限公司 | Method and device for determining position information and storage medium |
| CN114323013A (en) * | 2020-09-30 | 2022-04-12 | 北京外号信息技术有限公司 | Method for determining position information of a device in a scene |
| WO2022121606A1 (en) * | 2020-12-08 | 2022-06-16 | 北京外号信息技术有限公司 | Method and system for obtaining identification information of device or user thereof in scenario |
| CN114726996A (en) * | 2021-01-04 | 2022-07-08 | 北京外号信息技术有限公司 | Method and system for establishing a mapping between a spatial position and an imaging position |
| CN114820776A (en) * | 2021-01-29 | 2022-07-29 | 北京外号信息技术有限公司 | Method and electronic device for obtaining information of objects in scene |
| CN114842069A (en) * | 2021-01-30 | 2022-08-02 | 华为技术有限公司 | Pose determination method and related equipment |
Citations (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN104378735A (en) * | 2014-11-13 | 2015-02-25 | 无锡儒安科技有限公司 | Indoor positioning method, client side and server |
| CN105987693A (en) * | 2015-05-19 | 2016-10-05 | 北京蚁视科技有限公司 | Visual positioning device and three-dimensional surveying and mapping system and method based on visual positioning device |
| US20170154424A1 (en) * | 2015-12-01 | 2017-06-01 | Canon Kabushiki Kaisha | Position detection device, position detection method, and storage medium |
| CN106989746A (en) * | 2017-03-27 | 2017-07-28 | 远形时空科技(北京)有限公司 | Air navigation aid and guider |
| CN107328420A (en) * | 2017-08-18 | 2017-11-07 | 上海木爷机器人技术有限公司 | Localization method and device |
| CN108917758A (en) * | 2018-02-24 | 2018-11-30 | 石化盈科信息技术有限责任公司 | A kind of navigation methods and systems based on AR |
| CN109063799A (en) * | 2018-08-10 | 2018-12-21 | 珠海格力电器股份有限公司 | Positioning method and device of equipment |
| CN109099915A (en) * | 2018-06-27 | 2018-12-28 | 未来机器人(深圳)有限公司 | Method for positioning mobile robot, device, computer equipment and storage medium |
Family Cites Families (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN105094335B (en) * | 2015-08-04 | 2019-05-10 | 天津锋时互动科技有限公司 | Scene extraction method, object localization method and system thereof |
| US10417469B2 (en) * | 2016-05-07 | 2019-09-17 | Morgan E. Davidson | Navigation using self-describing fiducials |
| CN110211242A (en) * | 2019-06-06 | 2019-09-06 | 芋头科技(杭州)有限公司 | The method that indoor augmented reality information is shown |
| CN111256701A (en) * | 2020-04-26 | 2020-06-09 | 北京外号信息技术有限公司 | Equipment positioning method and system |
-
2020
- 2020-04-26 CN CN202010336312.5A patent/CN111256701A/en active Pending
-
2021
- 2021-03-31 WO PCT/CN2021/084371 patent/WO2021218546A1/en not_active Ceased
Patent Citations (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN104378735A (en) * | 2014-11-13 | 2015-02-25 | 无锡儒安科技有限公司 | Indoor positioning method, client side and server |
| CN105987693A (en) * | 2015-05-19 | 2016-10-05 | 北京蚁视科技有限公司 | Visual positioning device and three-dimensional surveying and mapping system and method based on visual positioning device |
| US20170154424A1 (en) * | 2015-12-01 | 2017-06-01 | Canon Kabushiki Kaisha | Position detection device, position detection method, and storage medium |
| CN106989746A (en) * | 2017-03-27 | 2017-07-28 | 远形时空科技(北京)有限公司 | Air navigation aid and guider |
| CN107328420A (en) * | 2017-08-18 | 2017-11-07 | 上海木爷机器人技术有限公司 | Localization method and device |
| CN108917758A (en) * | 2018-02-24 | 2018-11-30 | 石化盈科信息技术有限责任公司 | A kind of navigation methods and systems based on AR |
| CN109099915A (en) * | 2018-06-27 | 2018-12-28 | 未来机器人(深圳)有限公司 | Method for positioning mobile robot, device, computer equipment and storage medium |
| CN109063799A (en) * | 2018-08-10 | 2018-12-21 | 珠海格力电器股份有限公司 | Positioning method and device of equipment |
Cited By (22)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2021218546A1 (en) * | 2020-04-26 | 2021-11-04 | 北京外号信息技术有限公司 | Device positioning method and system |
| TWI747333B (en) * | 2020-06-17 | 2021-11-21 | 光時代科技有限公司 | Interaction method based on optical communictation device, electric apparatus, and computer readable storage medium |
| CN114066990A (en) * | 2020-07-31 | 2022-02-18 | 北京外号信息技术有限公司 | Two-dimensional image-based scene reconstruction method, electronic device and medium |
| CN114066990B (en) * | 2020-07-31 | 2025-09-16 | 北京外号信息技术有限公司 | Scene reconstruction method based on two-dimensional image, electronic equipment and medium |
| CN114071003B (en) * | 2020-08-06 | 2024-03-12 | 北京外号信息技术有限公司 | Shooting method and system based on optical communication device |
| CN114071003A (en) * | 2020-08-06 | 2022-02-18 | 北京外号信息技术有限公司 | Shooting method and system based on optical communication device |
| CN112102407A (en) * | 2020-09-09 | 2020-12-18 | 北京市商汤科技开发有限公司 | Display equipment positioning method and device, display equipment and computer storage medium |
| CN114323013A (en) * | 2020-09-30 | 2022-04-12 | 北京外号信息技术有限公司 | Method for determining position information of a device in a scene |
| CN112484713A (en) * | 2020-10-15 | 2021-03-12 | 珊口(深圳)智能科技有限公司 | Map construction method, navigation method and control system of mobile robot |
| WO2022121606A1 (en) * | 2020-12-08 | 2022-06-16 | 北京外号信息技术有限公司 | Method and system for obtaining identification information of device or user thereof in scenario |
| CN112581630A (en) * | 2020-12-08 | 2021-03-30 | 北京外号信息技术有限公司 | User interaction method and system |
| CN112556701A (en) * | 2020-12-23 | 2021-03-26 | 北京嘀嘀无限科技发展有限公司 | Method, device, equipment and storage medium for positioning vehicle |
| CN112712559B (en) * | 2020-12-28 | 2021-11-30 | 长安大学 | SfM point cloud correction method based on NED coordinate system vector rotation |
| CN112712559A (en) * | 2020-12-28 | 2021-04-27 | 长安大学 | SfM point cloud correction method based on NED coordinate system vector rotation |
| CN114726996A (en) * | 2021-01-04 | 2022-07-08 | 北京外号信息技术有限公司 | Method and system for establishing a mapping between a spatial position and an imaging position |
| CN114726996B (en) * | 2021-01-04 | 2024-03-15 | 北京外号信息技术有限公司 | Methods and systems for establishing mapping between spatial locations and imaging locations |
| CN114820776A (en) * | 2021-01-29 | 2022-07-29 | 北京外号信息技术有限公司 | Method and electronic device for obtaining information of objects in scene |
| CN114842069A (en) * | 2021-01-30 | 2022-08-02 | 华为技术有限公司 | Pose determination method and related equipment |
| WO2022161386A1 (en) * | 2021-01-30 | 2022-08-04 | 华为技术有限公司 | Pose determination method and related device |
| EP4276760A4 (en) * | 2021-01-30 | 2024-06-19 | Huawei Technologies Co., Ltd. | METHOD FOR DETERMINING INSTALLATION AND ASSOCIATED DEVICE |
| CN114140520B (en) * | 2021-12-01 | 2025-02-11 | 上海擎朗智能科技有限公司 | Method, device and storage medium for determining location information |
| CN114140520A (en) * | 2021-12-01 | 2022-03-04 | 上海擎朗智能科技有限公司 | Method and device for determining position information and storage medium |
Also Published As
| Publication number | Publication date |
|---|---|
| WO2021218546A1 (en) | 2021-11-04 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN111256701A (en) | Equipment positioning method and system | |
| EP3246660B1 (en) | System and method for referencing a displaying device relative to a surveying instrument | |
| EP3012587B1 (en) | Image processing device, image processing method, and program | |
| KR101880185B1 (en) | Electronic apparatus for estimating pose of moving object and method thereof | |
| KR102006291B1 (en) | Method for estimating pose of moving object of electronic apparatus | |
| CN111026107B (en) | Method and system for determining the position of a movable object | |
| CN105973236A (en) | Indoor positioning or navigation method and device, and map database generation method | |
| JP2008039611A (en) | Position / orientation measuring apparatus, position / orientation measuring method, mixed reality presentation system, computer program, and storage medium | |
| CN103398717A (en) | Panoramic map database acquisition system and vision-based positioning and navigating method | |
| KR102622585B1 (en) | Indoor navigation apparatus and method | |
| JP2021193538A (en) | Information processing device, mobile device, information processing system and method, and program | |
| CN107607110A (en) | A kind of localization method and system based on image and inertial navigation technique | |
| JP2009140402A (en) | INFORMATION DISPLAY DEVICE, INFORMATION DISPLAY METHOD, INFORMATION DISPLAY PROGRAM, AND RECORDING MEDIUM CONTAINING INFORMATION DISPLAY PROGRAM | |
| CN109982239A (en) | Store floor positioning system and method based on machine vision | |
| CN102288180B (en) | Real-time image navigation system and method | |
| TWI750821B (en) | Navigation method, system, equipment and medium based on optical communication device | |
| CN112528699B (en) | Method and system for obtaining identification information of devices or users thereof in a scene | |
| JP2013024686A (en) | Mobile mapping system, method for measuring route object using the same, and position specification program | |
| US20250095202A1 (en) | Program, information processing device, and information processing method | |
| JP7705220B2 (en) | Location management system and location management method | |
| JP7467299B2 (en) | Location management system, location identification device, and location identification method | |
| CN112581630B (en) | User interaction method and system | |
| CN114726996B (en) | Methods and systems for establishing mapping between spatial locations and imaging locations | |
| CN111752293B (en) | Method and electronic device for guiding a machine capable of autonomous movement | |
| CN114663491B (en) | Method and system for providing information to users in a scene |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| RJ01 | Rejection of invention patent application after publication |
Application publication date: 20200609 |