CN107885800B - Method and device for correcting target position in map, computer equipment and storage medium - Google Patents
- Publication number: CN107885800B (application CN201711043015.6A)
- Authority
- CN
- China
- Prior art keywords
- live-action
- map
- target position
- information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/29—Geographical information databases
Landscapes
- Engineering & Computer Science (AREA)
- Databases & Information Systems (AREA)
- Theoretical Computer Science (AREA)
- Remote Sensing (AREA)
- Data Mining & Analysis (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Navigation (AREA)
- Instructional Devices (AREA)
Abstract
The application relates to a method, an apparatus, a computer device, and a computer-readable storage medium for correcting a target position in a map. The method comprises: acquiring target position description information; determining a target position in a planar map according to the target position description information; jumping to a live-action map and acquiring live-action information from a street-scene image displayed in the live-action map according to the target position determined in the planar map; feeding back the live-action information; receiving position offset description information in response to the live-action information; determining live-action information in the live-action map according to the position offset description information; and taking the position in the planar map corresponding to the determined live-action information as the corrected target position. With this scheme, an accurate target position can be obtained, improving the accuracy of target position acquisition.
Description
Technical Field
The present invention relates to the field of electronic maps, and in particular, to a method, an apparatus, a computer device, and a computer-readable storage medium for correcting a target position in a map.
Background
At present, it is difficult for a worker to determine an accurate target position from target position description information alone, such as a spoken description or written notes. Workers therefore often retrieve the target position with the help of a planar map.
However, a target position retrieved from a planar map according to target position description information often cannot be confirmed precisely, due to factors such as ambiguous place names or an overly large search range. The finally confirmed target position is therefore inaccurate, and the accuracy of obtaining a target position by conventional methods is low.
Disclosure of Invention
Based on this, it is necessary to provide a method, an apparatus, a computer device, and a computer-readable storage medium for correcting a target position in a map, in order to solve the problem that the accuracy of obtaining the target position by using the conventional method is low.
A method of target location correction in a map, the method comprising:
acquiring target position description information;
determining a target position in a plane map according to the target position description information;
jumping to a live-action map, and acquiring live-action information from a street scene image displayed in the live-action map according to the determined target position in the planar map;
feeding back the live-action information;
receiving position offset description information in response to the live-action information;
determining live-action information in the live-action map according to the position offset description information;
and taking the position in the planar map corresponding to the determined live-action information as the corrected target position.
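The claimed steps can be condensed into a minimal sketch in code form (all names here — `Position`, the map objects, `client.feedback` — are hypothetical illustrations, not from the patent):

```python
from dataclasses import dataclass

@dataclass
class Position:
    lng: float
    lat: float

def correct_target_position(description, plane_map, live_map, client):
    """Hypothetical sketch of the claimed correction loop."""
    # Steps 1-2: retrieve a candidate target position from the plane map.
    target = plane_map.search(description)
    # Step 3: jump to the live-action map and read the street scene there.
    live_info = live_map.street_scene_at(target)
    # Steps 4-5: feed the live-action info back; a non-None reply is an offset.
    offset = client.feedback(live_info)
    while offset is not None:
        # Step 6: re-determine the live-action info after applying the offset.
        target = Position(target.lng + offset.dlng, target.lat + offset.dlat)
        live_info = live_map.street_scene_at(target)
        offset = client.feedback(live_info)
    # Step 7: the plane-map position matching the confirmed info is the result.
    return target
```

The loop terminates when the client replies with no further offset, i.e. confirms the fed-back live-action information.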
In one embodiment, the jumping to the live-action map, and acquiring live-action information from a street view displayed in the live-action map according to the target position determined in the plane map, includes:
converting the plane map to obtain a live-action map corresponding to the plane map;
extracting plane coordinates of the target position in a plane map;
converting the plane coordinates into real scene coordinates;
displaying street scenes in the live-action map according to the live-action coordinates;
and acquiring corresponding live-action information of the target position in the live-action map from the street scene image displayed in the live-action map.
In one embodiment, the determining the live-action information in the live-action map according to the position offset description information comprises:
triggering an instruction for moving the live-action map according to the position offset description information;
acquiring the re-acquired live-action information, wherein the re-acquired live-action information is obtained after the live-action map is moved;
feeding back the re-acquired live-action information;
if a message indicating confirmation of the re-acquired live-action information is received, determining the re-acquired live-action information in the live-action map;
and if a message indicating denial of the re-acquired live-action information is received, returning to the step of triggering an instruction for moving the live-action map according to the position offset description information.
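The confirm-or-retry behaviour above can be sketched as a small loop; `live_map.move`, `current_scene`, and the `"confirm"` reply are hypothetical stand-ins for the described move instruction, re-acquired information, and confirmation message:

```python
def refine_live_action_info(live_map, offset_info, client, max_rounds=10):
    """Move the live-action map until the client confirms the scene."""
    for _ in range(max_rounds):
        live_map.move(offset_info)        # trigger the move instruction
        info = live_map.current_scene()   # re-acquire live-action information
        reply = client.feedback(info)     # feed it back to the terminal
        if reply == "confirm":
            return info                   # confirmed: accurate live-action info
        offset_info = reply               # a denial carries a fresh offset
    raise RuntimeError("no confirmation within max_rounds")
```

The `max_rounds` cap is an added safeguard, not part of the claim, so a client that never confirms cannot loop forever.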
In one embodiment, the method further comprises:
when the target position is displayed as a non-passing area in the plane map, retrieving a passing area around the target position;
and reselecting the target position in the passing area according to the plane coordinates of the target position, so that the distance between the target position and the reselected target position is minimum.
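Reselecting the closest position inside a passing area amounts to projecting the original target onto the surrounding road geometry. A minimal sketch, assuming retrieved roads are given as straight segments in plane coordinates (function names are illustrative):

```python
def nearest_point_on_segment(p, a, b):
    """Project point p onto segment a-b (all in plane coordinates)."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return a                                   # degenerate segment
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))                      # clamp onto the segment
    return (ax + t * dx, ay + t * dy)

def reselect_target(target, road_segments):
    """Pick the road point with minimum distance to the original target."""
    candidates = [nearest_point_on_segment(target, a, b)
                  for a, b in road_segments]
    return min(candidates,
               key=lambda q: (q[0] - target[0]) ** 2 + (q[1] - target[1]) ** 2)
```

Minimizing squared distance yields the same reselected point as minimizing distance, and avoids a square root per candidate.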
In one embodiment, the method further comprises:
feeding back a live-action image acquisition instruction when live-action information corresponding to the target position is not detected in the live-action map;
receiving a live-action image acquired according to the live-action image acquisition instruction;
and calibrating the target position according to the live-action image and the target position description information, and determining an accurate target position in the plane map.
An in-map target position correction apparatus, the apparatus comprising:
the description information acquisition module is used for acquiring the description information of the target position;
the target position determining module is used for determining a target position in the planar map according to the target position description information;
the real-scene information acquisition module is used for jumping to a real-scene map and acquiring real-scene information from a street scene image displayed in the real-scene map according to the target position determined in the plane map;
the real-scene information feedback module is used for feeding back the real-scene information;
a description information receiving module for receiving position offset description information in response to the live-action information;
the live-action information determining module is used for determining live-action information in the live-action map according to the position offset description information;
and the target position correction module is used for taking the position corresponding to the determined real scene information in the plane map as a corrected target position.
In one embodiment, the apparatus further comprises:
the map conversion module is used for converting the plane map to obtain a live-action map corresponding to the plane map;
the coordinate extraction module is used for extracting plane coordinates of the target position in a plane map;
the coordinate conversion module is used for converting the plane coordinate into a real scene coordinate;
the street scene display module is used for displaying a street scene in the live-action map according to the live-action coordinates;
the live-action information acquisition module is further configured to acquire, from a street scene displayed in the live-action map, corresponding live-action information of the target position in the live-action map.
In one embodiment, the apparatus further comprises:
the instruction triggering module is used for triggering an instruction for moving the live-action map according to the position offset description information;
the live-action information acquisition module is further used for acquiring the re-acquired live-action information, and the re-acquired live-action information is obtained after the live-action map is moved;
the live-action information feedback module is also used for feeding back the re-acquired live-action information;
the live-action information determining module is further configured to determine the re-acquired live-action information in the live-action map if a message indicating confirmation of the re-acquired live-action information is received;
the instruction triggering module is further configured to return to the step of triggering the instruction for moving the live-action map according to the position offset description information if a message indicating denial of the re-acquired live-action information is received.
A computer-readable storage medium, storing a computer program which, when executed by a processor, causes the processor to perform the steps of the method according to any one of the preceding claims.
A computer device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of the method of any one of the preceding claims.
With the method, the device, the computer equipment, and the computer-readable storage medium for correcting a target position in a map, target position description information is acquired and used to retrieve and determine a target position in the planar map. The view then jumps to a live-action map, where a street scene is displayed according to the target position and live-action information corresponding to the target position is acquired from that scene, so the target position can be observed more intuitively. The live-action information is fed back to obtain position offset description information in response, and the live-action information can be calibrated according to this offset information to determine accurate live-action information. The target position can then be corrected in the planar map from the accurate live-action information, so an accurate target position is obtained and the accuracy of obtaining the target position is improved.
Drawings
FIG. 1 is a diagram illustrating an exemplary embodiment of a method for correcting a location of an object in a map;
FIG. 2 is a schematic flow chart illustrating a method for correcting a location of an object in a map according to an embodiment;
FIG. 3 is a schematic flow chart illustrating a method for correcting a target position in a map according to another embodiment;
FIG. 4 is a block diagram of an apparatus for correcting the position of a target in a map according to an embodiment;
FIG. 5 is a block diagram showing an alternative embodiment of a target position correcting device in a map;
FIG. 6 is a block diagram of an apparatus for correcting the position of a target in a map according to an embodiment;
FIG. 7 is a block diagram showing an example of a target position correcting device in a map according to another embodiment;
FIG. 8 is a block diagram of an apparatus for correcting a target position in a map according to an embodiment;
FIG. 9 is a block diagram of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
Fig. 1 is an application environment diagram of a target position correction method in a map according to an embodiment. Referring to fig. 1, the in-map target position correction method is applied to a target position correction system. The system includes a terminal 110 and a server 120, connected through a network. The terminal 110 may specifically be a desktop terminal or a mobile terminal, and the mobile terminal may specifically be at least one of a mobile phone, a tablet computer, a notebook computer, and the like. The server 120 may be implemented as a stand-alone server or a server cluster composed of a plurality of servers.
As shown in fig. 2, in one embodiment, a method for correcting a target position in a map is provided. The embodiment is mainly illustrated by applying the method to the server 120 in fig. 1. Referring to fig. 2, the method for correcting the target position in the map specifically includes the following steps:
s202, obtaining the description information of the target position.
Wherein the target position description information is information for describing a target position. For example, the target location description information may be specifically voice information transmitted through the terminal, text information transmitted through the terminal, or image information transmitted through the terminal. Correspondingly, the target location description information may be obtained by receiving, by the server, the target location description information transmitted by the terminal.
In one embodiment, the terminal sends the target location description information to the server through instant messaging software or real-time messaging software installed on the terminal, and the server acquires the target location description information sent by the terminal.
And S204, determining the target position in the plane map according to the target position description information.
Wherein the planar map may be a two-dimensional electronic map. The target position is determined in the planar map, the coordinate point can be directly selected as the target position according to the position information displayed on the planar map, or the target position can be obtained through retrieval of the planar map according to the target position description information.
In one embodiment, the server parses the received target location description information to obtain corresponding city information, administrative district information, and insurance place information; retrieves, from a database of the planar map, a plurality of pieces of location information matching these items; displays the corresponding locations on the map; and determines a target location from among them according to the target location description information.
S206, jumping to the live-action map, and acquiring live-action information from the street scene image displayed in the live-action map according to the target position determined in the plane map.
Wherein the live-action map is a map in which street scenes can be seen. The street view is a view of a street displayed in a live-action map. The live-action map can specifically display two-dimensional street images and can also display three-dimensional street images. The live-action information is information describing a street view. The live-action information may be specifically text information, voice information, or image information. Jumping to the live-action map can specifically be calling the live-action map to cover the plane map, or displaying the plane map and the corresponding live-action map on the same screen on the display screen.
In one embodiment, the server triggers a jump instruction for jumping from the planar map to the live-action map and, before executing it, detects whether the target position has corresponding live-action information in the live-action map. If not, the target position is reselected on the planar map, so that after the jump instruction is executed the server can acquire live-action information corresponding to the target position in the live-action map.
And S208, feeding back the live-action information.
Feeding back the live-action information may specifically mean feeding it back to the terminal that sent the target location description information.
In one embodiment, the terminal sends target location description information in the form of textual information to the server. And after acquiring the live-action information according to the target position description information, the server feeds back the live-action information in the form of voice information to the terminal.
S210, receiving the positional deviation description information in response to the live-action information.
Wherein the positional offset description information is information for describing a positional offset of the target. The position deviation description information may be specifically text information, voice information, or image information. The received location offset description information may be specifically transmitted through real-time messaging software or instant messaging software on the terminal.
In one embodiment, the terminal sends the position offset description information in the form of voice information through the real-time communication software. After receiving the position offset description information, the server parses it and obtains a message indicating an offset of 15 meters eastward.
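Parsing such a message into a coordinate delta might look like the sketch below; the phrase format and direction vocabulary are assumptions, since the patent only states that a voice or text offset description is analyzed:

```python
import math
import re

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters
DIRECTIONS = {"east": (1, 0), "west": (-1, 0), "north": (0, 1), "south": (0, -1)}

def offset_to_degrees(text, lat):
    """Turn e.g. 'offset 15 meters east' into (dlng, dlat) in degrees."""
    m = re.search(r"(\d+(?:\.\d+)?)\s*meters?\s+(east|west|north|south)", text)
    if m is None:
        return None
    meters, direction = float(m.group(1)), m.group(2)
    ew, ns = DIRECTIONS[direction]
    dlat = ns * meters / EARTH_RADIUS_M * 180.0 / math.pi
    # Longitude degrees shrink with latitude, hence the cosine correction.
    dlng = (ew * meters / (EARTH_RADIUS_M * math.cos(math.radians(lat)))
            * 180.0 / math.pi)
    return dlng, dlat
```

The spherical-Earth approximation is more than adequate for offsets of tens of meters.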
S212, determining the live-action information in the live-action map according to the position deviation description information.
The live-action information is determined according to the position offset description information: the live-action map may be shifted according to the position offset description information, and the live-action information may then be determined in the shifted live-action map.
Specifically, the server shifts the live-action map according to the position offset description information to redisplay the street view, and determines the live-action information from the redisplayed street view.
In one embodiment, after the live-action map is shifted 45 meters eastward according to the position offset description information, the live-action information determined in the shifted map is a street view showing a bank with trees to its right.
In one embodiment, after the live-action map is shifted according to the position offset description information, the determined live-action information is a street view, and the determined street view is described by voice to obtain live-action information in the form of voice information.
And S214, taking the position corresponding to the determined real scene information in the plane map as the corrected target position.
The position in the planar map corresponding to the determined live-action information may be a position calibrated in the planar map after the live-action map is converted into the planar map, or a position automatically corrected in the planar map when the live-action information is determined in the live-action map.
In one embodiment, the planar map and the live-action map are displayed simultaneously on the display of the server, and live-action information corresponding to the target position selected on the planar map is displayed in the live-action map. When the live-action map is shifted to acquire newly determined live-action information, the target position is simultaneously re-determined on the planar map, so that it corresponds to the newly determined live-action information in the live-action map.
In one embodiment, the planar map and the live-action map are simultaneously displayed on the display of the server, and live-action information corresponding to the target position selected by the planar map is displayed in the live-action map. When the target position is redetermined on the plan map, live-action information corresponding to the redetermined target position is acquired on the live-action map.
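The two-way coupling described in these embodiments — re-determining the position on either map updates the other — can be sketched as follows (class and attribute names are illustrative only):

```python
class SyncedMaps:
    """Keep a plane-map marker and a live-action view on one shared position."""
    def __init__(self, position=None):
        self.plane_pos = position
        self.live_pos = position

    def set_plane(self, position):
        """Re-determine the target on the plane map; the live view follows."""
        self.plane_pos = position
        self.live_pos = position

    def set_live(self, position):
        """Shift the live-action view; the plane-map marker follows."""
        self.live_pos = position
        self.plane_pos = position
```

Whichever map is manipulated, both views end up agreeing on a single position, which is what makes the final plane-map position a valid corrected target.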
According to the method for correcting the target position in the map, the target position is determined by acquiring the target position description information and retrieving the target position description information in the planar map. And jumping to a live-action map, displaying a street scene in the live-action map according to the target position, acquiring live-action information corresponding to the target position according to the street scene, and observing the acquired live-action information to more intuitively observe the corresponding target position. The live-action information is fed back to obtain the position offset description information responding to the live-action information, and the live-action information can be calibrated according to the position offset description information to determine the accurate live-action information. And the target position can be corrected in the plane map by determining the accurate live-action information to obtain the accurate target position, so that the accurate target position is obtained, and the accuracy of obtaining the target position is improved.
In one embodiment, jumping to a live-action map, and acquiring live-action information from a street scene displayed in the live-action map according to a target position determined in a plane map, includes: converting the plane map to obtain a live-action map corresponding to the plane map; extracting a plane coordinate of the target position in a plane map; converting the plane coordinates into real scene coordinates; displaying a street scene in a live-action map according to the live-action coordinates; and acquiring corresponding live-action information of the target position in the live-action map from the street scene image displayed in the live-action map.
The conversion from the planar map to the live-action map may be triggered by a map switching command. The plane coordinates may specifically be GCJ-02 ("Mars") coordinates, and the live-action coordinates may specifically be BD-09 (Baidu) coordinates.
In one embodiment, the server converts the plane map to obtain a live-action map corresponding to the plane map, extracts plane coordinates of the target position on the plane map, and converts the plane coordinates to the live-action coordinates to obtain live-action information corresponding to the target position in the live-action map. And when the target position is reselected on the plane map, replacing the live-action information corresponding to the reselected target position on the live-action map in real time.
In the embodiment, the plane coordinates and the live-action coordinates are converted, so that the live-action map and the plane map can be accurately converted, the situation of the target position can be accurately reflected according to the live-action information obtained by converting the target position, and the accuracy of the target position is improved.
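The patent does not give the conversion formulas, but a widely published approximation for converting between GCJ-02 ("Mars") plane coordinates and Baidu's BD-09 coordinates is the following sketch (vendor SDKs may implement the conversion differently):

```python
import math

# Constant used by the published BD-09 <-> GCJ-02 approximation.
X_PI = math.pi * 3000.0 / 180.0

def gcj02_to_bd09(lng, lat):
    """Approximate GCJ-02 ('Mars') -> BD-09 (Baidu) conversion."""
    z = math.sqrt(lng * lng + lat * lat) + 0.00002 * math.sin(lat * X_PI)
    theta = math.atan2(lat, lng) + 0.000003 * math.cos(lng * X_PI)
    return z * math.cos(theta) + 0.0065, z * math.sin(theta) + 0.006

def bd09_to_gcj02(lng, lat):
    """Approximate inverse: BD-09 (Baidu) -> GCJ-02 ('Mars')."""
    x, y = lng - 0.0065, lat - 0.006
    z = math.sqrt(x * x + y * y) - 0.00002 * math.sin(y * X_PI)
    theta = math.atan2(y, x) - 0.000003 * math.cos(x * X_PI)
    return z * math.cos(theta), z * math.sin(theta)
```

The pair is not an exact inverse, but the round-trip error is on the order of 1e-5 degrees, well below street-level precision.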
In one embodiment, determining the live-action information in the live-action map according to the position offset description information comprises: triggering an instruction for moving the live-action map according to the position offset description information; acquiring the re-acquired live-action information, wherein the re-acquired live-action information is obtained after the live-action map is moved; feeding back the re-acquired live-action information; if a message indicating confirmation of the re-acquired live-action information is received, determining the re-acquired live-action information in the live-action map; and if a message indicating denial of the re-acquired live-action information is received, returning to the step of triggering the instruction for moving the live-action map according to the position offset description information.
The live-action map may be moved specifically by triggering a direction button in the live-action map, or by re-determining the target position on the planar map and converting again. A message indicating confirmation of the re-acquired live-action information indicates that this information accurately corresponds to the target position; a message indicating denial indicates that it does not.
In one embodiment, after the server receives the position offset description information, the server reselects the target position on the planar map, so that the live-action information corresponding to the reselected target position is displayed on the live-action map, and moves the live-action map according to the position offset description information to acquire the live-action information again. After the re-acquired live-action information is fed back to the terminal, if a message for confirming the representation of the re-acquired live-action information is received, determining the re-acquired live-action information in the live-action map; and if a message indicating denial of the newly acquired live-action information is received, returning to the step of triggering the instruction of moving the live-action map according to the position deviation description information.
In this embodiment, the live-action map is moved according to the position offset description information to obtain accurate live-action information, so that an accurate target position can be obtained from it, improving the accuracy of the target position.
In one embodiment, the method for correcting the target position in the map further includes: when the target position is displayed as a non-passing area in the plane map, searching a passing area around the target position; and reselecting the target position in the passing area according to the plane coordinates of the target position, so that the distance between the target position and the reselected target position is minimum.
Wherein the non-passing area is an area where a road is not displayed on the plan map. The traffic area is an area where roads are displayed on a plan map.
Specifically, when the position of the target position in the plan map does not show a road, the server retrieves roads around the target position in the plan map and reselects the target position on the retrieved roads so that the distance between the target position and the reselected target position is minimized.
In one embodiment, when the position of the target location in the planar map does not show a road, the server retrieves roads in the area around the target location and finds a sidewalk and a motor-vehicle lane. The server re-determines a target position on each, minimizing the distance to the original target position, and plans a route from the re-determined position on the motor-vehicle lane to the re-determined position on the sidewalk.
In this embodiment, when the target position is displayed as the non-passing area in the planar map, the target position is re-determined in the passing area, so that the probability that the re-determined target position acquires the live-action information is higher. Therefore, after the live-action information is acquired, the target position is calibrated according to the live-action information, and the accuracy of acquiring the target position is improved.
In one embodiment, the method for correcting the target position in the map further includes: feeding back a live-action image acquisition instruction when live-action information corresponding to the target position is not detected in the live-action map; receiving a live-action image acquired according to a live-action image acquisition instruction; and calibrating the target position according to the live-action image and the target position description information, and determining the accurate target position in the plane map.
The live-action image acquisition instruction is an instruction for acquiring a live-action image. The live-action image acquisition instruction may specifically start a camera program capable of acquiring a live-action image on the terminal, or may be text information or voice information indicating acquisition of the live-action image.
In one embodiment, when the server does not detect live-action information corresponding to the target position in the live-action map, the server feeds back a live-action image acquisition instruction to the terminal, so that the terminal starts a camera program and acquires a live-action image near the terminal. After receiving the live-action image collected and sent by the terminal, the server calibrates the target position description information according to the live-action image, and calibrates the target position in the planar map according to the calibrated description information to determine the accurate target position.
In this embodiment, when the live-action information corresponding to the target position is not detected in the live-action map, the acquired live-action image is acquired according to the live-action image acquisition instruction, and the target position is calibrated according to the live-action image and the target position description information, so that the accurate target position can be determined, and the accuracy of acquiring the target position is improved.
In one embodiment, when live-action information corresponding to the target position is not detected in the live-action map, the server searches the area around the target position for an area in which corresponding live-action information can be detected, and re-determines the target position in the retrieved area so that the distance between the original and re-determined target positions is minimized.
Fig. 3 is a flowchart illustrating a method for correcting a target position in a map according to an embodiment. Referring to fig. 3, the method for correcting the target position in the map specifically includes the following steps:
S302, acquiring the target position description information.
S304, determining the target position in the plane map according to the target position description information.
S306, converting the plane map to obtain a live-action map corresponding to the plane map.
After step S306, step S312 is performed. However, if the target position is displayed in a non-passing area on the plane map, or if the plane map region containing the target position has no corresponding live-action map, step S312 is skipped and step S308 is executed instead.
S308, retrieving a passing area around the target position.
S310, reselecting the target position within the passing area according to the plane coordinates of the target position, so that the distance between the original target position and the reselected target position is minimized.
After step S310, the process returns to step S306.
S312, extracting the plane coordinates of the target position in the plane map.
S314, converting the plane coordinates into live-action coordinates.
S316, displaying a street scene in the live-action map according to the live-action coordinates.
S317, acquiring the live-action information corresponding to the target position in the live-action map according to the street scene.
S320, receiving position offset description information in response to the live-action information.
S322, triggering an instruction for moving the live-action map according to the position offset description information.
S324, acquiring re-acquired live-action information, where the re-acquired live-action information is obtained after the live-action map is moved.
S326, feeding back the re-acquired live-action information.
After step S326, if a message indicating confirmation of the newly acquired live-action information is received, step S328 is executed. If a message indicating denial of the newly acquired live-action information is received, the process returns to step S322.
S328, determining the re-acquired live-action information in the live-action map.
S330, taking the position corresponding to the determined live-action information in the plane map as the corrected target position.
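The branching among the steps above can be summarized as two loops: one that snaps the target into an area with live-action coverage (S306–S310), and one that repeats the offset-and-feedback exchange until the user confirms (S318–S326). Below is a minimal control-flow sketch, in which every callable is a hypothetical stand-in for the map platform or the user, not an API from the patent:

```python
def correct_target_position(locate, has_coverage, snap, fetch, review):
    """Control-flow sketch of steps S302–S330 (all hook names hypothetical)."""
    target = locate()                 # S302–S304: resolve the description
    while not has_coverage(target):   # branch after S306: no live-action data
        target = snap(target)         # S308–S310: snap to a passing area
    info = fetch(target)              # S312–S317: street scene → live info
    while True:
        offset = review(info)         # S318–S320: None means "confirmed"
        if offset is None:
            return info               # S328–S330: corrected position
        info = fetch(offset)          # S322–S326: move map, re-acquire

# Demo: the user nudges the position east once, then confirms.
replies = iter([(5, 4), None])
result = correct_target_position(
    locate=lambda: (4, 4),
    has_coverage=lambda p: p != (4, 4),   # pretend (4, 4) lacks coverage
    snap=lambda p: (3, 4),                # nearest passing area
    fetch=lambda p: p,                    # live info stands in for position
    review=lambda info: next(replies),
)
print(result)  # → (5, 4)
```

The two `while` loops correspond to the two return arrows in the flowchart (S310 back to S306, and S326 back to S322).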
In this embodiment, the target position description information is acquired, and the target position is determined by retrieving it in the plane map according to that description information. The method then jumps to a live-action map, displays a street scene according to the target position, and acquires the live-action information corresponding to the target position from the street scene, so that the target position can be observed more intuitively. The live-action information is fed back to obtain position offset description information in response, and the live-action information can be calibrated according to that offset description to determine accurate live-action information. With the accurate live-action information determined, the target position can be corrected in the plane map, thereby obtaining an accurate target position and improving the accuracy of acquiring the target position.
It should be understood that, although the steps in the flowchart of fig. 3 are shown sequentially as indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise herein, there is no strict ordering constraint on these steps, and they may be performed in other orders. Moreover, at least some of the steps in fig. 3 may include multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different times, and these sub-steps or stages are not necessarily performed sequentially but may be performed in turn or alternately with other steps or with sub-steps or stages of other steps.
As shown in fig. 4, in one embodiment, there is further provided an apparatus 400 for correcting a target position in a map, the apparatus 400 including: a description information obtaining module 402, a target position determining module 404, a live-action information obtaining module 406, a live-action information feedback module 408, a description information receiving module 410, a live-action information determining module 412 and a target position correcting module 414.
A description information obtaining module 402, configured to obtain the description information of the target location.
And a target position determining module 404, configured to determine a target position in the planar map according to the target position description information.
And the live-action information acquisition module 406 is used for jumping to a live-action map and acquiring live-action information from a street scene image displayed in the live-action map according to the target position determined in the planar map.
And a real-scene information feedback module 408, configured to feed back real-scene information.
A description information receiving module 410 for receiving the position offset description information in response to the live-action information.
And the live-action information determining module 412 is configured to determine live-action information in the live-action map according to the position offset description information.
And a target position correction module 414, configured to use a position in the planar map corresponding to the determined live-action information as a corrected target position.
With the above apparatus for correcting a target position in a map, the target position description information is acquired and retrieved in the plane map to determine the target position. The apparatus then jumps to a live-action map, displays a street scene according to the target position, and acquires the live-action information corresponding to the target position from the street scene, so that the target position can be observed more intuitively. The live-action information is fed back to obtain position offset description information in response, and the live-action information can be calibrated according to that offset description to determine accurate live-action information. With the accurate live-action information determined, the target position can be corrected in the plane map, thereby obtaining an accurate target position and improving the accuracy of acquiring the target position.
As shown in fig. 5, in an embodiment, the map target position correction apparatus 400 further includes: a map conversion module 416, configured to convert the planar map to obtain a live-action map corresponding to the planar map; a coordinate extraction module 418, configured to extract a plane coordinate of the target location in the plane map; a coordinate conversion module 420 for converting the plane coordinates into live-action coordinates; a street view display module 421, configured to display a street view in a live-action map according to the live-action coordinates; the live-action information obtaining module 406 is further configured to obtain, from the street scene displayed in the live-action map, corresponding live-action information of the target position in the live-action map.
As shown in fig. 6, in an embodiment, the map target position correction apparatus 400 further includes: the instruction triggering module 422 is configured to trigger an instruction for moving the live-action map according to the position offset description information; the live-action information obtaining module 406 is further configured to obtain re-obtained live-action information, where the re-obtained live-action information is obtained after moving the live-action map; the live-action information feedback module 408 is further configured to feed back the re-acquired live-action information; the live-action information determining module 412 is further configured to determine, if a message indicating confirmation of the re-acquired live-action information is received, the re-acquired live-action information in the live-action map; the instruction triggering module 422 is further configured to return to the step of triggering the instruction of moving the live-action map according to the location offset description information if a message indicating denial of the newly acquired live-action information is received.
As shown in fig. 7, in an embodiment, the map target position correction apparatus 400 further includes: a map retrieving module 424, configured to, when the target position is displayed in a non-passing area in the plane map, retrieve a passing area around the target position; the target position correction module 414 is further configured to reselect a target position within the passing area according to the plane coordinates of the target position, so that the distance between the original target position and the reselected target position is minimized.
As shown in fig. 8, in an embodiment, the real-scene information feedback module 408 is further configured to feed back a real-scene image obtaining instruction when the real-scene information corresponding to the target position is not detected in the real-scene map;
the map target position correction apparatus 400 further includes: a live-action image acquiring module 426, configured to receive a live-action image acquired according to the live-action image acquiring instruction;
the target position correction module 414 is further configured to calibrate the target position according to the live-action image and the target position description information, and determine an accurate target position in the plane map.
FIG. 9 is a diagram illustrating an internal structure of a computer device in one embodiment. The computer device may specifically be the server 120 in fig. 1. As shown in fig. 9, the computer apparatus includes a processor, a memory, a network interface, an input device, and a display screen connected through a system bus. Wherein the memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium of the computer device stores an operating system and may also store a computer program that, when executed by the processor, causes the processor to implement a method of target position correction in a map. The internal memory may also have a computer program stored therein, which when executed by the processor, causes the processor to perform a method of correcting a target location in a map. The internal memory provides a cached execution environment for the operating system and computer programs in the non-volatile storage medium. The network interface of the computer device may be used for network connection with the terminal. The display screen of the computer device can be a liquid crystal display screen or an electronic ink display screen, and the display screen can be used for displaying a plane map or a live-action map. The input device of the computer equipment can be a touch layer covered on a display screen, a key, a track ball or a touch pad arranged on a shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the architecture shown in fig. 9 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computer devices to which the disclosed aspects apply; a particular computer device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
In one embodiment, the target position correction apparatus in the map provided by the present application may be implemented in the form of a computer program, and the computer program may be run on a computer device as shown in fig. 9. The memory of the computer device may store the various program modules constituting the target position correction apparatus in the map, such as the description information acquisition module 402, the target position determination module 404, the live-action information acquisition module 406, the live-action information feedback module 408, the description information reception module 410, the live-action information determination module 412, and the target position correction module 414 shown in fig. 4. The computer program constituted by these program modules causes the processor to execute the steps of the method for correcting a target position in a map described in the embodiments of the present application.
For example, the computer apparatus shown in fig. 9 may execute step S202 by the description information acquisition module 402 in the target position correction device in the map as shown in fig. 4. The computer device may perform step S204 by the target location determination module 404. The computer device may perform step S206 through the live-action information acquisition module 406. The computer device may perform step S208 through the real information feedback module 408. The computer device may perform step S210 through the description information receiving module 410. The computer device may perform step S212 through the live-action information determination module 412. The computer device may perform step S214 through the target position correction module 414.
In one embodiment, there is also provided a computer device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the following steps: acquiring target position description information; determining a target position in the plane map according to the target position description information; jumping to a live-action map, and acquiring live-action information from a street scene displayed in the live-action map according to the target position determined in the plane map; feeding back the live-action information; receiving position offset description information in response to the live-action information; determining live-action information in the live-action map according to the position offset description information; and taking the position corresponding to the determined live-action information in the plane map as the corrected target position.
With the above computer device, the target position description information is acquired, and the target position is retrieved in the plane map according to the description information to determine the target position. The device then jumps to a live-action map, displays a street scene according to the target position, and acquires the live-action information corresponding to the target position from the street scene, so that the target position can be observed more intuitively. The live-action information is fed back to obtain position offset description information in response, and the live-action information can be calibrated according to that offset description to determine accurate live-action information. With the accurate live-action information determined, the target position can be corrected in the plane map, thereby obtaining an accurate target position and improving the accuracy of acquiring the target position.
In one embodiment, when the computer program is executed by the processor to perform the step of jumping to a live-action map and acquiring live-action information from a street scene displayed in the live-action map according to the target position determined in the plane map, the processor is further caused to perform the following steps: converting the plane map to obtain a live-action map corresponding to the plane map; extracting the plane coordinates of the target position in the plane map; converting the plane coordinates into live-action coordinates; displaying a street scene in the live-action map according to the live-action coordinates; and acquiring the corresponding live-action information of the target position in the live-action map from the street scene displayed in the live-action map.
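The patent does not specify the coordinate systems involved, so as one plausible illustration (an assumption, not the patented mapping), converting plane coordinates into live-action coordinates could mean converting Web Mercator meters — a common planar map projection — into the WGS84 latitude/longitude used to index street-scene panoramas:

```python
import math

EARTH_RADIUS = 6378137.0  # WGS84 equatorial radius in meters (EPSG:3857)

def mercator_to_latlon(x, y):
    """Convert Web Mercator plane coordinates (meters) to (lat, lon) degrees."""
    lon = math.degrees(x / EARTH_RADIUS)
    lat = math.degrees(2 * math.atan(math.exp(y / EARTH_RADIUS)) - math.pi / 2)
    return lat, lon

# The projection origin maps back to latitude 0°, longitude 0°.
print(mercator_to_latlon(0.0, 0.0))  # → (0.0, 0.0)
```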
In one embodiment, when the computer program is executed by the processor to perform the step of determining the live-action information in the live-action map according to the position offset description information, the processor is further caused to perform the following steps: triggering an instruction for moving the live-action map according to the position offset description information; acquiring re-acquired live-action information, where the re-acquired live-action information is obtained after the live-action map is moved; feeding back the re-acquired live-action information; if a message indicating confirmation of the re-acquired live-action information is received, determining the re-acquired live-action information in the live-action map; and if a message indicating denial of the re-acquired live-action information is received, returning to the step of triggering the instruction for moving the live-action map according to the position offset description information.
In one embodiment, the computer program, when executed by the processor, further causes the processor to perform the following steps: when the target position is displayed in a non-passing area in the plane map, retrieving a passing area around the target position; and reselecting the target position within the passing area according to the plane coordinates of the target position, so that the distance between the original target position and the reselected target position is minimized.
In one embodiment, the computer program, when executed by the processor, further causes the processor to perform the following steps: feeding back a live-action image acquisition instruction when live-action information corresponding to the target position is not detected in the live-action map; receiving a live-action image acquired according to the live-action image acquisition instruction; and calibrating the target position according to the live-action image and the target position description information to determine an accurate target position in the plane map.
In one embodiment, there is also provided a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the following steps: acquiring target position description information; determining a target position in the plane map according to the target position description information; jumping to a live-action map, and acquiring live-action information from a street scene displayed in the live-action map according to the target position determined in the plane map; feeding back the live-action information; receiving position offset description information in response to the live-action information; determining live-action information in the live-action map according to the position offset description information; and taking the position corresponding to the determined live-action information in the plane map as the corrected target position.
With the above computer-readable storage medium, the target position description information is acquired, and the target position is retrieved in the plane map according to the description information to determine the target position. The method then jumps to a live-action map, displays a street scene according to the target position, and acquires the live-action information corresponding to the target position from the street scene, so that the target position can be observed more intuitively. The live-action information is fed back to obtain position offset description information in response, and the live-action information can be calibrated according to that offset description to determine accurate live-action information. With the accurate live-action information determined, the target position can be corrected in the plane map, thereby obtaining an accurate target position and improving the accuracy of acquiring the target position.
In one embodiment, when the computer program is executed by the processor to perform the step of jumping to a live-action map and acquiring live-action information from a street scene displayed in the live-action map according to the target position determined in the plane map, the processor is further caused to perform the following steps: converting the plane map to obtain a live-action map corresponding to the plane map; extracting the plane coordinates of the target position in the plane map; converting the plane coordinates into live-action coordinates; displaying a street scene in the live-action map according to the live-action coordinates; and acquiring the corresponding live-action information of the target position in the live-action map from the street scene displayed in the live-action map.
In one embodiment, when the computer program is executed by the processor to perform the step of determining the live-action information in the live-action map according to the position offset description information, the processor is further caused to perform the following steps: triggering an instruction for moving the live-action map according to the position offset description information; acquiring re-acquired live-action information, where the re-acquired live-action information is obtained after the live-action map is moved; feeding back the re-acquired live-action information; if a message indicating confirmation of the re-acquired live-action information is received, determining the re-acquired live-action information in the live-action map; and if a message indicating denial of the re-acquired live-action information is received, returning to the step of triggering the instruction for moving the live-action map according to the position offset description information.
In one embodiment, the computer program, when executed by the processor, further causes the processor to perform the following steps: when the target position is displayed in a non-passing area in the plane map, retrieving a passing area around the target position; and reselecting the target position within the passing area according to the plane coordinates of the target position, so that the distance between the original target position and the reselected target position is minimized.
In one embodiment, the computer program, when executed by the processor, further causes the processor to perform the following steps: feeding back a live-action image acquisition instruction when live-action information corresponding to the target position is not detected in the live-action map; receiving a live-action image acquired according to the live-action image acquisition instruction; and calibrating the target position according to the live-action image and the target position description information to determine an accurate target position in the plane map.
It will be understood by those skilled in the art that all or part of the processes of the methods in the embodiments described above can be implemented by a computer program, which may be stored in a non-volatile computer-readable storage medium and which, when executed, may include the processes of the above method embodiments. Any reference to memory, databases, or other media used in the embodiments provided herein may include non-volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
The technical features of the embodiments described above may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, any such combination should be considered within the scope of this specification as long as it contains no contradiction.
The above embodiments express only several implementations of the present invention, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the inventive concept, and these all fall within the protection scope of the present invention. Therefore, the protection scope of this patent shall be subject to the appended claims.
Claims (12)
1. A method of target location correction in a map, the method comprising:
acquiring target position description information;
determining a target position in a plane map according to the target position description information;
jumping to a live-action map, and acquiring live-action information from a street scene image displayed in the live-action map according to the determined target position in the planar map;
feeding back the live-action information;
receiving position offset description information in response to the live-action information;
triggering an instruction for moving the live-action map according to the position offset description information;
acquiring the re-acquired live-action information, wherein the re-acquired live-action information is obtained after the live-action map is moved;
feeding back the re-acquired live-action information;
if a message indicating confirmation of the re-acquired live-action information is received, determining the re-acquired live-action information in the live-action map;
if a message indicating denial of the re-acquired live-action information is received, returning to the step of triggering the instruction for moving the live-action map according to the position offset description information;
and taking the position corresponding to the determined live-action information in the plane map as the corrected target position.
2. The method of claim 1, wherein the jumping to a live-action map, and obtaining live-action information from a street view displayed in the live-action map according to the target position determined in the plane map, comprises:
converting the plane map to obtain a live-action map corresponding to the plane map;
extracting plane coordinates of the target position in a plane map;
converting the plane coordinates into real scene coordinates;
displaying street scenes in the live-action map according to the live-action coordinates;
and acquiring corresponding live-action information of the target position in the live-action map from the street scene image displayed in the live-action map.
3. The method of claim 1, wherein the moving the live-action map is performed by triggering an orientation button in the live-action map or by switching after the planar map is re-targeted.
4. A method according to any one of claims 1 to 3, characterized in that the method further comprises:
when the target position is displayed as a non-passing area in the plane map, retrieving a passing area around the target position;
and reselecting the target position in the passing area according to the plane coordinates of the target position, so that the distance between the target position and the reselected target position is minimum.
5. A method according to any one of claims 1 to 3, characterized in that the method further comprises:
feeding back a live-action image acquisition instruction when live-action information corresponding to the target position is not detected in the live-action map;
receiving a live-action image acquired according to the live-action image acquisition instruction;
and calibrating the target position according to the live-action image and the target position description information, and determining an accurate target position in the plane map.
6. An apparatus for correcting a position of an object in a map, the apparatus comprising:
the description information acquisition module is used for acquiring the description information of the target position;
the target position determining module is used for determining a target position in the planar map according to the target position description information;
the real-scene information acquisition module is used for jumping to a real-scene map and acquiring real-scene information from a street scene image displayed in the real-scene map according to the target position determined in the plane map;
the real-scene information feedback module is used for feeding back the real-scene information;
a description information receiving module for receiving position offset description information in response to the live-action information;
the instruction triggering module is used for triggering an instruction for moving the live-action map according to the position offset description information;
the live-action information acquisition module is further used for acquiring the re-acquired live-action information, and the re-acquired live-action information is obtained after the live-action map is moved;
the live-action information feedback module is also used for feeding back the re-acquired live-action information;
a live-action information determining module, configured to determine, if a message indicating confirmation of the re-acquired live-action information is received, the re-acquired live-action information in the live-action map;
the instruction triggering module is further configured to trigger an instruction for moving the live-action map according to the position offset description information if a message indicating denial of the re-acquired live-action information is received;
and the target position correction module is used for taking the position corresponding to the determined live-action information in the plane map as a corrected target position.
7. The apparatus of claim 6, further comprising:
the map conversion module is used for converting the plane map to obtain a live-action map corresponding to the plane map;
the coordinate extraction module is used for extracting plane coordinates of the target position in a plane map;
the coordinate conversion module is used for converting the plane coordinate into a real scene coordinate;
the street scene display module is used for displaying a street scene in the live-action map according to the live-action coordinates;
the live-action information acquisition module is further configured to acquire, from a street scene displayed in the live-action map, corresponding live-action information of the target position in the live-action map.
8. The apparatus of claim 6, wherein the moving the live-action map is performed by triggering an orientation button in the live-action map or by switching after the planar map is re-targeted.
9. The apparatus of any one of claims 6 to 8, further comprising:
the map retrieval module is used for retrieving a passing area around the target position when the target position is displayed as a non-passing area in the plane map;
the target position correction module is further used for reselecting a target position in the passing area according to the plane coordinates of the target position, so that the distance between the target position and the reselected target position is minimum.
10. The apparatus of any one of claims 6 to 8, further comprising:
wherein the live-action information feedback module is further configured to feed back a live-action image acquisition instruction when no live-action information corresponding to the target position is detected in the live-action map;
a live-action image acquisition module, configured to receive a live-action image acquired according to the live-action image acquisition instruction;
wherein the target position correction module is further configured to calibrate the target position according to the live-action image and the target position description information, and to determine an accurate target position in the planar map.
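The fallback flow of claim 10 can be sketched as a simple branch: use existing imagery if the live-action map has any at the target, otherwise request a freshly captured image and calibrate against it. Every name here (`lookup_info`, `request_image`, `calibrate`) is a hypothetical stand-in; the patent leaves the calibration step itself unspecified.

```python
def calibrate_with_fallback(lookup_info, target_pos, description,
                            request_image, calibrate):
    """Return a target position, falling back to a user-captured image
    when the live-action map has no imagery for the target."""
    info = lookup_info(target_pos)
    if info is not None:
        return target_pos              # imagery exists; no fallback needed
    image = request_image()            # feed back the acquisition instruction
    return calibrate(image, description)


# Demo with stand-in callbacks: no imagery found, so the captured image
# and the description drive the calibration.
corrected = calibrate_with_fallback(
    lookup_info=lambda pos: None,
    target_pos=(3, 4),
    description="north gate of the park",
    request_image=lambda: "captured.jpg",
    calibrate=lambda img, desc: (3.1, 4.2),
)
# corrected == (3.1, 4.2)
```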
11. A computer-readable storage medium, storing a computer program which, when executed by a processor, causes the processor to carry out the steps of the method according to any one of claims 1 to 5.
12. A computer device comprising a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the steps of the method according to any one of claims 1 to 5.
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201711043015.6A CN107885800B (en) | 2017-10-31 | 2017-10-31 | Method and device for correcting target position in map, computer equipment and storage medium |
| PCT/CN2017/112642 WO2019085081A1 (en) | 2017-10-31 | 2017-11-23 | Method and apparatus for correcting target position in map, computer device, and storage medium |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201711043015.6A CN107885800B (en) | 2017-10-31 | 2017-10-31 | Method and device for correcting target position in map, computer equipment and storage medium |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN107885800A CN107885800A (en) | 2018-04-06 |
| CN107885800B true CN107885800B (en) | 2020-02-14 |
Family
ID=61782986
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201711043015.6A Active CN107885800B (en) | 2017-10-31 | 2017-10-31 | Method and device for correcting target position in map, computer equipment and storage medium |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN107885800B (en) |
| WO (1) | WO2019085081A1 (en) |
Families Citing this family (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110321396A (en) * | 2019-05-27 | 2019-10-11 | 深圳市迈测科技股份有限公司 | Position calibration method and Related product |
| CN112629546B (en) * | 2019-10-08 | 2023-09-19 | 宁波吉利汽车研究开发有限公司 | Position adjustment parameter determining method and device, electronic equipment and storage medium |
| CN111750888B (en) * | 2020-06-17 | 2021-05-04 | 北京嘀嘀无限科技发展有限公司 | Information interaction method, apparatus, electronic device and computer-readable storage medium |
| CN111750877A (en) * | 2020-06-30 | 2020-10-09 | 深圳市元征科技股份有限公司 | Map updating method and related device |
| CN112115219A (en) * | 2020-08-31 | 2020-12-22 | 汉海信息技术(上海)有限公司 | Position determination method, device, equipment and storage medium |
| CN114756633B (en) * | 2021-01-08 | 2025-09-02 | 丰图科技(深圳)有限公司 | Map information updating method, device, computer equipment and storage medium |
| WO2023249550A2 (en) * | 2022-06-20 | 2023-12-28 | Grabtaxi Holdings Pte. Ltd. | Method and device for placing road objects on map using sensor information |
Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN103487059A (en) * | 2013-09-25 | 2014-01-01 | 中国科学院深圳先进技术研究院 | Positioning and navigation system, device and method |
| CN105045815A (en) * | 2015-06-25 | 2015-11-11 | 湖南大麓管道工程有限公司 | Data collecting method and apparatus |
| CN106530794A (en) * | 2016-12-28 | 2017-03-22 | 上海仪电数字技术股份有限公司 | Automatic identification and calibration method of driving road and system thereof |
| CN106658415A (en) * | 2017-02-21 | 2017-05-10 | 上海量明科技发展有限公司 | Method for seeking shared vehicle in realistic view, vehicle-booking and system |
| CN106874356A (en) * | 2016-12-28 | 2017-06-20 | 平安科技(深圳)有限公司 | Geographical location information management method and device |
| CN107084727A (en) * | 2017-04-12 | 2017-08-22 | 武汉理工大学 | A kind of vision positioning system and method based on high-precision three-dimensional map |
Family Cites Families (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPH01214711A (en) * | 1988-02-23 | 1989-08-29 | Toshiba Corp | Navigation apparatus |
| JP2008032596A (en) * | 2006-07-31 | 2008-02-14 | Mobile Mapping Kk | Three-dimensional map-matching processor, processing method, and processing program, and navigation apparatus, method, and program, and automobile |
| CN104833360B (en) * | 2014-02-08 | 2018-09-18 | 无锡维森智能传感技术有限公司 | A kind of conversion method of two-dimensional coordinate to three-dimensional coordinate |
2017
- 2017-10-31: CN application CN201711043015.6A (published as CN107885800B), status: Active
- 2017-11-23: WO application PCT/CN2017/112642 (published as WO2019085081A1), status: Ceased
Also Published As
| Publication number | Publication date |
|---|---|
| CN107885800A (en) | 2018-04-06 |
| WO2019085081A1 (en) | 2019-05-09 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN107885800B (en) | Method and device for correcting target position in map, computer equipment and storage medium | |
| US11060880B2 (en) | Route planning method and apparatus, computer storage medium, terminal | |
| US9582937B2 (en) | Method, apparatus and computer program product for displaying an indication of an object within a current field of view | |
| EP3480561B1 (en) | Navigation method, device, and system | |
| US9652144B2 (en) | Method and apparatus for providing POI information in portable terminal | |
| US20130107038A1 (en) | Terminal location specifying system, mobile terminal and terminal location specifying method | |
| US20220076469A1 (en) | Information display device and information display program | |
| KR102046841B1 (en) | Scene sharing based navigation support method and terminal | |
| US20120148106A1 (en) | Terminal and method for providing augmented reality | |
| JP6591594B2 (en) | Information providing system, server device, and information providing method | |
| WO2020039937A1 (en) | Position coordinates estimation device, position coordinates estimation method, and program | |
| KR20160048006A (en) | Feedback method for bus information inquiry, mobile terminal and server | |
| US20200226695A1 (en) | Electronic business card exchange system and method using mobile terminal | |
| KR101615504B1 (en) | Apparatus and method for serching and storing contents in portable terminal | |
| US9418351B2 (en) | Automated network inventory using a user device | |
| JP2014063300A (en) | Character recognition device, character recognition processing method, and program | |
| JP2008070557A (en) | Landmark display method, navigation device, on-vehicle equipment, and navigation system | |
| CN110954064A (en) | Positioning method and positioning device | |
| CN114881060A (en) | Code scanning method and device, electronic equipment and readable storage medium | |
| CN105025227A (en) | Image processing method and terminal | |
| KR102589833B1 (en) | Method, Apparatus and Computer Program for Providing Demand-Response Mobility Services based on Virtual Stop Point | |
| JP2008190941A (en) | Navigation device and destination setting method | |
| JP2010044630A (en) | Building-related information providing system | |
| US20250157070A1 (en) | Notification assistance system, notification assistance method, and computer-readable storage medium | |
| CN111879331B (en) | Navigation method, device and electronic device |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||