HK40019576A - Method for obtaining poi data, terminal and readable storage medium - Google Patents
Description
Technical Field
The present invention relates to the field of electronic technologies, and in particular, to a method, a terminal, and a readable storage medium for acquiring POI data.
Background
With the development of electronic and internet technology, terminals of various kinds are widely used, and the application programs on them have become more numerous and feature-rich. The map application program is a very common one: when the terminal displays an electronic map through the map application program, it may display the name of a POI (Point of Interest) at the corresponding position in the electronic map based on pre-acquired geographical position information (e.g., latitude and longitude) of the POI, so that people can determine the position of a POI from the electronic map. For example, if a POI is a mall, the terminal may display the mall's name at the corresponding position in the electronic map based on the mall's geographical location information.
Since POIs change frequently, it is important to acquire accurate POI data in time (POI data may include the POI's geographical location information and attribute information, where the attribute information may be a name) in order to keep the electronic map accurate. Currently, POI data is generally acquired as follows: technicians travel to each position and measure the geographical position information of the POI of a target object (such as a mall) at the current position with a precise surveying and mapping instrument; the acquired geographical position information and the name of the target object are then taken as the POI data of the target object and uploaded to a server through a terminal, so that the server can store them.
In the process of implementing the invention, the inventor finds that the related art has at least the following problems:
based on the above processing for acquiring POI data, the POI data can only be acquired when technicians reach the various positions, so the efficiency of acquiring POI data is low.
Disclosure of Invention
The embodiment of the invention provides a method, a terminal and a readable storage medium for acquiring POI data, which can solve the problem of low efficiency of acquiring POI data in the related art. The technical scheme is as follows:
in one aspect, a method for acquiring POI data is provided, where the method includes:
displaying a first scene image and a second scene image, wherein the first scene image and the second scene image both contain a target object;
when a selection instruction of the target object in the first scene image and the second scene image is acquired, acquiring a first direction angle and a second direction angle of the target object, wherein the first direction angle is a direction angle of the target object relative to a first map point corresponding to the first scene image, and the second direction angle is a direction angle of the target object relative to a second map point corresponding to the second scene image;
and acquiring POI data of the target object according to the geographical position information of the first map point, the geographical position information of the second map point, the first direction angle and the second direction angle.
In one aspect, an apparatus for acquiring POI data is provided, the apparatus comprising:
the first display module is used for displaying a first scene image and a second scene image, and the first scene image and the second scene image both comprise a target object;
a first obtaining module, configured to obtain a first direction angle and a second direction angle of the target object when a selection instruction for the target object in the first scene image and the second scene image is obtained, where the first direction angle is a direction angle of the target object with respect to a first map point corresponding to the first scene image, and the second direction angle is a direction angle of the target object with respect to a second map point corresponding to the second scene image;
and the second acquisition module is used for acquiring POI data of the target object according to the geographical position information of the first map point, the geographical position information of the second map point, the first direction angle and the second direction angle.
In one aspect, a terminal is provided, which includes a processor and a memory, where at least one instruction, at least one program, a set of codes, or a set of instructions is stored in the memory, and the at least one instruction, the at least one program, the set of codes, or the set of instructions is loaded and executed by the processor to implement the method for obtaining POI data as described above.
In one aspect, a computer readable storage medium is provided having stored therein at least one instruction, at least one program, set of codes, or set of instructions that is loaded and executed by a processor to implement the method of obtaining POI data as described above.
The technical scheme provided by the embodiments of the invention has at least the following beneficial effects:
in the embodiment of the invention, the terminal can acquire the POI data of the target object through the first direction angle and the second direction angle of the target object in the first scene image and the second scene image, the geographical position information of the first map point corresponding to the first scene image, and the geographical position information of the second map point corresponding to the second scene image. A technician does not need to travel to each position to acquire each piece of POI data, so the efficiency of acquiring POI data can be improved.
Drawings
In order to illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description are only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a system framework diagram provided by an embodiment of the present invention;
FIG. 2 is a schematic diagram illustrating a POI data display according to an embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating a POI data display according to an embodiment of the present invention;
FIG. 4 is a flowchart of a method for acquiring POI data according to an embodiment of the present invention;
FIG. 5 is a schematic view of a direction angle provided by an embodiment of the present invention;
FIG. 6 is a schematic view of a scene angle according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a POI annotation interface according to an embodiment of the present invention;
FIG. 8 is an elevation view of an embodiment of the present invention;
FIG. 9 is an elevation view of an embodiment of the present invention;
FIG. 10 is a schematic diagram of an error region provided by an embodiment of the present invention;
FIG. 11 is a schematic structural diagram of an apparatus for acquiring POI data according to an embodiment of the present invention;
FIG. 12 is a schematic structural diagram of an apparatus for acquiring POI data according to an embodiment of the present invention;
FIG. 13 is a schematic structural diagram of an apparatus for acquiring POI data according to an embodiment of the present invention;
FIG. 14 is a schematic structural diagram of an apparatus for acquiring POI data according to an embodiment of the present invention;
FIG. 15 is a schematic structural diagram of an apparatus for acquiring POI data according to an embodiment of the present invention;
FIG. 16 is a schematic structural diagram of an apparatus for acquiring POI data according to an embodiment of the present invention;
FIG. 17 is a schematic structural diagram of an apparatus for acquiring POI data according to an embodiment of the present invention;
FIG. 18 is a schematic structural diagram of a terminal device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
The embodiment of the invention provides a method for acquiring POI data, which can be realized by a terminal 101 and a server 102 together, and a system framework schematic diagram is shown in FIG. 1. The terminal 101 may be a terminal having a function of acquiring POI data, for example, the terminal may be a personal computer, a tablet computer, a mobile phone, or the like. The server 102 may be a background server having a function of acquiring POI data, and may be configured to provide the terminal 101 with map data and scene images, and also may be configured to store each POI data.
The terminal 101 may include a processor, a memory, a screen, a transceiver, and the like. The processor may be a CPU (Central Processing Unit) and may be configured to obtain the first direction angle and the second direction angle, and to obtain the POI data of the target object according to the geographical location information of the first map point, the geographical location information of the second map point, the first direction angle, and the second direction angle. The memory may be a RAM (Random Access Memory), a flash memory, or the like, and may be configured to store received data, data required by the processing procedure, and data generated in the processing procedure, such as the first scene image, the second scene image, the geographical location information of the first map point, and the geographical location information of the second map point. The screen may be used to display a POI annotation interface, the first scene image, the second scene image, and a map; it may be a touch screen, in which case it may also detect touch signals. The transceiver may be configured to exchange data with other devices, for example, to receive the POI annotation interface sent by the server and to send the obtained POI data of the target object to the server, and may include an antenna, a matching circuit, a modem, and the like.
With the development of electronic and internet technology, terminals of various kinds are widely used, and the application programs on them have become more numerous and feature-rich. The map application program is a very common one. When the terminal displays the electronic map through the map application program, it can display the names of POIs at corresponding positions in the electronic map based on pre-collected geographical position information of the POIs (such as longitude and latitude), so that people can determine the position of a POI from the electronic map. A schematic diagram of POI data displayed on a two-dimensional map is shown in FIG. 2, and a schematic diagram of POI data displayed in a scene image in a street view map is shown in FIG. 3, where the data in the circles are the names of the POIs. The street view map may include scene images corresponding to map points (each scene image may be a three-dimensional image obtained from pictures taken of the surroundings of the corresponding map point). In the street view map mode, the terminal can display a two-dimensional map, and when the terminal acquires a selection instruction for a map point in the two-dimensional map, it can display the scene image corresponding to that map point, so that the user can view the surroundings of the map point more intuitively.
Since POIs change frequently, it is important to acquire accurate POI data in time (POI data may include the POI's geographical location information and attribute information, where the attribute information may be a name) in order to keep the electronic map accurate. In this scheme, the terminal may display a first scene image and a second scene image that both include the target object. When a technician wants to label POI data, the target object may be selected in the first scene image and the second scene image; the terminal then obtains a selection instruction for the target object in the two images and may acquire a first direction angle and a second direction angle of the target object, where the first direction angle may be the direction angle of the target object relative to a first map point corresponding to the first scene image, and the second direction angle may be the direction angle of the target object relative to a second map point corresponding to the second scene image. After the two direction angles are obtained, the terminal can obtain the POI data of the target object based on the geographical position information of the first map point, the geographical position information of the second map point, the first direction angle, and the second direction angle, where the POI data includes at least the geographical position information (longitude and latitude) of the target object. After the POI data is acquired, the terminal can send it to the server for storage. In this way, the terminal can acquire POI data from scene images without requiring technicians to travel to each position and measure the POI data of the target object with a precise surveying and mapping instrument, so the efficiency of acquiring POI data can be improved.
In addition, this scheme requires neither precise surveying and mapping instruments nor the associated manpower, and can therefore save cost.
The process flow shown in FIG. 4 will be described in detail below with reference to specific embodiments, as follows:
step 401, displaying a first scene image and a second scene image, where the first scene image and the second scene image both include a target object.
The first scene image and the second scene image may also be referred to as a first street view image and a second street view image, the first scene image is a three-dimensional scene image obtained according to pictures taken of an environment around the first map point, the second scene image is a three-dimensional scene image obtained according to pictures taken of an environment around the second map point, and the target object may be any target object to be obtained corresponding to the POI data.
In implementation, the terminal may have a function of acquiring POI data. In the process of using the terminal, when a technician wants to acquire POI data of a target object, the terminal can be triggered to display a first scene image and a second scene image containing the target object through operation, wherein the first scene image can be called a main scene image, and the second scene image can be called a secondary scene image.
Optionally, the terminal may display the first scene image and the second scene image in the POI annotation interface, and correspondingly, the terminal may further perform the following processing: and displaying a POI labeling interface, wherein the POI labeling interface comprises a map. Accordingly, the process of step 401 may be as follows: when an operation instruction for a first map point in a map is acquired, a first scene image is displayed on a POI marking interface, and a second scene image corresponding to a second map point is displayed.
The first map point and the second map point may be position points in a map, the first map point may be a shooting location corresponding to the first scene image, and the second map point may be a shooting location corresponding to the second scene image.
In implementation, a browser can be installed on the terminal. When a technician wants to acquire POI data of a target object, the URL of the POI labeling interface can be entered in the browser's address bar, which triggers the terminal to acquire the POI labeling interface from the server and display it. The POI labeling interface comprises a plurality of display areas, and the map can be displayed in the first area. After the terminal displays the map, the technician can select a first map point in the map as needed; the terminal then obtains an operation instruction for the first map point, acquires the scene image at the first map point (i.e., the first scene image), and displays it in the second area of the POI marking interface. After the terminal obtains the operation instruction for the first map point, it can automatically determine the second map point, acquire the scene image at the second map point (i.e., the second scene image), and display it in the third area of the POI marking interface.
Alternatively, the second map point may be selected by a technician. Specifically, a first icon and a second icon may be displayed in the map, where the first icon is used to select the first scene image and the second icon is used to select the second scene image. After the terminal displays the map in the first area of the POI annotation interface, the technician can drag the first icon to the first map point by operating a cursor; the terminal then obtains an instruction indicating that the first icon has been dragged to the first map point, acquires the first scene image corresponding to the first map point, and displays it in the second area. The technician can likewise determine the second map point as needed, i.e., drag the second icon to the second map point with the cursor; the terminal then obtains an instruction indicating that the second icon has been dragged to the second map point, acquires the second scene image corresponding to the second map point, and displays it in the third area.
Optionally, the specific process by which the terminal determines the second map point may be as follows: the second map point is determined according to the direction of the road where the first map point is located and the geographical position information of the first map point.
In implementation, after the terminal determines the first map point, it may obtain the direction of the road where the first map point is located, and then, along that direction, select as the second map point a map point whose distance from the first map point equals a preset distance threshold, according to the geographical location information of the first map point. The direction of the road may be a driving direction allowed on the road; if the road is bidirectional, the obtained direction may be either of the two directions.
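The selection of a second map point at a preset distance along the road direction can be sketched as follows. This is a minimal illustration under a flat-earth (equirectangular) approximation; the function name `second_map_point` and the 20 m default threshold are assumptions, not taken from the source.

```python
import math

def second_map_point(lat, lon, road_bearing_deg, dist_m=20.0):
    """Pick a point a preset distance from the first map point along the
    road's bearing (clockwise from north). Uses an equirectangular
    approximation, which is adequate over tens of metres."""
    earth_r = 6371000.0  # mean Earth radius in metres
    brg = math.radians(road_bearing_deg)
    # Convert the metre offsets along north/east into degree offsets.
    dlat = dist_m * math.cos(brg) / earth_r
    dlon = dist_m * math.sin(brg) / (earth_r * math.cos(math.radians(lat)))
    return lat + math.degrees(dlat), lon + math.degrees(dlon)
```

For instance, moving due north from the equator by one degree of latitude (about 111.2 km) returns a latitude close to 1.0.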
Step 402, when a selection instruction for a target object in a first scene image and a second scene image is acquired, a first direction angle and a second direction angle of the target object are acquired.
The first direction angle is a direction angle of the object relative to a first map point corresponding to the first scene image, and the second direction angle is a direction angle of the object relative to a second map point corresponding to the second scene image.
In implementation, after the first scene image and the second scene image are displayed, a technician may select the target object in each of the two images (for example, by double-clicking the target object in the first scene image and in the second scene image). The terminal thus obtains a selection instruction for the target object in the first scene image and the second scene image, and may then acquire the first direction angle and the second direction angle of the target object. The first direction angle may be the direction angle, on the horizontal plane, of the target object relative to the first map point, i.e., the angle between the line connecting the target object and the first map point and the due north direction of the first map point. The second direction angle may be the direction angle, on the horizontal plane, of the target object relative to the second map point, i.e., the angle between the line connecting the target object and the second map point and the due north direction of the second map point. As shown in FIG. 5, A1 represents the first map point, A2 represents the second map point, P represents the projection point of the target object on the horizontal plane, α1 represents the first direction angle, and α2 represents the second direction angle.
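The direction angle defined above (angle between the connecting line and due north) can be computed as follows in a simplified planar setting. The coordinate convention (x = east, y = north) and the function name are illustrative assumptions; the patent itself works with longitude/latitude.

```python
import math

def direction_angle(map_pt, target):
    """Clockwise angle in degrees (0-360) from due north of the line from
    a map point to the target's horizontal projection, in planar
    (x=east, y=north) coordinates."""
    dx = target[0] - map_pt[0]
    dy = target[1] - map_pt[1]
    # atan2(east, north) gives the bearing measured clockwise from north.
    return math.degrees(math.atan2(dx, dy)) % 360.0
```

With A1 = (0, 0), A2 = (10, 0) and P = (5, 5), this yields α1 = 45° and α2 = 315°, matching the FIG. 5 configuration of a target north-east of the first point and north-west of the second.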
Optionally, the terminal may determine the first direction angle and the second direction angle according to the direction angle of the center position of the current frame where the target object is located and the position of the target object in the current frame, and accordingly, the specific processing procedure may be as follows: when a selection instruction of a target object in a first scene image is acquired, acquiring a third direction angle of a first picture where the target object is located currently, wherein the first picture is a picture contained in the first scene image; determining a first direction angle of the target object according to the third direction angle, the pixel column number of the target object in the first picture and the direction angle range; when a selection instruction of a target object in a second scene image is acquired, acquiring a fourth direction angle of a second picture where the target object is located currently, wherein the second picture is a picture contained in the second scene image; and determining a second direction angle of the target object according to the fourth direction angle, the pixel column number of the target object in the second picture and the direction angle range.
In implementation, after the terminal displays a scene image, the user may rotate the scene image by operating the cursor so as to browse each picture of the scene image. When a picture is displayed, the terminal may obtain the direction angle corresponding to that picture, where the direction angle of a picture may be the direction angle corresponding to the picture's center position (a pixel position), that is, the direction angle of the geographical position corresponding to the center position relative to the map point corresponding to the scene image. Each picture covers a certain direction angle range, for example, 120 degrees.
In this case, when the terminal acquires the selection instruction for the target object in the first scene image, it may acquire the direction angle of the picture in which the target object is currently located (i.e., a third direction angle, the direction angle of the first center position of the first picture), where the first picture is a picture in the first scene image. The direction angle at the target object's pixel column in the first picture may then be determined as the first direction angle of the target object, based on the third direction angle and the direction angle range of the picture, where positions in the same pixel column correspond to the same direction angle. For example, suppose the direction angle range is 120 degrees, the third direction angle at the first center position (the pixel in the sixth row and sixth column) is 90 degrees, and the target object lies in the third pixel column of the first picture. Based on the third direction angle and the direction angle range, the terminal may determine that the direction angle corresponding to the first column is 30 degrees and that corresponding to the last (11th) column is 150 degrees; since the columns divide the direction angle range equally, the first direction angle of the target object is 54 degrees.
Accordingly, when the terminal acquires the selection instruction for the target object in the second scene image, it may acquire the direction angle of the picture in which the target object is currently located (i.e., a fourth direction angle, the direction angle of the second center position of the second picture), and then determine the direction angle at the target object's pixel column in the second picture as the second direction angle of the target object, based on the fourth direction angle and the direction angle range of the picture.
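The column-to-angle mapping used in the example above can be sketched as a simple linear interpolation. The function name and signature are assumptions for illustration; the arithmetic reproduces the worked example (120° range, 90° centre angle, 11 columns, column 3 → 54°).

```python
def column_direction_angle(center_angle, angle_range, n_cols, col):
    """Map a 1-based pixel column to a direction angle, assuming the
    picture's angle range is divided equally among its columns and the
    centre column carries the picture's own direction angle."""
    first = center_angle - angle_range / 2.0  # angle of column 1
    step = angle_range / (n_cols - 1)         # per-column increment
    return first + (col - 1) * step
```

Here `column_direction_angle(90, 120, 11, 3)` gives 54 degrees, as in the example; columns 1 and 11 give 30 and 150 degrees respectively.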
Step 403, obtaining POI data of the target object according to the geographical location information of the first map point, the geographical location information of the second map point, the first direction angle and the second direction angle.
In an implementation, after the first direction angle and the second direction angle are obtained, the terminal may obtain the geographic position information of the first map point and the geographic position information of the second map point, and further obtain the POI data of the target object (for example, the POI data of the target object may be calculated) based on the geographic position information of the first map point, the geographic position information of the second map point, the first direction angle and the second direction angle, where the POI data may include the geographic position information of the target object.
Optionally, based on different specific implementation manners of obtaining POI data, the processing manner in step 403 may be various, and several feasible processing manners are given below:
In the first mode, a first linear equation of the first straight line on which the first map point and the target object lie is determined according to the geographical position information of the first map point and the first direction angle; a second linear equation of the second straight line on which the second map point and the target object lie is determined according to the geographical position information of the second map point and the second direction angle; and the geographical position information of the intersection point of the first straight line and the second straight line is determined according to the first linear equation and the second linear equation and used as the POI data of the target object.
In implementation, after acquiring the geographic position information of the first map point, the geographic position information of the second map point, the first direction angle, and the second direction angle, the terminal may select any point on a first straight line where the first map point and the target object are located according to the geographic position information of the first map point and the first direction angle, acquire the geographic position information of the point, and further determine a straight line equation (i.e., a first straight line equation) of the first straight line according to the geographic position information of the point and the geographic position information of the first map point. In addition to determining the first linear equation, a linear equation of a second straight line (i.e., a second linear equation) in which the second map point and the target object are located may also be determined. Specifically, the terminal may arbitrarily select a point on a second straight line where the second map point and the target object are located according to the geographic position information and the second direction angle of the second map point, and acquire the geographic position information of the point, and then may determine a second straight line equation of the second straight line according to the geographic position information of the point and the geographic position information of the second map point. After the first linear equation and the second linear equation are determined, the terminal can determine the geographical position information of the intersection point of the first straight line and the second straight line and use the geographical position information as POI data of the target object, wherein the intersection point of the first straight line and the second straight line is the position point where the target object is located.
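Under a planar (x = east, y = north) approximation, the first mode's intersection computation can be sketched as follows. The function name, coordinate convention, and example coordinates are illustrative assumptions; the patent works in geographical coordinates.

```python
import math

def intersect_bearings(p1, a1_deg, p2, a2_deg):
    """Intersect the line through p1 at bearing a1 with the line through
    p2 at bearing a2 (bearings clockwise from north). The intersection
    is the position of the target object."""
    d1 = (math.sin(math.radians(a1_deg)), math.cos(math.radians(a1_deg)))
    d2 = (math.sin(math.radians(a2_deg)), math.cos(math.radians(a2_deg)))
    # Solve p1 + t*d1 = p2 + s*d2 for t using Cramer's rule on the
    # 2x2 system [d1 | -d2] * (t, s)^T = p2 - p1.
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    if abs(det) < 1e-12:
        raise ValueError("bearings are parallel; no unique intersection")
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t = (rx * (-d2[1]) - (-d2[0]) * ry) / det
    return p1[0] + t * d1[0], p1[1] + t * d1[1]
```

For example, with the first map point at (0, 0), the second at (10, 0), a first direction angle of 45° and a second of 315°, the intersection is (5, 5).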
In the second mode, a first linear equation of the first straight line on which the first map point and the target object lie is determined according to the geographical position information of the first map point and the first direction angle; the scene included angle between the first straight line and the second straight line, on which the second map point and the target object lie, is determined according to the first direction angle and the second direction angle; and the geographical position information of the target object is determined as the POI data of the target object according to the cosine of the scene included angle, a vector included-angle formula, the geographical position information of the first map point, the geographical position information of the second map point, and the first linear equation.
In implementation, after acquiring the geographical position information of the first map point, the geographical position information of the second map point, the first direction angle, and the second direction angle, the terminal may select any point on the first straight line on which the first map point and the target object lie according to the geographical position information of the first map point and the first direction angle, acquire the geographical position information of that point, and then determine the first linear equation of the first straight line from the geographical position information of that point and of the first map point. In addition, the terminal may determine the included angle between the second straight line and the first straight line based on the first direction angle and the second direction angle (this angle may be referred to as the scene included angle and denoted by a). For example, as shown in FIG. 6, with the first map point A1, the second map point A2, the projection point P, the first direction angle α1, and the second direction angle α2, the scene included angle a may be 360° − α2 + α1. After the first linear equation and the scene included angle are determined, the terminal can calculate the cosine cos a of the scene included angle, and then establish an equation in which the geographical position information of the target object is the unknown, based on the vector included-angle formula and the geographical position information of the first and second map points. The terminal can determine the geographical position information of the target object by combining the established equation with the first linear equation (the target object lies on the first straight line), and use it as the POI data of the target object.
For example, if the geographical location information of the target object is (x3, y3), the geographical location information of the first map point a1 is (x1, y1), the geographical location information of the second map point a2 is (x2, y2), and the first straight line equation is Wx + Qy + R = 0, the terminal may calculate the geographical location information of the target object based on the following equation set.
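The equation set referenced here appears only in the patent figures, but the underlying geometry, locating the target at the intersection of the two viewing rays, can be sketched as follows. This is an illustrative reconstruction rather than the patent's exact formulation: it assumes planar coordinates and direction angles measured in degrees clockwise from north, and the function name is hypothetical.

```python
import math

def locate_target(p1, a1, p2, a2):
    """Estimate the target position from two map points p1, p2 and the
    direction angles a1, a2 (degrees, clockwise from north) of the
    target as seen from each point, by intersecting the two rays."""
    (x1, y1), (x2, y2) = p1, p2
    # Unit direction vectors of the two rays (north = +y, east = +x).
    d1 = (math.sin(math.radians(a1)), math.cos(math.radians(a1)))
    d2 = (math.sin(math.radians(a2)), math.cos(math.radians(a2)))
    # Solve p1 + t*d1 = p2 + s*d2 for t using Cramer's rule.
    denom = d1[0] * (-d2[1]) - d1[1] * (-d2[0])
    if abs(denom) < 1e-12:
        raise ValueError("rays are parallel; target cannot be located")
    t = ((x2 - x1) * (-d2[1]) - (y2 - y1) * (-d2[0])) / denom
    return (x1 + t * d1[0], y1 + t * d1[1])
```

For instance, a target seen at 45 degrees from (0, 0) and at 315 degrees from (2, 0) resolves to (1, 1).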
Determining a second straight line equation of the second straight line on which the second map point and the target object are located according to the geographical position information of the second map point and the second direction angle; determining a scene included angle between the first straight line on which the first map point and the target object are located and the second straight line according to the first direction angle and the second direction angle; and determining the geographical position information of the target object as the POI data of the target object according to the cosine value of the scene included angle, a vector included angle formula, the geographical position information of the first map point, the geographical position information of the second map point and the second straight line equation.
In implementation, after acquiring the geographical position information of the first map point, the geographical position information of the second map point, the first direction angle and the second direction angle, the terminal may select any point on the second straight line on which the second map point and the target object are located according to the geographical position information of the second map point and the second direction angle, acquire the geographical position information of that point, and then determine a second straight line equation of the second straight line according to the geographical position information of that point and the geographical position information of the second map point. In addition, the terminal may determine the scene included angle between the first straight line and the second straight line based on the first direction angle and the second direction angle. After the second straight line equation and the scene included angle are determined, the terminal may calculate the cosine value cos a of the scene included angle, and then establish an equation with the geographical position information of the target object as the unknown, based on a vector included angle formula, the geographical position information of the first map point and the geographical position information of the second map point. The terminal may determine the geographical position information of the target object by solving the established equation together with the second straight line equation (the target object is located on the second straight line), and use it as the POI data of the target object.
Optionally, the POI data may include attribute information of the target object in addition to the geographical location information of the target object, and accordingly the POI annotation interface may include an information input area. The terminal may further perform the following processing: acquiring the attribute information of the target object entered in the information input area as the POI data of the target object.
In an implementation, the POI labeling interface displayed by the terminal may further include an information input area (which may be referred to as a fourth area), and the information input area may include input boxes for the attribute information of the target object. A technician may enter the corresponding attribute information of the target object in the corresponding input boxes. A submit button may also be displayed in the information input area; after finishing the input, the technician may click the submit button, at which point the terminal acquires a selection instruction for the submit button and may then acquire the attribute information of the target object entered in the information input area as the POI data of the target object. For example, as shown in fig. 7, a first scene image, a second scene image, a map and an information input area may be displayed in the POI annotation interface, where the information input area may include input boxes for a name, an address, a telephone number, a category, an ID (identification), the geographical location information of the target object (i.e., the coordinates X and Y in the information input area) and altitude information (i.e., Z in the information input area), where the geographical location information and the altitude information of the target object may be automatically filled into the corresponding input boxes by the terminal based on the acquired data, and the name, the address and the category may be required fields.
Optionally, in addition to obtaining the geographical location information of the target object, the terminal may also determine the height information of the target object. Depending on the elevation angle adopted, the corresponding processing manner may vary; several feasible processing manners are provided below:
the method comprises the steps of acquiring a first elevation angle of a target object, wherein the first elevation angle is the elevation angle of the target object relative to a first map point; and determining height information corresponding to the target object as POI data of the target object according to the geographical position information of the target object, the geographical position information of the first map point and the first elevation angle.
In implementation, when the terminal acquires the selection instruction for the target object in the first scene image, in addition to acquiring the first direction angle, the terminal may acquire a first elevation angle of the target object (which may be denoted by h1), where the first elevation angle may be the elevation angle of the target object relative to the first map point, as shown in fig. 8, where p denotes the location point of the target object. After the first elevation angle is obtained, the terminal may calculate the height information corresponding to the target object according to the following formula based on the geographical position information (longitude and latitude information) of the target object, the geographical position information of the first map point and the first elevation angle, and use the height information as POI data of the target object.
z3 = tan(h1) * sqrt((x3 - x1)^2 + (y3 - y1)^2)
Where z3 represents the height information of the target object, x3 and y3 represent the geographical position information of the target object, and x1 and y1 represent the geographical position information of the first map point.
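Under the definitions above, the height computation reduces to the horizontal distance between the target and the map point multiplied by the tangent of the elevation angle. A minimal sketch follows; the function name and the assumption of planar (projected) coordinates are illustrative.

```python
import math

def target_height(target_xy, map_point_xy, elevation_deg):
    """Height z3 of the target above the map point: horizontal
    distance times the tangent of the elevation angle (degrees)."""
    dx = target_xy[0] - map_point_xy[0]
    dy = target_xy[1] - map_point_xy[1]
    return math.hypot(dx, dy) * math.tan(math.radians(elevation_deg))
```

The same routine covers the second map point by passing (x2, y2) and the second elevation angle h2.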
Obtaining a second elevation angle of the target object, wherein the second elevation angle is the elevation angle of the target object relative to a second map point; and determining height information corresponding to the target object as POI data of the target object according to the geographical position information of the target object, the geographical position information of the second map point and the second elevation angle.
In implementation, when the terminal acquires the selection instruction for the target object in the second scene image, in addition to acquiring the second direction angle, the terminal may acquire a second elevation angle of the target object (which may be denoted by h2), where the second elevation angle may be the elevation angle of the target object relative to the second map point, as shown in fig. 9, where p denotes the location point of the target object. After the second elevation angle is obtained, the terminal may calculate the height information corresponding to the target object according to the following formula based on the geographical position information (longitude and latitude information) of the target object, the geographical position information of the second map point and the second elevation angle, and use the height information as POI data of the target object.
z3 = tan(h2) * sqrt((x3 - x2)^2 + (y3 - y2)^2)
Where z3 represents the height information of the target object, x3 and y3 represent the geographical position information of the target object, and x2 and y2 represent the geographical position information of the second map point.
The first elevation angle and the second elevation angle may be obtained in a manner similar to the first direction angle and the second direction angle. That is, when the selection instruction for the target object in the first scene image is acquired, a third elevation angle of the first picture in which the target object is currently located may be acquired (the third elevation angle may be the elevation angle of the center position of the first picture relative to the first map point), and the first elevation angle of the target object is determined according to the third elevation angle, the pixel row number of the target object in the first picture and an elevation angle range; when the selection instruction for the target object in the second scene image is acquired, a fourth elevation angle of the second picture in which the target object is currently located may be acquired (the fourth elevation angle may be the elevation angle of the center position of the second picture relative to the second map point), and the second elevation angle of the target object is determined according to the fourth elevation angle, the pixel row number of the target object in the second picture and the elevation angle range, where the elevation angle range may be 60 degrees.
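The row-to-elevation mapping described above can be sketched as follows, mirroring the column-to-direction-angle mapping used for the direction angles; the linear interpolation and the convention that rows are counted from the top are assumptions.

```python
def elevation_from_pixel_row(picture_elevation_deg, row, total_rows,
                             elevation_range_deg=60.0):
    """Map a pixel row inside the displayed picture to an elevation
    angle, assuming the centre row corresponds to the picture's
    elevation angle and the picture spans elevation_range_deg
    vertically (rows counted from the top of the picture)."""
    centre = (total_rows - 1) / 2.0
    # Rows above the centre look upward, i.e. larger elevation.
    offset = (centre - row) / total_rows * elevation_range_deg
    return picture_elevation_deg + offset
```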
Step 404, uploading the POI data to a server.
In implementation, after the terminal acquires the POI data of the target object, the POI data can be uploaded to the server so that the server can store the POI data.
Optionally, in the process of rotating the first scene image, the terminal may control the second scene image to rotate along with it. Correspondingly, the terminal may further perform the following processing: in the process of rotating the first scene image, controlling the second scene image to rotate while keeping the currently displayed picture of the second scene image and the currently displayed picture of the first scene image containing the same picture.
In implementation, while the terminal displays the first scene image and the second scene image, a technician may, as required, trigger the terminal to detect a drag signal in the first scene image, and the terminal may then control the first scene image to rotate based on the detected drag signal. In the process of rotating the first scene image, the terminal may control the second scene image to rotate as well, keeping the currently displayed picture of the second scene image and the currently displayed picture of the first scene image containing the same picture.
Optionally, the specific processing procedure by which the terminal controls the second scene image to rotate may be as follows: controlling the first scene image to rotate after detecting a drag signal in the first scene image; determining a fifth direction angle corresponding to the currently displayed picture in the first scene image while the first scene image continues to rotate; determining the geographical position information of a target position contained in the first scene image according to the fifth direction angle and the geographical position information of the first map point; determining a corresponding sixth direction angle of the target position in the second scene image according to the geographical position information of the target position and the geographical position information of the second map point; and, based on the sixth direction angle, rotating the second scene image to display a target picture, the target picture being a picture whose center position is the target position.
In an implementation, after the terminal detects a drag signal in the first scene image, the first scene image may be controlled to rotate. While the first scene image keeps rotating, that is, while the drag signal persists, each time a preset rotation period is reached the terminal may determine the direction angle corresponding to the currently displayed picture in the first scene image (that is, a fifth direction angle), where the fifth direction angle may be the direction angle of the center position of the currently displayed picture relative to the first map point, and may then, based on the geographical position information of the first map point and the fifth direction angle, determine the geographical position information of a target position whose distance from the first map point in the direction corresponding to the fifth direction angle is a preset distance threshold. After determining the geographical position information of the target position, the terminal may determine the corresponding direction angle of the target position in the second scene image (that is, a sixth direction angle) according to the geographical position information of the target position and the geographical position information of the second map point, where the sixth direction angle may be the direction angle of the target position relative to the second map point. After the sixth direction angle is determined, the terminal may rotate the second scene image to display a target picture based on the sixth direction angle, where the target picture may be a picture whose center position is the target position.
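The synchronisation steps above can be sketched as follows; the bearing convention (degrees clockwise from north), planar coordinates and the function name are assumptions for illustration.

```python
import math

def follow_rotation(first_point, fifth_angle_deg, second_point,
                    distance=50.0):
    """Return the sixth direction angle: find the target position at
    the preset distance threshold along the fifth direction angle from
    the first map point, then compute the bearing of that position
    from the second map point."""
    x1, y1 = first_point
    # Target position at `distance` along the fifth direction angle.
    tx = x1 + distance * math.sin(math.radians(fifth_angle_deg))
    ty = y1 + distance * math.cos(math.radians(fifth_angle_deg))
    # Sixth direction angle: bearing of the target from the second point.
    x2, y2 = second_point
    return math.degrees(math.atan2(tx - x2, ty - y2)) % 360.0
```

The second scene image would then be rotated so that its centre lines up with the returned angle.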
Optionally, the terminal may obtain POI data of the target object under the condition that the scene included angle is within the reference included angle range, and correspondingly, the processing procedure in step 403 may be as follows: determining a scene included angle according to the first direction angle and the second direction angle, wherein the scene included angle is an included angle between a first straight line and a second straight line, the first straight line is a straight line where the first map point and the target object are located, and the second straight line is a straight line where the second map point and the target object are located; and if the scene included angle is within the range of the reference included angle, acquiring POI data of the target object according to the geographical position information of the first map point, the geographical position information of the second map point, the first direction angle and the second direction angle.
In implementation, after the terminal acquires the first direction angle and the second direction angle, the scene included angle may be determined based on them, and the scene included angle may then be compared with the reference included angle range, where the reference included angle range may be a range containing 90 degrees, for example 85 to 95 degrees. If the scene included angle is within the reference included angle range, the terminal may acquire the POI data of the target object according to the geographical position information of the first map point, the geographical position information of the second map point, the first direction angle and the second direction angle. If the scene included angle is not within the reference included angle range, the terminal may refrain from acquiring the POI data of the target object and may issue prompt information. For example, if the scene included angle is smaller than the minimum value of the reference included angle range, the prompt may indicate that the scene included angle is too small, and if it is larger than the maximum value, the prompt may indicate that the scene included angle is too large.
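The range check described above can be sketched as follows; the 85 to 95 degree default comes from the example in the text, and folding the difference into 0 to 180 degrees is an assumption about how the included angle is normalised.

```python
def scene_angle_ok(first_angle_deg, second_angle_deg,
                   ref_range=(85.0, 95.0)):
    """Compute the scene included angle between the two viewing
    directions (folded into 0-180 degrees) and check whether it lies
    within the reference included angle range."""
    diff = abs(first_angle_deg - second_angle_deg) % 360.0
    angle = 360.0 - diff if diff > 180.0 else diff
    return ref_range[0] <= angle <= ref_range[1]
```

When the check fails, the terminal would prompt that the angle is too small or too large instead of computing POI data.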
Since there may be a certain error in the acquired geographical location information of the first map point and the second map point, and hence in the viewing angle α of the photographer (or viewer), a scene included angle that is too small or too large enlarges the error in the acquired POI data. The error under different scene included angles is shown schematically in fig. 10, where the shaded areas represent the error regions.
In the embodiment of the invention, the terminal can acquire the POI data of the target object through the first direction angle and the second direction angle of the target object in the first scene image and the second scene image, the geographical position information of the first map point corresponding to the first scene image and the geographical position information of the second map point corresponding to the second scene image, so a technician does not need to travel to each position to collect each item of POI data, and the efficiency of acquiring POI data can be improved.
Based on the same technical concept, an embodiment of the present invention further provides an apparatus for acquiring information point POI data, as shown in fig. 11, where the apparatus may be the above terminal, and the apparatus includes:
a first display module 1110, configured to display a first scene image and a second scene image, where the first scene image and the second scene image both include a target object;
a first obtaining module 1120, configured to obtain a first direction angle and a second direction angle of the target object when a selection instruction for the target object in the first scene image and the second scene image is obtained, where the first direction angle is a direction angle of the target object with respect to a first map point corresponding to the first scene image, and the second direction angle is a direction angle of the target object with respect to a second map point corresponding to the second scene image;
the second obtaining module 1130 is configured to obtain the POI data of the target object according to the geographic position information of the first map point, the geographic position information of the second map point, the first direction angle, and the second direction angle.
Optionally, as shown in fig. 12, the apparatus further includes:
an upload module 1140 for uploading the POI data to a server.
Optionally, as shown in fig. 13, the apparatus further includes:
a second display module 1150, configured to display a POI annotation interface, where the POI annotation interface includes a map;
the first display module 1110 is configured to:
and when an operation instruction for a first map point in the map is acquired, displaying the first scene image on the POI labeling interface, and displaying a second scene image corresponding to the second map point.
Optionally, the first display module 1110 is further configured to:
and before displaying a second scene image corresponding to the second map point, determining the second map point according to the direction of the path where the first map point is located and the geographical position information of the first map point.
Optionally, as shown in fig. 14, the POI labeling interface includes an information input area;
the device further comprises:
a third obtaining module 1160, configured to obtain the attribute information of the target object input in the information input area, as the POI data of the target object.
Optionally, the first obtaining module 1120 is configured to:
when a selection instruction of the target object in the first scene image is obtained, obtaining a third direction angle of a first picture where the target object is located currently, wherein the first picture is a picture contained in the first scene image;
determining a first direction angle of the target object according to the third direction angle, the pixel column number of the target object in the first picture and a direction angle range;
when a selection instruction of the target object in the second scene image is obtained, a fourth direction angle of a second picture where the target object is located at present is obtained, wherein the second picture is a picture contained in the second scene image;
and determining a second direction angle of the target object according to the fourth direction angle, the pixel column number of the target object in the second picture and the direction angle range.
Optionally, the second obtaining module 1130 is configured to:
determining a first linear equation of a first straight line where the first map point and the target object are located according to the geographical position information of the first map point and the first direction angle;
determining a second straight line equation of a second straight line where the second map point and the target object are located according to the geographical position information of the second map point and the second direction angle;
and determining the geographical position information of the intersection point of the first straight line and the second straight line according to the first straight line equation and the second straight line equation, wherein the geographical position information is used as POI data of the target object.
Optionally, the second obtaining module 1130 is configured to:
determining a first linear equation of a first straight line where the first map point and the target object are located according to the geographical position information of the first map point and the first direction angle;
determining a scene included angle between the second straight line on which the second map point and the target object are located and the first straight line according to the first direction angle and the second direction angle;
and determining the geographical position information of the target object as POI data of the target object according to the cosine value of the scene included angle, a vector included angle formula, the geographical position information of the first map point, the geographical position information of the second map point and the first linear equation.
Optionally, the second obtaining module 1130 is configured to:
determining a second straight line equation of a second straight line where the second map point and the target object are located according to the geographical position information of the second map point and the second direction angle;
determining a scene included angle between the first straight line on which the first map point and the target object are located and the second straight line according to the first direction angle and the second direction angle;
and determining the geographical position information of the target object as POI data of the target object according to the cosine value of the scene included angle, a vector included angle formula, the geographical position information of the first map point, the geographical position information of the second map point and the second linear equation.
Optionally, as shown in fig. 15, the apparatus further includes:
a fourth obtaining module 1170, configured to obtain a first elevation angle of the target object, where the first elevation angle is an elevation angle of the target object relative to the first map point;
the first determining module 1180 is configured to determine, according to the geographic position information of the target object, the geographic position information of the first map point, and the first elevation angle, height information corresponding to the target object as POI data of the target object.
Optionally, as shown in fig. 16, the apparatus further includes:
a fifth acquiring module 1190, configured to acquire a second elevation angle of the target object, where the second elevation angle is an elevation angle of the target object relative to the second map point;
the second determining module 11100 is configured to determine, according to the geographic position information of the target object, the geographic position information of the second map point, and the second elevation angle, altitude information corresponding to the target object as POI data of the target object.
Optionally, as shown in fig. 17, the apparatus further includes:
a control module 11110, configured to control the second scene image to rotate in the process of rotating the first scene image, and keep the currently displayed picture of the second scene image and the currently displayed picture of the first scene image to contain the same picture.
Optionally, the second obtaining module 1130 is configured to:
determining a scene included angle according to the first direction angle and the second direction angle, wherein the scene included angle is an included angle between a first straight line and a second straight line, the first straight line is a straight line where the first map point and the target object are located, and the second straight line is a straight line where the second map point and the target object are located;
and if the scene included angle is within a reference included angle range, acquiring POI data of the target object according to the geographical position information of the first map point, the geographical position information of the second map point, the first direction angle and the second direction angle.
In the embodiment of the invention, the terminal can acquire the POI data of the target object through the first direction angle and the second direction angle of the target object in the first scene image and the second scene image, the geographical position information of the first map point corresponding to the first scene image and the geographical position information of the second map point corresponding to the second scene image, so a technician does not need to travel to each position to collect each item of POI data, and the efficiency of acquiring POI data can be improved.
It should be noted that: in the above-described embodiment, when the apparatus for acquiring POI data acquires POI data, only the division of the functional modules is illustrated, and in practical applications, the functions may be distributed by different functional modules according to needs, that is, the internal structure of the terminal may be divided into different functional modules to complete all or part of the functions described above. In addition, the apparatus for acquiring POI data and the method for acquiring POI data provided in the above embodiments belong to the same concept, and specific implementation processes thereof are detailed in the method embodiments and will not be described herein again.
Fig. 18 is a block diagram illustrating a terminal 1800 according to an exemplary embodiment of the present invention. The terminal 1800 may be: a smart phone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III), an MP4 player (Moving Picture Experts Group Audio Layer IV), a notebook computer, or a desktop computer. The terminal 1800 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, desktop terminal, and the like.
Generally, the terminal 1800 includes: a processor 1801 and a memory 1802.
The processor 1801 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on. The processor 1801 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 1801 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 1801 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing content required to be displayed on the display screen. In some embodiments, the processor 1801 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 1802 may include one or more computer-readable storage media, which may be non-transitory. Memory 1802 may also include high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1802 is used to store at least one instruction for execution by processor 1801 to implement a method of obtaining POI data as provided by method embodiments herein.
In some embodiments, the terminal 1800 may further optionally include: a peripheral interface 1803 and at least one peripheral. The processor 1801, memory 1802, and peripheral interface 1803 may be connected by a bus or signal line. Each peripheral device may be connected to the peripheral device interface 1803 by a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1804, touch screen display 1805, camera 1806, audio circuitry 1807, positioning components 1808, and power supply 1809.
The peripheral interface 1803 may be used to connect at least one peripheral associated with I/O (Input/Output) to the processor 1801 and the memory 1802. In some embodiments, the processor 1801, memory 1802, and peripheral interface 1803 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1801, the memory 1802, and the peripheral device interface 1803 may be implemented on separate chips or circuit boards, which is not limited in this embodiment.
The radio frequency circuit 1804 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 1804 communicates with communication networks and other communication devices via electromagnetic signals. The RF circuit 1804 converts electrical signals into electromagnetic signals for transmission, or converts received electromagnetic signals into electrical signals. Optionally, the radio frequency circuit 1804 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1804 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, mobile communication networks of various generations (2G, 3G, 4G and 5G), wireless local area networks and/or WiFi (Wireless Fidelity) networks. In some embodiments, the RF circuit 1804 may also include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 1805 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1805 is a touch display screen, the display screen 1805 also has the ability to capture touch signals on or over its surface. The touch signal may be input to the processor 1801 as a control signal for processing. In that case, the display screen 1805 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be a single display screen 1805, provided on the front panel of the terminal 1800; in other embodiments, there may be at least two display screens 1805, each disposed on a different surface of the terminal 1800 or in a foldable design; in still other embodiments, the display screen 1805 may be a flexible display disposed on a curved or folded surface of the terminal 1800. The display screen 1805 may even be arranged as a non-rectangular irregular figure, that is, an irregularly shaped screen. The display screen 1805 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or the like.
The camera assembly 1806 is used to capture images or video. Optionally, the camera assembly 1806 includes a front camera and a rear camera. Generally, the front camera is disposed on the front panel of the terminal, and the rear camera is disposed on the rear surface of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera can be fused to realize a background blurring function, and the main camera and the wide-angle camera can be fused to realize panoramic shooting, VR (Virtual Reality) shooting, or other fused shooting functions. In some embodiments, the camera assembly 1806 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash, and can be used for light compensation at different color temperatures.
The audio circuit 1807 may include a microphone and a speaker. The microphone is used to collect sound waves from the user and the environment, convert them into electrical signals, and input the electrical signals to the processor 1801 for processing, or to the radio frequency circuit 1804 for voice communication. For stereo sound collection or noise reduction, a plurality of microphones may be provided at different positions of the terminal 1800. The microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker is used to convert electrical signals from the processor 1801 or the radio frequency circuit 1804 into sound waves. The speaker may be a traditional thin-film speaker or a piezoelectric ceramic speaker. A piezoelectric ceramic speaker can not only convert an electrical signal into sound waves audible to humans, but can also convert an electrical signal into sound waves inaudible to humans for purposes such as distance measurement. In some embodiments, the audio circuit 1807 may also include a headphone jack.
The positioning component 1808 is used to locate the current geographic position of the terminal 1800 for navigation or LBS (Location Based Service). The positioning component 1808 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
The power supply 1809 is used to supply power to the various components in the terminal 1800. The power supply 1809 may be an alternating current supply, a direct current supply, a disposable battery, or a rechargeable battery. When the power supply 1809 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging, and may also support fast-charging technology.
In some embodiments, the terminal 1800 also includes one or more sensors 1810. The one or more sensors 1810 include, but are not limited to: acceleration sensor 1811, gyro sensor 1812, pressure sensor 1813, fingerprint sensor 1814, optical sensor 1815, and proximity sensor 1816.
The acceleration sensor 1811 may detect the magnitudes of acceleration on the three coordinate axes of a coordinate system established with the terminal 1800. For example, the acceleration sensor 1811 may be used to detect the components of gravitational acceleration on the three coordinate axes. The processor 1801 may control the touch display screen 1805 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1811. The acceleration sensor 1811 may also be used to collect motion data of a game or of the user.
The gyro sensor 1812 may detect the body direction and rotation angle of the terminal 1800, and may cooperate with the acceleration sensor 1811 to collect the user's 3D motions on the terminal 1800. Based on the data collected by the gyro sensor 1812, the processor 1801 may implement functions such as motion sensing (for example, changing the UI according to the user's tilting operation), image stabilization during shooting, game control, and inertial navigation.
The pressure sensor 1813 may be disposed on a side frame of the terminal 1800 and/or beneath the touch display screen 1805. When the pressure sensor 1813 is disposed on a side frame of the terminal 1800, it can detect the user's grip signal on the terminal 1800, and the processor 1801 performs left-hand/right-hand recognition or shortcut operations according to the grip signal collected by the pressure sensor 1813. When the pressure sensor 1813 is disposed beneath the touch display screen 1805, the processor 1801 controls operability controls on the UI according to the user's pressure operations on the touch display screen 1805. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 1814 is used to collect the user's fingerprint, and the processor 1801 identifies the user's identity according to the fingerprint collected by the fingerprint sensor 1814, or the fingerprint sensor 1814 itself identifies the user's identity according to the collected fingerprint. Upon recognizing the user's identity as a trusted identity, the processor 1801 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. The fingerprint sensor 1814 may be disposed on the front, back, or side of the terminal 1800. When a physical key or vendor logo is provided on the terminal 1800, the fingerprint sensor 1814 may be integrated with the physical key or vendor logo.
The optical sensor 1815 is used to collect the ambient light intensity. In one embodiment, the processor 1801 may control the display brightness of the touch display screen 1805 according to the ambient light intensity collected by the optical sensor 1815: when the ambient light intensity is high, the display brightness of the touch display screen 1805 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 1805 is decreased. In another embodiment, the processor 1801 may also dynamically adjust the shooting parameters of the camera assembly 1806 according to the ambient light intensity collected by the optical sensor 1815.
The proximity sensor 1816, also known as a distance sensor, is typically provided on the front panel of the terminal 1800. The proximity sensor 1816 is used to collect the distance between the user and the front surface of the terminal 1800. In one embodiment, when the proximity sensor 1816 detects that the distance between the user and the front surface of the terminal 1800 gradually decreases, the processor 1801 controls the touch display screen 1805 to switch from the screen-on state to the screen-off state; when the proximity sensor 1816 detects that the distance between the user and the front surface of the terminal 1800 gradually increases, the processor 1801 controls the touch display screen 1805 to switch from the screen-off state to the screen-on state.
Those skilled in the art will appreciate that the configuration shown in Fig. 18 does not constitute a limitation on the terminal 1800, which may include more or fewer components than shown, combine certain components, or adopt a different arrangement of components.
An embodiment of the present invention further provides a computer-readable storage medium, where at least one instruction, at least one program, a code set, or an instruction set is stored in the storage medium, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by a processor to implement the above-mentioned method for acquiring POI data.
In the embodiment of the invention, the terminal can acquire the POI data of the target object from the first direction angle and the second direction angle of the target object in the first scene image and the second scene image, the geographical position information of the first map point corresponding to the first scene image, and the geographical position information of the second map point corresponding to the second scene image, without a technician travelling to each position to collect each item of POI data, so that the efficiency of acquiring POI data is improved.
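The position computation described above amounts to intersecting two rays cast from the two map points along the two direction angles. The following is a minimal illustrative sketch, not code from the patent: it assumes planar (x, y) coordinates rather than latitude/longitude, assumes direction angles are measured clockwise from north, and the function name `triangulate` is hypothetical.

```python
import math

def triangulate(p1, bearing1, p2, bearing2):
    """Intersect two rays defined by map points and direction angles.

    p1, p2: (x, y) planar coordinates of the first and second map points.
    bearing1, bearing2: direction angles in degrees, clockwise from north
    (an assumed convention; the patent does not fix one).
    Returns the (x, y) intersection, i.e. the estimated POI position.
    """
    # Convert clockwise-from-north bearings to unit direction vectors.
    d1 = (math.sin(math.radians(bearing1)), math.cos(math.radians(bearing1)))
    d2 = (math.sin(math.radians(bearing2)), math.cos(math.radians(bearing2)))
    # Solve p1 + t*d1 == p2 + s*d2 for t using Cramer's rule.
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    if abs(det) < 1e-12:
        raise ValueError("rays are parallel; no unique intersection")
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t = (rx * (-d2[1]) - (-d2[0]) * ry) / det
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])
```

For example, a ray heading north-east from (0, 0) and a ray heading north-west from (2, 0) intersect at (1, 1). In practice the accuracy of the result degrades as the two rays approach parallel, which is why a scene-included-angle check (as in claim 12) is useful before accepting the intersection.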
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only an example of the present invention and should not be taken as limiting the invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.
Claims (15)
1. A method for acquiring POI data, the method comprising:
displaying a first scene image and a second scene image, wherein the first scene image and the second scene image both contain a target object;
when a selection instruction of the target object in the first scene image and the second scene image is acquired, acquiring a first direction angle and a second direction angle of the target object, wherein the first direction angle is a direction angle of the target object relative to a first map point corresponding to the first scene image, and the second direction angle is a direction angle of the target object relative to a second map point corresponding to the second scene image;
and acquiring POI data of the target object according to the geographical position information of the first map point, the geographical position information of the second map point, the first direction angle and the second direction angle.
2. The method of claim 1, further comprising:
displaying a POI labeling interface, wherein the POI labeling interface comprises a map;
the displaying the first scene image and the second scene image comprises:
and when an operation instruction for a first map point in the map is acquired, displaying the first scene image on the POI labeling interface, and displaying a second scene image corresponding to the second map point.
3. The method of claim 2, wherein before displaying the second scene image corresponding to the second map point, the method further comprises:
and determining the second map point according to the direction of the path where the first map point is located and the geographical position information of the first map point.
4. The method of claim 2, wherein the POI annotation interface comprises an information input area;
the method further comprises the following steps:
and acquiring the attribute information of the target object input in the information input area as POI data of the target object.
5. The method of claim 1, wherein the obtaining the first direction angle and the second direction angle of the target object when the selection instruction of the target object in the first scene image and the second scene image is acquired comprises:
when a selection instruction of the target object in the first scene image is acquired, acquiring a third direction angle of a first picture in which the target object is currently located, wherein the first picture is a picture contained in the first scene image;
determining a first direction angle of the target object according to the third direction angle, the pixel column number of the target object in the first picture and a direction angle range;
when a selection instruction of the target object in the second scene image is acquired, acquiring a fourth direction angle of a second picture in which the target object is currently located, wherein the second picture is a picture contained in the second scene image;
and determining a second direction angle of the target object according to the fourth direction angle, the pixel column number of the target object in the second picture and the direction angle range.
6. The method according to claim 1, wherein the obtaining the POI data of the target object according to the geographical location information of the first map point, the geographical location information of the second map point, the first direction angle and the second direction angle comprises:
determining a first linear equation of a first straight line where the first map point and the target object are located according to the geographical position information of the first map point and the first direction angle;
determining a second straight line equation of a second straight line where the second map point and the target object are located according to the geographical position information of the second map point and the second direction angle;
and determining the geographical position information of the intersection point of the first straight line and the second straight line according to the first straight line equation and the second straight line equation, wherein the geographical position information is used as POI data of the target object.
7. The method according to claim 1, wherein the obtaining the POI data of the target object according to the geographical location information of the first map point, the geographical location information of the second map point, the first direction angle and the second direction angle comprises:
determining a first linear equation of a first straight line where the first map point and the target object are located according to the geographical position information of the first map point and the first direction angle;
determining a scene included angle between a second straight line, on which the second map point and the target object are located, and the first straight line according to the first direction angle and the second direction angle;
and determining the geographical position information of the target object as POI data of the target object according to the cosine value of the scene included angle, a vector included angle formula, the geographical position information of the first map point, the geographical position information of the second map point and the first linear equation.
8. The method according to claim 1, wherein the obtaining the POI data of the target object according to the geographical location information of the first map point, the geographical location information of the second map point, the first direction angle and the second direction angle comprises:
determining a second straight line equation of a second straight line where the second map point and the target object are located according to the geographical position information of the second map point and the second direction angle;
determining a scene included angle between a first straight line, on which the first map point and the target object are located, and the second straight line according to the first direction angle and the second direction angle;
and determining the geographical position information of the target object as POI data of the target object according to the cosine value of the scene included angle, a vector included angle formula, the geographical position information of the first map point, the geographical position information of the second map point and the second linear equation.
9. The method of claim 1, further comprising:
acquiring a first elevation angle of the target object, wherein the first elevation angle is an elevation angle of the target object relative to the first map point;
and determining height information corresponding to the target object according to the geographical position information of the target object, the geographical position information of the first map point and the first elevation angle, wherein the height information is used as POI data of the target object.
10. The method of claim 1, further comprising:
acquiring a second elevation angle of the target object, wherein the second elevation angle is an elevation angle of the target object relative to the second map point;
and determining height information corresponding to the target object according to the geographical position information of the target object, the geographical position information of the second map point and the second elevation angle, and using the height information as POI data of the target object.
11. The method of claim 1, further comprising:
and in the process of rotating the first scene image, controlling the second scene image to rotate so that the picture currently displayed by the second scene image and the picture currently displayed by the first scene image contain the same content.
12. The method according to claim 1, wherein the obtaining the POI data of the target object according to the geographical location information of the first map point, the geographical location information of the second map point, the first direction angle and the second direction angle comprises:
determining a scene included angle according to the first direction angle and the second direction angle, wherein the scene included angle is an included angle between a first straight line and a second straight line, the first straight line is a straight line where the first map point and the target object are located, and the second straight line is a straight line where the second map point and the target object are located;
and if the scene included angle is within a reference included angle range, acquiring POI data of the target object according to the geographical position information of the first map point, the geographical position information of the second map point, the first direction angle and the second direction angle.
13. The method of claim 1, further comprising:
and uploading the POI data to a server.
14. A terminal, characterized in that the terminal comprises a processor and a memory, in which at least one instruction, at least one program, a set of codes, or a set of instructions is stored, which is loaded and executed by the processor to implement the method of acquiring POI data according to any one of claims 1 to 13.
15. A computer readable storage medium having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by a processor to implement a method of acquiring POI data according to any one of claims 1 to 13.
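The height determination of claims 9 and 10 can be realised, once the planar position of the target object is known, by combining the horizontal distance from a map point with the tangent of the elevation angle. The sketch below is illustrative only: the function name `poi_height`, the planar metre-based coordinates, and the flat-ground/`camera_height` assumption are not part of the claims.

```python
import math

def poi_height(map_point, poi_point, elevation_deg, camera_height=0.0):
    """Estimate the height of the POI from an elevation angle.

    map_point, poi_point: (x, y) planar coordinates (metres assumed).
    elevation_deg: elevation angle of the target object as seen from
    the map point, in degrees.
    camera_height: height of the capture device above ground
    (assumed 0 by default).
    """
    dx = poi_point[0] - map_point[0]
    dy = poi_point[1] - map_point[1]
    horizontal = math.hypot(dx, dy)  # horizontal distance to the POI
    return camera_height + horizontal * math.tan(math.radians(elevation_deg))
```

For instance, a target object 5 m away horizontally and seen at a 45° elevation angle lies about 5 m above the capture height, since tan 45° = 1.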
Publications (2)
| Publication Number | Publication Date |
|---|---|
| HK40019576A true HK40019576A (en) | 2020-10-16 |
| HK40019576B HK40019576B (en) | 2022-08-19 |