CN111008297B - Addressing methods and servers - Google Patents

Addressing methods and servers

Info

Publication number
CN111008297B
CN111008297B (application CN201911268859.XA; earlier publication CN111008297A)
Authority
CN
China
Prior art keywords
address
image
target
terminal
correspondence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911268859.XA
Other languages
Chinese (zh)
Other versions
CN111008297A (en)
Inventor
马明月
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN201911268859.XA priority Critical patent/CN111008297B/en
Publication of CN111008297A publication Critical patent/CN111008297A/en
Application granted granted Critical
Publication of CN111008297B publication Critical patent/CN111008297B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50 Information retrieval of still image data
    • G06F 16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/583 Retrieval using metadata automatically derived from the content
    • G06F 16/5866 Retrieval using information manually generated, e.g. tags, keywords, comments, manually generated location and time information
    • G06F 16/587 Retrieval using geographical or spatial information, e.g. location
    • G06F 16/90 Details of database functions independent of the retrieved data types
    • G06F 16/95 Retrieval from the web
    • G06F 16/953 Querying, e.g. by the use of web search engines
    • G06F 16/9537 Spatial or temporal dependent retrieval, e.g. spatiotemporal queries

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Library & Information Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The invention provides an addressing method and a server. The method comprises: receiving, from a first terminal, an address query request for a first object, the request comprising a first image of the first object; obtaining, according to the request and a pre-stored first correspondence between second images and second addresses, a second target image matching the first image and a second target address matching the second target image, wherein the first correspondence is generated according to mutually associated object images and object addresses uploaded by at least one second terminal; and, in response to the request, sending the second target image and the second target address to the first terminal. The invention improves addressing efficiency and accuracy while reducing addressing time and difficulty.

Description

Addressing method and server
Technical Field
The present invention relates to the field of communications technologies, and in particular, to an addressing method and a server.
Background
Currently, in many application scenarios a user needs to find the address of an object: for example, the geographical address of a particular building, or, for the owner of a lost article, the place where the article was lost so that it can be recovered.
When addressing an object such as a physical building, a road sign, or a destination, the related art generally uses map software, which searches the map according to query information entered by the user. However, the places map software can provide are limited: some remote places are not marked in the map software at all and are therefore hard to find, and when the query information entered by the user is not accurate enough (or buildings with the same name exist), the place found on the map may be wrong. As another example, when a user searches for a lost article, the problem can only be handled by searching manually or reporting to the police, and there is a risk that the article is never recovered.
Therefore, when searching for the address of an object, the related art suffers from low addressing efficiency, long time consumption, low accuracy, and high addressing difficulty.
Disclosure of Invention
The embodiments of the present invention provide an addressing method and a server to solve the problems of low addressing efficiency, long time consumption, low accuracy, and high addressing difficulty that exist in the related art when searching for the address of an object.
In order to solve the technical problems, the invention is realized as follows:
In a first aspect, an embodiment of the present invention provides an addressing method, applied to a server, where the method includes:
receiving an address query request for a first object sent by a first terminal, wherein the address query request comprises a first image of the first object;
obtaining, according to the address query request and a pre-stored first correspondence between second images and second addresses, a second target image matching the first image and a second target address matching the second target image, wherein the first correspondence is generated according to mutually associated object images and object addresses uploaded by at least one second terminal; and
in response to the address query request, sending the second target image and the second target address to the first terminal.
In a second aspect, an embodiment of the present invention further provides a server, where the server includes:
a first receiving module, configured to receive an address query request for a first object sent by a first terminal, wherein the address query request comprises a first image of the first object;
an obtaining module, configured to obtain, according to the address query request and a pre-stored first correspondence between second images and second addresses, a second target image matching the first image and a second target address matching the second target image, wherein the first correspondence is generated according to mutually associated object images and object addresses uploaded by at least one second terminal; and
a response module, configured to send, in response to the address query request, the second target image and the second target address to the first terminal.
In a third aspect, an embodiment of the present invention further provides a server, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the addressing method.
In a fourth aspect, embodiments of the present invention also provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the addressing method.
In the embodiment of the present invention, the server receives from the first terminal an address query request carrying a first image of the first object to be addressed, and has previously received and stored the first correspondence between second images and second addresses uploaded, mutually associated, by at least one second terminal. From the first correspondence, the server obtains a second target image matching the first image and a second target address matching that image, and sends both to the first terminal. Finding the address of the first object (i.e., the second target address) from an image of the object in this way improves addressing efficiency and reduces addressing time; because image matching is accurate, addressing accuracy is also improved; and since the first terminal need only upload an image containing the first object to be addressed, addressing difficulty is greatly reduced.
Drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and a person skilled in the art may obtain other drawings from them without inventive effort.
FIG. 1 is a flow chart of an addressing method of one embodiment of the present invention;
FIG. 2 is a schematic illustration of a terminal interface according to one embodiment of the invention;
FIG. 3 is a block diagram of a server according to one embodiment of the invention;
fig. 4 is a schematic hardware structure of a terminal device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art based on the embodiments of the present invention without inventive effort fall within the protection scope of the present invention.
The embodiment of the present invention provides an addressing method with many application scenarios, including addressing a physical building, a road sign, or a destination; finding the place where a lost article was dropped so that the article can be recovered; and social scenarios.
In the scenario of addressing a physical building, a road sign, or a destination: some remote places are not marked in the map software, the search name entered by the user may be wrong, or several places may share the same name. All of these situations cause addressing errors when text search in map software is used, and such addressing is very time-consuming and inefficient.
In the scenario of finding the place where an article was lost so that it can be recovered: traffic today is convenient, so everyone's range of daily activity is wide, and urban population density is high. Once something is carelessly lost, it is hard to get back, and it is equally hard for the person who picks up an article to reach its owner. This causes unnecessary losses to many users, especially when important documents or certificates are lost. After losing something, a user generally goes back to recently visited places, or to public places such as a nearby police station; the person who picks the article up typically hands it to a nearby police station or the like. Obviously, this way of searching is very time-consuming, and the owner and the finder do not necessarily choose the same approach: for example, the owner retraces the original route while the finder delivers the article to the police. In addition, such searching even carries risks such as the article being claimed by the wrong person. Therefore, it is difficult for existing schemes to return a lost article to its owner conveniently and accurately.
To solve the addressing problems in the above scenarios and to address objects (including but not limited to the above buildings, road signs, destinations, and articles) conveniently and accurately, refer to fig. 1, which shows a flowchart of an addressing method according to an embodiment of the present invention. Applied to a server, the method may specifically include the following steps:
step 101, receiving an address query request for a first object sent by a first terminal, wherein the address query request comprises a first image of the first object;
the first object is an object that needs to find an address corresponding to the first object, and the address may be an actual physical address of the object, or an address where the object is lost, that is, a lost address.
Further, the first image may be an original image including the first object and other images, and the first image may be an area image including only the first object extracted from the original image including the first object.
In addition, when the address query request is triggered, referring to fig. 2, on the terminal side, the user may trigger the address query request by adopting a sliding input of two fingers to the photo 11 in the interface of the photo 11 in the album application, where in this example, the contact point between the two fingers of the user and the display screen slides from the dotted line position 12 to the position 13 (i.e. the edge of the display screen of the mobile phone).
In Example 1, a scenario in which the actual geographical location of a first object (e.g., building A) is sought: a user wants to go to building A but cannot find its physical address on a map, while a photo of building A is stored in the user's mobile phone (i.e., the first terminal). The photo may be uploaded to the cloud album, thereby triggering an address query request for building A, where the request carries the photo.
For example, if the photo (i.e., the first image) in the address query request is an original photo (i.e., it includes not only the region image of building A but also images of other people or of the background), the server may extract the region image of building A from the original photo and perform the subsequent steps using that region image, in order to avoid disclosing the user's private information.
As another example, the photo in the address query request may be a region image containing only building A; that is, before the address query request is triggered, the mobile phone extracts the region image of building A from the photo the user selected for upload. In this case the first image is the region image containing only building A.
In Example 2, a scenario in which the place where an article was lost is sought so that the article can be recovered: if a user loses a wallet, a photo of the lost wallet B (the first object) stored in the mobile phone may be uploaded to the server (specifically, the photo may be published to a cloud lost-and-found platform on the server side). Other technical details are similar to the description of Example 1 and are not repeated here.
Step 102, obtaining, according to the address query request and a pre-stored first correspondence between second images and second addresses, a second target image matching the first image and a second target address matching the second target image, wherein the first correspondence is generated according to mutually associated object images and object addresses uploaded by at least one second terminal;
the terminal searching for the address of the object may upload the image to the server, and for the terminal knowing the address of a certain object, may upload the image of the object and the address of the object to the server, so the server may generate a first correspondence between the second image and the second address according to the object image and the object address that are mutually associated and uploaded by the respective second terminals, and thus the first correspondence is a correspondence between the image and the address that is generated according to the object image and the object address that are mutually associated and uploaded by the at least one second terminal.
In addition, when an object image is an original image (i.e., it includes not only the region image of the object but also other region images), the region image containing only the object may be extracted from it and used as the second image, in order to protect privacy. When the object image is a region image of the object already extracted on the mobile phone side, the object image itself is the second image.
In addition, the object address may be address information extracted from the object image (a photo generally carries positioning information of the place where it was shot), or it may be address information associated with the object image and uploaded separately by the mobile phone.
The object address associated with an object image can thus be understood as the actual address of the object. For example, in the scenario of Example 1 the object address is the actual physical address of building A, and in the scenario of Example 2 the object address is the address where the user picked up wallet B.
Thus, in the scenario of Example 1, a tourist who photographs building A while travelling at a scenic spot may upload the photo to the server, and the server may generate a correspondence between a second image of building A and the actual physical address (i.e., the second address) of building A. In the scenario of Example 2, if user 2 picked up wallet B at some location (e.g., a subway station) and photographed it, the photo may be uploaded to the server, and the server may generate a correspondence between a second image of wallet B and the location of wallet B (i.e., the second address).
Through the data uploaded by each second terminal, the server has already pre-stored the first correspondence between second images and second addresses by the time it receives the address query request.
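The pre-storage described above can be sketched as follows. This is an illustrative Python sketch, not part of the patent; the field names (`image_id`, `geotag`, `address`) and the preference for the photo's embedded positioning information over a separately uploaded address are assumptions based on the description:

```python
# Hypothetical sketch of building the first correspondence from terminal
# uploads. Each upload carries an object image (with an optional geotag
# embedded in the photo) and an optional separately uploaded address.

def build_correspondence(uploads):
    """uploads: list of dicts with keys 'image_id', 'geotag', 'address'."""
    correspondence = []
    for up in uploads:
        # Prefer positioning information embedded in the photo; fall back
        # to the address the terminal uploaded separately.
        address = up.get("geotag") or up.get("address")
        if address is None:
            continue  # no usable address: the pair cannot enter the correspondence
        correspondence.append({"image_id": up["image_id"], "address": address})
    return correspondence
```

A real server would of course persist these pairs in a database rather than a list; the sketch only shows which upload fields feed the correspondence.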
Thus, in this step, the images in the first correspondence may be matched against the first image in the address query request. As for what it means for two images to match: the similarity of image features between the two images may be calculated (the image features may include, but are not limited to, contour features, color features, and texture features), and if the similarity is greater than a preset threshold, the two images are determined to match. By calculating the similarity between the image features of pairs of images, a second target image matching the first image can be found in the first correspondence, and correspondingly the second target address matching the second target image can also be obtained.
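The matching step can be sketched as follows. This Python sketch is an assumption for illustration only: it represents image features as plain numeric vectors and uses cosine similarity against a preset threshold, whereas the patent leaves the feature extraction and similarity measure open:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def find_matches(query_features, correspondence, threshold=0.8):
    """Return (address, similarity) pairs whose stored second image matches
    the query image, best match first."""
    matches = []
    for entry in correspondence:  # entry: {"features": [...], "address": "..."}
        sim = cosine_similarity(query_features, entry["features"])
        if sim > threshold:
            matches.append((entry["address"], sim))
    return sorted(matches, key=lambda m: m[1], reverse=True)
```

In practice the feature vectors would come from an image-feature extractor; only the threshold comparison and the lookup over the stored correspondence are taken from the text above.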
Thus, in Example 1 the actual physical address of building A, i.e., the second target address, is obtained; in Example 2 the lost address of wallet B, i.e., the second target address, is obtained.
And step 103, responding to the address inquiry request, and sending the second target image and the second target address to the first terminal.
In response to the address query request, the server may send the matched second target image, together with the second target address associated with it, to the first terminal.
Note that the second target image and the second target address appear in pairs, and there may be one or more such pairs.
When there are multiple pairs of second target images and second target addresses, the pair with the highest similarity to the first image may be returned to the first terminal. When several pairs share the highest similarity with the first image, all of them may be sent, in pairs, to the first terminal, and the user can judge among them: for example, which image shows the wallet the user lost, and whether the second target address corresponding to that image is where wallet B was lost that day, thereby determining the lost address of the wallet. Similarly, the user can determine which building in the images is building A, the one the user wants to go to, thereby determining its actual physical address.
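The pair-selection logic above can be sketched as follows (an assumed illustration; the function name and tuple layout are not from the patent):

```python
def select_results(scored_pairs):
    """scored_pairs: list of (image_id, address, similarity) tuples.

    Return the single best pair, or all pairs tied at the top similarity
    so that the user can judge among them, as described above."""
    if not scored_pairs:
        return []
    best = max(sim for _, _, sim in scored_pairs)
    return [(i, a, s) for i, a, s in scored_pairs if s == best]
```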
In the embodiment of the present invention, the server receives from the first terminal an address query request carrying a first image of the first object to be addressed, and has previously received and stored the first correspondence between second images and second addresses uploaded, mutually associated, by at least one second terminal. From the first correspondence, the server obtains a second target image matching the first image and a second target address matching that image, and sends both to the first terminal. Finding the address of the first object (i.e., the second target address) from an image of the object in this way improves addressing efficiency and reduces addressing time; because image matching is accurate, addressing accuracy is also improved; and since the first terminal need only upload an image containing the first object to be addressed, addressing difficulty is greatly reduced.
In scenario 1, searching for a geographical location such as a building, a road sign, or a destination: the first terminal only needs to upload pictures of the objects to be found to the cloud album platform, and the server of the platform can automatically remove personal information from the pictures and keep only the image information of the building, road sign, or destination, so that leakage of the user's personal information is avoided. When the user has a picture of a destination building but does not know its specific address, the destination location can then be quickly found by this method. For example, when browsing a travel guide, a user may see a photo of a scenic spot published by another user but be unable to locate the spot on a map; the user only needs to download the photo and upload it to the server of the embodiment of the present invention to obtain the accurate address of the scenic spot, which can then be used for navigation and other operations.
In scenario 2, searching for the address where an article was lost in order to recover it: the first terminal only needs to upload a photo of the lost article to the server. By matching that photo against the photos in the cloud album, the server can find the lost address of the article accurately and efficiently, and the owner can quickly and accurately recover the article by means of the lost address, or recover it by contacting the user who uploaded the article's photo, avoiding risks such as the article being claimed by the wrong person.
It should be noted that the first terminal may also upload mutually associated object images and object addresses to the server, so that the server updates the locally stored first correspondence.
Thus, the at least one second terminal may include the first terminal; note, however, that in this embodiment the second target terminal that uploaded the mutually associated second target image and second target address is different from the first terminal.
Optionally, the address query request further includes: a first address;
that is, the first address may also be uploaded together when the first terminal uploads the first image.
In one embodiment, the first address is address information generated according to the current location of the first terminal; in another embodiment, the first address is address information generated according to the historical positioning of the first terminal.
for example, in the above example 1, when the address query request is triggered, the mobile phone may not only carry the picture that the user triggered to upload to the request, but also carry the positioning information that the mobile phone is currently located at the time of uploading the picture to the request.
In one embodiment, the first address may be an administrative area (e.g., a city, or a city and region, etc.) generated based on the location in which the first terminal is located.
As another example, in Example 2 above, when the address query request is triggered, the mobile phone may carry in the request not only the picture the user chose to upload but also a first address generated from the mobile phone's historical positioning information on the date the wallet was lost.
In one embodiment, the first address may include a geographical route generated according to the historical positioning of the first terminal (e.g., from Tiananmen station to Sihui station on Beijing Subway Line 1), and/or the surrounding geographical area in which that route lies, and/or the locating points that occur most frequently in the historical positioning along the route (e.g., Sihui station, Guomao station), and/or an accurate address associated with the first image that the user uploads separately when the address query request is triggered, such as cafe M at a certain address.
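As a hedged illustration of deriving the frequently occurring locating points mentioned above from historical positioning (the function name and station names are assumptions, not part of the patent):

```python
from collections import Counter

def frequent_points(history, top_n=2):
    """history: list of locating-point names from the day's positioning
    records, in chronological order. Returns the top_n most frequent
    points, which could serve as anchors of the geographical route."""
    return [p for p, _ in Counter(history).most_common(top_n)]
```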
Optionally, in one embodiment, when step 102 is performed, a second correspondence may be identified within the pre-stored first correspondence between second images and second addresses, where each second address in the second correspondence lies within a first geographical range corresponding to the first address; then, within the second correspondence, a second target image matching the first image and a second target address matching the second target image are obtained.
It should be noted that in this embodiment the second, third, and fourth correspondences are all correspondences between images and addresses; the second correspondence, and the third and fourth correspondences defined in subsequent embodiments, are different correspondences screened out of the first correspondence under different conditions.
Because the first correspondence covers a large range, traversing and matching it entry by entry would be inefficient. Therefore, to improve addressing efficiency and reduce addressing time, in the embodiment of the present invention the first correspondence can be screened by means of the first address, and the second correspondence can be screened out of the first correspondence.
When the first address is address information generated according to the current location of the first terminal, the first geographical range corresponding to the first address may be the administrative area corresponding to that location.
When the first address is address information generated according to the historical positioning of the first terminal, the first geographical range may be at least one of the above geographical route and the surrounding geographical area in which that route lies.
The second correspondence can then be formed by identifying, in the first correspondence, the second addresses lying within the first geographical range and the second images corresponding to those second addresses. A second target image matching the first image and a second target address matching the second target image are then obtained within the second correspondence. The addressing range is thereby narrowed from the full first correspondence to a second correspondence within the first geographical range corresponding to the first address, which greatly improves addressing efficiency.
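The screening of the second correspondence out of the first can be sketched as follows. This Python sketch makes assumptions the patent does not: addresses are represented as latitude/longitude pairs, and the first geographical range is modelled as a circle of a given radius around the first address:

```python
import math

def haversine_km(p, q):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def screen_by_range(correspondence, first_address, radius_km):
    """Second correspondence: entries of the first correspondence whose
    second address falls inside the first geographical range."""
    return [e for e in correspondence
            if haversine_km(e["address"], first_address) <= radius_km]
```

Only the entries that survive this screening need to go through image matching, which is what narrows the addressing range as described above.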
For example, in Example 1 above, a user who needs the address of a building is usually already in the city where the building is located; for instance, the user travels to a city and needs to find the address of a small scenic spot, building A. Using the method of the embodiment of the present invention, the actual physical address matching building A can then be queried within the second correspondence restricted to that city.
In Example 2 above, the correspondences uploaded by finders of articles can be queried within the geographical range of the route the user who lost wallet B travelled that day, to check whether there is a pick-up address, i.e., a lost address, related to the wallet.
Optionally, in the above embodiment, step 102 may be further implemented as follows:
if a second target image matching the first image is not obtained within the second correspondence, enlarging the first geographical range according to a preset condition to generate a second geographical range;
That is, after the second correspondence is determined, if no second target image matching the first image is found within it, the query range may be appropriately enlarged: the first geographical range corresponding to the first address is enlarged according to the preset condition, for example by increasing the addressing radius (enlarging the first geographical range by a preset radius) or by enlarging the administrative area (e.g., from the stretch of Beijing Subway Line 1 up to Sihui station, to the whole of Line 1, or to Beijing city, or to the surrounding district).
identifying a third correspondence within the first correspondence, wherein each second address in the third correspondence lies within the second geographical range;
The principle of identifying the third correspondence from the first correspondence using the second geographic range in this step is similar to that of identifying the second correspondence from the first correspondence using the first geographic range, and will not be repeated here.
And in the third corresponding relation, a second target image matched with the first image and a second target address matched with the second target image are acquired.
The principle of this step is similar to that of acquiring the second target image and the second target address from the second correspondence in the above embodiment, and will not be repeated here.
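A minimal sketch of the fallback flow above: match within the second correspondence, and on failure enlarge the range per a preset condition to obtain the third correspondence and retry. The planar-degree range test, the doubling rule, and the `match_fn` callback are simplifying assumptions of the example:

```python
def within(addr, center, radius_deg):
    # Crude planar bounding-box test in degrees; fine for a sketch.
    return (abs(addr[0] - center[0]) <= radius_deg
            and abs(addr[1] - center[1]) <= radius_deg)

def address_with_fallback(first_corr, center, match_fn,
                          radius_deg=0.1, growth=2.0, max_radius_deg=2.0):
    radius = radius_deg
    while radius <= max_radius_deg:
        # Second (then third) correspondence: pairs inside the current range.
        corr = [(img, addr) for img, addr in first_corr
                if within(addr, center, radius)]
        hits = [(img, addr) for img, addr in corr if match_fn(img)]
        if hits:
            return hits[0]   # second target image and second target address
        radius *= growth     # enlarge the range per the preset condition
    return None              # nothing found even in the widest range
```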
Alternatively, in another embodiment, a first object type of the first image may be identified when step 102 is performed; identifying a fourth corresponding relation in a first corresponding relation between a second image and a second address, wherein a second object type of the second image in the fourth corresponding relation is matched with the first object type; and in the fourth corresponding relation, a second target image matched with the first image and a second target address matched with the second target image are acquired.
Specifically, since the range of the first correspondence is large, traversing and matching its entries one by one would be inefficient. Therefore, in order to improve addressing efficiency, reduce addressing time and improve addressing accuracy, in the embodiment of the present invention the addressing range can be narrowed from the first correspondence to the fourth correspondence by means of the type of the object in the image.
For example, in example 1 above the first object type is a building, and in example 2 above it is a wallet. When the first correspondence is stored in advance, the server may store a second object type in association with each second image in the first correspondence. To reduce the addressing scope, the second images whose second object type matches the first object type can then be screened out of the first correspondence, together with their second addresses, to form the fourth correspondence. Matching here includes, but is not limited to, one or more of: the two object types being in a hypernym-hyponym relationship (such as perfume and XX-brand perfume); the object types being identical (such as both being wallets); or the semantic similarity of the two object types being greater than a preset threshold. Then the second target image and the second target address are acquired from the fourth correspondence.
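The type-based screening that produces the fourth correspondence can be sketched as follows. Storing the type alongside each entry, and using a simple relation map as a stand-in for the hypernym/semantic-similarity checks, are assumptions of this example:

```python
def narrow_by_type(typed_corr, first_type, related_types=None):
    # typed_corr: list of (second_image, second_address, second_object_type).
    # An entry matches when its type equals the first object type, or is
    # listed as related (stand-in for hypernym/semantic-similarity matching).
    related = {first_type} | set((related_types or {}).get(first_type, ()))
    return [(img, addr) for img, addr, t in typed_corr if t in related]
```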
Optionally, when the address query request further includes a first address, and the first address is address information generated according to the current location of the first terminal, after step 103 the method of the embodiment of the present invention may further include:
if first preset information sent by the first terminal and indicating confirmation of the second target image is received (for example, information indicating that the building in the second target image is building A that the user wants to address), the server may generate at least one navigation route from the location of the first terminal to the second target address according to the second target address corresponding to the confirmed second target image, and send the at least one navigation route to the first terminal;
the local map software of the first terminal may then present the issued navigation route or routes, from which the user may select one for use.
Optionally, after the user of the first terminal reaches the second target address corresponding to building A, the user may take a photograph of building A and upload the photograph (carrying the location information of building A) to the server, and the server updates the locally stored first correspondence with the photograph, so as to help more people address the destination.
The embodiment of the invention can realize the function of searching for a destination through a photo. By searching and matching according to the picture and the address information, the destination address can be found efficiently and accurately, and with only a simple operation the user can obtain location information that map navigation cannot find (for example, when the entered place name is inaccurate, when navigation cannot find the place, or for places without a name) and obtain a navigation route. Compared with the prior art, addressing efficiency and accuracy are greatly improved.
Optionally, in another embodiment, when the address query request further includes a first address, and when the first address is address information generated according to the historical positioning of the first terminal, after step 103, the method according to an embodiment of the present invention may further include:
receiving second preset information sent by the first terminal and indicating confirmation of the second target image (for example, indicating that the object in the second target image is wallet B lost by the user of the first terminal);
and sending the first user contact information corresponding to the first terminal to a second target terminal, and/or sending the second user contact information corresponding to the second target terminal to the first terminal, wherein the second target terminal is a second terminal for uploading the second target image and the second target address which are mutually related.
Specifically, when the first terminal uploads the address query request, the contact information of the first terminal user (i.e. the first user contact information, for example a name and a phone number) may be carried in the address query request, or uploaded to the server separately. If the server stores the first user contact information, it may send that information to the second target terminal which, in example 2 above, picked up wallet B and uploaded its photograph.
In addition, when the second terminal uploads the object image and the object address which are associated with each other, the contact information of the user of the second terminal may be reported to the server together, so that each group of corresponding relations in the first corresponding relations stored at the server side may be associated with corresponding user contact information.
The user who lost wallet B may then actively contact the user who picked it up, and/or the user who picked up wallet B may actively contact the user who lost it, so that the lost article is retrieved.
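The two-way contact exchange described above can be sketched as a small server-side helper. The dictionary keyed by uploaded image and the shape of the contact records are assumptions made for illustration:

```python
def exchange_contacts(first_user_contact, upload_contacts, second_target_image):
    # upload_contacts maps each uploaded object image to the contact info of
    # the second terminal that uploaded it. Returns what the server would
    # push to each side, or None if the image has no uploader on record.
    finder = upload_contacts.get(second_target_image)
    if finder is None:
        return None
    return {"to_finder": first_user_contact, "to_loser": finder}
```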
The embodiment of the invention can realize the function of searching for lost objects through a photo. By searching and matching according to the picture and location (address) information, lost articles can be found efficiently, without the owner having to leave home; the user who picks up a lost object does not need to visit a police station or similar places and can publish the information conveniently and quickly. The probability of recovering lost objects is greatly increased, and the time cost for users is reduced.
Compared with the prior art, this is a substantial improvement.
According to the method, a lost-and-found platform based on a cloud album is provided: by uploading a picture of the lost article to the cloud album platform, matching is performed automatically, the owner can find the lost article without leaving home, users need not worry after picking up or losing something, and lost articles or their owners can be found more quickly and accurately.
Optionally, in step 103, if multiple groups of second target images and second target addresses are sent to the first terminal, only one second target image is identified in the first preset information or the second preset information; accordingly, when acquiring the user contact information of the corresponding terminal, the server may acquire the second user contact information corresponding to the second target image identified by the user in the first preset information or the second preset information.
Optionally, after step 103, after receiving the second target image and the second target address, if the first terminal has a display screen, the second target image and the first image may be displayed in different areas of the display screen, so that the user may conveniently compare whether the object in the second target image is the first object that the user wants to address; if the first terminal is a folding screen terminal, the first terminal can automatically unfold the folding screen and display the second target image and the first image on two screens, so that the comparison of users is facilitated.
Optionally, after step 103, the method according to an embodiment of the present invention further comprises:
If the server receives third preset information sent by the first terminal and indicating refusal to confirm the second target image (for example, indicating that the wallet in the second target image is not wallet B lost by the user of the first terminal, or that the building in the second target image is not building A that the user wants to address), the server may filter the second target image and the second target address of step 103 out of the first correspondence to generate a fourth correspondence, acquire a second target image matching the first image and a second target address matching that image according to the address query request and the fourth correspondence, and then send the second target image and the second target address to the first terminal.
In this way, when the queried second target image is not the first object the user is addressing, the method of the embodiment of the invention deletes the correspondence between that second target image and second target address from the first correspondence for the next round of query, thereby avoiding meaningless queries and improving addressing efficiency and accuracy.
Optionally, after step 103, the method according to an embodiment of the present invention further comprises:
if the server receives the third preset information sent by the first terminal and indicating refusal to confirm the second target image, then after a preset time interval it filters the second target image and the second target address of step 103 out of the first correspondence to generate a fifth correspondence (because other second terminals may have uploaded new object images and object addresses during the preset time interval, the first correspondence at this point may have been updated relative to the first correspondence at the moment of step 102); acquires a second target image matching the first image and a second target address matching that image according to the address query request and the fifth correspondence; and sends the second target image and the second target address to the first terminal.
In this way, when the queried second target image is not the first object the user is addressing, the method of the embodiment of the invention waits for the preset time before executing the next round of query, so that the range of the first correspondence available to the next round is expanded and the addressing hit rate improved; and the correspondence between the rejected second target image and second target address is deleted from the first correspondence in the next round, thereby avoiding meaningless queries and improving addressing efficiency and accuracy.
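The rejection-and-retry flow of both variants above can be sketched together: filter the rejected pair(s) out of the (possibly refreshed) first correspondence and re-match. The list representation and equality-based rejection check are assumptions of the example:

```python
def retry_after_rejection(first_corr, rejected_pairs, match_fn):
    # Filtered correspondence: the current first correspondence minus every
    # (image, address) pair the user has already refused to confirm.
    filtered = [p for p in first_corr if p not in rejected_pairs]
    for img, addr in filtered:
        if match_fn(img):
            return img, addr  # next candidate second target image/address
    return None
```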
Optionally, the method of the above embodiments of the present invention may be applied not only to addressing the physical buildings, road signs or destinations listed above, and to finding the address where a lost item was lost and thus recovering it, but also to social scenarios.
For example, in a social scenario, suppose user A wants to play badminton but has no friends nearby to play with, and user B encounters the same situation. User A and user B each upload a picture including a shuttlecock together with the playing-field information (i.e. address information) to the server. The server can match the address information and the picture information; when both match, the contact information of user B may be sent to the terminal of user A, and/or the contact information of user A may be sent to the terminal of user B, so that after obtaining each other's contact details the two can communicate and team up to play badminton.
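A toy sketch of this social pairing step — match users whose uploaded activity and address coincide. The `(user, activity_tag, address)` post format and first-come pairing rule are assumptions of the example:

```python
def match_partners(posts):
    # posts: list of (user, activity_tag, address). Pair up users who posted
    # the same activity at the same address, in arrival order.
    waiting = {}
    pairs = []
    for user, tag, addr in posts:
        key = (tag, addr)
        other = waiting.pop(key, None)
        if other is not None and other != user:
            pairs.append((other, user))  # server would exchange their contacts
        else:
            waiting[key] = user
    return pairs
```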
Other embodiments regarding social scenarios are similar to the principles of the various embodiments listed above and are not repeated here.
Referring to FIG. 3, a block diagram of a server of one embodiment of the invention is shown. The server according to the embodiment of the invention can realize details of the addressing method in the embodiment and achieve the same effect. The server shown in fig. 3 includes:
a first receiving module 31, configured to receive an address query request sent by a first terminal for a first object, where the address query request includes a first image of the first object;
an obtaining module 32, configured to obtain a second target image matched with the first image and a second target address matched with the second target image according to the address query request and a first correspondence between a second image and a second address that are stored in advance, where the first correspondence is generated according to an object image and an object address that are uploaded by at least one second terminal and are associated with each other;
and a response module 33, configured to send the second target image and the second target address to the first terminal in response to the address query request.
Optionally, the address query request further includes: a first address, wherein the first address is address information generated according to the location of the first terminal, or the first address is address information generated according to the historical location of the first terminal;
The acquisition module 32 includes:
the first identification sub-module is used for identifying a second corresponding relation in a first corresponding relation between a second image and a second address which are stored in advance, wherein the second address in the second corresponding relation is in a first geographic range corresponding to the first address;
the first obtaining sub-module is used for obtaining a second target image matched with the first image and a second target address matched with the second target image in the second corresponding relation.
Optionally, the obtaining module 32 further includes:
the expansion sub-module is used for expanding the first geographic range according to preset conditions to generate a second geographic range if a second target image matched with the first image is not acquired in the second corresponding relation;
a second identifying sub-module, configured to identify a third corresponding relationship in the first corresponding relationship, where a second address in the third corresponding relationship is within the second geographic range;
and the second acquisition sub-module is used for acquiring a second target image matched with the first image and a second target address matched with the second target image in the third corresponding relation.
Optionally, the obtaining module 32 further includes:
a third recognition sub-module for recognizing a first object type of the first image;
a fourth recognition sub-module, configured to recognize a fourth correspondence in a first correspondence between a second image and a second address, where a second object type of the second image in the fourth correspondence is matched with the first object type;
and the third acquisition sub-module is used for acquiring a second target image matched with the first image and a second target address matched with the second target image in the fourth corresponding relation.
Optionally, the address query request further includes: a first address, when the first address is address information generated according to a history location of the first terminal, the server further includes:
the second receiving module is used for receiving preset information which is sent by the first terminal and used for indicating to confirm the second target image;
and the sending module is used for sending the first user contact information corresponding to the first terminal to a second target terminal and/or sending the second user contact information corresponding to the second target terminal to the first terminal, wherein the second target terminal is a second terminal for uploading the second target image and the second target address which are mutually related.
The server provided by the embodiment of the present invention can implement each process implemented by the server in the above method embodiment, and in order to avoid repetition, details are not repeated here.
Through the above modules, the server receives the address query request of the first terminal carrying the first image of the first object to be addressed, and has received and stored in advance the first correspondence between mutually associated second images and second addresses uploaded by at least one second terminal. It can therefore acquire, within the first correspondence, the second target image matching the first image and the second target address matching that image, so that the address of the first object (i.e. the second target address) can be found using the first image, and the second target image and second target address are sent to the first terminal. This improves the addressing efficiency of the first object and reduces the time spent on addressing. Moreover, since the accuracy of image matching is high, addressing accuracy is improved; and in the addressing process the first terminal only needs to upload an image containing the first object to be addressed, which greatly reduces the difficulty of addressing.
Fig. 4 is a schematic diagram of the hardware structure of a terminal device implementing the terminal-side functions of the above embodiments of the present invention.
The terminal device 400 includes, but is not limited to: radio frequency unit 401, network module 402, audio output unit 403, input unit 404, sensor 405, display unit 406, user input unit 407, interface unit 408, memory 409, processor 410, and power source 411. It will be appreciated by those skilled in the art that the terminal device structure shown in fig. 4 does not constitute a limitation of the terminal device, and the terminal device may comprise more or fewer components than shown, or may combine certain components, or may have a different arrangement of components. In the embodiment of the invention, the terminal device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted terminal, a wearable device, a pedometer and the like.
The radio frequency unit 401 is configured to send, to a server, an address query request for a first object, where the address query request includes a first image of the first object; and/or, the radio frequency unit 401 is further configured to upload mutually associated object images and object addresses to the server;
the server is used for acquiring a second target image matched with the first image and a second target address matched with the second target image according to the address inquiry request and a first corresponding relation between a pre-stored second image and the second address; the first corresponding relation is generated according to the mutually-related object image and the object address uploaded by at least one second terminal device.
The input unit 404 is configured to receive the second target image and the second target address.
In the embodiment of the invention, by sending an address query request carrying the first image of the first object to be addressed, the terminal device can receive the second target image matching the first image and the second target address matching that image, acquired by the server within the first correspondence, where the first correspondence is generated from the mutually associated object images and object addresses uploaded by at least one second terminal device. The terminal device of the embodiment of the invention can thus find the address of the first object (i.e. the second target address) using the first image, which improves the addressing efficiency of the first object and reduces the time spent on addressing. Moreover, since the accuracy of image matching is high, addressing accuracy is improved; and in the addressing process the terminal device only needs to upload an image containing the first object to be addressed, which greatly reduces the difficulty of addressing.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 401 may be used for receiving and transmitting signals during the process of receiving and transmitting information or communication, specifically, receiving downlink data from a base station and then processing the received downlink data by the processor 410; and, the uplink data is transmitted to the base station. Typically, the radio frequency unit 401 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 401 may also communicate with networks and other devices through a wireless communication system.
The terminal device provides wireless broadband internet access to the user through the network module 402, such as helping the user to send and receive e-mail, browse web pages, access streaming media, etc.
The audio output unit 403 may convert audio data received by the radio frequency unit 401 or the network module 402 or stored in the memory 409 into an audio signal and output as sound. Also, the audio output unit 403 may also provide audio output (e.g., a call signal reception sound, a message reception sound, etc.) related to a specific function performed by the terminal device 400. The audio output unit 403 includes a speaker, a buzzer, a receiver, and the like.
The input unit 404 is used to receive an audio or video signal. The input unit 404 may include a graphics processor (Graphics Processing Unit, GPU) 4041 and a microphone 4042, the graphics processor 4041 processing image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 406. The image frames processed by the graphics processor 4041 may be stored in memory 409 (or other storage medium) or transmitted via the radio frequency unit 401 or the network module 402. The microphone 4042 may receive sound and process it into audio data. In the case of a telephone call mode, the processed audio data may be converted into a format that can be transmitted to the mobile communication base station via the radio frequency unit 401, and output.
The terminal device 400 further comprises at least one sensor 405, such as a light sensor, a motion sensor and other sensors. Specifically, the light sensor includes an ambient light sensor and a proximity sensor, wherein the ambient light sensor can adjust the brightness of the display panel 4061 according to the brightness of ambient light, and the proximity sensor can turn off the display panel 4061 and/or the backlight when the terminal device 400 moves to the ear. As one of the motion sensors, the accelerometer sensor can detect the acceleration in all directions (generally three axes), and can detect the gravity and direction when the accelerometer sensor is stationary, and can be used for recognizing the gesture (such as horizontal and vertical screen switching, related games, magnetometer gesture calibration), vibration recognition related functions (such as pedometer and knocking) and the like of the terminal equipment; the sensor 405 may further include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which are not described herein.
The display unit 406 is used to display information input by a user or information provided to the user. The display unit 406 may include a display panel 4061, and the display panel 4061 may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 407 is operable to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the terminal device. Specifically, the user input unit 407 includes a touch panel 4071 and other input devices 4072. The touch panel 4071, also referred to as a touch screen, may collect touch operations thereon or thereabout by a user (e.g., operations of the user on the touch panel 4071 or thereabout using any suitable object or accessory such as a finger, stylus, etc.). The touch panel 4071 may include two parts, a touch detection device and a touch controller. The touch detection device detects the touch azimuth of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts it into touch point coordinates, and sends the touch point coordinates to the processor 410, and receives and executes commands sent from the processor 410. In addition, the touch panel 4071 may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. The user input unit 407 may include other input devices 4072 in addition to the touch panel 4071. In particular, other input devices 4072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 4071 may be overlaid on the display panel 4061, and when the touch panel 4071 detects a touch operation thereon or thereabout, the touch operation is transferred to the processor 410 to determine the type of touch event, and then the processor 410 provides a corresponding visual output on the display panel 4061 according to the type of touch event. Although in fig. 4, the touch panel 4071 and the display panel 4061 are two independent components to implement the input and output functions of the terminal device, in some embodiments, the touch panel 4071 may be integrated with the display panel 4061 to implement the input and output functions of the terminal device, which is not limited herein.
The interface unit 408 is an interface to which an external device is connected to the terminal apparatus 400. For example, the external devices may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 408 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the terminal apparatus 400 or may be used to transmit data between the terminal apparatus 400 and an external device.
Memory 409 may be used to store software programs as well as various data. The memory 409 may mainly include a storage program area that may store an operating system, application programs required for at least one function (such as a sound playing function, an image playing function, etc.), and a storage data area; the storage data area may store data (such as audio data, phonebook, etc.) created according to the use of the handset, etc. In addition, memory 409 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device.
The processor 410 is a control center of the terminal device, connects various parts of the entire terminal device using various interfaces and lines, and performs various functions of the terminal device and processes data by running or executing software programs and/or modules stored in the memory 409 and calling data stored in the memory 409, thereby performing overall monitoring of the terminal device. Processor 410 may include one or more processing units; preferably, the processor 410 may integrate an application processor that primarily handles operating systems, user interfaces, applications, etc., with a modem processor that primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 410.
The terminal device 400 may further include a power source 411 (e.g., a battery) for supplying power to the respective components, and preferably, the power source 411 may be logically connected to the processor 410 through a power management system, so as to perform functions of managing charging, discharging, power consumption management, etc. through the power management system.
In addition, the terminal device 400 includes some functional modules, which are not shown, and will not be described herein.
Preferably, the embodiment of the present invention further provides a terminal device, including a processor 410, a memory 409, and a computer program stored in the memory 409 and capable of running on the processor 410, where the computer program when executed by the processor 410 implements each process of the above addressing method embodiment, and the same technical effects can be achieved, and for avoiding repetition, a detailed description is omitted herein.
The embodiment of the invention also provides a computer readable storage medium, on which a computer program is stored, which when executed by a processor, implements the processes of the above-mentioned addressing method embodiment, and can achieve the same technical effects, and in order to avoid repetition, the description is omitted here. Wherein the computer readable storage medium is selected from Read-Only Memory (ROM), random access Memory (Random Access Memory, RAM), magnetic disk or optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, or by hardware alone, although in many cases the former is preferred. Based on this understanding, the technical solution of the present invention, or the part of it that contributes over the prior art, may be embodied in the form of a software product stored in a storage medium (e.g., ROM/RAM, magnetic disk, or optical disk) and including instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, etc.) to perform the methods of the embodiments of the present invention.
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the present invention is not limited to the above embodiments, which are illustrative rather than restrictive. Many variations may be made by those of ordinary skill in the art, in light of the present invention, without departing from the spirit of the invention and the scope of the claims, and all such variations fall within the protection of the present invention.

Claims (8)

1. An addressing method applied to a server, the method comprising:
receiving an address query request for a first object sent by a first terminal, wherein the address query request comprises a first image of the first object; the address query request further includes: a first address, wherein the first address is address information generated according to the location of the first terminal, or the first address is address information generated according to the historical location of the first terminal;
acquiring a second target image matched with the first image and a second target address matched with the second target image according to the address inquiry request and a first corresponding relation between a pre-stored second image and the second address, wherein the first corresponding relation is generated according to the mutually-associated object image and the object address uploaded by at least one second terminal;
The obtaining the second target image matched with the first image and the second target address matched with the second target image according to the address query request and the first correspondence between the prestored second image and the second address comprises the following steps:
identifying a second corresponding relation in a first corresponding relation between a second image and a second address which are stored in advance, wherein the second address in the second corresponding relation is in a first geographic range corresponding to the first address; the second address is a real address of the first object; the second corresponding relation is obtained by screening from the first corresponding relation;
acquiring a second target image matched with the first image and a second target address matched with the second target image in the second corresponding relation;
and responding to the address inquiry request, and sending the second target image and the second target address to the first terminal.
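The query flow of claim 1 can be sketched as follows. All names here (`Correspondence`, `handle_address_query`, the in-memory store) are illustrative assumptions, not part of the claimed implementation; the geographic range is simplified to a Euclidean radius around the first address, and image matching is reduced to exact feature-vector equality, where a real system would use an image-similarity measure.

```python
from dataclasses import dataclass

@dataclass
class Correspondence:
    image_features: tuple  # second image, represented as a feature vector
    address: tuple         # second address as (lat, lon)

def within_range(addr, center, radius):
    # Illustrative: treat the first geographic range as a simple
    # Euclidean radius around the first address.
    return ((addr[0] - center[0]) ** 2 + (addr[1] - center[1]) ** 2) ** 0.5 <= radius

def handle_address_query(first_image, first_address, store, radius=1.0):
    """Sketch of the claimed method: screen the stored first
    correspondence for entries whose second address lies in the first
    geographic range (yielding the second correspondence), then match
    the first image within it."""
    candidates = [c for c in store if within_range(c.address, first_address, radius)]
    for c in candidates:
        if c.image_features == first_image:
            # Second target image and second target address, sent back
            # to the first terminal in the response.
            return c.image_features, c.address
    return None
```

The screening step matters: entries whose second address falls outside the first geographic range are never compared against the first image, even if their features would match.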
2. The method according to claim 1, wherein the obtaining a second target image matched with the first image and a second target address matched with the second target image according to the address query request and a first correspondence between a pre-stored second image and a second address, further comprises:
If a second target image matched with the first image is not acquired in the second corresponding relation, expanding the first geographic range according to preset conditions to generate a second geographic range;
identifying a third corresponding relation in the first corresponding relation, wherein a second address in the third corresponding relation is in the second geographic range;
and in the third corresponding relation, a second target image matched with the first image and a second target address matched with the second target image are acquired.
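The fallback of claim 2 (expanding the first geographic range into a second geographic range when no match is found) can be sketched as below. The expansion factor and radius cap stand in for the claim's "preset conditions" and are assumed values; the distance and matching logic are again simplified placeholders.

```python
def dist(a, b):
    # Euclidean distance as a stand-in for real geographic distance.
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def match_in_range(first_image, first_address, store, radius):
    # store holds (image_features, address) pairs; exact equality
    # stands in for image similarity.
    for feats, addr in store:
        if dist(addr, first_address) <= radius and feats == first_image:
            return feats, addr
    return None

def query_with_expansion(first_image, first_address, store,
                         radius=1.0, factor=2.0, max_radius=8.0):
    """If no second target image is found in the current geographic
    range, expand the range by a preset factor and retry, up to a cap
    (factor and cap are assumed, not claimed)."""
    r = radius
    while r <= max_radius:
        hit = match_in_range(first_image, first_address, store, r)
        if hit:
            return hit, r  # match plus the range that finally produced it
        r *= factor
    return None, r
```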
3. The method according to claim 1, wherein the obtaining a second target image matched with the first image and a second target address matched with the second target image according to the address query request and a first correspondence between a pre-stored second image and a second address, further comprises:
identifying a first object type of the first image;
identifying a fourth corresponding relation in a first corresponding relation between a second image and a second address, wherein a second object type of the second image in the fourth corresponding relation is matched with the first object type;
And in the fourth corresponding relation, a second target image matched with the first image and a second target address matched with the second target image are acquired.
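Claim 3's type-based screening can be sketched as follows. How the first object type is identified (e.g., by an image classifier) is outside this sketch; the store layout and exact-match comparison are assumptions for illustration.

```python
def query_by_type(first_image, first_type, store):
    """Sketch of claim 3: keep only correspondences whose second image
    has an object type matching the first object type (the fourth
    correspondence), then match the first image within it."""
    fourth = [(feats, obj_type, addr) for feats, obj_type, addr in store
              if obj_type == first_type]
    for feats, obj_type, addr in fourth:
        if feats == first_image:
            return feats, addr  # second target image and address
    return None
```

Filtering by object type before image matching shrinks the candidate set and avoids cross-type false matches (e.g., a shop sign and a statue with coincidentally similar features).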
4. The method of claim 1, wherein the address query request further comprises: a first address, when the first address is address information generated according to a history location of the first terminal, after the second target image and the second target address are transmitted to the first terminal in response to the address query request, the method further includes:
receiving preset information which is sent by the first terminal and indicates to confirm the second target image;
and sending the first user contact information corresponding to the first terminal to a second target terminal, and/or sending the second user contact information corresponding to the second target terminal to the first terminal, wherein the second target terminal is a second terminal for uploading the second target image and the second target address which are mutually related.
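The contact exchange of claim 4, triggered once the first terminal confirms the second target image, can be sketched as below. The `send` callback and the contacts mapping are assumed stand-ins for the server's real messaging and user store.

```python
def on_confirmation(first_terminal, second_target_terminal, contacts, send):
    """Sketch of claim 4: after the first terminal confirms the second
    target image, forward the first user's contact information to the
    second target terminal, and vice versa. `send(terminal, payload)`
    is an assumed transport callback."""
    send(second_target_terminal, contacts[first_terminal])
    send(first_terminal, contacts[second_target_terminal])
```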
5. A server, the server comprising:
the first receiving module is used for receiving an address query request of a first object sent by a first terminal, wherein the address query request comprises a first image of the first object; the address query request further includes: a first address, wherein the first address is address information generated according to the location of the first terminal, or the first address is address information generated according to the historical location of the first terminal;
The acquisition module is used for acquiring a second target image matched with the first image and a second target address matched with the second target image according to the address inquiry request and a first corresponding relation between a pre-stored second image and the second address, wherein the first corresponding relation is generated according to the mutually-related object image and the object address uploaded by at least one second terminal;
the acquisition module comprises:
the first identification sub-module is used for identifying a second corresponding relation in a first corresponding relation between a second image and a second address which are stored in advance, wherein the second address in the second corresponding relation is in a first geographic range corresponding to the first address; the second address is a real address of the first object; the second corresponding relation is obtained by screening from the first corresponding relation;
the first obtaining sub-module is used for obtaining a second target image matched with the first image and a second target address matched with the second target image in the second corresponding relation;
and the response module is used for responding to the address inquiry request and sending the second target image and the second target address to the first terminal.
6. The server of claim 5, wherein the acquisition module further comprises:
the expansion sub-module is used for expanding the first geographic range according to preset conditions to generate a second geographic range if a second target image matched with the first image is not acquired in the second corresponding relation;
a second identifying sub-module, configured to identify a third corresponding relationship in the first corresponding relationship, where a second address in the third corresponding relationship is within the second geographic range;
and the second acquisition sub-module is used for acquiring a second target image matched with the first image and a second target address matched with the second target image in the third corresponding relation.
7. The server of claim 5, wherein the acquisition module further comprises:
a third recognition sub-module for recognizing a first object type of the first image;
a fourth recognition sub-module, configured to recognize a fourth correspondence in a first correspondence between a second image and a second address, where a second object type of the second image in the fourth correspondence is matched with the first object type;
And the third acquisition sub-module is used for acquiring a second target image matched with the first image and a second target address matched with the second target image in the fourth corresponding relation.
8. The server of claim 5, wherein the address query request further comprises: a first address, when the first address is address information generated according to a history location of the first terminal, the server further includes:
the second receiving module is used for receiving preset information which is sent by the first terminal and used for indicating to confirm the second target image;
and the sending module is used for sending the first user contact information corresponding to the first terminal to a second target terminal and/or sending the second user contact information corresponding to the second target terminal to the first terminal, wherein the second target terminal is a second terminal for uploading the second target image and the second target address which are mutually related.
CN201911268859.XA 2019-12-11 2019-12-11 Addressing methods and servers Active CN111008297B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911268859.XA CN111008297B (en) 2019-12-11 2019-12-11 Addressing methods and servers

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911268859.XA CN111008297B (en) 2019-12-11 2019-12-11 Addressing methods and servers

Publications (2)

Publication Number Publication Date
CN111008297A CN111008297A (en) 2020-04-14
CN111008297B true CN111008297B (en) 2023-12-15

Family

ID=70114540

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911268859.XA Active CN111008297B (en) 2019-12-11 2019-12-11 Addressing methods and servers

Country Status (1)

Country Link
CN (1) CN111008297B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112414427B (en) * 2020-10-27 2023-04-28 维沃移动通信有限公司 Navigation information display method and electronic equipment

Citations (5)

Publication number Priority date Publication date Assignee Title
CN107084736A (en) * 2017-04-27 2017-08-22 维沃移动通信有限公司 A kind of air navigation aid and mobile terminal
CN107315755A (en) * 2016-04-27 2017-11-03 杭州海康威视数字技术股份有限公司 The orbit generation method and device of query object
CN107533746A (en) * 2015-02-28 2018-01-02 华为技术有限公司 Information protection method, server and terminal
CN108023924A (en) * 2016-10-31 2018-05-11 财付通支付科技有限公司 A kind of information processing method, terminal and server
CN108256100A (en) * 2018-01-31 2018-07-06 维沃移动通信有限公司 A kind of information search method, mobile terminal and Cloud Server

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
US9195898B2 (en) * 2009-04-14 2015-11-24 Qualcomm Incorporated Systems and methods for image recognition using mobile devices
EP3066591B1 (en) * 2014-02-10 2019-07-10 GEENEE GmbH Systems and methods for image-feature-based recognition


Also Published As

Publication number Publication date
CN111008297A (en) 2020-04-14

Similar Documents

Publication Publication Date Title
CN108711355B (en) Track map strategy making and using method, device and readable storage medium
CN111343081A (en) Information display method and electronic equipment
CN108241752B (en) Photo display method, mobile terminal and computer readable storage medium
US11373410B2 (en) Method, apparatus, and storage medium for obtaining object information
CN107967339B (en) Image processing method, image processing device, computer-readable storage medium and computer equipment
CN110519699A (en) A kind of air navigation aid and electronic equipment
CN108647957A (en) A kind of method of payment, device and mobile terminal
CN109508398B (en) Photo classification method and terminal equipment thereof
CN108882142A (en) Reminder message sending method and mobile terminal
CN107547741B (en) Information processing method and device and computer readable storage medium
CN108319709A (en) Position information processing method, device, electronic equipment and storage medium
CN109684277B (en) Image display method and terminal
CN108541015A (en) A kind of signal strength reminding method and mobile terminal
CN111597455A (en) Social relationship establishing method and device, electronic equipment and storage medium
CN107908770A (en) A kind of photo searching method and mobile terminal
CN109219004B (en) Short message unsubscribing method and device, mobile terminal and readable storage medium
CN108195392A (en) A kind of more people's layout of roads methods and terminal
CN108460817A (en) A kind of pattern splicing method and mobile terminal
CN108256100A (en) A kind of information search method, mobile terminal and Cloud Server
CN110891122A (en) A wallpaper push method and electronic device
CN111064888A (en) Prompting method and electronic equipment
CN111008297B (en) Addressing methods and servers
CN108063869B (en) Safety early warning method and mobile terminal
CN112798005B (en) Road data processing method and related device
CN110535754B (en) Image sharing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant