
CN111158556A - Display control method and electronic equipment - Google Patents


Info

Publication number
CN111158556A
Authority
CN
China
Prior art keywords
target object
electronic device
preview screen
target
input
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911424042.7A
Other languages
Chinese (zh)
Other versions
CN111158556B (en)
Inventor
韩晨阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN201911424042.7A
Publication of CN111158556A
Application granted
Publication of CN111158556B
Legal status: Active
Anticipated expiration


Classifications

    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 - Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29 - Geographical information databases

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Remote Sensing (AREA)
  • Data Mining & Analysis (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An embodiment of the present invention provides a display control method and an electronic device, applied to the field of communication technology, to solve the problem that traditional position-locating methods cannot accurately locate a destination. The method includes: the electronic device obtains a preview screen of a camera; in a case that the preview screen includes a target object and the target object is displayed at a target position in the preview screen, the electronic device displays, in the preview screen, a first identifier used to indicate the position of the target object in the preview screen.


The display control method and electronic device provided by the embodiments of the present invention are applied to the field of communication technology, to solve the problem that a destination cannot be accurately located by traditional position-locating methods. The method includes: the electronic device obtains a preview screen of a camera; in a case that the preview screen includes a target object and the target object is displayed at a target position in the preview screen, the electronic device displays a first identifier in the preview screen, where the first identifier is used to indicate the position of the target object in the preview screen.


Description

Display control method and electronic equipment
Technical Field
The embodiment of the invention relates to the technical field of communication, in particular to a display control method and electronic equipment.
Background
In daily life, navigation has gradually become an indispensable travel tool. When a user uses an electronic device to search for a destination, the electronic device marks the searched destination on a map in the form of a mark point. For example, when a user wants to find a convenience store, the user may search for "convenience store" in a map application, so that the electronic device marks the convenience stores near the user on the map, and the user may go to the desired convenience store based on the positions of the mark points.
In the conventional technology, a map application in an electronic device mainly uses a Global Positioning System (GPS) for position location.
However, at present, the GPS can only locate the approximate position of the destination, so that a certain deviation exists in the mark point used for marking the destination in the map, and the user still cannot find the accurate destination after reaching the vicinity of the mark point.
Disclosure of Invention
The embodiment of the invention provides a display control method and electronic equipment, and aims to solve the problem that a destination cannot be accurately positioned in a traditional position positioning mode.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present invention provides a display control method, where the method includes: the electronic equipment acquires a preview picture of the camera; when a target object is included in the preview screen and the target object is displayed at a target position in the preview screen, the electronic equipment displays a first mark in the preview screen, wherein the first mark is used for indicating the position of the target object in the preview screen.
In a second aspect, an embodiment of the present invention further provides an electronic device, where the electronic device includes: the acquisition module is used for acquiring a preview picture of the camera; and a display module, configured to, when the preview screen acquired by the acquisition module includes a target object and the target object is displayed at a target position in the preview screen, display a first identifier in the preview screen, where the first identifier is used to indicate a position of the target object in the preview screen.
In a third aspect, an embodiment of the present invention provides an electronic device, including a processor, a memory, and a computer program stored on the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the display control method according to the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the steps of the display control method according to the first aspect.
In the embodiment of the invention, the electronic equipment acquires a preview picture of the camera, and when the preview picture comprises a target object and the target object is displayed at a target position in the preview picture, the electronic equipment displays a first identifier in the preview picture, wherein the first identifier is used for indicating the position of the target object in the preview picture, so that the position of the target object in the preview picture is marked more intuitively and accurately in a first identifier marking mode, and a user can quickly confirm the position of the target object according to the first identifier.
Drawings
FIG. 1 is a block diagram of a possible operating system according to an embodiment of the present invention;
FIG. 2 is a schematic flowchart of a display control method according to an embodiment of the present invention;
FIG. 3 is a first schematic interface diagram of a display control method according to an embodiment of the present invention;
FIG. 4 is a second schematic interface diagram of a display control method according to an embodiment of the present invention;
FIG. 5 is a third schematic interface diagram of a display control method according to an embodiment of the present invention;
FIG. 6 is a fourth schematic interface diagram of a display control method according to an embodiment of the present invention;
FIG. 7 is a fifth schematic interface diagram of a display control method according to an embodiment of the present invention;
FIG. 8 is a sixth schematic interface diagram of a display control method according to an embodiment of the present invention;
FIG. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that "/" in this context means "or"; for example, A/B may mean A or B. "And/or" herein merely describes an association between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, both A and B exist, or B exists alone.
It should be noted that "a plurality" herein means two or more than two.
It should be noted that, in the embodiments of the present invention, words such as "exemplary" or "for example" are used to indicate examples, illustrations, or explanations. Any embodiment or design described as "exemplary" or "for example" in the embodiments of the present invention is not to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the words "exemplary" or "for example" is intended to present related concepts in a concrete fashion.
An execution subject of the display control method provided in the embodiment of the present invention may be the electronic device, or may be a functional module and/or a functional entity in the electronic device capable of implementing the display control method, which may be determined according to actual use requirements; the embodiment of the present invention is not limited thereto.
For example, taking an electronic device as a terminal device as an example, the terminal device in the embodiment of the present invention may be a mobile terminal device, and may also be a non-mobile terminal device. The mobile terminal device may be a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), etc.; the non-mobile terminal device may be a Personal Computer (PC), a Television (TV), a teller machine, a self-service machine, or the like; the embodiments of the present invention are not particularly limited.
The electronic device in the embodiment of the present invention may be an electronic device having an operating system. The operating system may be an Android (Android) operating system, an ios operating system, or other possible operating systems, and embodiments of the present invention are not limited in particular.
The following describes a software environment applied to a display control method according to an embodiment of the present invention, taking an operating system as an example.
Fig. 1 is a schematic diagram of a possible operating system according to an embodiment of the present invention. In fig. 1, the architecture of the operating system includes 4 layers, respectively: an application layer, an application framework layer, a system runtime layer, and a kernel layer (specifically, a Linux kernel layer).
The application layer comprises various application programs (including system application programs and third-party application programs) in an operating system.
The application framework layer is a framework of the application, and a developer can develop some applications based on the application framework layer under the condition of complying with the development principle of the framework of the application.
The system runtime layer includes a library (also referred to as a system library) and an operating system runtime environment. The library mainly provides various resources required by the operating system. The operating system runtime environment is used to provide a software environment for the operating system.
The kernel layer is the operating system layer of the operating system and belongs to the lowest layer of the operating system software layer. The kernel layer provides kernel system services and hardware-related drivers for the operating system based on the Linux kernel.
Taking an operating system as an example, in the embodiment of the present invention, a developer may develop a software program for implementing the display control method provided in the embodiment of the present invention based on the system architecture of the operating system shown in fig. 1, so that the display control method may run based on the operating system shown in fig. 1. That is, the processor or the electronic device may implement a display control method provided by the embodiment of the present invention by running the software program in the operating system.
The following describes a display control method according to an embodiment of the present invention with reference to fig. 2, a schematic flowchart of the display control method, which includes step 201 and step 202:
step 201: the electronic equipment acquires a preview picture of the camera.
Illustratively, when the electronic device is located within a predetermined range of the target object, the electronic device starts the camera to acquire a preview screen of the camera.
For example, the camera may be a camera provided in the electronic device itself, or may be a camera externally connected to the electronic device.
Step 202: and if the target object is included in the preview picture and the target object is displayed at the target position in the preview picture, the electronic equipment displays a first mark in the preview picture.
Illustratively, the first identifier is used to indicate the position of the target object in the preview screen.
Illustratively, the preview screen is the real-time screen content acquired by the camera.
Illustratively, the target object includes, but is not limited to, at least one of: buildings, people, and signs. The target object information includes at least one of: name, contour, and position information. The embodiments of the present invention are not limited thereto.
For example, the target object may be a destination, or a person or an object at the destination, and the destination may be a clear point or a certain approximate range, which is not limited in the embodiment of the present invention. For example, the target object may be some infrastructure (hotel, supermarket, hospital, bank, etc.) near the location where the electronic device is located.
For example, the first identifier may be an AR image. In an example, the AR image may be a virtual image obtained by rendering the target object by the electronic device using AR technology.
For example, the types of the first identifiers corresponding to different types of target objects are different.
Example 1: if the target object is a building, the first identifier may be an AR box, and if the target object is a person, the first identifier may be an AR light pillar.
Example 2: taking the first marker as an AR light pillar as an example, light pillars of different colors may be rendered for objects of different classes. For example, if there is only one target object, only one color light pillar may be rendered; if a plurality of target objects exist, light columns of different colors can be rendered for different types of objects according to the types of the target objects, for example, a hospital can be rendered as a red light column, a bank can be rendered as a yellow light column, and a mall can be rendered as a green light column.
For example, if the target object is a plurality of objects, the first identifier includes a plurality of identifiers, and one identifier corresponds to one object. That is, in a scene in which a plurality of target objects are included in the preview screen of the camera, the electronic device may identify each target object separately. For example, each target object is marked separately with a different colored marker.
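As a concrete reading of the per-object marking above, the mapping from an object's category to its marker color can be sketched as a lookup table. The category colors follow Example 2; the function names and the default color are assumptions for illustration, not part of the embodiment.

```python
# Hypothetical category-to-color table for AR markers (light pillars),
# following Example 2 above: hospitals red, banks yellow, malls green.
MARKER_COLORS = {
    "hospital": "red",
    "bank": "yellow",
    "mall": "green",
}
DEFAULT_COLOR = "blue"  # assumed color for a single or unclassified object

def marker_color(category: str) -> str:
    """Return the rendering color for one target object's marker."""
    return MARKER_COLORS.get(category, DEFAULT_COLOR)

def assign_markers(objects):
    """Give each detected target object its own marker.

    `objects` is a list of (name, category) pairs; the result maps each
    object name to the color its identifier should be rendered in.
    """
    return {name: marker_color(category) for name, category in objects}
```

For instance, `assign_markers([("Bank A", "bank"), ("Mall B", "mall")])` yields a distinct marker color per object, so each target object in the preview screen is identified separately.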
For example, in a case where the preview screen includes the target position, the electronic device may determine whether the preview screen includes the target object by using SLAM (Simultaneous Localization and Mapping) technology, and, in a case where the preview screen includes the target object, acquire the position of the target object in the preview screen. A first mark indicating the position of the target object in the preview screen is then added at that position. SLAM technology determines the current position, the position of the target object, and path planning by using the picture information acquired by the camera: the electronic device acquires, in real time, the preview picture captured by its camera and, in a case where the preview picture contains the target object, determines the position of the target object in the preview picture according to the picture information of the preview picture.
For example, in the process of determining whether the preview screen of the camera includes the target object, the electronic device may continuously obtain screen information of the preview screen, upload the screen information to the database to compare with object information of the target object, and display the first identifier in the preview screen to indicate the position of the target object when the preview screen includes the target object.
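The comparison step above can be sketched as a lookup over recognition results. The recognition pipeline itself is not specified in the embodiment, so the per-frame object list and its field names below are hypothetical:

```python
def find_target_in_frame(frame_objects, target_name):
    """Return the (x, y) preview-screen position of the target object,
    or None when the current frame does not contain it.

    `frame_objects` stands in for the picture information extracted from
    one preview frame after comparison with the database: a list of
    dicts with hypothetical 'name' and 'position' keys.
    """
    for obj in frame_objects:
        if obj["name"] == target_name:
            return obj["position"]  # where the first identifier is drawn
    return None
```

Running this on each new frame mirrors the continuous acquisition described above: the first identifier is displayed only when a frame actually contains the target object.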
In an example, taking the target object as a face as an example, the electronic device performs face recognition by acquiring face information in the preview picture in real time, so as to recognize whether the preview picture contains the target face.
Example 1, as shown in fig. 3, the target object is taken as a "convenience store" as an example. Assuming that 2 "convenience stores" are included in a preview screen (31 in fig. 3) of the electronic device, and the 2 "convenience stores" are "convenience store 1" and "convenience store 2", respectively, the electronic device marks the 2 "convenience stores" with boxes, the mark of "convenience store 1" is shown as 32 in fig. 3, and the mark of "convenience store 2" is shown as 33 in fig. 3.
Example 2, as shown in fig. 3, the target object is "contact 1" as an example. Assuming that the preview screen 31 contains the "contact 1", the electronic device may mark "contact 1" with a circle, as shown at 34 in fig. 3.
For example, the electronic device may load a virtual compass in the preview screen to indicate the direction of the user. Such as a virtual compass 35 in the preview screen 31 in fig. 3.
For example, the electronic device may display position prompt information of each target object in the preview screen, the position prompt information being used to prompt the user of the position of the target object.
According to the display control method provided by the embodiment of the invention, the electronic equipment acquires the preview picture of the camera, and under the condition that the preview picture comprises the target object and the target object is displayed at the target position in the preview picture, the electronic equipment displays the first identifier in the preview picture, wherein the first identifier is used for indicating the position of the target object in the preview picture, so that the position of the target object in the preview picture is marked more intuitively and accurately by adopting a first identifier marking mode, and a user can quickly confirm the position of the target object according to the first identifier.
Optionally, in an embodiment of the present invention, in a case that the preview screen includes a target object and the target object is displayed at a target position in the preview screen, after step 201, the method further includes step A1:
step A1: and the electronic equipment displays a second identifier in the preview picture.
Wherein the second identifier is used to indicate a walking route from the electronic device to the target object, for example, a walking route map formed by continuous arrow images.
For example, the second identifier is further used to indicate a route distance and a walking direction of a walking route from the electronic device to the target object.
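The embodiment does not state how the route distance and walking direction are computed; as a sketch, both can be derived from the device's and the target's geographic coordinates using the standard haversine-distance and initial-bearing formulas:

```python
import math

def distance_and_bearing(lat1, lon1, lat2, lon2):
    """Great-circle distance (meters) and initial compass bearing (degrees,
    0 = north, 90 = east) from the device at (lat1, lon1) to the target
    object at (lat2, lon2)."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    # Haversine formula for the route distance.
    a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    dist = 2 * r * math.asin(math.sqrt(a))
    # Initial bearing for the walking direction.
    y = math.sin(dlon) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    bearing = (math.degrees(math.atan2(y, x)) + 360.0) % 360.0
    return dist, bearing
```

As a sanity check, one degree of latitude due north is roughly 111 km at bearing 0.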
It should be noted that the walking route indicated by the second identifier may be continuously adjusted according to the current location of the electronic device.
For example, the second identifier may be an AR image, through which a walking route from the electronic device to the target object is indicated for the user. That is, the walking route may be a virtual route that the electronic device renders from the electronic device to the target object using AR technology.
For example, the electronic device may map and locate the current position of the electronic device and the position of the target object by using SLAM technology, and determine a walking route from the electronic device to the target object.
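No planning algorithm is named in the embodiment; as an illustrative stand-in, a breadth-first search over a small occupancy grid (one common representation of a SLAM map) produces a shortest walking route, and re-running it with the device's updated position adjusts the route as described above:

```python
from collections import deque

def plan_route(grid, start, goal):
    """Shortest walking route on an occupancy grid (0 = free, 1 = blocked),
    returned as a list of (row, col) cells from start to goal, or None when
    the goal is unreachable. Re-running this with an updated `start`
    adjusts the route as the device moves."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}  # also serves as the visited set
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:  # walk the predecessor chain back
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None
```

The returned cell sequence corresponds to the continuous arrow images of the second identifier; only the grid abstraction is assumed here.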
For example, referring to fig. 3, when the user wants to go to "convenience store 2" shown in 33 of fig. 3, as shown in fig. 4, a walking route (42 in fig. 4, i.e., the second identifier) from the electronic device to "convenience store 2" is displayed on a preview screen (41 in fig. 4) of the electronic device, so that the user can quickly find "convenience store 2" through the walking route 42.
In this way, in the case where the preview screen includes the target object, the electronic apparatus indicates the walking route from the current position of the electronic apparatus to the position of the target object with the second mark in the preview screen, thereby enabling the user to more intuitively find the position of the target object through the walking route in the preview screen.
Optionally, in an embodiment of the present invention, in a case that the preview screen includes a target object and the target object is displayed at a target position in the preview screen, after step 201, the method further includes step B1:
step B1: and when the distance between the current first position of the electronic equipment and the second position of the target object is smaller than a preset threshold value, the electronic equipment displays a third mark on the preview picture.
The third mark is used for marking the object outline of the target object, so that the target object is more striking in a preview picture, and a user can conveniently and intuitively and quickly find the position of the target object.
Illustratively, the third identifier may be an AR image. For example, the third identifier may be an AR contour image obtained by rendering the target object by the electronic device using an AR technology.
It should be noted that, the AR image may refer to the above description, and is not described herein again.
For example, when the distance between the current first position of the electronic device and the second position of the target object is smaller than a preset threshold, it indicates that the electronic device is already within the predetermined range of the target object, that is, the target object is closer to the electronic device. At this time, if the preview screen includes the target object, the electronic device may mark the object outline of the target object by the third identifier.
For example, taking the third identifier as an AR image, when a distance between the current first position of the electronic device and the second position of the target object is smaller than a preset threshold, the electronic device may obtain object contour information of the target object from the preview screen, and then render the object contour of the target object by using an AR technique based on the object contour information.
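The gating condition for the third identifier can be sketched as a simple distance check. The planar (x, y) coordinate frame in meters is an assumption; the embodiment does not fix a coordinate system:

```python
import math

def should_mark_contour(first_position, second_position, threshold_m):
    """Return True when the distance between the device's current (first)
    position and the target object's (second) position is below the preset
    threshold, i.e. when the AR contour (third identifier) should be
    rendered. Positions are (x, y) in meters in a local map frame."""
    dx = second_position[0] - first_position[0]
    dy = second_position[1] - first_position[1]
    return math.hypot(dx, dy) < threshold_m
```

With the 3-4-5 triangle as an example, a target 5 m away is marked under a 6 m threshold but not under a 5 m one, since the comparison is strictly "smaller than".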
Therefore, the electronic equipment determines whether the current position of the electronic equipment is located within the range of the target object by judging whether the distance between the current first position and the second position of the target object is smaller than a preset threshold value, and then the object outline of the target object at the position of the third identification mark is used in the preview picture, so that the display of the target object in the preview picture is easier to distinguish.
Optionally, in an embodiment of the present invention, when the preview screen includes the object search area, before step 201, the method further includes step C1 and step C2:
step C1: the electronic equipment receives a first input of a user in the object search area.
Step C2: in response to the first input, the electronic device obtains a target position of a target object input by the first input.
Illustratively, the first input is used for inputting object information related to the target object, such as the name of the target object or a picture. Alternatively, the first input is for inputting destination information, and the target object is an object at the destination.
For example, the object search area may be displayed in a floating manner on the preview screen.
For example, the object search area may also be moved on the preview screen as the user's finger moves.
For example, after acquiring the target position of the target object input by the first input, if the preview screen includes the target object, the electronic device displays a first identifier on the preview screen to mark the position of the target object.
Illustratively, the first input is for inputting object information of a target object in the object search area. The object information includes but is not limited to: name of the target object, location information of the location where the target object is located, and the like.
For example, the first input may be a click input performed by the user on the object search area, a slide input performed by the user on the object search area, or another feasible input performed by the user on the object search area, which may be determined according to actual usage requirements; embodiments of the present invention are not limited thereto. For example, the sliding input of the user in the object search area may trigger the electronic device to collect the user's voice, where the voice is a voice for the target object.
For example, the click input may be a single click input, a double click input, or any number of click inputs; the click input may be a long-press input or a short-press input. The sliding input may be a sliding input in any direction, for example, sliding upwards, sliding downwards, sliding leftwards or sliding rightwards, and the sliding trajectory of the sliding input may be a straight line or a curved line, and may be specifically set according to actual requirements.
For example, the object search area may be a search interface displayed in a floating manner in the preview screen. Text or voice information may be entered in the search area.
Illustratively, the object search area is used for triggering the electronic device to retrieve the information input in the object search area.
In one example, where the user inputs a target object name in the object search area, the electronic device retrieves, based on the target object name, object information of matching objects within a predetermined range of the electronic device (e.g., the location coordinates of each matching object and its appearance information; if a matching object is a building, the appearance information may be a picture of the building's exterior). Then, the electronic device determines, based on the object information of the matching objects, whether its current preview picture contains the matching objects, and if so, displays the first marks corresponding to the matching objects in the preview picture.
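The retrieval step can be sketched as a name filter combined with a range filter. The flat-list database, its field names, and the local (x, y) coordinates in meters are assumptions for illustration:

```python
import math

def search_nearby(database, query, device_position, radius_m):
    """Return the names of objects whose name contains `query` and that lie
    within `radius_m` of the device, i.e. the matching objects whose first
    identifiers should be displayed if they appear in the preview screen."""
    hits = []
    for entry in database:  # each entry: {'name': str, 'pos': (x, y) meters}
        dx = entry["pos"][0] - device_position[0]
        dy = entry["pos"][1] - device_position[1]
        if query in entry["name"] and math.hypot(dx, dy) <= radius_m:
            hits.append(entry["name"])
    return hits
```

A store 500 m away is excluded by the range filter even though its name matches, which reflects the "predetermined range" restriction above.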
For example, as shown in fig. 5, after the user inputs "convenience store" in the object search box (i.e., the above object search area, 52 in fig. 5) in the preview screen (51 in fig. 5) of the electronic device and clicks the "magnifying glass" icon to search, the preview screen of the electronic device marks the two "convenience stores", as shown at 32 and 33 in fig. 3.
In one example, after acquiring the target position of the target object, the electronic device may display position prompt information of each target object in the preview screen, where the position prompt information is used for prompting the user about the position of the target object.
For example, following fig. 5, as shown in fig. 6, the preview screen 61 displays 2 prompt links, namely a location link of "convenience store 1" (shown in fig. 6 as "convenience store 1, go here") and a location link of "convenience store 2" (shown in fig. 6 as "convenience store 2, go here"). By clicking one of these links, the user may jump to a navigation or map interface that indicates a route from the electronic device to the corresponding destination.
In this way, the electronic device detects the content related to the target object input by the user in the object search area, so as to acquire the position of the target object in the preview screen, so that the user can more quickly and intuitively and certainly recognize the target object to be found.
Optionally, in an embodiment of the present invention, before the step 201, the method further includes steps D1 to D3:
step D1: the electronic device displays a map on the preview screen.
The map comprises an object mark of the target object, and the object mark is used for indicating the position of the target object in the map. For example, the map may also display a current location mark of the electronic device, a walking route between the target object and the electronic device, and the like. Therefore, the user can conveniently and visually know the distance and the route from the current position of the electronic equipment to the target object according to the map.
Step D2: the electronic device receives a second input for the object marker.
Step D3: in response to the third input, the electronic device obtains a target position of the target object.
For example, the third input may be a touch input of the user on the object identifier, or other feasible inputs, which is not limited in the embodiment of the present invention.
For example, the electronic device may display a map in the preview screen.
For example, the map may be displayed in a floating manner on the preview screen. Further, the map may move on the preview screen following the movement of the user's finger.
For example, the user may adjust the size, display position, and transparency of the map according to usage requirements, for example by pressing or sliding the map; the embodiment of the present invention is not limited thereto.
For example, the size of the map may be a default size, or may be flexibly adjusted according to the operation of the user. It should be noted that the maximum size of the map is as large as the display screen of the electronic device.
In one example, the user touches the map with two fingers and performs a two-finger spread sliding input, which the electronic device may determine as an input to change the size of the map. Illustratively, the map can be zoomed in or zoomed out along the diagonal line of the map according to the gesture of the user: to zoom out, the map may be squeezed inward along its diagonal; to zoom in, the map may be stretched outward along its diagonal.
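One possible way to turn the two-finger spread or squeeze into a map scale factor is sketched below (the clamping range and the touch coordinates are illustrative assumptions, not part of the embodiment):

```python
import math

def pinch_scale(p1_start, p2_start, p1_end, p2_end, min_scale=0.5, max_scale=2.0):
    """Scale factor for a two-finger gesture: the ratio of final to initial
    finger separation, clamped to a sensible range."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    start = dist(p1_start, p2_start)
    end = dist(p1_end, p2_end)
    if start == 0:
        return 1.0  # degenerate gesture, keep current size
    return max(min_scale, min(max_scale, end / start))

# Fingers stretched outward along the map's diagonal -> scale > 1 (zoom in)
s_in = pinch_scale((100, 100), (200, 200), (80, 80), (220, 220))
# Fingers squeezed inward along the diagonal -> scale < 1 (zoom out)
s_out = pinch_scale((100, 100), (200, 200), (120, 120), (180, 180))
```

The clamp keeps the map from shrinking to nothing or exceeding the display screen, consistent with the maximum-size constraint described above.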
For example, the map may be displayed in a preset transparency superimposed on the preview screen, for example, if the preset transparency is T1, the value range of T1 may be 0% < T1< 100%. In addition, the map window may also be displayed on the preview screen with high brightness or low brightness, which is not limited in the present invention.
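A minimal sketch of compositing one map pixel over the preview screen with the preset transparency T1 (the per-pixel model is an illustrative assumption; a real implementation would blend whole frames, typically on the GPU):

```python
def blend_pixel(map_rgb, preview_rgb, t1):
    """Composite one map pixel over the preview with transparency t1,
    where 0% < t1 < 100%; a higher t1 means a more see-through map."""
    if not 0.0 < t1 < 1.0:
        raise ValueError("preset transparency T1 must satisfy 0% < T1 < 100%")
    alpha = 1.0 - t1  # opacity of the map layer
    return tuple(round(alpha * m + t1 * p) for m, p in zip(map_rgb, preview_rgb))

# With T1 = 50%, the map pixel and the preview pixel mix equally
px = blend_pixel((200, 100, 0), (0, 100, 200), 0.5)
```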
For example, taking the third input as a click input as an example, after the user clicks the object identifier, the electronic device may obtain object information of the target object.
In one example, if the preview screen further includes an object search area, after the electronic device receives a target object name input by a user in the object search area, the electronic device searches the target object name through a server to obtain location information of the target object, and then displays the target object in a map on the preview screen based on the location information.
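The server-side lookup described above can be sketched as a simple name match over a point-of-interest table (the table and function below are hypothetical stand-ins for the real map service):

```python
def search_targets(name, poi_db):
    """Return the locations of every point of interest whose name matches
    the searched target object name."""
    return [loc for poi, loc in poi_db if poi == name]

# Hypothetical point-of-interest table: (name, (latitude, longitude))
poi_db = [
    ("convenience store", (31.2310, 121.4740)),
    ("pharmacy", (31.2290, 121.4720)),
    ("convenience store", (31.2320, 121.4755)),
]
hits = search_targets("convenience store", poi_db)
```

Each returned location would then be rendered as an object mark in the map on the preview screen.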
For example, as shown in fig. 7, when there are a plurality of convenience stores in a map (e.g., 72 in fig. 7) on a preview screen (e.g., 71 in fig. 7) of the electronic device, the convenience stores are marked by a plurality of solid points on the map, where the hollow point (e.g., 73 in fig. 7) in the map is the current location of the electronic device and the solid points are the locations of the convenience stores. The user can click one of the solid point marks (i.e., the mark of the target object, e.g., 74 in fig. 7) to view information about the convenience store at that location.
In this way, the user knows the position of the target object through the mark in the map on the preview screen, so that the user can more intuitively check the distance between the current position and the position of the target object.
Optionally, in an embodiment of the present invention, when the target object is a target contact, before step 202, the method further includes: step E1 to step E3:
step E1: the electronic device receives a second input from the user.
Step E2: and responding to the second input, and sending a position sharing request to the electronic equipment of the target contact person by the electronic equipment.
Step E3: and the electronic equipment receives the target position sent by the electronic equipment of the target contact person based on the position sharing request.
Illustratively, the location sharing request is used for requesting to obtain the location information of the target contact.
Further optionally, in the embodiment of the present invention, the step E1 includes steps E11 and E12:
step E11: and the electronic equipment displays at least one contact person identifier in a preview picture of the camera.
Step E12: and the electronic equipment receives a second input of the target contact person identification in the at least one contact person identification by the user.
For example, each of the at least one contact identifier may correspond to a contact.
Illustratively, the second input is a user input of a target contact identification.
Illustratively, the second input specifically includes: the click input by the user for the target contact identifier, or the slide input by the user for the target contact identifier, or other feasible inputs by the user for the target contact identifier may be specifically determined according to actual use requirements, and the embodiment of the present invention is not limited.
For example, the above click input and the above slide input can refer to the description in the above first input, and are not described herein again.
For example, a user may request to establish a location sharing connection with a target contact by clicking the target contact in the contact list. If the location sharing request is accepted by the target contact, the preview screen of the electronic device may display a prompt message of "agreeing to share location", for example, "the other party agrees to share the location information, connecting"; if the location sharing request is rejected by the target contact, the preview screen of the electronic device displays a prompt message of "the other party refuses to share location", for example, "the other party refuses to share the location information, and viewing permission is not granted for now".
For example, the electronic device may use SLAM (Simultaneous Localization and Mapping) technology to map and locate the current position of the electronic device and the position of the target contact; when the preview screen contains the target contact, the electronic device acquires the position of the target contact in the preview screen. An AR image indicating the target contact is then rendered at that position.
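Once SLAM yields the target contact's position in camera coordinates, placing the AR image requires projecting that position onto the preview screen. A pinhole-camera sketch of this step follows (the intrinsic parameters fx, fy, cx, cy and the sample position are assumed values, not part of the embodiment):

```python
def project_to_preview(point_cam, fx, fy, cx, cy):
    """Pinhole projection of a point in camera coordinates (x right, y down,
    z forward, in meters) to preview-screen pixel coordinates. Returns None
    if the point is behind the camera, i.e. not visible in the preview."""
    x, y, z = point_cam
    if z <= 0:
        return None
    return (fx * x / z + cx, fy * y / z + cy)

# A contact 2 m ahead and 0.5 m to the right, with assumed intrinsics
uv = project_to_preview((0.5, 0.0, 2.0), fx=800, fy=800, cx=360, cy=640)
```

The AR image (for example, the box identifier 84 in fig. 8c) would then be drawn centered on the returned pixel coordinates.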
For example, as shown in (a) in fig. 8, a "contact list" (82 in fig. 8a) displayed in a preview screen (81 in fig. 8a) of the electronic device contains 3 contacts, namely "contact 1", "contact 2", and "contact 3", each of which corresponds to one electronic device. When the user wants to acquire the location information of "contact 1", the user may click the identifier of "contact 1" (83 in fig. 8a) to send a "location sharing request" to "contact 1". When "contact 1" receives the "location sharing request" and agrees to establish the location sharing connection, as shown in fig. 8 (b), the preview screen 81 of the electronic device displays "the other party agrees to share the location and is connecting". As shown in fig. 8 (c), when the connection is established successfully, the location of the contact is marked by a box in the preview screen 81 (e.g., 84 in fig. 8c, i.e., the first identifier mentioned above).
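The request/accept/refuse flow in this example can be sketched as follows (the class and prompt strings are illustrative stand-ins, loosely following the wording above):

```python
class ContactDevice:
    """Minimal stand-in for the target contact's electronic device."""
    def __init__(self, location, will_share):
        self._location = location
        self._will_share = will_share

    def handle_location_sharing_request(self):
        """Return the target location if sharing is accepted, else None."""
        return self._location if self._will_share else None

def request_target_location(contact_device):
    """Send a location sharing request and map the outcome to the prompt
    message shown in the preview screen."""
    loc = contact_device.handle_location_sharing_request()
    if loc is None:
        return None, "The other party refuses to share the location information"
    return loc, "The other party agrees to share the location information, connecting"

# "Contact 1" accepts the request, so the target location is returned
loc, prompt = request_target_location(ContactDevice((31.23, 121.47), True))
```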
In this way, the electronic device sends the location sharing request to the electronic device of the target contact, so that when the target contact is included in the preview screen, the user can recognize the location of the target contact more quickly and accurately through the mark on the target contact.
The various marks mentioned in the embodiments of the present invention (for example, the first mark, the second mark, the third mark, the object mark, and the like) may be easily distinguishable marks such as a point, a circle, an arrow, and the like, or marks such as characters and pictures, and may be specifically set according to actual needs, which is not limited in the embodiments of the present invention. For example, the picture may be an AR image, a cartoon image, a dynamic image, and the like, which is not limited in this embodiment of the present invention. The shape, pattern, color, form and size of the AR image may be set according to actual requirements, which is not limited in the embodiment of the present invention, for example, the AR image mentioned in the embodiment of the present invention may be a rectangular frame image, a light pillar image, an arrow image, a route image, or the like.
Fig. 9 is a schematic diagram of a possible structure of an electronic device according to an embodiment of the present invention, and as shown in fig. 9, the electronic device 900 includes: an obtaining module 901 and a displaying module 902, wherein:
an obtaining module 901, configured to obtain a preview screen of a camera.
The display module 902 is configured to display a first identifier in the preview screen when the preview screen acquired by the obtaining module 901 includes the target object and the target object is displayed at the target position in the preview screen, where the first identifier is used to indicate the position of the target object in the preview screen.
Optionally, as shown in fig. 9, the electronic device 900 further includes: a receiving module 903, wherein: the receiving module 903 is configured to receive a first input of the user in the object search area; the obtaining module 901 is configured to, in response to the first input received by the receiving module 903, obtain a target position of a target object input by the first input; wherein the target object comprises at least one of: people, buildings, landmarks.
Optionally, the display module 902 is further configured to display a second identifier in the preview screen; wherein the second identifier is used for indicating a walking route from the electronic device 900 to the target object.
Optionally, the display module 902 is further configured to display a third identifier in the preview screen when a distance between the current first position of the electronic device 900 and the second position of the target object is smaller than a preset threshold, where the third identifier is used to mark an object outline of the target object.
Optionally, as shown in fig. 9, the electronic device 900 further includes: a sending module 904, wherein: the receiving module 903 is further configured to receive a second input of the user; a sending module 904, configured to send a location sharing request to the electronic device of the target contact in response to the second input received by the receiving module 903; the receiving module 903 is further configured to receive a target location sent by the electronic device of the target contact based on the location sharing request.
Optionally, the display module 902 is further configured to display at least one contact identifier in a preview screen of the camera; the receiving module 903 is specifically configured to receive a second input of the target contact identifier in the at least one contact identifier by the user.
According to the electronic device provided by the embodiment of the invention, the electronic device acquires the preview picture of the camera, and under the condition that the preview picture comprises the target object and the target object is displayed at the target position in the preview picture, the electronic device displays the first identifier in the preview picture, wherein the first identifier is used for indicating the position of the target object in the preview picture, so that the position of the target object in the preview picture is marked more intuitively and accurately by adopting a first identifier marking mode, and a user can quickly confirm the position of the target object according to the first identifier.
It should be noted that, as shown in fig. 9, modules that are necessarily included in the electronic device 900 are indicated by solid line boxes, such as an obtaining module 901; modules that may or may not be included in the electronic device 900 are illustrated with dashed boxes, such as a transmit module 904.
The electronic device provided by the embodiment of the present invention can implement each process implemented by the electronic device in the above method embodiments, and is not described herein again to avoid repetition.
Taking an electronic device as an example of a terminal device, fig. 10 is a schematic diagram of a hardware structure of a terminal device for implementing various embodiments of the present invention, where the terminal device 100 includes but is not limited to: the system comprises a radio frequency unit 101, a network module 102, an audio output unit 103, an input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, a power supply 111, a camera module 112 and the like. Those skilled in the art will appreciate that the configuration of the terminal device 100 shown in fig. 10 does not constitute a limitation of the terminal device, and that the terminal device 100 may include more or less components than those shown, or combine some components, or arrange different components. In the embodiment of the present invention, the terminal device 100 includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal device, a wearable device, a pedometer, and the like. The camera module 112 includes a camera, which may be a front camera or a rear camera.
The user input unit 107 is used for acquiring a preview picture of the camera; and a processor 110 configured to display a first indicator on the preview screen, the first indicator indicating a position of the target object on the preview screen, when the target object is included in the preview screen and the target object is displayed at a target position on the preview screen.
The terminal device provided by the embodiment of the present invention obtains a preview screen of the camera, and when the preview screen includes the target object and the target object is displayed at the target position in the preview screen, the terminal device displays the first identifier in the preview screen, where the first identifier is used to indicate the position of the target object in the preview screen. In this way, the position of the target object in the preview screen is marked more intuitively and accurately by means of the first identifier, so that the user can quickly confirm the position of the target object according to the first identifier.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 101 may be used for receiving and sending signals during a message transmission or call process, and specifically, after receiving downlink data from a base station, the downlink data is processed by the processor 110; in addition, the uplink data is transmitted to the base station. Typically, radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through a wireless communication system.
The terminal device 100 provides the user with wireless broadband internet access via the network module 102, such as helping the user send and receive e-mails, browse web pages, and access streaming media.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the network module 102 or stored in the memory 109 into an audio signal and output as sound. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the terminal device 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 includes a speaker, a buzzer, a receiver, and the like.
The input unit 104 is used to receive an audio or video signal. The input unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042, and the graphics processor 1041 processes image data of a still picture or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 106. The image frames processed by the graphics processor 1041 may be stored in the memory 109 (or other storage medium) or transmitted via the radio frequency unit 101 or the network module 102. The microphone 1042 may receive sound and may be capable of processing such sound into audio data. In the case of a phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 101 for output.
The terminal device 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 1061 and/or the backlight when the terminal device 100 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the terminal device posture (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration identification related functions (such as pedometer, tapping), and the like; the sensors 105 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 106 is used to display information input by a user or information provided to the user. The Display unit 106 may include a Display panel 1061, and the Display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an organic light-Emitting Diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the terminal device 100. Specifically, the user input unit 107 includes a touch panel 1071 and other input devices 1072. Touch panel 1071, also referred to as a touch screen, may collect touch operations by a user on or near the touch panel 1071 (e.g., operations by a user on or near touch panel 1071 using a finger, stylus, or any suitable object or attachment). The touch panel 1071 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 110, and receives and executes commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types, such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072. Specifically, other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 1071 may be overlaid on the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or nearby, the touch panel 1071 transmits the touch operation to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in fig. 10, the touch panel 1071 and the display panel 1061 are two independent components to implement the input and output functions of the terminal device 100, in some embodiments, the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the terminal device 100, and is not limited herein.
The interface unit 108 is an interface for connecting an external device to the terminal apparatus 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the terminal apparatus 100 or may be used to transmit data between the terminal apparatus 100 and the external device.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 109 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 110 is a control center of the terminal device 100, connects various parts of the entire terminal device 100 by various interfaces and lines, and performs various functions of the terminal device 100 and processes data by running or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the terminal device 100. Processor 110 may include one or more processing units; alternatively, the processor 110 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The terminal device 100 may further include a power supply 111 (such as a battery) for supplying power to each component, and optionally, the power supply 111 may be logically connected to the processor 110 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system.
In addition, the terminal device 100 includes some functional modules that are not shown, and are not described in detail here.
Optionally, an embodiment of the present invention further provides a terminal device, which includes a processor, a memory, and a computer program stored in the memory and capable of running on the processor, where the computer program, when executed by the processor, implements each process of the foregoing display control method embodiment and can achieve the same technical effect; details are not repeated here to avoid repetition.
Optionally, an embodiment of the present invention further provides an AR device, including a processor, a memory, and a computer program stored in the memory and capable of running on the processor, where the computer program, when executed by the processor, implements the processes of the foregoing method embodiment, and can achieve the same technical effect, and details are not described here to avoid repetition.
Optionally, in this embodiment of the present invention, the electronic device in the above embodiment may be an AR device. Specifically, when the electronic device in the above embodiment (for example, the electronic device shown in fig. 10) is an AR device, the AR device may include all or part of the functional modules in the electronic device. Of course, the AR device may further include a functional module not included in the electronic device.
It is to be understood that, in the embodiment of the present invention, when the electronic device in the above embodiment is an AR device, the electronic device may be an electronic device integrated with AR technology. AR technology is a technology for combining a real scene and a virtual scene. By adopting AR technology, human vision can be augmented, so that a person can experience the combination of a real scene and a virtual scene and thus enjoy a more immersive, on-the-scene experience.
Taking AR glasses as an example of the AR device, when the user wears the AR glasses, the scene viewed by the user is generated by AR processing, that is, a virtual scene can be displayed superimposed on the real scene through AR technology. When the user operates on the content displayed by the AR glasses, the AR glasses can appear to peel away the real scene to reveal a more detailed view to the user. For example, when a user looks at a carton with the naked eye, only the outside of the carton can be observed, but when wearing AR glasses, the user can directly observe the internal structure of the carton through the AR glasses.
The AR equipment can comprise the camera, so that the AR equipment can be combined with the virtual picture to display and interact on the basis of the picture shot by the camera. For example, in the embodiment of the present invention, the AR device may obtain a preview screen of a camera of the AR device, and when the preview screen includes the target object and the target object is displayed at a target position in the preview screen, display a first identifier in the preview screen, where the first identifier is used to indicate a position of the target object in the preview screen. Therefore, the AR device can mark the position of the target object in the preview picture more intuitively and accurately by adopting the first identification mark, so that the user can quickly confirm the position of the target object according to the first identification.
An embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the display control method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A display control method, applied to an electronic device, comprising: acquiring a preview image of a camera; and, in a case where the preview image includes a target object and the target object is displayed at a target position in the preview image, displaying a first identifier in the preview image, the first identifier being used to indicate the position of the target object in the preview image.

2. The method according to claim 1, wherein the preview image comprises an object search area, and before the acquiring a preview image of a camera, the method further comprises: receiving a first input from a user in the object search area; and, in response to the first input, acquiring the target position of the target object entered through the first input; wherein the target object comprises at least one of the following: a person, a building, a landmark.

3. The method according to claim 1, wherein, in the case where the preview image includes the target object and the target object is displayed at the target position in the preview image, after the acquiring a preview image of a camera, the method further comprises: displaying a second identifier in the preview image, wherein the second identifier is used to indicate a walking route from the electronic device to the target object.

4. The method according to any one of claims 1 to 3, wherein, in the case where the preview image includes the target object and the target object is displayed at the target position in the preview image, after the acquiring a preview image of a camera, the method further comprises: in a case where a distance between a current first position of the electronic device and a second position of the target object is less than a preset threshold, displaying a third identifier in the preview image, the third identifier being used to mark an object outline of the target object.

5. The method according to claim 1, wherein the target object is a target contact, and before the displaying a first identifier in the preview image, the method further comprises: receiving a second input from the user; in response to the second input, sending a location sharing request to an electronic device of the target contact; and receiving the target position sent by the electronic device of the target contact based on the location sharing request.

6. The method according to claim 5, wherein the receiving a second input from the user comprises: displaying at least one contact identifier in the preview image of the camera; and receiving a second input from the user on a target contact identifier among the at least one contact identifier.

7. An electronic device, comprising an acquisition module and a display module, wherein: the acquisition module is configured to acquire a preview image of a camera; and the display module is configured to, in a case where the preview image acquired by the acquisition module includes a target object and the target object is displayed at a target position in the preview image, display a first identifier in the preview image, the first identifier being used to indicate the position of the target object in the preview image.

8. The electronic device according to claim 7, wherein the preview image comprises an object search area, and the electronic device further comprises a receiving module; the receiving module is configured to receive a first input from a user in the object search area; and the acquisition module is further configured to, in response to the first input received by the receiving module, acquire the target position of the target object entered through the first input; wherein the target object comprises at least one of the following: a person, a building, a landmark.

9. An electronic device, comprising a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the display control method according to any one of claims 1 to 6.

10. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the display control method according to any one of claims 1 to 6.
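Claims 1 and 4 together amount to a simple overlay-selection rule: show a position marker (first identifier) whenever the target object appears in the preview, and switch to an outline marker (third identifier) once the device is within a preset distance of the object. The sketch below is illustrative only — the threshold value, the Haversine distance helper, and all names are assumptions for the example, not part of the claimed method:

```python
import math

# Hypothetical preset threshold from claim 4 (the patent does not specify a value).
PROXIMITY_THRESHOLD_M = 50.0

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def choose_identifier(target_in_preview, device_pos, target_pos):
    """Pick which overlay to draw, following the logic of claims 1 and 4.

    target_in_preview: (x, y) pixel position of the object in the preview,
                       or None if the object is not visible in the frame.
    device_pos, target_pos: (lat, lon) of the device and the target object.
    Returns "first_identifier", "third_identifier", or None.
    """
    if target_in_preview is None:
        return None  # claim 1: an identifier is shown only when the object is in the preview
    dist = haversine_m(*device_pos, *target_pos)
    if dist < PROXIMITY_THRESHOLD_M:
        return "third_identifier"  # claim 4: mark the object's outline when close
    return "first_identifier"      # claim 1: mark the object's position in the preview
```

For example, with the object visible at pixel (120, 80) and the device roughly 850 m away, the rule selects the first identifier; at a few metres' distance it switches to the third (outline) identifier.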
CN201911424042.7A 2019-12-31 2019-12-31 A display control method and electronic device Active CN111158556B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911424042.7A CN111158556B (en) 2019-12-31 2019-12-31 A display control method and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911424042.7A CN111158556B (en) 2019-12-31 2019-12-31 A display control method and electronic device

Publications (2)

Publication Number Publication Date
CN111158556A true CN111158556A (en) 2020-05-15
CN111158556B CN111158556B (en) 2022-03-25

Family

ID=70560677

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911424042.7A Active CN111158556B (en) 2019-12-31 2019-12-31 A display control method and electronic device

Country Status (1)

Country Link
CN (1) CN111158556B (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110279445A1 (en) * 2010-05-16 2011-11-17 Nokia Corporation Method and apparatus for presenting location-based content
CN109040960A (en) * 2018-08-27 2018-12-18 优视科技新加坡有限公司 A kind of method and apparatus for realizing location-based service
CN110519699A (en) * 2019-08-20 2019-11-29 维沃移动通信有限公司 A kind of air navigation aid and electronic equipment


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113239291A (en) * 2021-04-07 2021-08-10 北京三快在线科技有限公司 Page display method, device, equipment and storage medium
CN114338897A (en) * 2021-12-16 2022-04-12 杭州逗酷软件科技有限公司 Object sharing method and device, electronic equipment and storage medium
CN114338897B (en) * 2021-12-16 2024-01-16 杭州逗酷软件科技有限公司 Object sharing methods, devices, electronic devices and storage media
WO2023134491A1 (en) * 2022-01-12 2023-07-20 北京字跳网络技术有限公司 Page display control method and apparatus, and mobile terminal and storage medium

Also Published As

Publication number Publication date
CN111158556B (en) 2022-03-25

Similar Documents

Publication Publication Date Title
CN110995923B (en) Screen projection control method and electronic equipment
CN110221737B (en) Icon display method and terminal device
CN111142723B (en) Icon moving method and electronic device
CN111638824B (en) Unread message display method and device and electronic equipment
CN110058754B (en) Option display method and terminal device
CN109862504B (en) Display method and terminal equipment
CN110908750B (en) Screen capturing method and electronic equipment
CN110944139B (en) Display control method and electronic equipment
CN109857289B (en) Display control method and terminal equipment
US12238406B2 (en) Object display method and electronic device
CN111158556B (en) A display control method and electronic device
WO2020253340A1 (en) Navigation method and mobile terminal
CN111124709A (en) Text processing method and electronic equipment
CN108592939A (en) A kind of air navigation aid and terminal
CN111124231B (en) Picture generation method and electronic equipment
CN111090489B (en) An information control method and electronic device
WO2021109960A1 (en) Image processing method, electronic device, and storage medium
CN111078819A (en) Application sharing method and electronic equipment
CN110940339A (en) A navigation method and electronic device
WO2021104232A1 (en) Display method and electronic device
CN111104533A (en) Picture processing method and electronic equipment
CN109067975B (en) A contact information management method and terminal device
CN111142772A (en) Content display method and wearable device
CN109582200B (en) Navigation information display method and mobile terminal
CN110440825B (en) Distance display method and terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant