Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that "/" in this context means "or", for example, A/B may mean A or B; "and/or" herein is merely an association describing an associated object, and means that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone.
It should be noted that "a plurality" herein means two or more than two.
It should be noted that, in the embodiments of the present invention, words such as "exemplary" or "for example" are used to indicate examples, illustrations, or explanations. Any embodiment or design described as "exemplary" or "for example" in the embodiments of the present invention is not to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the words "exemplary" or "for example" is intended to present related concepts in a concrete fashion.
The display control method provided in the embodiment of the present invention may be executed by the electronic device, or by a functional module and/or a functional entity in the electronic device capable of implementing the display control method; this may be determined according to actual use requirements, and the embodiment of the present invention is not limited thereto.
For example, taking the electronic device being a terminal device as an example, the terminal device in the embodiment of the present invention may be a mobile terminal device or a non-mobile terminal device. The mobile terminal device may be a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), or the like; the non-mobile terminal device may be a personal computer (PC), a television (TV), a teller machine, a self-service machine, or the like; the embodiments of the present invention are not particularly limited thereto.
The electronic device in the embodiment of the present invention may be an electronic device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system; the embodiments of the present invention are not particularly limited.
The following describes, by taking an operating system as an example, a software environment to which the display control method according to an embodiment of the present invention is applied.
Fig. 1 is a schematic diagram of a possible operating system according to an embodiment of the present invention. In fig. 1, the architecture of the operating system includes 4 layers, respectively: an application layer, an application framework layer, a system runtime layer, and a kernel layer (specifically, a Linux kernel layer).
The application layer comprises various application programs (including system application programs and third-party application programs) in an operating system.
The application framework layer is a framework of the application, and a developer can develop some applications based on the application framework layer under the condition of complying with the development principle of the framework of the application.
The system runtime layer includes a library (also referred to as a system library) and an operating system runtime environment. The library mainly provides various resources required by the operating system. The operating system runtime environment is used to provide a software environment for the operating system.
The kernel layer is the operating system layer of the operating system and belongs to the lowest layer of the operating system software layer. The kernel layer provides kernel system services and hardware-related drivers for the operating system based on the Linux kernel.
In the embodiment of the present invention, a developer may develop a software program implementing the display control method provided in the embodiment of the present invention based on the system architecture of the operating system shown in fig. 1, so that the display control method can run on the operating system shown in fig. 1. That is, the processor or the electronic device may implement the display control method provided by the embodiment of the present invention by running the software program in the operating system.
The following describes a display control method according to an embodiment of the present invention with reference to fig. 2, which is a schematic flowchart of the display control method according to the embodiment of the present invention; the method includes steps 201 and 202:
step 201: the electronic device acquires a preview screen of the camera.
Illustratively, when the electronic device is located within a predetermined range of the target object, the electronic device starts the camera to acquire a preview screen of the camera.
For example, the camera may be a camera provided in the electronic device itself, or may be a camera externally connected to the electronic device.
Step 202: and if the target object is included in the preview picture and the target object is displayed at the target position in the preview picture, the electronic equipment displays a first mark in the preview picture.
Illustratively, the first identifier is used to indicate the position of the target object in the preview screen.
Illustratively, the preview screen is the real-time screen content acquired by the camera.
Illustratively, the target object includes, but is not limited to, at least one of: buildings, people, and signs. The information of the target object includes at least one of: name, contour, and position information. The specific embodiments of the present invention are not limited thereto.
For example, the target object may be a destination, or a person or an object at the destination, and the destination may be a clear point or a certain approximate range, which is not limited in the embodiment of the present invention. For example, the target object may be some infrastructure (hotel, supermarket, hospital, bank, etc.) near the location where the electronic device is located.
For example, the first identifier may be an AR image. In an example, the AR image may be a virtual image obtained by rendering the target object by the electronic device using AR technology.
For example, the types of the first identifiers corresponding to different types of target objects are different.
Example 1: if the target object is a building, the first identifier may be an AR box, and if the target object is a person, the first identifier may be an AR light pillar.
Example 2: taking the first marker as an AR light pillar as an example, light pillars of different colors may be rendered for objects of different classes. For example, if there is only one target object, only one color light pillar may be rendered; if a plurality of target objects exist, light columns of different colors can be rendered for different types of objects according to the types of the target objects, for example, a hospital can be rendered as a red light column, a bank can be rendered as a yellow light column, and a mall can be rendered as a green light column.
For example, if the target object is a plurality of objects, the first identifier includes a plurality of identifiers, and one identifier corresponds to one object. That is, in a scene in which a plurality of target objects are included in the preview screen of the camera, the electronic device may identify each target object separately. For example, each target object is marked separately with a different colored marker.
For example, in a case where the preview screen includes the target position, the electronic device may determine whether the preview screen includes the target object by using a SLAM (simultaneous localization and mapping) technique and, in a case where the preview screen includes the target object, acquire the position of the target object in the preview screen; a first identifier indicating the position of the target object in the preview screen is then added at that position. The SLAM technique uses the picture information collected by the camera to locate the current position and the position of the target object and to plan a path between them; the electronic device acquires, in real time, the preview screen collected by its camera and, in a case where the preview screen contains the target object, determines the position of the target object in the preview screen according to the picture information of the preview screen.
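The marker-placement step above can be reduced to a simple selection once detections exist; the following Python sketch assumes the SLAM/recognition pipeline has already produced a list of (name, x, y) detections for the current preview frame, and `locate_target` plus the sample `frame` data are hypothetical illustrations:

```python
from typing import List, Optional, Tuple

# (object name, x, y) in preview-screen pixel coordinates -- an assumed
# output format for the detection pipeline described in the text.
Detection = Tuple[str, int, int]

def locate_target(detections: List[Detection],
                  target_name: str) -> Optional[Tuple[int, int]]:
    """Return the preview-screen position where the first identifier
    for the target object should be drawn, if the target was detected."""
    for name, x, y in detections:
        if name == target_name:
            return (x, y)
    return None  # the target is not in the current preview frame

# Hypothetical detections for one preview frame.
frame = [("convenience store 1", 120, 340), ("convenience store 2", 480, 330)]
```

If `locate_target` returns a position, the first identifier (e.g., an AR box) is rendered there; a `None` result means no identifier is shown for that frame.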
For example, in the process of determining whether the preview screen of the camera includes the target object, the electronic device may continuously obtain screen information of the preview screen, upload the screen information to the database to compare with object information of the target object, and display the first identifier in the preview screen to indicate the position of the target object when the preview screen includes the target object.
In an example, taking the target object being a person as an example, the electronic device performs face recognition by acquiring face information in the preview screen in real time, so as to recognize whether the preview screen contains the target face.
Example 1, as shown in fig. 3, the target object is taken as a "convenience store" as an example. Assuming that 2 "convenience stores" are included in a preview screen (31 in fig. 3) of the electronic device, and the 2 "convenience stores" are "convenience store 1" and "convenience store 2", respectively, the electronic device marks the 2 "convenience stores" with boxes, the mark of "convenience store 1" is shown as 32 in fig. 3, and the mark of "convenience store 2" is shown as 33 in fig. 3.
Example 2, as shown in fig. 3, taking the target object being "contact 1" as an example. Assuming that the preview screen 31 contains "contact 1", the electronic device may mark "contact 1" with a circle, as shown at 34 in fig. 3.
For example, the electronic device may load a virtual compass in the preview screen to indicate the direction of the user. Such as a virtual compass 35 in the preview screen 31 in fig. 3.
For example, the electronic device may display position prompt information of each target object in the preview screen, the position prompt information being used to prompt the user of the position of the target object.
According to the display control method provided by the embodiment of the present invention, the electronic device acquires the preview screen of the camera and, in a case where the preview screen includes the target object and the target object is displayed at the target position in the preview screen, displays a first identifier in the preview screen, where the first identifier indicates the position of the target object in the preview screen. Marking with the first identifier thus indicates the position of the target object in the preview screen more intuitively and accurately, and the user can quickly confirm the position of the target object according to the first identifier.
Optionally, in the embodiment of the present invention, in a case where the preview screen includes the target object and the target object is displayed at the target position in the preview screen, after step 201, the method further includes step A1:
step A1: and the electronic equipment displays a second identifier in the preview picture.
Wherein the second identifier is used to indicate a walking route from the electronic device to the target object, for example, a walking route map formed by continuous arrow images.
For example, the second identifier is further used to indicate a route distance and a walking direction of a walking route from the electronic device to the target object.
It should be noted that the walking route indicated by the second identifier may be continuously adjusted according to the current location of the electronic device.
For example, the second identifier may be an AR image, through which a walking route from the electronic device to the target object is indicated for the user. That is, the walking route may be a virtual route that the electronic device renders from the electronic device to the target object using AR technology.
For example, the electronic device may map and locate the current position of the electronic device and the position of the target object by using SLAM technology, and determine a walking route from the electronic device to the target object.
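The continuous arrow images forming the second identifier could, under a simplifying straight-line assumption, be placed at evenly spaced waypoints between the device and the target; `route_waypoints` below is an illustrative Python helper, not the claimed SLAM path planning:

```python
import math

def route_waypoints(start, goal, step=1.0):
    """Evenly spaced waypoints from start to goal; each consecutive pair of
    waypoints can be rendered as one arrow image of the walking route.
    Coordinates and step size are in an assumed common map unit."""
    dx, dy = goal[0] - start[0], goal[1] - start[1]
    n = max(1, int(math.hypot(dx, dy) // step))  # number of segments
    return [(start[0] + dx * i / n, start[1] + dy * i / n)
            for i in range(n + 1)]
```

As the text notes, the route is continuously adjusted: recomputing the waypoints from the device's latest position keeps the rendered arrows current.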
For example, referring to fig. 3, when the user wants to go to "convenience store 2" shown in 33 of fig. 3, as shown in fig. 4, a walking route (42 in fig. 4, i.e., the second identifier) from the electronic device to "convenience store 2" is displayed on a preview screen (41 in fig. 4) of the electronic device, so that the user can quickly find "convenience store 2" through the walking route 42.
In this way, in a case where the preview screen includes the target object, the electronic device indicates, with the second identifier in the preview screen, the walking route from its current position to the position of the target object, so that the user can find the position of the target object more intuitively through the walking route in the preview screen.
Optionally, in the embodiment of the present invention, in a case where the preview screen includes the target object and the target object is displayed at the target position in the preview screen, after step 201, the method further includes step B1:
step B1: and when the distance between the current first position of the electronic equipment and the second position of the target object is smaller than a preset threshold value, the electronic equipment displays a third mark on the preview picture.
The third identifier is used for marking the object outline of the target object, so that the target object is more conspicuous in the preview screen and the user can intuitively and quickly find the position of the target object.
Illustratively, the third identifier may be an AR image. For example, the third identifier may be an AR contour image obtained by rendering the target object by the electronic device using an AR technology.
It should be noted that, the AR image may refer to the above description, and is not described herein again.
For example, when the distance between the current first position of the electronic device and the second position of the target object is smaller than a preset threshold, it indicates that the electronic device is already within the predetermined range of the target object, that is, the target object is closer to the electronic device. At this time, if the preview screen includes the target object, the electronic device may mark the object outline of the target object by the third identifier.
For example, taking the third identifier as an AR image, when a distance between the current first position of the electronic device and the second position of the target object is smaller than a preset threshold, the electronic device may obtain object contour information of the target object from the preview screen, and then render the object contour of the target object by using an AR technique based on the object contour information.
Therefore, the electronic device determines whether its current position is within the predetermined range of the target object by judging whether the distance between the current first position and the second position of the target object is smaller than the preset threshold, and then marks the object outline of the target object with the third identifier in the preview screen, so that the target object is easier to distinguish in the preview screen.
Optionally, in an embodiment of the present invention, when the preview screen includes the object search area, before step 201, the method further includes step C1 and step C2:
step C1: the electronic equipment receives a first input of a user in the object search area.
Step C2: in response to the first input, the electronic device obtains a target position of a target object input by the first input.
Illustratively, the first input is used for inputting object information related to the target object, such as the name of the target object, a picture, and the like. Alternatively, the first input is used for inputting destination information, and the target object is an object at the destination.
For example, the object search area may be displayed in a floating manner on the preview screen.
For example, the object search area may also be moved on the preview screen as the user's finger moves.
For example, after acquiring the target position of the target object input by the first input, if the preview screen includes the target object, the electronic device displays a first identifier on the preview screen to mark the position of the target object.
Illustratively, the first input is for inputting object information of a target object in the object search area. The object information includes but is not limited to: name of the target object, location information of the location where the target object is located, and the like.
For example, the first input may be a click input by the user on the object search area, or a slide input by the user on the object search area, or another feasible input by the user on the object search area, which may be determined according to actual usage requirements; the embodiments of the present invention are not limited thereto. For example, the sliding input of the user in the object search area may trigger the electronic device to collect a user voice, which is a voice for the target object.
For example, the click input may be a single click input, a double click input, or any number of click inputs; the click input may be a long-press input or a short-press input. The sliding input may be a sliding input in any direction, for example, sliding upwards, sliding downwards, sliding leftwards or sliding rightwards, and the sliding trajectory of the sliding input may be a straight line or a curved line, and may be specifically set according to actual requirements.
For example, the object search area may be a search interface displayed in a floating manner in the preview screen. Text or voice information may be entered in the search area.
Illustratively, the object search area is used for triggering the electronic device to retrieve the information input in the object search area.
In one example, taking the user inputting a target object name in the object search area as an example, the electronic device retrieves, within a predetermined range of the electronic device, object information of matching objects that match the target object name (e.g., location coordinates of the matching object, appearance information of the matching object (e.g., if the matching object is a building, the appearance information may be a picture of the building's appearance), etc.). Then, the electronic device determines, based on the object information of the matching objects, whether its current preview screen contains the matching objects, and if so, displays the first identifiers corresponding to the matching objects in the preview screen.
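The retrieval of matching objects could be sketched as a simple name match over objects within the predetermined range; the substring-match rule, the field names, and the sample data below are assumptions for illustration:

```python
def search_matches(query, nearby_objects):
    """Objects within the predetermined range whose name contains the text
    entered in the object search area (substring match, an assumption)."""
    return [obj for obj in nearby_objects if query in obj["name"]]

# Hypothetical objects within the predetermined range of the device.
nearby = [
    {"name": "convenience store 1", "pos": (31.23, 121.47)},
    {"name": "convenience store 2", "pos": (31.24, 121.48)},
    {"name": "bank", "pos": (31.22, 121.46)},
]
```

Each returned match would then be checked against the current preview screen and, if visible, marked with its own first identifier.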
For example, as shown in fig. 5, in the preview screen (51 in fig. 5) of the electronic device, the user inputs "convenience store" in the object search box (i.e., the above object search area, 52 in fig. 5) and clicks the "magnifying glass" icon to search. In this case, as shown in fig. 3, the preview screen 31 of the electronic device includes two "convenience stores", marked as 32 and 33 in fig. 3, respectively.
In one example, after acquiring the target position of the target object, the electronic device may display position prompt information of each target object in the preview screen, where the position prompt information is used for prompting the user about the position of the target object.
For example, following fig. 5, as shown in fig. 6, the preview screen 61 displays 2 prompt links, namely a location link of "convenience store 1" ("convenience store 1, go here" in fig. 6) and a location link of "convenience store 2" ("convenience store 2, go here" in fig. 6). The user may click one of these links to jump to a navigation or map interface indicating a route from the electronic device to the corresponding destination.
In this way, the electronic device detects the content related to the target object input by the user in the object search area, so as to acquire the position of the target object in the preview screen, allowing the user to identify the target object to be found more quickly and intuitively.
Optionally, in an embodiment of the present invention, before the step 201, the method further includes steps D1 to D3:
step D1: the electronic device displays a map on the preview screen.
The map comprises an object mark of the target object, and the object mark is used for indicating the position of the target object in the map. For example, the map may also display a current location mark of the electronic device, a walking route between the target object and the electronic device, and the like. Therefore, the user can conveniently and visually know the distance and the route from the current position of the electronic equipment to the target object according to the map.
Step D2: the electronic device receives a second input for the object marker.
Step D3: in response to the third input, the electronic device obtains a target position of the target object.
For example, the third input may be a touch input of the user on the object marker, or another feasible input, which is not limited in the embodiment of the present invention.
For example, the electronic device may display a map in the preview screen.
For example, the map may be displayed in a floating manner on the preview screen. Further, the map may move on the preview screen with the movement of the first finger of the user.
For example, the user may adjust the size, display position, and transparency of the map according to use requirements, for example by pressing or sliding the map; this is not limited in the present invention.
For example, the size of the map may be a default size, or may be flexibly adjusted according to the operation of the user. It should be noted that the maximum size of the map is as large as the display screen of the electronic device.
In one example, the user touches the map with two fingers and spreads them apart in a sliding input, which the electronic device may determine as an input to change the size of the map. Illustratively, the map can be zoomed in or out along its diagonal according to the user's gesture: when the map is zoomed out, it can be squeezed inwards along its diagonal; when the map is zoomed in, it can be stretched outwards along its diagonal.
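The two-finger resize gesture can be reduced to a scale factor, namely the ratio of the final to the initial finger spacing, capped at the display size per the maximum-size note above; `pinch_resize` is an illustrative Python sketch with assumed pixel units:

```python
import math

def pinch_resize(f1_start, f2_start, f1_end, f2_end, current_size, max_size):
    """New map size after a two-finger gesture: the map scales by the ratio
    of final to initial finger spacing, capped at the display size."""
    d0 = math.dist(f1_start, f2_start)  # initial finger spacing
    d1 = math.dist(f1_end, f2_end)      # final finger spacing
    return min(max_size, current_size * d1 / d0)
```

Spreading the fingers (`d1 > d0`) enlarges the map; pinching them together (`d1 < d0`) shrinks it.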
For example, the map may be displayed in a preset transparency superimposed on the preview screen, for example, if the preset transparency is T1, the value range of T1 may be 0% < T1< 100%. In addition, the map window may also be displayed on the preview screen with high brightness or low brightness, which is not limited in the present invention.
For example, taking the third input as a click input as an example, after the user clicks the object identifier, the electronic device may obtain object information of the target object.
In one example, if the preview screen further includes an object search area, after the electronic device receives a target object name input by a user in the object search area, the electronic device searches the target object name through a server to obtain location information of the target object, and then displays the target object in a map on the preview screen based on the location information.
For example, as shown in fig. 7, when there are a plurality of convenience stores in a map (72 in fig. 7) on a preview screen (71 in fig. 7) of the electronic device, the convenience stores are marked on the map by a plurality of solid points, where the hollow point (73 in fig. 7) is the current location of the electronic device and each solid point is the location of a convenience store; the user may click one of the solid point marks (i.e., the mark of the target object, 74 in fig. 7) to view information of the convenience store at that location.
In this way, the user knows the position of the target object through the mark in the map on the preview screen, so that the user can more intuitively check the distance between the current position and the position of the target object.
Optionally, in an embodiment of the present invention, when the target object is a target contact, before step 202, the method further includes: step E1 to step E3:
step E1: the electronic device receives a second input from the user.
Step E2: and responding to the second input, and sending a position sharing request to the electronic equipment of the target contact person by the electronic equipment.
Step E3: and the electronic equipment receives the target position sent by the electronic equipment of the target contact person based on the position sharing request.
Illustratively, the location sharing request is used for requesting to obtain the location information of the target contact.
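Steps E2 and E3 describe a request/response exchange between the two devices; the following is a minimal in-memory Python sketch in which the message fields and helper names are assumptions for illustration, not a defined protocol:

```python
def make_request(requester_id, contact_id):
    """Step E2: the location sharing request sent to the target contact."""
    return {"type": "location_share_request",
            "from": requester_id, "to": contact_id}

def handle_request(request, agreed, own_position):
    """The target contact's device answers: share its position, or refuse
    (corresponding to the accept/reject prompts described below)."""
    if request["type"] != "location_share_request":
        raise ValueError("unexpected message type")
    if agreed:
        return {"granted": True, "position": own_position}
    return {"granted": False, "position": None}
```

A granted response carries the target position consumed in step E3; a refused one carries no position, so no identifier is rendered.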
Further optionally, in the embodiment of the present invention, the step E1 includes steps E11 and E12:
step E11: and the electronic equipment displays at least one contact person identifier in a preview picture of the camera.
Step E12: and the electronic equipment receives a second input of the target contact person identification in the at least one contact person identification by the user.
For example, each of the at least one contact identifier may correspond to a contact.
Illustratively, the second input is a user input of a target contact identification.
Illustratively, the second input specifically includes: the click input by the user for the target contact identifier, or the slide input by the user for the target contact identifier, or other feasible inputs by the user for the target contact identifier may be specifically determined according to actual use requirements, and the embodiment of the present invention is not limited.
For example, the above click input and the above slide input can refer to the description in the above first input, and are not described herein again.
For example, a user may request to establish a location sharing connection with a target contact by clicking the target contact in the contact list. If the location sharing request is accepted by the target contact, the preview screen of the electronic device may display a prompt message of "agreeing to share location", for example, "the other party agrees to share location information, connecting"; if the location sharing request is rejected by the target contact, the preview screen of the electronic device displays a prompt message of "the other party refuses to share location", for example, "the other party refuses to share location information; viewing permission is not granted for now".
For example, the electronic device may map and locate its current position and the position of the target contact by using the SLAM technique; when the preview screen contains the target contact, the electronic device acquires the position of the target contact in the preview screen, and an AR image indicating the target contact is then rendered at that position.
For example, as shown in (a) of fig. 8, there are 3 contacts in a "contact list" (82 in (a) of fig. 8) displayed in a preview screen (81 in (a) of fig. 8) of the electronic device, namely "contact 1", "contact 2", and "contact 3", each corresponding to one electronic device. When the user wants to acquire the location information of "contact 1", the user may click the identifier of "contact 1" (83 in (a) of fig. 8) to send a "location sharing request" to "contact 1". When "contact 1" receives the "location sharing request" and agrees to establish the location sharing connection, as shown in (b) of fig. 8, the preview screen 81 of the electronic device displays "the other party agrees to share the location and is connecting". As shown in (c) of fig. 8, when the connection is established successfully, the location of the contact is marked with a box in the preview screen 81 (84 in (c) of fig. 8, i.e., the first identifier mentioned above).
In this way, the electronic device sends the location sharing request to the electronic device of the target contact, so that when the target contact is included in the preview screen, the user can recognize the location of the target contact more quickly and accurately through the mark on the target contact.
The various marks mentioned in the embodiments of the present invention (for example, the first mark, the second mark, the third mark, the object mark, and the like) may be easily distinguishable marks such as a point, a circle, an arrow, and the like, or marks such as characters and pictures, and may be specifically set according to actual needs, which is not limited in the embodiments of the present invention. For example, the picture may be an AR image, a cartoon image, a dynamic image, and the like, which is not limited in this embodiment of the present invention. The shape, pattern, color, form and size of the AR image may be set according to actual requirements, which is not limited in the embodiment of the present invention, for example, the AR image mentioned in the embodiment of the present invention may be a rectangular frame image, a light pillar image, an arrow image, a route image, or the like.
Fig. 9 is a schematic diagram of a possible structure of an electronic device according to an embodiment of the present invention, and as shown in fig. 9, the electronic device 900 includes: an obtaining module 901 and a displaying module 902, wherein:
an obtaining module 901, configured to obtain a preview screen of a camera.
The display module 902 is configured to display a first identifier in the preview screen in a case where the target object is included in the preview screen acquired by the obtaining module 901 and the target object is displayed at the target position in the preview screen, where the first identifier indicates the position of the target object in the preview screen.
Optionally, as shown in fig. 9, the electronic device 900 further includes: a receiving module 903, wherein: the receiving module 903 is configured to receive a first input of the user in the object search area; the obtaining module 901 is configured to, in response to the first input received by the receiving module 903, obtain a target position of a target object input by the first input; wherein the target object includes at least one of: people, buildings, signs.
Optionally, the display module 902 is further configured to display a second identifier in the preview screen; wherein the second identifier is used for indicating a walking route from the electronic device 900 to the target object.
Optionally, the display module 902 is further configured to display a third identifier in the preview screen when a distance between the current first position of the electronic device 900 and the second position of the target object is less than a preset threshold, where the third identifier is used to mark an object outline of the target object.
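The distance check described above can be sketched as follows. This is an illustrative sketch only: the function names, the use of latitude/longitude positions, and the 50-meter threshold are assumptions for illustration, not the embodiment's actual interface.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two latitude/longitude positions."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def should_show_third_identifier(first_pos, second_pos, threshold_m=50.0):
    """Show the object-outline (third) identifier only when the device's first
    position is within the preset threshold of the target's second position."""
    return haversine_m(*first_pos, *second_pos) < threshold_m
```

For example, a device a few meters from the target would display the outline, while a device hundreds of meters away would not.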
Optionally, as shown in fig. 9, the electronic device 900 further includes: a sending module 904, wherein: the receiving module 903 is further configured to receive a second input of the user; a sending module 904, configured to send a location sharing request to the electronic device of the target contact in response to the second input received by the receiving module 903; the receiving module 903 is further configured to receive a target location sent by the electronic device of the target contact based on the location sharing request.
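The location-sharing exchange above can be sketched as a simple request/response, with the target contact's device replying with the target location. All class and method names here are illustrative assumptions, not the embodiment's actual protocol.

```python
from dataclasses import dataclass

@dataclass
class LocationShareRequest:
    requester_id: str
    contact_id: str

@dataclass
class TargetLocation:
    latitude: float
    longitude: float

class ContactDevice:
    """Stands in for the target contact's electronic device."""
    def __init__(self, contact_id, latitude, longitude):
        self.contact_id = contact_id
        self._position = TargetLocation(latitude, longitude)

    def handle_share_request(self, request):
        # Reply with the current position only when the request is addressed
        # to this contact; otherwise return nothing.
        if request.contact_id == self.contact_id:
            return self._position
        return None

def request_target_location(contact_device, requester_id, contact_id):
    """Send a location-sharing request (triggered by the second input) and
    return the target location received from the contact's device."""
    req = LocationShareRequest(requester_id, contact_id)
    return contact_device.handle_share_request(req)
```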
Optionally, the display module 902 is further configured to display at least one contact identifier in a preview screen of the camera; the receiving module 903 is specifically configured to receive a second input of the target contact identifier in the at least one contact identifier by the user.
According to the electronic device provided by the embodiment of the invention, the electronic device obtains the preview picture of the camera. When the preview picture includes the target object and the target object is displayed at the target position in the preview picture, the electronic device displays the first identifier in the preview picture, where the first identifier indicates the position of the target object in the preview picture. In this way, the position of the target object in the preview picture is marked more intuitively and accurately, so that the user can quickly confirm the position of the target object according to the first identifier.
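The overall control flow summarized above can be sketched as follows. The frame representation, class names, and identifier structure are assumptions made for illustration only, not the embodiment's actual API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PreviewFrame:
    objects: dict  # maps an object name to its (x, y) position in the frame

@dataclass
class Identifier:
    kind: str
    position: tuple

def display_first_identifier(frame: PreviewFrame, target: str,
                             target_pos: tuple) -> Optional[Identifier]:
    """Return a first identifier marking the target's position when the
    preview frame includes the target object at the target position."""
    if frame.objects.get(target) == target_pos:
        return Identifier(kind="first", position=target_pos)
    return None  # target absent, or not yet displayed at the target position
```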
It should be noted that, as shown in fig. 9, modules that are necessarily included in the electronic device 900 are indicated by solid line boxes, such as the obtaining module 901; modules that may or may not be included in the electronic device 900 are indicated by dashed line boxes, such as the sending module 904.
The electronic device provided by the embodiment of the present invention can implement each process implemented by the electronic device in the above method embodiments, and is not described herein again to avoid repetition.
Taking an electronic device as an example of a terminal device, fig. 10 is a schematic diagram of a hardware structure of a terminal device for implementing various embodiments of the present invention, where the terminal device 100 includes but is not limited to: the system comprises a radio frequency unit 101, a network module 102, an audio output unit 103, an input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, a power supply 111, a camera module 112 and the like. Those skilled in the art will appreciate that the configuration of the terminal device 100 shown in fig. 10 does not constitute a limitation of the terminal device, and that the terminal device 100 may include more or less components than those shown, or combine some components, or arrange different components. In the embodiment of the present invention, the terminal device 100 includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal device, a wearable device, a pedometer, and the like. The camera module 112 includes a camera, which may be a front camera or a rear camera.
The user input unit 107 is configured to acquire a preview picture of the camera; and the processor 110 is configured to display a first identifier on the preview screen when the preview screen includes the target object and the target object is displayed at a target position on the preview screen, where the first identifier indicates the position of the target object on the preview screen.
In the terminal device provided by the embodiment of the present invention, the terminal device obtains a preview screen of the camera. When the preview screen includes the target object and the target object is displayed at the target position in the preview screen, the terminal device displays the first identifier in the preview screen, where the first identifier indicates the position of the target object in the preview screen. In this way, the position of the target object in the preview screen is marked more intuitively and accurately, so that the user can quickly confirm the position of the target object according to the first identifier.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 101 may be used for receiving and sending signals during message transmission or a call. Specifically, the radio frequency unit 101 receives downlink data from a base station and forwards the downlink data to the processor 110 for processing, and transmits uplink data to the base station. Typically, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through a wireless communication system.
The terminal device 100 provides the user with wireless broadband internet access via the network module 102, such as helping the user send and receive e-mails, browse web pages, and access streaming media.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the network module 102 or stored in the memory 109 into an audio signal and output as sound. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the terminal device 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 includes a speaker, a buzzer, a receiver, and the like.
The input unit 104 is used to receive an audio or video signal. The input unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042. The graphics processor 1041 processes image data of a still picture or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 106. The image frames processed by the graphics processor 1041 may be stored in the memory 109 (or another storage medium) or transmitted via the radio frequency unit 101 or the network module 102. The microphone 1042 may receive sound and process the sound into audio data. In a phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station and output via the radio frequency unit 101.
The terminal device 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 1061 and/or the backlight when the terminal device 100 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the terminal device posture (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration identification related functions (such as pedometer, tapping), and the like; the sensors 105 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
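The horizontal/vertical screen switching mentioned above can be sketched from accelerometer readings: at rest the sensor measures gravity, and comparing its in-plane components indicates how the device is held. The axis convention and classification rule here are illustrative assumptions, not the embodiment's actual logic.

```python
def detect_orientation(ax, ay):
    """Classify device orientation from the two in-plane acceleration
    components (m/s^2) of a stationary accelerometer reading."""
    if abs(ay) >= abs(ax):
        return "portrait"   # gravity mostly along the long (y) axis
    return "landscape"      # gravity mostly along the short (x) axis
```

A device held upright would report gravity mostly on the y axis and be classified as portrait; rotated 90 degrees, gravity shifts to the x axis and the classification flips to landscape.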
The display unit 106 is used to display information input by a user or information provided to the user. The Display unit 106 may include a Display panel 1061, and the Display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an organic light-Emitting Diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the terminal device 100. Specifically, the user input unit 107 includes a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations by a user on or near the touch panel 1071 using a finger, a stylus, or any suitable object or attachment). The touch panel 1071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch orientation of the user, detects a signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 110, and receives and executes commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types, such as a resistive type, a capacitive type, an infrared type, and a surface acoustic wave type. Besides the touch panel 1071, the user input unit 107 may include other input devices 1072. Specifically, the other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 1071 may be overlaid on the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or nearby, the touch panel 1071 transmits the touch operation to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in fig. 10, the touch panel 1071 and the display panel 1061 are two independent components to implement the input and output functions of the terminal device 100, in some embodiments, the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the terminal device 100, and is not limited herein.
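The touch pipeline described above can be sketched as follows: the touch detection device reports a raw signal, the touch controller converts it into touch point coordinates for the processor, and the processor determines the event type and chooses the visual output. All names, the raw-signal format, and the 500 ms long-press threshold are illustrative assumptions.

```python
def touch_controller(raw_signal):
    """Convert a raw detection signal into touch point coordinates."""
    return (raw_signal["px"], raw_signal["py"])

def classify_touch_event(duration_ms):
    """Processor-side event typing: distinguish a long press from a tap."""
    return "long_press" if duration_ms >= 500 else "tap"

def handle_touch(raw_signal, duration_ms):
    """Full path: detection signal -> coordinates -> event type -> response."""
    x, y = touch_controller(raw_signal)
    event = classify_touch_event(duration_ms)
    # The processor would drive a corresponding visual output on the
    # display panel here; this sketch just returns the dispatch result.
    return {"event": event, "at": (x, y)}
```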
The interface unit 108 is an interface for connecting an external device to the terminal device 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the terminal device 100, or may be used to transmit data between the terminal device 100 and the external device.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the terminal device, and the like. Further, the memory 109 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid state storage device.
The processor 110 is a control center of the terminal device 100, connects various parts of the entire terminal device 100 by various interfaces and lines, and performs various functions of the terminal device 100 and processes data by running or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the terminal device 100. The processor 110 may include one or more processing units. Optionally, the processor 110 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may alternatively not be integrated into the processor 110.
The terminal device 100 may further include a power supply 111 (such as a battery) for supplying power to each component, and optionally, the power supply 111 may be logically connected to the processor 110 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system.
In addition, the terminal device 100 includes some functional modules that are not shown, and are not described in detail here.
Optionally, an embodiment of the present invention further provides a terminal device, which includes a processor, a memory, and a computer program stored in the memory and capable of running on the processor, where the computer program, when executed by the processor, implements each process of the foregoing display control method embodiment and can achieve the same technical effect, which is not repeated here to avoid repetition.
Optionally, an embodiment of the present invention further provides an AR device, including a processor, a memory, and a computer program stored in the memory and capable of running on the processor, where the computer program, when executed by the processor, implements the processes of the foregoing method embodiment, and can achieve the same technical effect, and details are not described here to avoid repetition.
Optionally, in this embodiment of the present invention, the electronic device in the above embodiment may be an AR device. Specifically, when the electronic device in the above embodiment (for example, the electronic device shown in fig. 10) is an AR device, the AR device may include all or part of the functional modules in the electronic device. Of course, the AR device may further include a functional module not included in the electronic device.
It is to be understood that, in the embodiment of the present invention, when the electronic device in the above embodiment is an AR device, the electronic device may be an electronic device integrated with AR technology. The AR technology is a technology for realizing the combination of a real scene and a virtual scene. By adopting the AR technology, the visual function of human can be restored, so that human can experience the feeling of combining a real scene and a virtual scene through the AR technology, and further the human can experience the experience of being personally on the scene better.
Taking AR glasses as an example of the AR device, when the user wears the AR glasses, the scene viewed by the user is generated by processing through the AR technology, that is, a virtual scene can be displayed in the real scene in an overlapping manner through the AR technology. When the user operates the content displayed by the AR glasses, the AR glasses can appear to peel away the real scene, thereby displaying a more realistic view to the user. For example, a user can only observe the outer case of a carton by visual inspection, but when wearing AR glasses, the user can directly observe the internal structure of the carton through the AR glasses.
The AR device can comprise the camera, so that the AR device can combine a virtual picture with the picture shot by the camera for display and interaction. For example, in the embodiment of the present invention, the AR device may obtain a preview screen of a camera of the AR device, and when the preview screen includes the target object and the target object is displayed at a target position in the preview screen, display a first identifier in the preview screen, where the first identifier is used to indicate the position of the target object in the preview screen. Therefore, the AR device can mark the position of the target object in the preview picture more intuitively and accurately by adopting the first identifier, so that the user can quickly confirm the position of the target object according to the first identifier.
An embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the display control method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.