CN111815786A - Information display method, device, equipment and storage medium - Google Patents

Information display method, device, equipment and storage medium

Info

Publication number
CN111815786A
CN111815786A
Authority
CN
China
Prior art keywords
determining
image
enhanced
effect
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010621088.4A
Other languages
Chinese (zh)
Inventor
侯欣如
王鼎禄
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd filed Critical Beijing Sensetime Technology Development Co Ltd
Priority to CN202010621088.4A priority Critical patent/CN111815786A/en
Publication of CN111815786A publication Critical patent/CN111815786A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842 Selection of displayed objects or displayed text elements
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G06T15/20 Perspective computation
    • G06T15/205 Image-based rendering

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Human Computer Interaction (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the present application provide an information display method, apparatus, device, and storage medium. The method includes: acquiring a trigger operation input on an image to be enhanced on a display interface, where the image to be enhanced is acquired based on a real scene; in response to the trigger operation, determining position information of the trigger operation on the display interface; determining, according to the position information, a target virtual effect matched with a target object in the image to be enhanced that corresponds to the trigger operation; and superimposing the target virtual effect on the image to be enhanced and presenting the result on the display interface.

Description

Information display method, device, equipment and storage medium
Technical Field
Embodiments of the present application relate to augmented reality display technology, and in particular, but not exclusively, to an information display method, apparatus, device, and storage medium.
Background
Augmented Reality (AR) technology superimposes simulated virtual information (visual content, sound, touch, etc.) onto the real world, so that the real environment and virtual objects are presented on the same screen or in the same space in real time. Optimizing the display effect of the augmented reality scenes presented by AR devices is increasingly important.
Disclosure of Invention
Embodiments of the present application provide an information display method, apparatus, device, and storage medium.
The embodiment of the application provides an information display method, which comprises the following steps:
acquiring a trigger operation input on an image to be enhanced on a display interface; the image to be enhanced is acquired based on a real scene;
in response to the trigger operation, determining position information of the trigger operation on the display interface;
according to the position information, determining a target virtual effect matched with a target object corresponding to the trigger operation in the image to be enhanced;
and superposing the target virtual effect on the image to be enhanced and presenting the target virtual effect on the display interface.
In some possible implementation manners, the determining, according to the position information, a target virtual effect matched with a target object corresponding to the trigger operation in the image to be enhanced includes:
determining the target object according to the position information;
acquiring a three-dimensional virtual model of the target object;
and determining a target virtual effect matched with the three-dimensional virtual model.
In some possible implementations, the determining the target virtual effect matching the three-dimensional virtual model includes:
determining attribute information and function information of the target object;
determining the effect type of the virtual effect of the target object according to the attribute information and/or the function information; wherein the effect types include: virtual tags, non-interactive animations or interactive animations;
and acquiring the target virtual effect corresponding to the effect type according to the effect type.
In some possible implementations, the determining the location information of the trigger operation on the presentation interface includes:
acquiring a screen coordinate system to which a display interface belongs;
determining a first coordinate value of a contact corresponding to the trigger operation in the screen coordinate system;
and determining the first coordinate value as the position information of the trigger operation on the display interface.
In some possible implementations, the determining the target object according to the location information includes:
determining pixel information of the contact according to the first coordinate value;
and determining the target object in the image to be enhanced according to the pixel information.
In some possible implementation manners, the determining, according to the position information, a target virtual effect matched with a target object corresponding to the trigger operation in the image to be enhanced includes:
obtaining a model coordinate system to which the three-dimensional virtual model belongs;
determining a conversion relationship between the model coordinate system and the screen coordinate system;
converting the first coordinate value into a second coordinate value of the contact corresponding to the trigger operation in the model coordinate system according to the conversion relation;
and determining the virtual effect matched with the three-dimensional virtual model at the second coordinate value as the target virtual effect.
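The screen-to-model conversion above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the conversion relationship is assumed to be a known 3x3 homogeneous transform (in practice it would be derived from the camera pose and projection), and the matrix values are purely illustrative.

```python
import numpy as np

def screen_to_model(first_coord, screen_to_model_matrix):
    """Convert a 2D screen-space touch point (first coordinate value)
    into the model coordinate system (second coordinate value) via a
    hypothetical homogeneous transform."""
    x, y = first_coord
    p = np.array([x, y, 1.0])          # homogeneous screen coordinates
    q = screen_to_model_matrix @ p     # apply the conversion relationship
    return (q[0] / q[2], q[1] / q[2])  # de-homogenize

# Illustrative transform: scale screen pixels to model units and
# shift the origin (made-up values).
M = np.array([[0.01, 0.0, -5.0],
              [0.0, 0.01, -3.0],
              [0.0,  0.0,  1.0]])
second_coord = screen_to_model((600, 400), M)
```

With this matrix, the touch point (600, 400) maps to roughly (1.0, 1.0) in model space; the virtual effect attached to the model at that point would then be selected as the target virtual effect.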
In some possible implementations, the superimposing the target virtual effect on the image to be enhanced and presenting the target virtual effect on the presentation interface includes:
determining a display mode matched with the effect type of the target virtual effect;
and displaying the augmented reality image superposed with the target virtual effect on the display interface by adopting the matched display mode.
In some possible implementations, the method further includes:
determining the position information of other objects in the image to be enhanced;
determining the relative position relation between the other objects and the target object according to the position information of the other objects and the position information of the target object;
determining the transparency of the three-dimensional virtual model of the other object according to the relative position relation;
rendering the three-dimensional virtual models of the other objects by adopting the transparency, and displaying the rendered three-dimensional virtual models of the other objects on the display interface.
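A minimal sketch of choosing the other objects' transparency from their relative position to the target object. The depth comparison and alpha values are assumptions for illustration; the patent does not prescribe a specific mapping.

```python
def occlusion_transparency(other_depth, target_depth,
                           min_alpha=0.2, max_alpha=1.0):
    """Pick an alpha for another object's 3D virtual model from its
    position relative to the target object: an object in front of the
    target (smaller depth from the viewer) is rendered more transparent
    so it does not block the target virtual effect. Thresholds are
    illustrative, not taken from the patent."""
    if other_depth < target_depth:   # the other object occludes the target
        return min_alpha             # render it nearly transparent
    return max_alpha                 # behind the target: fully opaque

alpha_front = occlusion_transparency(other_depth=2.0, target_depth=5.0)
alpha_back = occlusion_transparency(other_depth=8.0, target_depth=5.0)
```

The renderer would then draw each other object's three-dimensional virtual model with the returned alpha before presenting the frame on the display interface.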
In some possible implementations, the method further includes:
determining tone information of a target object in the image to be enhanced;
determining the tone of the target virtual effect according to the tone information;
and superposing the target virtual effect with the color tone on the image to be enhanced and presenting the target virtual effect on the display interface.
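The hue-matching step can be sketched as below: estimate the dominant hue of the target object's pixels and reuse it to tint the target virtual effect. The averaging scheme is an assumption; any dominant-color estimate would serve.

```python
import colorsys

def match_effect_hue(target_rgb_pixels):
    """Estimate the dominant hue of the target object's pixels (a list
    of (R, G, B) tuples in 0-255) and return it in [0, 1) for tinting
    the target virtual effect so the overlay blends with the image to
    be enhanced. Simple mean-hue scheme, for illustration only."""
    hues = []
    for r, g, b in target_rgb_pixels:
        h, _, _ = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        hues.append(h)
    return sum(hues) / len(hues)

# Two reddish pixels: the dominant hue comes out at the red end (0.0).
effect_hue = match_effect_hue([(200, 30, 30), (180, 40, 40)])
```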
An embodiment of the present application provides an information display device, the device includes:
the first acquisition module is used for acquiring a trigger operation input on an image to be enhanced on a display interface; the image to be enhanced is acquired based on a real scene;
the first response module is used for responding to the trigger operation and determining the position information of the trigger operation on the display interface;
the first determining module is used for determining a target virtual effect matched with a target object corresponding to the trigger operation in the image to be enhanced according to the position information;
and the first presentation module is used for superposing the target virtual effect on the image to be enhanced and presenting the target virtual effect on the display interface.
In some possible implementations, the first determining module includes:
the first determining submodule is used for determining the target object according to the position information;
the first obtaining submodule is used for obtaining a three-dimensional virtual model of the target object;
and the second determining submodule is used for determining a target virtual effect matched with the three-dimensional virtual model.
In some possible implementations, the second determining sub-module includes:
a first determination unit configured to determine attribute information and function information of the target object;
a second determining unit, configured to determine an effect type of a virtual effect of the target object according to the attribute information and/or the function information; wherein the effect types include: virtual tags, non-interactive animations or interactive animations;
a first obtaining unit, configured to obtain the target virtual effect corresponding to the effect type according to the effect type.
In some possible implementations, the first response module includes:
the second acquisition submodule is used for acquiring a screen coordinate system to which the display interface belongs;
the third determining submodule is used for determining a first coordinate value of a contact corresponding to the trigger operation in the screen coordinate system;
and the fourth determining submodule is used for determining the first coordinate value as the position information of the triggering operation on the display interface.
In some possible implementations, the first determining sub-module includes:
a third determining unit, configured to determine pixel information of the contact according to the first coordinate value;
and the fourth determining unit is used for determining the target object in the image to be enhanced according to the pixel information.
In some possible implementations, the first determining module includes:
the third obtaining submodule is used for obtaining a model coordinate system to which the three-dimensional virtual model belongs;
a fifth determining submodule for determining a conversion relationship between the model coordinate system and the screen coordinate system;
the first conversion submodule is used for converting the first coordinate value into a second coordinate value of the contact corresponding to the trigger operation in the model coordinate system according to the conversion relation;
and the sixth determining submodule is used for determining the virtual effect matched with the three-dimensional virtual model at the second coordinate value as the target virtual effect.
In some possible implementations, the first rendering module includes:
a seventh determining submodule, configured to determine a display manner that is matched with the effect type of the target virtual effect;
and the first presentation submodule is used for presenting the augmented reality image superposed with the target virtual effect on the display interface by adopting the matched display mode.
In some possible implementations, the apparatus further includes:
the second determination module is used for determining the position information of other objects in the image to be enhanced;
a third determining module, configured to determine, according to the position information of the other object and the position information of the target object, a relative position relationship between the other object and the target object;
a fourth determining module, configured to determine, according to the relative position relationship, a transparency of the three-dimensional virtual model of the other object;
and the first rendering module is used for rendering the three-dimensional virtual models of the other objects by adopting the transparency and presenting the rendered three-dimensional virtual models of the other objects on the display interface.
In some possible implementations, the apparatus further includes:
the fifth determining module is used for determining tone information of the target object in the image to be enhanced;
a sixth determining module, configured to determine, according to the hue information, a hue of the target virtual effect;
and the second presentation module is used for superposing the target virtual effect with the color tone on the image to be enhanced and presenting the target virtual effect on the display interface.
Embodiments of the present application provide a computer storage medium, where computer-executable instructions are stored, and after being executed, the computer-executable instructions can implement the above-mentioned method steps.
Embodiments of the present application provide a computer device, where the computer device includes a memory and a processor, where the memory stores computer-executable instructions, and the processor executes the computer-executable instructions on the memory to implement the above-mentioned method steps.
Embodiments of the present application provide a computer program comprising computer instructions for implementing the above-mentioned method steps.
According to the technical solutions provided by the embodiments of the present application, for a trigger operation input on an image to be enhanced, the position information of the trigger operation on a display interface is first determined; then, based on the position information, a target object corresponding to the trigger operation in the image to be enhanced is determined, and a target virtual effect of the target object is matched automatically; finally, the target virtual effect is superimposed on the image to be enhanced and presented on the display interface. In this way, different target virtual effects are presented for trigger operations input by the viewer at different positions on the display interface, so that richer and more accurate augmented reality effects can be provided.
Drawings
FIG. 1A is a schematic diagram of an alternative architecture of an information display system according to an embodiment of the present invention;
fig. 1B is a schematic view of an implementation flow of an information display method according to an embodiment of the present application;
fig. 2 is a schematic flow chart of another implementation of the information display method according to the embodiment of the present application;
fig. 3A is an application scene diagram of an information display method according to an embodiment of the present application;
fig. 3B is another application scenario diagram of the information display method according to the embodiment of the present application;
fig. 3C is a further application scenario diagram of the information display method according to the embodiment of the present application;
FIG. 4 is a schematic diagram of a structure of an information display device according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, specific technical solutions of the present invention will be described in further detail below with reference to the accompanying drawings in the embodiments of the present application. The following examples are intended to illustrate the present application but are not intended to limit the scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
In the following description, the terms "first\second\third" are used only to distinguish similar objects and do not denote a particular order; where permitted, "first\second\third" may be interchanged in a specific order or sequence, so that the embodiments of the invention described herein can be implemented in an order other than that illustrated or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein is for the purpose of describing embodiments of the invention only and is not intended to be limiting of the invention.
Augmented Reality (AR) technology may be applied to an AR device, which may be any electronic device capable of supporting AR functions, including but not limited to AR glasses, a tablet computer, a smart phone, and the like. When the AR device is operated in a real scene, a viewer may see virtual objects superimposed on the real scene through the device, for example a virtual tree superimposed on a real campus playground, or a virtual bird superimposed in the sky. How well virtual objects such as the virtual tree and the virtual bird blend with the real scene determines the presentation effect of the virtual objects in the augmented reality scene.
To facilitate understanding of the embodiments, the AR scene information display method disclosed in the embodiments of the present disclosure is first described in detail. The execution subject of the method may be the above AR device, or another processing device with data processing capability, such as a local server or a cloud server; the embodiments of the present disclosure do not limit this.
An exemplary application of the information display device provided in the embodiments of the present application is described below, and the information display device provided in the embodiments of the present application may be implemented as various types of user terminals such as a notebook computer, a tablet computer, a desktop computer, a set-top box, a mobile device (e.g., a mobile phone, a portable music player, a personal digital assistant, a dedicated messaging device, and a portable game device), and may also be implemented as a server. In the following, an exemplary application will be explained when the device is implemented as a terminal or a server.
Referring to fig. 1A, fig. 1A is an optional schematic structural diagram of an information display system according to an embodiment of the present invention. To support an exemplary application, first, a trigger operation 13 input on an image 12 to be enhanced in a display interface 11 is obtained; then, the position of the trigger operation 13 on the display interface 11 is determined; next, based on the position information, a target object 14 corresponding to the trigger operation in the image 12 to be enhanced is determined, and a target virtual effect 15 of the target object 14 is matched automatically; finally, the target virtual effect 15 is superimposed on the image 12 to be enhanced to obtain an augmented reality image 16, and the augmented reality image 16 is presented on the display interface 11. Trigger operations at different positions in the displayed image thus hit different target objects, and different virtual effects are matched for different trigger positions, so that the augmented reality scene provided to the viewer better meets the viewer's needs.
The method may be applied to a computer device, and in some embodiments, the functions performed by the method may be implemented by a processor in the computer device invoking program code, where the program code may be stored in a computer storage medium.
The embodiment of the present application provides an information display method, which is described in detail below with reference to fig. 1B.
And step S101, acquiring a trigger operation input on the image to be enhanced of the display interface.
In some possible implementations, the image to be enhanced is acquired based on a real scene. The real scene may be an indoor building scene, a street scene, a specific object, or any other scene on which a virtual object can be superimposed; by superimposing the virtual object on the real scene, an augmented reality effect can be presented in the AR device. For example, the image may be a Red Green Blue (RGB) image captured in an outdoor or indoor scene, or an image captured from a video, such as a video screenshot or an interface screenshot. The display interface may be the display unit of any device: a device with a capture function, such as a computer, a mobile phone, a mobile screen with a camera, or a camera, or a device without a capture function, such as a display screen. The trigger operation acts on the image to be enhanced shown in the display interface and may be a touch operation, a click operation, or gaze tracking. In one embodiment, if it is detected that the viewer's gaze stays at the same position on the image to be enhanced for more than a preset duration (e.g., 3 seconds), the gaze dwell is determined to be a trigger operation.
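The gaze-dwell trigger described above can be sketched as follows. The 3-second threshold comes from the text; the spatial tolerance radius and the sample format are assumptions added for illustration.

```python
def is_gaze_trigger(samples, dwell_seconds=3.0, radius_px=10.0):
    """Decide whether a sequence of (timestamp, x, y) gaze samples
    constitutes a trigger operation: the gaze must stay within a small
    radius of its first position for at least `dwell_seconds` (3 s per
    the text; the radius is an assumed tolerance for eye jitter)."""
    if not samples:
        return False
    t0, x0, y0 = samples[0]
    for t, x, y in samples:
        if ((x - x0) ** 2 + (y - y0) ** 2) ** 0.5 > radius_px:
            return False               # gaze drifted off the position
    return samples[-1][0] - t0 >= dwell_seconds

steady = [(0.0, 100, 100), (1.5, 102, 101), (3.2, 101, 99)]
triggered = is_gaze_trigger(steady)    # held > 3 s near one point
```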
Step S102, responding to the trigger operation, and determining the position information of the trigger operation on the display interface.
In some possible implementations, if the trigger operation is a touch or click operation, the coordinate value of the touched or clicked point on the display interface is determined; if the trigger operation is gaze tracking with the gaze staying long enough at the same position on the image to be enhanced, the position where the gaze stays, for example its coordinate value on the display interface, is determined.
Step S103, according to the position information, determining a target virtual effect matched with a target object corresponding to the trigger operation in the image to be enhanced.
In some possible implementations, after the position where the trigger operation occurs is obtained, the pixel of the image to be enhanced at that position is determined, and the target object containing that pixel is identified, so that a target virtual effect matched with the target object can be obtained. In a specific example, if the image to be enhanced is an image collected on a campus that includes a plurality of buildings, after the position of the trigger operation is determined, the position is found to point at building No. 3, so the target virtual effect corresponding to building No. 3 is matched, for example a label introducing basic and functional information such as the name of building No. 3, its construction time, cost, construction unit, and floor area. The type of the target virtual effect is related to the type of the target object. If the target object is a building, the virtual effect may be a label introducing the building's basic and functional information, an animation showing the whole construction process, or an image presenting the internal structure of the building. If the target object is a plant, the virtual effect may be a label introducing basic information such as the plant's name, family, growth condition, and growth environment, or an animation showing the whole growth process. If the target object is an animal, the virtual effect may be a label introducing basic information such as the animal's name, family, growth condition, and growth environment, an animation or video representing the animal's activity, or an interaction with the viewer.
For example, if the trigger operation is the viewer clicking an animal's mouth, the corresponding virtual effect may be the mouth opening or closing.
And step S104, superimposing the target virtual effect on the image to be enhanced and presenting the target virtual effect on the display interface.
In some possible implementations, the target virtual effect is superimposed on the image to be enhanced to form an AR image combining the virtual and the real, and the AR image is presented on the display interface. For example, the image to be enhanced is an image collected on a campus that includes a plurality of buildings; after the position of the trigger operation is determined, the position points at building No. 3, and the target virtual effect corresponding to building No. 3 is a label introducing the basic information of building No. 3. The label is superimposed on the image to be enhanced to form an AR image, and the AR image is presented on the display interface.
In the embodiments of the present application, different target virtual effects are presented for trigger operations input by the viewer at different positions on the display interface, so that richer and more accurate augmented reality effects can be provided, further improving the viewer's experience.
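Steps S101 to S104 can be sketched end to end as below. This is a hypothetical minimal pipeline: the per-position object map, the effect catalog, and the list-based "overlay" stand in for the real scene-understanding model and rendering stack.

```python
# Minimal sketch of steps S101-S104 (illustrative data structures only).

def display_pipeline(click_xy, object_map, effect_catalog, frame):
    # S102: the click position on the display interface is the
    #       position information of the trigger operation
    position = click_xy
    # S103: resolve the target object at that position, then its
    #       matching target virtual effect
    target_object = object_map.get(position)
    if target_object is None:
        return frame                     # nothing to augment
    effect = effect_catalog[target_object]
    # S104: superimpose the target virtual effect on the image to be
    #       enhanced (modeled here as appending to the frame's overlays)
    return frame + [(target_object, effect)]

object_map = {(120, 80): "building_3"}             # assumed lookup table
effect_catalog = {"building_3": "info_label"}      # assumed effect store
augmented = display_pipeline((120, 80), object_map, effect_catalog, frame=[])
```

A click at (120, 80) resolves to "building_3" and its label effect is attached to the frame; a click anywhere else leaves the frame unchanged.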
In some embodiments, to provide an augmented reality effect more fitting to the mind of the viewer, step S103 may be implemented by:
step S131, determining the target object according to the position information.
In some possible implementations, after the position of the trigger operation is acquired, the pixel information at that position is determined, and then the target object containing those pixels is determined. For example, the image to be enhanced is an image collected on a campus that includes a plurality of buildings; after the coordinates of the trigger operation are determined, the pixels at those coordinates are determined, and the target object containing those pixels, namely building No. 3, is identified.
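One way to resolve a pixel to its object, assuming the image to be enhanced has an accompanying per-pixel segmentation map, can be sketched as follows. The segmentation map and label table here are hypothetical stand-ins for the scene-understanding step.

```python
def object_at(position, segmentation, labels):
    """Look up which object a touch position falls on, using a
    per-pixel segmentation map (2D list of integer object ids) and a
    label table. Both inputs are assumed to be produced upstream."""
    x, y = position
    object_id = segmentation[y][x]      # row-major pixel lookup
    return labels.get(object_id)        # None for unlabeled background

seg = [[0, 0, 1],
       [0, 1, 1],
       [2, 2, 2]]
labels = {1: "building_3", 2: "road"}
target = object_at((2, 0), seg, labels)   # pixel (x=2, y=0) has id 1
```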
Step S132, a three-dimensional virtual model of the target object is obtained.
In some possible implementations, the three-dimensional virtual model is used to represent the real scene and is presented in the same coordinate system as the real scene at equal scale. For example, taking the real scene as a street containing a tall building, the three-dimensional virtual model representing the real scene also includes models of the street and of the tall building in it, and the model and the real scene are presented at a 1:1 scale in the same coordinate system; that is, if the three-dimensional virtual model is placed in the world coordinate system of the real scene, it will completely coincide with the real scene. The three-dimensional virtual model of the target object may be created in advance, and the corresponding model may be retrieved directly based on the position information. In some embodiments, a three-dimensional virtual model of a target object can be created by binocular stereo vision measurement. This method simulates the stereo imaging principle of human eyes: left and right cameras at appropriate angles shoot an object in the scene at the same time, the coordinates of points on the object's surface as seen from the two cameras are obtained through the triangular geometric relationship and the parallax principle, and the position and shape of the target object are then reconstructed to obtain the three-dimensional virtual model. In the embodiments of the present application, the three-dimensional virtual model may also be created by other methods, which are not limited here.
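The triangulation underlying binocular stereo measurement can be illustrated with the standard rectified-stereo relations Z = f*B/d, X = x_l*Z/f, Y = y*Z/f, where d = x_l - x_r is the horizontal disparity between the left and right images. The camera parameters below are illustrative, not from the patent.

```python
def stereo_point(xl, xr, y, focal_px, baseline_m):
    """Recover a 3D point from a rectified stereo pair via the parallax
    principle: depth Z = focal * baseline / disparity, then back-project
    the image coordinates. Assumes cameras are rectified and calibrated."""
    disparity = xl - xr                 # horizontal parallax in pixels
    if disparity <= 0:
        raise ValueError("point must have positive disparity")
    z = focal_px * baseline_m / disparity
    x = xl * z / focal_px
    y3 = y * z / focal_px
    return (x, y3, z)

# A point seen at x=100 px (left) and x=90 px (right), with an assumed
# 500 px focal length and 10 cm baseline, lies 5 m from the cameras.
point = stereo_point(xl=100.0, xr=90.0, y=50.0, focal_px=500.0, baseline_m=0.1)
```

Repeating this for many matched surface points yields the point cloud from which the target object's position and shape are reconstructed.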
And step S133, determining a target virtual effect matched with the three-dimensional virtual model.
In some possible implementations, different target objects require different three-dimensional virtual models, and the matched virtual effects differ accordingly.
Steps S131 to S133 above retrieve the special effect that a virtual object presents in the real scene through the preset position information of the virtual object in the three-dimensional virtual model representing the real scene. Because the three-dimensional virtual model can represent the real scene, a virtual object constructed based on that model blends better into the real scene; by determining, from the position information of the virtual object in the three-dimensional virtual model, the virtual effect matched with that position information, a realistic augmented reality effect can be presented in the AR device.
When the execution subject of the above process is a processor deployed on the AR device, the AR scene image may be displayed directly by the AR device after the virtual effect corresponding to the position information of the trigger operation is determined in the above manner. When the execution subject is deployed on a cloud platform server, after the virtual effect corresponding to the position information of the trigger operation is determined, the target virtual effect may be sent to the AR device, and the AR scene image is then displayed by the AR device.
In some embodiments, step S133 may be implemented by:
Firstly, attribute information and function information of the target object are determined. The attribute information includes basic information such as the name and components of the target object; for example, if the target object is a building, the attribute information includes the building name, construction time, accommodation capacity, floor area, and the like. The function information describes the functions the target object provides; for example, if the target object is a vehicle, the function information includes the functions each part of the vehicle can realize.
Then, the effect type of the virtual effect of the target object is determined according to the attribute information and/or the function information, where the effect types include: virtual label, non-interactive animation, video, or interactive animation. In some possible implementations, the virtual effect type of the target object is determined to be a virtual label, a non-interactive animation, a video, or the like according to the attribute information, and is determined to be an interactive animation according to the function information.
And finally, acquiring the target virtual effect corresponding to the effect type according to the effect type.
In some possible implementations, each type of virtual effect may be pre-configured and invoked based on the virtual effect type of the target object. For example, if the effect type is a virtual label, a label-class virtual effect is created, as shown in fig. 3C; if the effect type is an animation, an animation-class virtual effect is created, as shown in fig. 3A and fig. 3B. For example, the image to be enhanced is a photographed street on Christmas Day that includes posters of Santa Claus and the like; when a viewer clicks a Santa Claus poster in the image, the click position and the target object at that position, namely the poster, are determined, and the target virtual effect may be a walking Santa Claus: the Santa Claus 322 displayed on a road 321 in the real scene image may move from position A shown in fig. 3A to position B shown in fig. 3B according to a preset motion trajectory.
In the embodiments of the present application, different types of virtual effects are determined based on different attribute information or function information, so that diverse virtual effects can be displayed, an augmented reality effect that better meets the viewer's expectations can be provided, and the viewing experience is improved.
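The dispatch just described — attribute information selecting non-interactive effect types, function information selecting interactive ones, and each type looked up in a pre-configured effect store — can be sketched as follows. The dictionary keys, labels, and selection rules are illustrative assumptions, not the application's concrete scheme:

```python
# Hypothetical pre-configured store: one effect asset per effect type.
PRECONFIGURED_EFFECTS = {
    "virtual_label": "label-overlay",
    "non_interactive_animation": "looping-animation",
    "video": "video-clip",
    "interactive_animation": "walking-character",
}

def choose_effect_type(attribute_info, function_info):
    """Map attribute/function information to an effect type."""
    # Function information (what the object can do) yields an interactive
    # animation; otherwise fall back to the attribute information.
    if function_info:
        return "interactive_animation"
    if attribute_info.get("kind") == "building":
        return "virtual_label"
    return "non_interactive_animation"

def get_target_effect(attribute_info, function_info):
    """Determine the effect type, then retrieve the matching effect."""
    effect_type = choose_effect_type(attribute_info, function_info)
    return effect_type, PRECONFIGURED_EFFECTS[effect_type]

print(get_target_effect({"kind": "building", "name": "HQ tower"}, None))
```

A building with no function information thus receives a label-class effect, while any object with function information receives an interactive animation.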
Based on the above, virtual effects of different effect types are presented for different target objects. Superimposing the target virtual effect on the image to be enhanced and presenting it on the display interface may then be implemented as follows:
first, a display mode matching with the effect type of the target virtual effect is determined.
In some possible implementations, different effect types correspond to different display modes. For example, if the effect type is a virtual label, it is displayed as an image; if the effect type is a non-interactive or interactive animation, it is displayed as a video.
And then, presenting an augmented reality image superposed with the target virtual effect on the display interface by adopting the matched display mode.
In some possible implementations, the target virtual effect is superimposed on the image to be enhanced to obtain an enhanced display image, and the enhanced display image is presented on the display interface. The target virtual effect may be superimposed in a background area of the image to be enhanced, in an area adjacent to the target object, and so on. In this way, different display modes are adopted for different types of virtual effects, providing a richer visual experience for the viewer.
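The superimposition step can be illustrated with a minimal alpha-compositing sketch. Here images are tiny nested lists of grey values and the blend factor, region, and sizes are all assumptions chosen for readability:

```python
# Hedged sketch: blend a virtual-effect patch into the image to be enhanced
# at a chosen region (e.g. an area adjacent to the target object).

def overlay(base, effect, top, left, alpha=0.6):
    """Blend `effect` into `base` at (top, left): out = a*effect + (1-a)*base."""
    out = [row[:] for row in base]                 # copy, keep base intact
    for i, effect_row in enumerate(effect):
        for j, e in enumerate(effect_row):
            b = base[top + i][left + j]
            out[top + i][left + j] = round(alpha * e + (1 - alpha) * b)
    return out

base = [[10, 10, 10], [10, 10, 10], [10, 10, 10]]  # image to be enhanced
effect = [[200]]                                    # 1x1 virtual-effect patch
blended = overlay(base, effect, top=1, left=1)
print(blended)
```

The resulting enhanced display image keeps the real-scene pixels everywhere except the superimposed region, which is what is then presented on the display interface.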
In some embodiments, in order to accurately determine the pixel point on which the trigger operation acts, step S102 may be implemented by the following steps. Referring to fig. 2, which is another implementation flow diagram of the information display method provided in the embodiments of the present application, the following description is made with reference to fig. 1B:
step S201, acquiring a screen coordinate system to which the display interface belongs.
In some possible implementations, the coordinate system in which the display interface is currently located is determined as the screen coordinate system; in some implementations, the screen coordinate system may also be a world coordinate system.
Step S202, determining a first coordinate value of a contact corresponding to the trigger operation in the screen coordinate system.
In some possible implementations, in the screen coordinate system, a coordinate value of a touch point corresponding to the trigger operation, that is, a first coordinate value, is determined. For example, if the trigger operation is a touch operation, determining a coordinate value of a touch point in a screen coordinate system; if the triggering operation is a clicking operation, determining a coordinate value of a clicked point in a screen coordinate system; and if the triggering operation is sight tracking, determining the coordinate value of the tracked point in the screen coordinate system.
Step S203, determining the first coordinate value as the position information of the trigger operation on the display interface.
In some possible implementations, the first coordinate value is position information of the trigger operation on the display interface.
Steps S201 to S203 above provide a way of implementing "determining the position information of the trigger operation on the display interface": the coordinate value of the touch point of the trigger operation is determined in the screen coordinate system, so that the pixel of the touch point can be determined accurately.
In some embodiments, after determining the first coordinate value of the touch point corresponding to the trigger operation in the screen coordinate system, determining the target object according to the position information may be implemented by:
firstly, pixel information of the contact is determined according to the first coordinate value.
In some possible implementations, after obtaining the coordinate values of the touch point triggering the operation in the screen coordinate system, based on the coordinate values, the pixel information of the touch point in the image to be enhanced, for example, the pixel value of this point, is determined.
Then, according to the pixel information, the target object is determined in the image to be enhanced.
In some possible implementations, after the pixel value of the touch point is obtained, the object to which that pixel belongs is determined in the image to be enhanced, thereby obtaining the target object. In this way, the real object corresponding to the coordinates of the trigger operation is determined, so that a virtual effect specific to that object can be matched, fulfilling the aim of matching different virtual effects to different objects.
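One way to realize the pixel-to-object step above is a per-pixel label map of the image to be enhanced, where the first coordinate value indexes directly into the map. The mask, labels, and coordinates below are illustrative assumptions, not the application's data:

```python
# Hypothetical per-pixel segmentation of the image to be enhanced:
# each entry is an object id; labels name the real objects.
segmentation_mask = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [2, 2, 2, 2],
]
object_labels = {0: "sky", 1: "building_3", 2: "road"}

def target_object_at(mask, labels, x, y):
    """Return the object under the touch point; x is the column, y the row."""
    return labels[mask[y][x]]

# A touch at screen pixel (x=3, y=0) lands on building_3:
print(target_object_at(segmentation_mask, object_labels, x=3, y=0))
```

Any mechanism that maps a pixel to a real object (segmentation, ray casting against the three-dimensional virtual model, etc.) would serve the same role here.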
In some embodiments, after the first coordinate value of the touch point of the trigger operation in the screen coordinate system is determined as above, in order to accurately match the target virtual effect of the target object, step S103 may be implemented as follows:
firstly, obtaining a model coordinate system to which the three-dimensional virtual model belongs.
In some possible implementation manners, after the created three-dimensional virtual model is called, a model coordinate system in which the three-dimensional virtual model is located is determined. The model coordinate system is different from the screen coordinate system, but a conversion relationship exists between the two.
And secondly, determining a conversion relation between the model coordinate system and the screen coordinate system.
In some possible implementations, the screen coordinate system and the model coordinate system are first compared to determine the rotation vector and the translation vector of the AR device in the screen coordinate system relative to the model coordinate system.
And thirdly, converting the first coordinate value into a second coordinate value of the contact corresponding to the trigger operation in the model coordinate system according to the conversion relation.
In some possible implementations, the first coordinate value is rotated using the rotation vector and translated using the translation vector, and its position in the model coordinate system, that is, the second coordinate value, is thereby determined.
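The rotate-then-translate conversion just described can be sketched in the plane as follows; the rotation angle and translation are illustrative assumptions (a full implementation would use the 3-D rotation and translation recovered from the device pose):

```python
# Minimal planar sketch of converting a first coordinate value in the screen
# coordinate system into a second coordinate value in the model coordinate
# system: apply the rotation, then the translation.
import math

def screen_to_model(point, angle_rad, translation):
    """Rotate `point` by `angle_rad`, then shift it by `translation`."""
    x, y = point
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    x_rot, y_rot = c * x - s * y, s * x + c * y   # rotation
    tx, ty = translation
    return x_rot + tx, y_rot + ty                 # translation

# A touch point at (100, 50) under an assumed 90° rotation and (10, -5) shift:
second = screen_to_model((100.0, 50.0), math.pi / 2, (10.0, -5.0))
print(tuple(round(v, 1) for v in second))
```

The second coordinate value obtained this way is what step four below uses to look up the matching three-dimensional virtual model.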
The first to third steps described above implement the alignment process of the model coordinate system and the screen coordinate system, which may be implemented in some embodiments by:
the space where the target object and the AR device are located may be understood as a real space, and the image to be enhanced including the target object may be understood as a pixel space; the target virtual effect corresponds to a virtual space. The corresponding relation between the pixel space and the real space can be determined according to the distance between the target object and the AR equipment and the parameters of the AR equipment; the corresponding relation between the real space and the virtual space can be determined by the parameters of the display device and the parameters of the target virtual effect. After the corresponding relationship between the pixel space and the real space and the corresponding relationship between the real space and the virtual space are determined, the corresponding relationship between the pixel space and the virtual space, that is, the mapping relationship between the image to be enhanced and the virtual space can be determined.
In some embodiments, the mapping relationship between the image to be enhanced and the virtual space may be determined with a position of the target virtual effect in the virtual space as a reference point.
Firstly, determining a proportional relation n between the unit pixel distance of the image to be enhanced and the unit distance of the virtual space.
Wherein, the unit pixel distance refers to the corresponding size or length of each pixel; the virtual space unit distance refers to a unit size or a unit length in the virtual space.
In one example, the determination may be made by determining a first proportional relationship between a unit pixel distance of the image to be enhanced and a real space unit distance, and a second proportional relationship between the real space unit distance and a virtual space unit distance. The real space unit distance refers to a unit size or a unit length in the real space.
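Composing the two proportional relationships just described gives the pixel-to-virtual scale directly: a first relation n1 from unit pixel distance to real-space unit distance, a second relation n2 from real-space unit distance to virtual-space unit distance, hence n = n1 · n2. A minimal sketch, with assumed numeric values:

```python
# Sketch of composing the two proportional relationships into the overall
# scale n between the image to be enhanced and the virtual space.

def pixel_to_virtual_scale(n1_pixel_to_real, n2_real_to_virtual):
    """n = n1 * n2: virtual-space units per image pixel."""
    return n1_pixel_to_real * n2_real_to_virtual

# e.g. 1 px corresponds to 0.02 m in the real scene, and 1 real metre
# corresponds to 0.5 virtual-space units:
print(pixel_to_virtual_scale(0.02, 0.5))
```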
And fourthly, determining the virtual effect matched with the three-dimensional virtual model corresponding to the second coordinate value as the target virtual effect.
In some possible implementations, after the second coordinate value is determined, the three-dimensional virtual model located at the second coordinate value is determined, and the virtual effect matched with that model is invoked to obtain the target virtual effect.
In the embodiments of the present application, the touch point coordinates (i.e., screen coordinates) are converted into the coordinate system of the three-dimensional virtual model, and the label corresponding to that three-dimensional virtual model is invoked and displayed, thereby determining the virtual effect matched with the target object.
In some embodiments, for a real object in the image to be enhanced, the corresponding three-dimensional virtual model may, depending on the scene requirements, be rendered transparent (though this is not a limitation), which may be implemented through the following process:
firstly, determining the position information of other objects in the image to be enhanced.
In some possible implementations, the coordinates of the other objects (i.e., the other real objects) in the image to be enhanced are determined in the screen coordinate system. For example, the image to be enhanced is acquired on a campus and includes a plurality of buildings (for example, Building 3 to Building 6). After the position of the trigger operation is determined, it is found that the position points to Building 3; that is, the target object is Building 3, Building 4 to Building 6 are the other objects, and the coordinates of Building 4 to Building 6 are determined in the screen coordinate system.
And secondly, determining the relative position relation between the other objects and the target object according to the position information of the other objects and the position information of the target object.
In some possible implementations, the relative positional relationship between another object and the target object is determined by comparing their coordinates in the screen coordinate system. For example, the relative positional relationship of Building 3 and Building 4 in the image to be enhanced is determined by comparing their coordinates in the screen coordinate system: for instance, Building 4 is in front of Building 3, that is, Building 3 is completely or partially occluded.
And thirdly, determining the transparency of the three-dimensional virtual models of the other objects according to the relative position relation.
In some possible implementations, if the relative positional relationship indicates that the target object is occluded by another object, the transparency of that object's three-dimensional virtual model may be set according to the degree of occlusion, for example to transparent or semi-transparent. If the relative positional relationship indicates that another object is far from the target object, for example a tree far from Building 3, the tree may be set to transparent or left non-transparent. In other implementations, if another object is adjacent to the target object at the position where the target virtual effect is superimposed, the transparency of that object's three-dimensional virtual model may be set to transparent so as not to obscure the target virtual effect.
And fourthly, rendering the three-dimensional virtual models of the other objects by adopting the transparency, and displaying the rendered three-dimensional virtual models of the other objects on the display interface.
For example, if the relative positional relationship is such that the target object is occluded by other objects, the three-dimensional virtual models of those objects can be rendered as transparent models, so that from the viewer's perspective the three-dimensional virtual model of the target object can be seen, preventing the target object from being occluded; the rendered three-dimensional virtual models of the other objects are presented to the viewer, improving the visual experience.
In the embodiments of the present application, the three-dimensional virtual model corresponding to a real object that does not need to be displayed is set as a transparent model, thereby realizing adaptive adjustment of the transparency of the three-dimensional virtual model.
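The transparency rules described above can be condensed into a small decision function. The relation labels, threshold, and alpha values below are assumptions chosen for illustration, not values from the application:

```python
# Hedged sketch: choose the alpha of another object's 3-D virtual model
# from its relative positional relationship to the target object.

def model_alpha(relation, occlusion_ratio=0.0):
    """Return an alpha in [0, 1]; 0.0 renders the model fully transparent."""
    if relation == "occludes_target":
        # full occlusion -> fully transparent; partial -> semi-transparent
        return 0.0 if occlusion_ratio >= 1.0 else 0.5
    if relation == "adjacent_at_effect_position":
        return 0.0   # never obscure the target virtual effect
    if relation == "far_from_target":
        return 1.0   # distant objects may stay opaque
    return 1.0

print(model_alpha("occludes_target", occlusion_ratio=1.0))
```

Each of the other objects' models is then rendered with its chosen alpha and presented on the display interface.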
In some embodiments, to achieve adaptive adjustment of virtual effect hue, the method further comprises the steps of:
firstly, determining tone information of a target object in the image to be enhanced.
In some possible implementations, the hue information includes RGB color components, neutral color, base color, hue, chroma, lightness, and the like. It is understood that the color tone information such as the color and the brightness of the target object is acquired.
And secondly, determining the tone of the target virtual effect according to the tone information.
In some possible implementations, the target virtual effect is set to a hue that is similar to (e.g., belongs to the same color family as) the hue of the target object. The original hue information of the target virtual effect may be adjusted by the hue information of the target object, or the target virtual effect may be re-rendered based on the hue information of the target object, so that the hue of the target virtual effect is similar to the hue of the target object.
And thirdly, superposing the target virtual effect with the color tone on the image to be enhanced and presenting the target virtual effect on the display interface.
In some possible implementations, the target virtual effect after re-rendering or hue adjustment is superimposed on the image to be enhanced, and the augmented reality image is presented on the display interface. Thus, by acquiring the hue information of the target object, adaptive adjustment of the virtual effect's hue is realized.
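One concrete way to make the effect's hue "similar to" the target object's, as described above, is to move the effect colour into the target's hue family while keeping the effect's own saturation and brightness. This is only an illustrative sketch using the standard library's colorsys module; the application does not prescribe a specific colour model:

```python
# Hedged sketch: replace the virtual effect's hue with the target object's
# hue, preserving the effect's own saturation and value (HSV model).
import colorsys

def match_hue(effect_rgb, target_rgb):
    """Return effect_rgb re-toned into the target object's hue family."""
    target_hue, _, _ = colorsys.rgb_to_hsv(*(c / 255.0 for c in target_rgb))
    _, eff_sat, eff_val = colorsys.rgb_to_hsv(*(c / 255.0 for c in effect_rgb))
    r, g, b = colorsys.hsv_to_rgb(target_hue, eff_sat, eff_val)
    return tuple(round(c * 255) for c in (r, g, b))

# A pure-red effect re-toned toward a green target object stays equally
# bright and saturated but joins the target's colour family:
print(match_hue((255, 0, 0), (0, 200, 0)))
```

Applying this per pixel (or to the effect's material colours) before superimposition yields the hue-matched target virtual effect.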
In the following, an exemplary application of the embodiments of the present application in a practical application scenario will be described.
In the embodiments of the present application, when an object on the screen is clicked, the touch point coordinates (screen coordinates) are converted into the coordinate system of the three-dimensional virtual model, and the label corresponding to that model is invoked and displayed. Note that the RGB image corresponding to the real object is displayed on the screen, while the three-dimensional virtual model corresponding to the real object may be (but is not limited to being) transparent. In this way, clicking an object in the display area to trigger the corresponding virtual effect may be implemented by detecting the pixel point of the touch operation on the touch screen, determining the corresponding reconstructed three-dimensional virtual model based on the triggered pixel point, and displaying the preset virtual effect corresponding to that model. As shown in fig. 3C, which is an application scenario diagram of the information display method provided in the embodiments of the present application, the following description is made with reference to fig. 3C:
image 301 is an RGB image captured based on a real scene.
When the target object of the click operation is the building 302, the virtual effect matched with the building 302 is determined, namely the label 303 introducing the building 302. As shown in fig. 3C, the label gives the name of the building 302, "×× International Group Headquarters", and introduction information about its developer: "registered capital of 5 million yuan RMB, total assets of 182 million yuan, 40 affiliated secondary business organizations", "more than 30 offices and branch companies established overseas, developing business in more than 100 countries and regions in the world", "annual business income exceeds 200 million", and "evaluated as one of the 'world's largest 250 contractors' by XX News for 23 consecutive years"; important information is highlighted in the label 303, such as "182 million", "30+", "200 million", and "250". The label 303 is then superimposed on the image 301 and presented in the display interface, so that superimposition of the corresponding virtual effect is triggered intelligently by the trigger operation, generating a vivid AR special effect.
An information display device is provided in an embodiment of the present application, fig. 4 is a schematic diagram of a composition structure of the information display device in the embodiment of the present application, and as shown in fig. 4, the information display device 400 includes:
a first obtaining module 401, configured to obtain a trigger operation input on an image to be enhanced on a display interface; the image to be enhanced is acquired based on a real scene;
a first response module 402, configured to determine, in response to the trigger operation, location information of the trigger operation on the presentation interface;
a first determining module 403, configured to determine, according to the location information, a target virtual effect matched with a target object corresponding to the trigger operation in the image to be enhanced;
a first presenting module 404, configured to superimpose the target virtual effect on the image to be enhanced and present the target virtual effect on the display interface.
In some possible implementations, the first determining module 403 includes:
the first determining submodule is used for determining the target object according to the position information;
the first obtaining submodule is used for obtaining a three-dimensional virtual model of the target object;
and the second determining submodule is used for determining a target virtual effect matched with the three-dimensional virtual model.
In some possible implementations, the second determining sub-module includes:
a first determination unit configured to determine attribute information and function information of the target object;
a second determining unit, configured to determine an effect type of a virtual effect of the target object according to the attribute information and/or the function information; wherein the effect types include: virtual tags, non-interactive animations or interactive animations;
a first obtaining unit, configured to obtain the target virtual effect corresponding to the effect type according to the effect type.
In some possible implementations, the first response module 402 includes:
the second acquisition submodule is used for acquiring a screen coordinate system to which the display interface belongs;
the third determining submodule is used for determining a first coordinate value of a contact corresponding to the trigger operation in the screen coordinate system;
and the fourth determining submodule is used for determining the first coordinate value as the position information of the triggering operation on the display interface.
In some possible implementations, the first determining sub-module includes:
a third determining unit, configured to determine pixel information of the contact according to the first coordinate value;
and the fourth determining unit is used for determining the target object in the image to be enhanced according to the pixel information.
In some possible implementations, the first determining module 403 includes:
the third obtaining submodule is used for obtaining a model coordinate system to which the three-dimensional virtual model belongs;
a fifth determining submodule for determining a conversion relationship between the model coordinate system and the screen coordinate system;
the first conversion submodule is used for converting the first coordinate value into a second coordinate value of the contact corresponding to the trigger operation in the model coordinate system according to the conversion relation;
and the sixth determining submodule is used for determining the virtual effect matched with the three-dimensional virtual model of the second coordinate value as the target virtual effect.
In some possible implementations, the first presenting module 404 includes:
a seventh determining submodule, configured to determine a display manner that is matched with the effect type of the target virtual effect;
and the first presentation submodule is used for presenting the augmented reality image superposed with the target virtual effect on the display interface by adopting the matched display mode.
In some possible implementations, the apparatus further includes:
the second determination module is used for determining the position information of other objects in the image to be enhanced;
a third determining module, configured to determine, according to the position information of the other object and the position information of the target object, a relative position relationship between the other object and the target object;
a fourth determining module, configured to determine, according to the relative position relationship, a transparency of the three-dimensional virtual model of the other object;
and the first rendering module is used for rendering the three-dimensional virtual models of the other objects by adopting the transparency and presenting the rendered three-dimensional virtual models of the other objects on the display interface.
In some possible implementations, the apparatus further includes:
the fifth determining module is used for determining tone information of the target object in the image to be enhanced;
a sixth determining module, configured to determine, according to the hue information, a hue of the target virtual effect;
and the second presentation module is used for superposing the target virtual effect with the color tone on the image to be enhanced and presenting the target virtual effect on the display interface.
It should be noted that the above description of the embodiment of the apparatus, similar to the above description of the embodiment of the method, has similar beneficial effects as the embodiment of the method. For technical details not disclosed in the embodiments of the apparatus of the present application, reference is made to the description of the embodiments of the method of the present application for understanding.
It should be noted that, in the embodiment of the present application, if the information display method is implemented in the form of a software functional module and sold or used as a standalone product, the information display method may also be stored in a computer readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application may be essentially implemented or portions thereof contributing to the prior art may be embodied in the form of a software product stored in a storage medium, and including several instructions for causing a computer device (which may be a terminal, a server, etc.) to execute all or part of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read Only Memory (ROM), a magnetic disk, or an optical disk. Thus, embodiments of the present application are not limited to any specific combination of hardware and software.
Correspondingly, an embodiment of the present application further provides a computer program product, where the computer program product includes computer-executable instructions, and the computer-executable instructions are used to implement the steps in the information display method provided in the embodiment of the present application.
Accordingly, an embodiment of the present application further provides a computer storage medium, where computer-executable instructions are stored on the computer storage medium, and the computer-executable instructions are used to implement the steps of the information display method provided in the foregoing embodiment.
Accordingly, an embodiment of the present application provides a computer device, fig. 5 is a schematic structural diagram of the computer device in the embodiment of the present application, and as shown in fig. 5, the computer device 500 includes: a processor 501, at least one communication interface 502, a memory 503, and at least one communication bus 504. Wherein the communication bus 504 is configured to enable connective communication between these components. Where the user interface may include a display screen and the communication interface 502 may include standard wired and wireless interfaces. The processor 501 is configured to execute an image processing program in a memory to implement the steps of the information display method provided in the above embodiments.
The above description of the computer device and storage medium embodiments is similar to the description of the method embodiments above, with similar beneficial effects as the method embodiments. For technical details not disclosed in the embodiments of the computer device and the storage medium of the present application, reference is made to the description of the embodiments of the method of the present application for understanding.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in the various embodiments of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application. The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; can be located in one place or distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiments of the present application.
In addition, all functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may be separately regarded as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that all or part of the steps of the method embodiments may be implemented by hardware related to program instructions. The program may be stored in a computer-readable storage medium and, when executed, performs the steps of the method embodiments. The storage medium includes various media that can store program code, such as a removable storage device, a read-only memory (ROM), a magnetic disk, or an optical disk. Alternatively, if the integrated units described above are implemented in the form of software functional modules and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application, or the portions thereof that contribute to the prior art, may be embodied in the form of a software product stored in a storage medium and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present application. The storage medium includes a removable storage device, a ROM, a magnetic disk, an optical disk, or other media that can store program code.
The above description covers only specific embodiments of the present application, but the protection scope of the present application is not limited thereto; any changes or substitutions readily conceivable by a person skilled in the art within the technical scope disclosed in the present application shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (12)

1. An information display method, characterized in that the method comprises:
acquiring a trigger operation input on an image to be enhanced on a display interface, the image to be enhanced being captured from a real scene;
in response to the trigger operation, determining position information of the trigger operation on the display interface;
according to the position information, determining a target virtual effect matched with a target object corresponding to the trigger operation in the image to be enhanced;
and superimposing the target virtual effect on the image to be enhanced and presenting the result on the display interface.
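By way of illustration only (not part of the claimed subject matter), the four steps of claim 1 — acquiring a trigger operation, determining its position on the display interface, matching a virtual effect to the touched object, and superimposing that effect — can be sketched as follows; all data structures and names here are hypothetical:

```python
# Hypothetical sketch of the claimed flow: trigger -> position -> effect -> overlay.
def handle_trigger(touch_event, image_to_enhance, effect_table):
    # Steps 1-2: determine the position of the trigger on the display interface.
    position = (touch_event["x"], touch_event["y"])
    # Step 3: match a virtual effect to the target object at that position.
    target_object = image_to_enhance["objects"].get(position)
    effect = effect_table.get(target_object, "none")
    # Step 4: superimpose the effect on the image and present the result.
    return {"image": image_to_enhance["pixels"], "overlay": effect, "at": position}
```

In practice the position-to-object lookup would be performed against a recognized scene model rather than a dictionary, as the dependent claims elaborate.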
2. The method according to claim 1, wherein the determining, according to the position information, a target virtual effect matched with a target object corresponding to the trigger operation in the image to be enhanced comprises:
determining the target object according to the position information;
acquiring a three-dimensional virtual model of the target object;
and determining a target virtual effect matched with the three-dimensional virtual model.
3. The method of claim 2, wherein determining the target virtual effect that matches the three-dimensional virtual model comprises:
determining attribute information and function information of the target object;
determining an effect type of the virtual effect of the target object according to the attribute information and/or the function information; wherein the effect type includes: a virtual tag, a non-interactive animation, or an interactive animation;
and acquiring the target virtual effect corresponding to the effect type according to the effect type.
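The selection in claim 3 of an effect type from attribute and/or function information can be illustrated with a simple mapping; the specific rules below are assumptions for illustration, since the claim does not fix them:

```python
def choose_effect_type(attribute_info, function_info):
    """Map an object's attribute/function information to one of the three
    effect types named in claim 3. The decision rules are illustrative only."""
    if function_info == "interactive":
        # Objects the user can operate get an interactive animation.
        return "interactive animation"
    if attribute_info == "dynamic":
        # Moving objects get a non-interactive animation.
        return "non-interactive animation"
    # Everything else falls back to a descriptive virtual tag.
    return "virtual tag"
```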
4. The method of claim 2, wherein the determining the position information of the trigger operation on the display interface comprises:
acquiring a screen coordinate system to which the display interface belongs;
determining a first coordinate value of a contact corresponding to the trigger operation in the screen coordinate system;
and determining the first coordinate value as the position information of the trigger operation on the display interface.
5. The method of claim 4, wherein the determining the target object according to the position information comprises:
determining pixel information of the contact according to the first coordinate value;
and determining the target object in the image to be enhanced according to the pixel information.
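Claim 5's pixel-based identification of the target object can be pictured as a lookup in a per-pixel label mask (for example, produced by a segmentation step); `mask` and `labels` are hypothetical inputs, not elements of the claim:

```python
def object_at(mask, x, y, labels):
    """Look up which object a contact lands on, using a per-pixel label mask.
    `mask` is a row-major 2D list of integer object ids (the pixel information
    at the contact point); `labels` maps ids to object names."""
    object_id = mask[y][x]        # pixel information at the first coordinate value
    return labels.get(object_id)  # the target object, or None for background
```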
6. The method according to claim 4, wherein the determining, according to the position information, a target virtual effect matched with a target object corresponding to the trigger operation in the image to be enhanced comprises:
obtaining a model coordinate system to which the three-dimensional virtual model belongs;
determining a conversion relationship between the model coordinate system and the screen coordinate system;
converting the first coordinate value into a second coordinate value of the contact corresponding to the trigger operation in the model coordinate system according to the conversion relation;
and determining, as the target virtual effect, a virtual effect that matches the three-dimensional virtual model at the second coordinate value.
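The conversion in claim 6 from the first (screen) coordinate value to the second (model) coordinate value can be expressed as a homogeneous transformation; in a real AR pipeline the conversion relationship would be derived from the camera pose and projection, so the matrices used below are illustrative only:

```python
def to_model_coords(first_coord, screen_to_model):
    """Convert a screen-space contact point (first coordinate value) into the
    model coordinate system (second coordinate value) using a 3x3 homogeneous
    matrix that encodes the claimed conversion relationship."""
    x, y = first_coord
    m = screen_to_model
    xh = m[0][0] * x + m[0][1] * y + m[0][2]
    yh = m[1][0] * x + m[1][1] * y + m[1][2]
    w = m[2][0] * x + m[2][1] * y + m[2][2]
    # Perspective divide yields the coordinate in the model coordinate system.
    return (xh / w, yh / w)
```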
7. The method according to any one of claims 1 to 6, wherein the superimposing the target virtual effect on the image to be enhanced and presenting the target virtual effect on the display interface comprises:
determining a display mode matched with the effect type of the target virtual effect;
and displaying, on the display interface in the matched display mode, the augmented reality image on which the target virtual effect is superimposed.
8. The method of any one of claims 1 to 6, further comprising:
determining the position information of other objects in the image to be enhanced;
determining a relative positional relationship between the other objects and the target object according to the position information of the other objects and the position information of the target object;
determining a transparency of the three-dimensional virtual models of the other objects according to the relative positional relationship;
and rendering the three-dimensional virtual models of the other objects with the transparency, and displaying the rendered three-dimensional virtual models on the display interface.
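A minimal reading of claim 8 is depth-based occlusion handling: another object's three-dimensional model is rendered semi-transparent when it lies in front of the target object, so the target stays visible. The specific alpha values below are assumptions, not taken from the claim:

```python
def occluder_alpha(other_depth, target_depth):
    """Choose a transparency for another object's 3D model from its position
    relative to the target object: objects in front of the target (smaller
    depth, i.e. closer to the camera) are rendered semi-transparent.
    The 0.3 / 1.0 values are illustrative choices."""
    in_front = other_depth < target_depth
    return 0.3 if in_front else 1.0
```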
9. The method of any one of claims 1 to 6, further comprising:
determining color tone information of the target object in the image to be enhanced;
determining a color tone of the target virtual effect according to the color tone information;
and superimposing the target virtual effect having the determined color tone on the image to be enhanced and presenting the result on the display interface.
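Claim 9's matching of the effect's color tone to the target object can be sketched by averaging the object's pixels and reusing the resulting hue; the averaging strategy is an assumption for illustration, since the claim only requires that the effect's tone be derived from the object's tone information:

```python
import colorsys

def matched_hue(rgb_pixels):
    """Derive a hue for the virtual effect from the target object's pixels:
    average the 8-bit RGB values, convert to HSV, and reuse the hue."""
    n = len(rgb_pixels)
    r = sum(p[0] for p in rgb_pixels) / n
    g = sum(p[1] for p in rgb_pixels) / n
    b = sum(p[2] for p in rgb_pixels) / n
    hue, _, _ = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return hue  # in [0, 1); multiply by 360 for degrees
```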
10. An information display apparatus, characterized in that the apparatus comprises:
a first acquisition module configured to acquire a trigger operation input on an image to be enhanced on a display interface, the image to be enhanced being captured from a real scene;
a first response module configured to determine, in response to the trigger operation, position information of the trigger operation on the display interface;
a first determining module configured to determine, according to the position information, a target virtual effect matched with a target object corresponding to the trigger operation in the image to be enhanced;
and a first presentation module configured to superimpose the target virtual effect on the image to be enhanced and present the result on the display interface.
11. A computer storage medium having computer-executable instructions stored thereon that, when executed, perform the method steps of any of claims 1 to 9.
12. A computer device comprising a memory having computer-executable instructions stored thereon and a processor configured to perform the method steps of any one of claims 1 to 9 when executing the computer-executable instructions on the memory.
CN202010621088.4A 2020-06-30 2020-06-30 Information display method, device, equipment and storage medium Pending CN111815786A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010621088.4A CN111815786A (en) 2020-06-30 2020-06-30 Information display method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111815786A true CN111815786A (en) 2020-10-23

Family

ID=72855683

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010621088.4A Pending CN111815786A (en) 2020-06-30 2020-06-30 Information display method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111815786A (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120200601A1 (en) * 2010-02-28 2012-08-09 Osterhout Group, Inc. Ar glasses with state triggered eye control interaction with advertising facility
CN109420336A (en) * 2017-08-30 2019-03-05 深圳市掌网科技股份有限公司 Game implementation method and device based on augmented reality
CN108154558A (en) * 2017-11-21 2018-06-12 中电海康集团有限公司 A kind of augmented reality methods, devices and systems
CN108550190A (en) * 2018-04-19 2018-09-18 腾讯科技(深圳)有限公司 Augmented reality data processing method, device, computer equipment and storage medium
CN108776544A (en) * 2018-06-04 2018-11-09 网易(杭州)网络有限公司 Exchange method and device, storage medium, electronic equipment in augmented reality
CN109741462A (en) * 2018-12-29 2019-05-10 广州欧科信息技术股份有限公司 Showpiece based on AR leads reward device, method and storage medium

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US12243561B2 (en) 2020-10-26 2025-03-04 Tencent Technology (Shenzhen) Company Limited Method and apparatus for generating video with 3D effect, method and apparatus for playing video with 3D effect, and device
WO2022089168A1 (en) * 2020-10-26 2022-05-05 腾讯科技(深圳)有限公司 Generation method and apparatus and playback method and apparatus for video having three-dimensional effect, and device
CN114584681A (en) * 2020-11-30 2022-06-03 北京市商汤科技开发有限公司 Target object motion display method and device, electronic equipment and storage medium
US11995741B2 (en) 2021-04-21 2024-05-28 Qingdao Pico Technology Co., Ltd. Data generation method and apparatus, and electronic device
CN113269782B (en) * 2021-04-21 2023-01-03 青岛小鸟看看科技有限公司 Data generation method and device and electronic equipment
CN113269782A (en) * 2021-04-21 2021-08-17 青岛小鸟看看科技有限公司 Data generation method and device and electronic equipment
WO2023028755A1 (en) * 2021-08-30 2023-03-09 京东方科技集团股份有限公司 Display control method and apparatus, and computer-readable storage medium and display device
US12315041B2 (en) 2021-08-30 2025-05-27 Boe Technology Group Co., Ltd. Display control method and device applied to monitor apparatus, computer-readable storage medium for display control method and display device including display control device
CN114422644A (en) * 2022-01-25 2022-04-29 Oppo广东移动通信有限公司 Device control method, apparatus, user equipment, and computer-readable storage medium
CN115482364A (en) * 2022-10-19 2022-12-16 中国农业银行股份有限公司 Augmented reality image generation method, device, computer equipment and storage medium
CN116152469A (en) * 2023-02-16 2023-05-23 宏景科技股份有限公司 Three-dimensional space data correction method for virtual reality
CN116152469B (en) * 2023-02-16 2023-10-20 宏景科技股份有限公司 Three-dimensional space data correction method for virtual reality
CN119728993A (en) * 2023-09-28 2025-03-28 北京字跳网络技术有限公司 Video generation method, device, electronic device and storage medium
WO2025067530A1 (en) * 2023-09-28 2025-04-03 北京字跳网络技术有限公司 Video generation method and apparatus, electronic device, and storage medium

Similar Documents

Publication Publication Date Title
CN111815786A (en) Information display method, device, equipment and storage medium
CN111833458B (en) Image display method and device, equipment and computer readable storage medium
US10417829B2 (en) Method and apparatus for providing realistic 2D/3D AR experience service based on video image
US10089794B2 (en) System and method for defining an augmented reality view in a specific location
CN111815780A (en) Display method, display device, equipment and computer readable storage medium
JP7007348B2 (en) Image processing equipment
JP4253567B2 (en) Data authoring processor
CN105046752B (en) Method for describing virtual information in the view of true environment
Kalkofen et al. Visualization techniques for augmented reality
JP3992629B2 (en) Image generation system, image generation apparatus, and image generation method
US11232636B2 (en) Methods, devices, and systems for producing augmented reality
US20110084983A1 (en) Systems and Methods for Interaction With a Virtual Environment
CN111881861A (en) Display method, device, equipment and storage medium
JP2006053694A (en) Space simulator, space simulation method, space simulation program, recording medium
KR20140082610A (en) Method and apaaratus for augmented exhibition contents in portable terminal
CN113012299A (en) Display method and device, equipment and storage medium
US9183654B2 (en) Live editing and integrated control of image-based lighting of 3D models
CN111862866A (en) Image display method, device, equipment and computer readable storage medium
CN110873963B (en) Content display method and device, terminal equipment and content display system
CN114092670A (en) Virtual reality display method, equipment and storage medium
JP2023504608A (en) Display method, device, device, medium and program in augmented reality scene
CN109255841A (en) AR image presentation method, device, terminal and storage medium
CN110418185B (en) Positioning method and system for anchor point in augmented reality video picture
JP2022507502A (en) Augmented Reality (AR) Imprint Method and System
CN111815782A (en) Display method, device and equipment of AR scene content and computer storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20201023