Disclosure of Invention
In view of the above, the main objective of the present invention is to provide an interaction method and device for Augmented Reality (AR) that can improve the accuracy of user operation recognition in an AR scene while keeping user operation convenient.
In order to achieve the above purpose, the technical solution provided by the embodiment of the present application is as follows:
an augmented reality interaction method, comprising:
in an AR scene, monitoring a specified limb motion of a user based on an Ultra Wideband (UWB) radar sensor; wherein the coverage area of the UWB radar sensor overlaps with the AR display area;
when the specified limb motion is detected, rendering the AR scene according to the operation corresponding to the limb motion; and if the operation corresponding to the limb motion is an operation on a specific object in the scene, determining the target operation object in the AR display area according to the limb positioning coordinates obtained during the corresponding monitoring.
Preferably, the method further comprises:
The emission point of the UWB radar sensor is the view cone vertex of the AR display area; the origin of the spherical coordinate system used for UWB radar positioning is the view cone vertex of the AR display area;
Determining the target operation object in the AR display area according to the limb positioning coordinates obtained during the corresponding monitoring comprises:
taking the display object located at the limb positioning coordinates in the spherical coordinate system as the target operation object.
Preferably, the method further comprises:
the AR display area is cut, in the spherical coordinate system, into a corresponding number of sub-display areas by spherical surfaces having different sphere radii; the position of a display object in the AR display area in the spherical coordinate system is the position of the display object on the sphere corresponding to its sub-display area.
Preferably, the position of the display object in the spherical coordinate system is jointly characterized by a sphere radius R and coordinates (x, y, z); wherein R is the sphere radius of the sphere corresponding to the sub-display area to which the display object belongs, and the coordinates (x, y, z) are the coordinates of the display object on that sphere.
Preferably, the sphere radius is obtained based on a preset sphere radius interval.
Preferably, the specified limb motion is a gesture motion.
The embodiment of the invention also discloses an augmented reality interaction device, which comprises:
a monitoring module, configured to monitor a specified limb motion of the user based on the UWB radar sensor in an AR scene; wherein the coverage area of the UWB radar sensor overlaps with the AR display area;
a rendering module, configured to render the AR scene according to the operation corresponding to the limb motion when the specified limb motion is detected; and if the operation corresponding to the limb motion is an operation on a specific object in the scene, to determine the target operation object in the AR display area according to the limb positioning coordinates obtained during the corresponding monitoring.
Preferably, the emission point of the UWB radar sensor is the view cone vertex of the AR display area; the origin of the spherical coordinate system used for UWB radar positioning is the view cone vertex of the AR display area;
the rendering module, when determining the target operation object in the AR display area according to the limb positioning coordinates obtained during the corresponding monitoring, is configured to:
take the display object located at the limb positioning coordinates in the spherical coordinate system as the target operation object.
Preferably, the AR display area of the AR scene is cut, in the spherical coordinate system, into a corresponding number of sub-display areas by spheres having different sphere radii; the position of a display object in the AR display area in the spherical coordinate system is the position of the display object on the sphere corresponding to its sub-display area.
The embodiment of the application also discloses a non-volatile computer-readable storage medium storing instructions which, when executed by a processor, cause the processor to perform the steps of the augmented reality interaction method described above.
The embodiment of the application also discloses an electronic device comprising the non-volatile computer-readable storage medium and a processor that can access the non-volatile computer-readable storage medium.
As can be seen from the above technical solution, the augmented reality interaction scheme provided by the embodiment of the invention monitors the specified limb motions of the user with a UWB radar sensor in the AR scene and makes the coverage area of the UWB radar sensor overlap with the AR display area. Therefore, when a specified limb motion is detected, the target operation object in the AR display area can be determined quickly and accurately from the corresponding limb positioning coordinates, so that the user's operation instruction can be recognized quickly and accurately.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the drawings and embodiments.
Fig. 1 is a schematic flow chart of an embodiment of the present invention. As shown in Fig. 1, the augmented reality interaction method implemented by this embodiment mainly includes the following steps:
Step 101: in an AR scene, monitor a specified limb motion of the user based on a UWB radar sensor; wherein the coverage area of the UWB radar sensor overlaps with the AR display area.
In this step, the specified limb motions of the user are monitored in the AR scene based on the UWB radar sensor, so that the positioning advantages of UWB radar can be fully exploited and the specified limb motions can be captured quickly and accurately in real time. Moreover, because the coverage area of the UWB radar sensor overlaps with the AR display area, the limb positioning coordinates obtained during motion monitoring can be associated with specific objects in the AR display area, and the user can directly indicate, with a limb motion, the specific object to be operated on in the AR scene. In this way, the user's operation instruction can be recognized quickly and accurately while the operation remains convenient for the user.
With UWB radar technology, the user's limb motions (including single-point static motions and continuous motions) can be accurately positioned and recognized based on the transmitted and reflected waveforms. The monitoring of the user's limb motions in this step may be implemented with existing techniques, which are not described here again.
In one embodiment, considering that the user's visual range from the origin of the field of view is a cone, the coverage area of the UWB radar (transmit + receive) and the AR display area may be made to overlap by placing the emission point of the UWB radar sensor at the view cone vertex of the AR display area, which reduces the interaction overhead. In addition, the origin of the spherical coordinate system used for UWB radar positioning (shown in Fig. 2) is set at the view cone vertex of the AR display area (shown in Fig. 3), so that the positions of the display contents in the AR display area can be identified directly in the spherical coordinate system used for UWB radar positioning. In this way, the coordinate system used for positioning content in the AR display area is consistent with the coordinate system used for UWB radar positioning, the positioning coordinates of a limb motion are consistent with the coordinates of the target operation object in the AR display area, and the target operation object in the AR display area can be located quickly and directly from the positioning coordinates of the limb motion.
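For illustration only, the following is a minimal sketch of such a shared coordinate frame whose origin is the view cone vertex (also the UWB emission point); the function names and the use of Python are assumptions and not part of the claimed method.

    # Illustrative sketch only: a shared spherical coordinate frame whose origin is
    # the view cone vertex (also the UWB emission point).
    import math

    def spherical_to_cartesian(r, theta, phi):
        """r: radial distance from the view cone vertex;
        theta: polar angle from the viewing axis; phi: azimuth angle."""
        x = r * math.sin(theta) * math.cos(phi)
        y = r * math.sin(theta) * math.sin(phi)
        z = r * math.cos(theta)
        return (x, y, z)

    def cartesian_to_spherical(x, y, z):
        r = math.sqrt(x * x + y * y + z * z)
        theta = math.acos(max(-1.0, min(1.0, z / r))) if r > 0 else 0.0
        phi = math.atan2(y, x)
        return (r, theta, phi)

    # Because the UWB radar and the AR renderer share this frame, a limb position
    # reported by the radar needs no extra coordinate transformation before it is
    # compared with the positions of display objects.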
In one embodiment, in order to facilitate identifying the position of the target operation object in the AR display area, the display objects in the AR display area may be identified using the spherical coordinate system. Specifically, the AR display area may be cut, in the spherical coordinate system, by spheres centered at the coordinate origin and having different sphere radii, yielding a corresponding number of sub-display areas, each of which is bounded by spheres of different radii. The position of a display object in the AR display area can then be accurately identified by the sphere corresponding to its sub-display area (for example, the bounding sphere with the largest radius); that is, the position of the display object in the AR display area in the spherical coordinate system is its position on the sphere corresponding to that sub-display area.
In one embodiment, the position of each display object in the AR scene in the spherical coordinate system may be jointly characterized by a sphere radius R and coordinates (x, y, z).
Here, R is the sphere radius of the sphere corresponding to the sub-display area to which the display object belongs, and the coordinates (x, y, z) are the coordinates of the display object on that sphere.
The sphere corresponding to each sub-display area may specifically be the sphere with the largest or the smallest sphere radius bounding that sub-display area.
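For illustration only, the following sketch shows how a display object could be assigned to a sub-display area and represented as (R, x, y, z), assuming the sub-display areas are bounded by their outer spheres and that a sorted list of outer radii is available; the helper names are hypothetical.

    # Illustrative sketch only: assigning a display object to a sub-display area
    # (a shell between two concentric spheres) and representing its position as
    # (R, x, y, z).
    import bisect
    import math

    def position_with_shell_radius(point, shell_radii):
        """point: (x, y, z) of a display object in the shared spherical frame.
        shell_radii: sorted outer radii of the sub-display areas.
        Returns (R, x, y, z), where R is the radius of the outer sphere bounding
        the sub-display area that contains the point."""
        x, y, z = point
        r = math.sqrt(x * x + y * y + z * z)
        i = bisect.bisect_left(shell_radii, r)
        if i >= len(shell_radii):
            raise ValueError("point lies beyond the far plane of the display area")
        R = shell_radii[i]
        return (R, x, y, z)

    # Example: with shells of outer radii 1 m, 2 m, 3 m, an object at (0.5, 0.5, 1.5)
    # belongs to the second shell, so its position is (2.0, 0.5, 0.5, 1.5).
    print(position_with_shell_radius((0.5, 0.5, 1.5), [1.0, 2.0, 3.0]))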
Fig. 4 shows a schematic view of the view cone from the origin of the field of view to the far plane. As shown, two points (P and Q) in the AR display area lie on different spheres: point P lies on the sphere of radius Rp, and point Q lies in the sub-display area bounded by the spheres of radii Rp and Rq and is therefore associated with the sphere of radius Rq. The coordinates of point P and point Q on their respective spheres can thus be marked P(Px, Py, Pz) and Q(Qx, Qy, Qz), and the positions of point P and point Q can be marked Fp(Rp, Px, Py, Pz) and Fq(Rq, Qx, Qy, Qz), respectively.
In one embodiment, the specified limb motion may be a gesture motion, but it is not limited thereto; it may also be, for example, a foot motion.
For the specified limb motions, different limb motions may be defined according to the transmitted and reflected waveforms. For example, for a finger drag motion as shown in Fig. 5, a finger selects a point P at position Fp(Rp, Px, Py, Pz) and is then dragged to a point Q at position Fq(Rq, Qx, Qy, Qz); this is defined as a drag. When the finger selects a certain position, the position information of that point can be accurately located based on the echo signal of the single point.
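For illustration only, a minimal sketch of classifying a motion from its positioned start and end points is given below; the displacement threshold and function name are assumptions.

    # Illustrative sketch only: classifying a monitored motion from the positioned
    # start and end points Fp = (Rp, Px, Py, Pz) and Fq = (Rq, Qx, Qy, Qz).
    import math

    DRAG_THRESHOLD_M = 0.02  # assumed minimum displacement to count as a drag

    def classify_motion(fp, fq):
        """fp, fq: (R, x, y, z) positions at the start and end of the motion."""
        _, px, py, pz = fp
        _, qx, qy, qz = fq
        displacement = math.dist((px, py, pz), (qx, qy, qz))
        if displacement < DRAG_THRESHOLD_M:
            return "select"  # single-point, essentially static action
        return "drag"        # continuous action from P to Q

    print(classify_motion((1.0, 0.1, 0.1, 0.9), (2.0, 0.5, 0.4, 1.8)))  # -> "drag"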
In practical applications, the set of specified limb motions may be configured according to actual needs. Taking gesture motions as an example, a set of gesture motions as shown in Fig. 6 may be defined, but the set is not limited thereto.
In one embodiment, the sphere radii may be obtained based on a preset sphere radius interval, that is, a group of equally spaced sphere radii is generated, so that the spacing between adjacent sub-display areas obtained after cutting the AR display area is the same.
The smaller the sphere radius interval, the finer the cutting granularity of the AR display area, and the more accurately the positions of scene content can be marked by the sub-display areas. In practical applications, a person skilled in the art may set the sphere radius interval to a suitable value according to actual needs.
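For illustration only, a group of equally spaced sphere radii could be generated from the preset interval as in the following sketch; the function name and the far-plane parameter are assumptions.

    # Illustrative sketch only: generating sphere radii from a preset, uniform
    # radius interval; a smaller interval gives a finer cutting granularity.
    def shell_radii(interval, far_plane_distance):
        radii = []
        r = interval
        while r <= far_plane_distance:
            radii.append(round(r, 6))
            r += interval
        return radii

    print(shell_radii(0.5, 3.0))  # -> [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]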
Step 102: when a specified limb motion is detected, render the AR scene according to the operation corresponding to the limb motion; and if the operation corresponding to the limb motion is an operation on a specific object in the scene, determine the target operation object in the AR display area according to the limb positioning coordinates obtained during the corresponding monitoring.
In this step, if the currently detected limb motion corresponds to an operation on a specific object, the association between the limb positioning coordinates and the display content of the AR scene is used to determine the target operation object in the AR display area quickly and accurately based on the limb positioning coordinates.
In one embodiment, the target operation object may be determined as follows:
the display object located at the limb positioning coordinates in the spherical coordinate system is taken as the target operation object.
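For illustration only, the following sketch determines the target operation object by matching the limb positioning coordinates against the positions of the display objects within an assumed tolerance; the scene structure and names are hypothetical.

    # Illustrative sketch only: resolving the target operation object from the
    # limb positioning coordinates.
    import math

    def find_target_object(limb_position, display_objects, tolerance=0.1):
        """limb_position: (R, x, y, z) reported for the limb.
        display_objects: dict mapping object id -> (R, x, y, z) in the same frame.
        Returns the id of the closest object within the tolerance, else None."""
        _, lx, ly, lz = limb_position
        best_id, best_dist = None, tolerance
        for obj_id, (_, ox, oy, oz) in display_objects.items():
            d = math.dist((lx, ly, lz), (ox, oy, oz))
            if d <= best_dist:
                best_id, best_dist = obj_id, d
        return best_id

    scene = {"sofa": (2.0, 0.5, 0.5, 1.5), "lamp": (1.0, -0.3, 0.2, 0.8)}
    print(find_target_object((2.0, 0.52, 0.49, 1.5), scene))  # -> "sofa"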
In this embodiment, interaction between the user and the AR scene is realized by introducing UWB radar technology and by overlapping and calibrating the UWB radar coverage area with the AR display area. Further, by marking the AR display area with different layers, the accuracy of recognizing user operations can be improved at a low operation cost. The implementation of the above method embodiment is described in detail below in connection with several specific application scenarios.
Scene one: moving virtual objects in an AR scene
Fig. 7 is a schematic diagram of scene one. Fig. 8 is a schematic diagram of the process of moving an object in scene one. As shown in Fig. 8, the finger points to and selects a virtual object, drags it to the chosen area, and is then released; the selected object is re-rendered at the new position. In Fig. 7, clicking the sofa (a virtual object) and moving its position allows a more reasonable home layout to be chosen.
Scene II: rotating virtual objects in an AR scene
Fig. 9 is a schematic diagram of scene two. Fig. 10 is a schematic diagram of the process of rotating the toy car in scene two. As shown in Fig. 10, the finger clicks the virtual object and rotates it by a certain angle (for example, 180° clockwise); when the finger is released, the clicked object is re-rendered at the current position, but with its rendering angle rotated by 180°. In Fig. 9, selecting the toy car (a virtual object) and rotating it makes more information on its other faces visible.
Scene III: copying/pasting virtual objects in an AR scene
Fig. 11 is a schematic diagram of the process of copying/pasting an object in an AR scene. As shown in Fig. 11, the virtual object is clicked with both hands; the left hand is kept still while the right hand drags the virtual object and is then released. A duplicate of the clicked object (an exact copy of the originally clicked virtual object) is rendered at the position where the right hand was released, so that the two virtual objects are identical and differ only in position.
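For illustration only, the following sketch ties scenes one to three together: once the target object and the gesture end point are known, moving, rotating and copying reduce to simple scene updates followed by re-rendering. The scene representation and function names are assumptions.

    # Illustrative sketch only: scene updates for the move, rotate and copy/paste
    # operations of scenes one to three.
    import copy

    scene = {
        "sofa": {"position": (2.0, 0.5, 0.5, 1.5), "rotation_deg": 0.0},
    }

    def move_object(scene, obj_id, release_position):
        # Scene one: re-render the dragged object at the finger release position.
        scene[obj_id]["position"] = release_position

    def rotate_object(scene, obj_id, angle_deg):
        # Scene two: keep the position, re-render with the accumulated rotation angle.
        scene[obj_id]["rotation_deg"] = (scene[obj_id]["rotation_deg"] + angle_deg) % 360.0

    def copy_object(scene, obj_id, new_id, release_position):
        # Scene three: duplicate the clicked object at the right-hand release
        # position; the two objects are identical and differ only in position.
        duplicate = copy.deepcopy(scene[obj_id])
        duplicate["position"] = release_position
        scene[new_id] = duplicate

    move_object(scene, "sofa", (1.0, -0.2, 0.3, 0.9))
    rotate_object(scene, "sofa", 180.0)
    copy_object(scene, "sofa", "sofa_copy", (2.0, 0.6, 0.1, 1.8))
    print(scene)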
Based on the above method embodiment, the embodiment of the present application also discloses an augmented reality interaction device. As shown in the figure, the interaction device includes:
a monitoring module, configured to monitor a specified limb motion of the user based on the UWB radar sensor in an augmented reality (AR) scene; wherein the coverage area of the UWB radar sensor overlaps with the AR display area;
a rendering module, configured to render the AR scene according to the operation corresponding to the limb motion when the specified limb motion is detected; and if the operation corresponding to the limb motion is an operation on a specific object in the scene, to determine the target operation object in the AR display area according to the limb positioning coordinates obtained during the corresponding monitoring.
In one embodiment, the emission point of the UWB radar sensor is the view cone vertex of the AR display area; the origin of the spherical coordinate system used for UWB radar positioning is the view cone vertex of the AR display area.
The rendering module, when determining the target operation object in the AR display area according to the limb positioning coordinates obtained during the corresponding monitoring, is specifically configured to take the display object located at the limb positioning coordinates in the spherical coordinate system as the target operation object.
In one embodiment, the AR display area of the AR scene is cut, in the spherical coordinate system, into a corresponding number of sub-display areas by spheres having different sphere radii; the position of a display object in the AR display area in the spherical coordinate system is its position on the sphere corresponding to its sub-display area.
In one embodiment, the specified limb motion is a gesture motion.
In one embodiment, the sphere radius is derived based on a preset sphere radius interval.
In one embodiment, the position of the display object in the spherical coordinate system may be jointly characterized by a sphere radius R and coordinates (x, y, z).
The radius R is the sphere radius of the sphere corresponding to the sub-display area to which the display object belongs, and the coordinates (x, y, z) are the coordinates of the display object on the sphere corresponding to the sub-display area to which the display object belongs.
Based on the above embodiments of the augmented reality interaction method, the embodiment of the present application provides an augmented reality interaction electronic device, which comprises a processor and a memory; the memory stores an application executable by the processor for causing the processor to perform the augmented reality interaction method described above. Specifically, a system or apparatus may be provided with a storage medium on which software program code realizing the functions of any of the above embodiments is stored, and a computer (or CPU or MPU) of the system or apparatus may read out and execute the program code stored in the storage medium. Further, some or all of the actual operations may be performed by an operating system or the like running on the computer based on instructions of the program code. The program code read out from the storage medium may also be written into a memory provided in an expansion board inserted into the computer or into a memory provided in an expansion unit connected to the computer; a CPU or the like mounted on the expansion board or the expansion unit may then perform part or all of the actual operations based on the instructions of the program code, thereby realizing the functions of any of the above embodiments of the augmented reality interaction method.
The memory may be implemented as various storage media such as an electrically erasable programmable read-only memory (EEPROM), a Flash memory (Flash memory), a programmable read-only memory (PROM), and the like. A processor may be implemented to include one or more central processors or one or more field programmable gate arrays, where the field programmable gate arrays integrate one or more central processor cores. In particular, the central processor or central processor core may be implemented as a CPU or MCU.
The embodiment of the present application also provides a computer program product comprising computer programs/instructions which, when executed by a processor, implement the steps of the augmented reality interaction method described above.
It should be noted that not all the steps and modules in the above processes and the structure diagrams are necessary, and some steps or modules may be omitted according to actual needs. The execution sequence of the steps is not fixed and can be adjusted as required. The division of the modules is merely for convenience of description and the division of functions adopted in the embodiments, and in actual implementation, one module may be implemented by a plurality of modules, and functions of a plurality of modules may be implemented by the same module, and the modules may be located in the same device or different devices.
The hardware modules in the various embodiments may be implemented mechanically or electronically. For example, a hardware module may include specially designed permanent circuits or logic devices (e.g., special purpose processors such as FPGAs or ASICs) for performing certain operations. A hardware module may also include programmable logic devices or circuits (e.g., including a general purpose processor or other programmable processor) temporarily configured by software for performing particular operations. As regards implementation of the hardware modules in a mechanical manner, either by dedicated permanent circuits or by circuits that are temporarily configured (e.g. by software), this may be determined by cost and time considerations.
In this document, "schematic" means "serving as an example, instance, or illustration," and any illustrations, embodiments described herein as "schematic" should not be construed as a more preferred or advantageous solution. For simplicity of the drawing, the parts relevant to the present invention are shown only schematically in the drawings, and do not represent the actual structure thereof as a product. Additionally, in order to simplify the drawing for ease of understanding, components having the same structure or function in some of the drawings are shown schematically with only one of them, or only one of them is labeled. In this document, "a" does not mean to limit the number of relevant portions of the present invention to "only one thereof", and "an" does not mean to exclude the case where the number of relevant portions of the present invention is "more than one". In this document, "upper", "lower", "front", "rear", "left", "right", "inner", "outer", and the like are used merely to indicate relative positional relationships between the relevant portions, and do not limit the absolute positions of the relevant portions.
The foregoing description is only of the preferred embodiments of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.