CN119653062A - A projection image processing method, device, terminal and medium based on reality augmentation - Google Patents
- Publication number
- CN119653062A (application CN202510180026.7A)
- Authority
- CN
- China
- Prior art keywords
- information
- current
- projection image
- image information
- virtual element
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Processing Or Creating Images (AREA)
Abstract
The application relates to the technical field of image processing and discloses a projection image processing method, device, terminal and medium based on reality augmentation. The method comprises: obtaining current projection space information, current environment state information and optical related information to be projected; determining reference projection image information according to the current projection space information and the optical related information to be projected; determining reference virtual element image information according to the current environment state information and the reference projection image information; performing reality augmentation processing on the reference virtual element image information and the reference projection image information to obtain target projection image information; and projecting and displaying the target projection image information in the current projection space and current environment state. By implementing the method provided by the application, the effectiveness of projection display can be improved, making projection display more effective, richer and more engaging.
Description
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a method, an apparatus, a terminal, and a medium for processing a projection image based on reality augmentation.
Background
With projection scenes becoming more widespread, conventional projection gradually struggles to meet viewers' expectations. At present, preset images or video content are simply projected into the corresponding projection space, which achieves only the most basic picture display: the presented content is static and unchanging, struggles to hold viewers' attention for long, and fails to engage their interest, making the whole projection process dull. Such projection cannot give the audience a sense of presence; there is an obvious disconnect between the projected picture and the real environment, and viewers remain mere bystanders watching a picture rather than being truly drawn into the displayed scene, so the overall effectiveness of the projection display is low. How to improve the effectiveness of projection display, so as to provide viewers with a more effective, richer and more engaging experience, is therefore a problem to be solved.
Disclosure of Invention
The invention provides a projection image processing method, apparatus, terminal and medium based on reality augmentation, which can improve the effectiveness of projection display and help make projection display more effective, richer and more engaging.
In a first aspect, a method for processing a projection image based on reality augmentation is provided, including:
Acquiring current projection space information, current environment state information and optical related information to be projected;
Determining reference projection image information according to the current projection space information and the optical related information to be projected;
Determining reference virtual element image information according to the current environment state information and the reference projection image information;
Performing reality augmentation processing on the reference virtual element image information and the reference projection image information to obtain target projection image information;
And projecting and displaying the target projection image information in the current projection space and the current environment state.
In a second aspect, there is provided a projection image processing apparatus based on reality augmentation, comprising:
The acquisition module is used for acquiring current projection space information, current environment state information and optical related information to be projected;
The first determining module is used for determining reference projection image information according to the current projection space information and the optical related information to be projected;
The second determining module is used for determining reference virtual element image information according to the current environment state information and the reference projection image information;
The processing module is used for performing reality augmentation processing on the reference virtual element image information and the reference projection image information to obtain target projection image information;
And the display module is used for carrying out projection display on the target projection image information under the current projection space and the current environment state.
In a third aspect, a computer device is provided, comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the above reality-augmentation-based projection image processing method when executing the computer program.
In a fourth aspect, a computer readable storage medium is provided, the computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of the above-described reality-augmentation-based projection image processing method.
In the solutions realized by the above reality-augmentation-based projection image processing method, apparatus, terminal and medium, by acquiring the current projection space information, the current environment state information and the optical related information to be projected, the reference projection image information can be determined according to the current projection space information and the optical related information to be projected, and the reference virtual element image information can be determined according to the current environment state information and the reference projection image information. Reality augmentation processing can then be performed on the reference virtual element image information and the reference projection image information to obtain the target projection image information, which can be projected and displayed in the current projection space and current environment state. The effectiveness of projection display can thereby be improved, making projection display more effective, richer and more engaging.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present invention, and a person skilled in the art may obtain other drawings from them without inventive effort.
FIG. 1 is a flow chart of a projection image processing method based on reality augmentation in an embodiment of the present invention;
FIG. 2 is a schematic diagram of a projection image processing apparatus based on reality augmentation according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a computer device according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of another embodiment of the computer device according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described below clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
To better understand the reality-augmentation-based projection image processing method provided by the embodiments of the application, the scenes to which it applies are briefly described below. More and more scenes now enrich their display content by means of projection technology, for example: exhibition scenes such as museums and art galleries displaying exhibits and artworks; education and training scenes such as classroom teaching or professional skill training; business activity scenes such as product launches or brand display events; interior decoration scenes such as the decoration of homes or business premises; and theme entertainment scenes such as theme parks and virtual reality experience halls. In all of these, projection display technology can better highlight the objects to be displayed.
However, existing projection displays simply project a preset image onto the object to be displayed, or repeatedly play the corresponding projection video content in the projection space, so viewers cannot become immersed in the displayed object or scene. For example, when exhibiting an artwork, existing projection technology can only statically project the artwork's usage scene, making the artwork appear as if displayed in that scene; but if there is a large difference between the usage scene (i.e., the projected picture) and the real environment, viewers cannot be drawn into the display scene, the intended display effect is lost, and the effectiveness of the projection display is reduced.
To solve the above problems, an embodiment of the present application provides a projection image processing method based on reality augmentation. A reference projection image better adapted to the current projection space can be determined from the current projection space information and the optical related information to be projected; reference virtual elements with better interactivity and immersion can be determined from the current environment state information and the reference projection image; and reality augmentation processing can then be performed on the reference virtual elements and the reference projection image to obtain a better-fused target projection image. Viewers can thus be drawn more fully into the display scene and enjoy a better viewing experience, such as a more realistic appreciation of a displayed artwork or a more intuitive grasp of teaching points, making the projection display process more effective.
Referring to FIG. 1, FIG. 1 is a schematic flowchart of a projection image processing method based on reality augmentation in an embodiment of the present invention, including the following steps:
S10, acquiring current projection space information, current environment state information and optical related information to be projected.
The current projection space information may be used to indicate spatially related attribute description information of the current projection presentation behavior. The current projection space information may include, but is not limited to, current space size, current space shape, space background material, background color, etc., which is not limited by the present application.
The current environmental state information may be comprehensive description information of the real environmental state in which the current projection presentation behavior is located. The current environmental state information may include, but is not limited to, current environmental atmosphere state information such as illumination, temperature, humidity, sound, etc., and current environmental physical state information such as layout state information of temporary obstacles or layout state of fixed physical objects, etc., which the present application is not limited to.
The optical related information to be projected can be used for indicating the optical related information of the original content needing to be projected and displayed, such as holographic projection light wave information needing to be projected and displayed, and the like. The optical related information to be projected can contain appearance characteristic information such as shape, color, texture and the like of an object or a scene to be projected, and can be converted into a visible projection picture through specific optical processing and projection equipment.
Specifically, the current projection space information and the current environmental state information may be obtained through preset sensors, for example a depth sensor for the current projection space information, and temperature and humidity sensors for the current environmental state information. Alternatively, images of the projection space may be captured with computer vision techniques, such as two cameras, and the current projection space information and current environmental state information calculated by matching feature points between the left and right images. The optical related information to be projected may be obtained in advance; taking holographic projection light wave information as an example, it may be obtained by holographically pre-processing the content to be projected and displayed.
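To make the stereo-vision route above concrete, the following is a minimal sketch assuming OpenCV and a rectified, calibrated camera pair; the function name, parameter values and the summary statistics returned are illustrative assumptions, not taken from the patent.

```python
import cv2
import numpy as np

def estimate_projection_space(left_path: str, right_path: str) -> dict:
    """Sketch: rough projection-space geometry from a stereo pair.

    Assumes rectified grayscale images from two calibrated cameras;
    the disparity map stands in for the depth information described above.
    """
    left = cv2.imread(left_path, cv2.IMREAD_GRAYSCALE)
    right = cv2.imread(right_path, cv2.IMREAD_GRAYSCALE)

    # Block matching finds corresponding feature patches in the left and
    # right images; disparity is inversely proportional to depth.
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = stereo.compute(left, right).astype(np.float32) / 16.0

    valid = disparity[disparity > 0]
    return {
        "median_disparity": float(np.median(valid)),  # proxy for projection distance
        "disparity_spread": float(np.ptp(valid)),     # proxy for spatial depth variation
    }
```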
And S20, determining reference projection image information according to the current projection space information and the optical related information to be projected.
The reference projection image information may be determined based on the current projection space information and the optical related information to be projected, and describes the projection form and content to be preliminarily presented in the current space environment.
Specifically, the optical related information to be projected can be adjusted according to the size and shape of the projection space. For example, for a long, narrow corridor-type projection space, optical related information originally suited to a square display space can be stretched or cropped to obtain the reference projection image information, so that it can be displayed completely and properly in that corridor-type space.
Alternatively, different projection devices and spatial distances may require projection images of different resolutions, color patterns, and/or pixel depths to ensure a visual effect. For example, for a projection display scene with a larger spatial distance, reference projection image information with higher resolution may be set, and for a projection display scene with a smaller spatial distance, reference projection image information with lower resolution may be set, which is not limited in the present application.
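As a toy illustration of the adaptation just described, the sketch below stretches the content to the projection surface's aspect ratio and scales the working resolution with viewing distance; the 5 m threshold and the 2x factor are invented for the example, and OpenCV is only one possible tool.

```python
import cv2
import numpy as np

def adapt_to_space(frame: np.ndarray, surface_w: int, surface_h: int,
                   distance_m: float) -> np.ndarray:
    """Sketch: fit source content to the projection surface and pick a
    working resolution from the viewing distance (thresholds illustrative)."""
    # Larger spatial distance -> higher working resolution, as described above.
    scale = 2.0 if distance_m > 5.0 else 1.0
    target = (int(surface_w * scale), int(surface_h * scale))

    # Stretch (or squeeze) the content to the surface's aspect ratio,
    # e.g. square content into a long, narrow corridor-shaped surface.
    return cv2.resize(frame, target, interpolation=cv2.INTER_LINEAR)
```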
S30, determining reference virtual element image information according to the current environment state information and the reference projection image information.
The reference virtual element image information may be determined based on the current environment state information and the reference projection image information, and describes the virtual elements to be added to the reference projection image to achieve the reality augmentation effect. It will be appreciated that the reference virtual element image may be fused with the reference projection image and the real environment (e.g., the current projection space and environment) to enhance the immersion and interactivity of the projection presentation.
Optionally, the reference virtual element image information may include, but is not limited to: scene virtual elements, such as virtual ornaments and virtual natural scenery; virtual light and shadow effects, such as simulated sunlight spots, lighting effects, shadows and lightning; virtual humidity atmosphere elements, such as fog, snowflakes and mist; virtual sound atmosphere elements, such as wind, rain and thunder; and virtual interaction elements, such as interactable objects and clickable buttons. The application is not limited in this respect.
Optionally, for each virtual element, the reference virtual element image information may include a detailed description of a geometry, a size, a texture, a color appearance, and the like of each virtual element, for example, for a virtual stone bench element, the reference virtual element image information may include a length, a width, a height, a stone texture, a texture pattern, a color tone, and the like of the virtual stone bench, which is not limited in this aspect of the application.
Optionally, for the virtual light shadow, the reference virtual element image information may include information about a light shadow effect of the virtual light shadow under the current ambient lighting condition, such as information about a light receiving surface, a backlight surface, a highlight position, a shadow casting effect, and the like. Optionally, for the virtual interactive element, the reference virtual element image information may include information of interaction logic between the virtual interactive element and the user, such as response effect after the user touches the virtual button, and performance effect when the virtual object collides with or is blocked from the real object.
In step S30, determining the reference virtual element image information according to the current environmental state information and the reference projection image information may include the following steps:
S31, acquiring current environment atmosphere state information from the current environment state information, and acquiring current environment physical state information from the current environment state information;
S32, determining illumination atmosphere virtual element image information according to the current illumination atmosphere state information in the current environment atmosphere state information and the reference projection image information;
S33, determining humidity atmosphere virtual element image information according to the current humidity atmosphere state information in the current environment atmosphere state information and the reference projection image information;
S34, determining interactive virtual element image information according to the reference projection image information, the current environment atmosphere state information and the current environment physical state information;
and S35, determining reference virtual element image information according to the illumination atmosphere virtual element image information, the humidity atmosphere virtual element image information and the interaction virtual element image information.
The current environmental atmosphere state information may be used to describe the overall atmosphere of the environment in which the current projection behavior takes place. It can integrate various environmental factors, such as the atmosphere produced by illumination, humidity, temperature and sound, and provides a basis for subsequently determining reference virtual elements that fit this atmosphere, so that overall immersion is enhanced when they are merged into the current environment.
Specifically, the current environmental atmosphere state information may include, but is not limited to, lighting atmosphere, humidity atmosphere, temperature atmosphere, sound atmosphere and so on; the application is not limited in this respect. The following description takes current environmental atmosphere state information including current lighting atmosphere state information and current humidity atmosphere state information as an example, without limiting the application.
The lighting atmosphere virtual element image information may be determined based on the current lighting atmosphere state information and the reference projection image information, and describes reference virtual elements to be added to the reference projection image to enhance lighting-related atmosphere effects and better fit the lighting feel of the actual environment. The lighting virtual element image can simulate effects produced by illumination, such as light spots, shadows and reflected light, matching the lighting conditions in the environment to create a more real and natural lighting environment.
In step S32, determining the lighting atmosphere virtual element image information according to the current lighting atmosphere state information in the current environment atmosphere state information and the reference projection image information may include the following steps:
S321, acquiring current light source illumination intensity information, current light source illumination direction information and a current light source visual angle depth map from the current illumination atmosphere state information;
S322, determining reference virtual illumination direction information of the illumination atmosphere virtual element according to the current light source illumination direction information and preset virtual position information in the reference projection image information;
S323, determining reference virtual illumination intensity information of the illumination atmosphere virtual element according to the current light source illumination intensity information, the reference virtual illumination direction information and the diffuse reflection coefficient;
S324, determining reference virtual illumination shadow information of the illumination atmosphere virtual element according to the current light source visual angle depth map;
S325, determining reference virtual lighting highlight information of a lighting atmosphere virtual element according to the highlight reflection coefficient, the highlight index, the sight line direction, the current light source lighting direction information and the preset virtual position information;
S326, determining the virtual element image information of the illumination atmosphere according to the reference virtual illumination direction information, the reference virtual illumination intensity information, the reference virtual illumination shadow information and the reference virtual illumination highlight information.
It should be noted that, the current lighting atmosphere status information may include, but is not limited to, current light source lighting intensity information, current light source lighting direction information, current light source viewing angle depth map, and the like. The current light source illumination intensity information can be quantitative description information of brightness degree of light rays emitted by a light source in a current environment, can reflect illumination intensity of the whole environment, and can be used for determining brightness of a reference virtual element in a reference projection image and coordination with ambient environment illumination.
The current light source illumination direction information may be used to describe the specific direction in which light propagates from the light source in the current environment; it reflects how light is emitted from the light source in various directions, and is important for subsequently determining how different parts of the reference virtual element's surface receive light and how light and shadow are distributed overall. The current light source visual angle depth map can be understood as image information obtained by rendering the projection scene from the viewpoint of the light source, where each pixel stores not conventional color information but the distance, i.e., depth, of the corresponding object surface from the light source; it is mainly used to judge the shadows of virtual elements under the lighting environment and their spatial occlusion relationships with other objects.
The reference virtual illumination direction information of the lighting atmosphere virtual element may be determined by combining the current light source illumination direction information with the preset virtual position information in the reference projection image information (i.e., the preset position of the lighting atmosphere virtual element), and describes the direction in which the virtual element actually receives light in the current lighting environment. It determines the differences in light received by different parts of the virtual element's surface and the distribution of light and shadow; the application is not limited in this respect.
It can be understood that the lighting atmosphere virtual element has a specific coordinate position and pose (such as rotation angles) in space, and the actual angle at which light strikes each surface of the reference virtual element can be determined by combining the current light source illumination direction information. Specifically, taking a cuboid-shaped reference virtual element located obliquely below the light source as an example, the illumination received by each surface, such as perpendicular illumination, oblique illumination or a backlit state, can be determined by calculating the included angle between the light and the normal direction of each surface of the cuboid. This included angle between the illumination direction and the surface normal of the virtual element can be expressed and calculated by vector operations, so that the light received by the virtual element can be accurately described.
Optionally, the process of determining the reference virtual illumination direction information of the illumination atmosphere virtual element according to the current light source illumination direction information and the preset virtual position information in the reference projection image information may refer to the following formula:

$\cos\theta = \dfrac{\vec{L}\cdot\vec{N}}{|\vec{L}|\,|\vec{N}|}$

wherein $\theta$ may represent the reference virtual illumination direction information of the illumination atmosphere virtual element, i.e., the included angle between the light source illumination direction and the normal direction of the virtual element surface; $\cos\theta$ may represent the cosine information of the reference virtual illumination direction; $\vec{L}$ may represent the light source illumination direction vector in the current light source illumination direction information; $\vec{N}$ may represent the normal vector of the preset virtual element surface given by the preset virtual position information in the reference projection image information; $\vec{L}\cdot\vec{N}$ may represent the dot product of the light source illumination direction vector and the preset virtual element surface normal vector; and $|\vec{L}|$ and $|\vec{N}|$ may represent the moduli of the two vectors respectively.
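The formula above translates directly into a few lines of numpy; this sketch assumes non-zero direction and normal vectors, and the names follow the formula's definitions.

```python
import numpy as np

def illumination_direction_cosine(light_dir, surface_normal) -> float:
    """Cosine of the angle between light direction L and surface normal N,
    i.e. (L . N) / (|L| |N|), as in the formula above."""
    L = np.asarray(light_dir, dtype=float)
    N = np.asarray(surface_normal, dtype=float)
    return float(np.dot(L, N) / (np.linalg.norm(L) * np.linalg.norm(N)))
```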
The reference virtual illumination intensity information of the lighting atmosphere virtual element may be calculated from the current light source illumination intensity information, the reference virtual illumination direction information and the diffuse reflection coefficient, and quantitatively describes the brightness that the lighting atmosphere virtual element presents through diffuse reflection under the current lighting environment. It determines the base brightness level of the reference virtual element in the projected image and its degree of adaptation to the ambient light.
Optionally, the process of determining the reference virtual illumination intensity information of the illumination atmosphere virtual element according to the current light source illumination intensity information, the reference virtual illumination direction information and the diffuse reflection coefficient may refer to the following formula:

$I_1 = k_d \cdot I_{light} \cdot \cos\theta$

wherein $I_1$ may represent the reference virtual illumination intensity information of the illumination atmosphere virtual element; $k_d$ may represent the diffuse reflection coefficient; $I_{light}$ may represent the current light source illumination intensity information; and $\cos\theta$ may represent the cosine information of the reference virtual illumination direction.
The reference virtual illumination shadow information of the lighting atmosphere virtual element may be determined by means of the current light source visual angle depth map, and describes the shadows the lighting atmosphere virtual element produces in the current lighting environment. It may include, but is not limited to, the position, shape, size and darkness of the shadows. The shadow effect plays a key role in conveying the three-dimensionality of the virtual element and its degree of fusion with the environment.
Specifically, by analyzing the current light source visual angle depth map, the depth value of each part of the virtual element can be compared with the positional relationship between surrounding objects and the light source. For example, if the depth map shows that part A of the virtual element is farther from the light source than object B, and object B lies between the light source and part A, then part A will be in shadow according to the principle of light occlusion. Information such as the depth difference can be further analyzed to determine the shadow's specific shape, size (such as its projected range on part A or another receiving surface) and darkness (influenced by factors such as light source intensity and distance).
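The depth comparison just described is essentially the classic shadow-map test; the sketch below assumes the point has already been projected into the light's view, and the bias value and 0.3 darkness level are illustrative placeholders.

```python
import numpy as np

def shadow_factor(light_depth_map: np.ndarray, u: int, v: int,
                  fragment_depth: float, bias: float = 1e-3) -> float:
    """A point is shadowed when the light-view depth map records something
    closer to the light source than the point itself (object B occluding
    part A in the example above). Returns 1.0 if lit, a darker factor if not."""
    occluder_depth = light_depth_map[v, u]
    # Bias avoids false self-shadowing caused by depth quantization.
    if fragment_depth - bias > occluder_depth:
        return 0.3  # in shadow; darkness could scale with intensity and distance
    return 1.0
```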
The highlight index may be a parameter related to the smoothness of the lighting atmosphere virtual element's surface, and can be used to control how concentrated or spread out highlights are on that surface. The larger the highlight index, the more concentrated and sharper the highlight. Optionally, by simulating the transition from smooth to rough surfaces of different materials and combining this with the highlight reflection coefficient, the highlight effect of the reference virtual element can be shaped.
Specifically, for a virtual element simulating a metal material, if a smooth, glossy surface effect is desired, a higher highlight index may be set; the highlight area will then be bright, small and sharply bounded under illumination, better matching human visual perception of highlights on polished metal. For a virtual element simulating slightly rough wood, the highlight index may be set relatively lower, so that the highlight area is more dispersed with blurrier boundaries and thus fits the texture of wood more naturally.
The line of sight direction may indicate the direction in which an observer (a virtual viewpoint, or an actual viewing device such as the human eye or a camera) looks at the virtual element. The reference virtual lighting highlight information of the lighting atmosphere virtual element may be determined by combining factors such as the highlight reflection coefficient, the highlight index, the line of sight direction, the current light source illumination direction information and the preset virtual position information, and describes the highlights produced on the surface of the lighting atmosphere virtual element in the current lighting environment.
Optionally, the process of determining the reference virtual lighting highlight information of the lighting atmosphere virtual element according to the highlight reflection coefficient, the highlight index, the sight line direction, the current light source lighting direction information and the preset virtual position information may refer to the following formula:

$I_3 = k_s \cdot I_{light} \cdot (\cos\alpha)^{n}$

wherein $I_3$ may represent the reference virtual lighting highlight information of the lighting atmosphere virtual element; $k_s$ may represent the highlight reflection coefficient; $I_{light}$ may represent the current light source illumination intensity information; $\alpha$ may represent the included angle between the sight line direction and the reflected light direction; $\cos\alpha$ may represent the cosine of that included angle; and $n$ may represent the highlight index. Specifically, since the reflection angle of light is equal to the incident angle, the reflected light direction can be calculated according to the law of reflection from the current light source illumination direction and the normal direction of the virtual object surface.
Optionally, the process of determining the lighting atmosphere virtual element image information according to the reference virtual illumination direction information, the reference virtual illumination intensity information, the reference virtual illumination shadow information and the reference virtual lighting highlight information may refer to the following formula:

$I_L = I_1 \cdot I_2 + I_3$

wherein $I_L$ may represent the lighting atmosphere virtual element image information; $I_1$ may represent the reference virtual illumination intensity information, which already incorporates the cosine information $\cos\theta$ of the reference virtual illumination direction; $I_2$ may represent the reference virtual illumination shadow information, acting as a shadow visibility factor; and $I_3$ may represent the reference virtual lighting highlight information.
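Putting the terms together, the following is a compact sketch of shading a lighting atmosphere virtual element under the reconstructed combination I_L = I1 * I2 + I3; the reflection-vector computation and clamping are standard Phong-style graphics practice, assumed here rather than prescribed by the patent.

```python
import numpy as np

def shade_virtual_element(k_d: float, k_s: float, n: float, i_light: float,
                          light_dir, normal, view_dir, shadow: float) -> float:
    """I1 = k_d * I_light * cos(theta); I3 = k_s * I_light * cos(alpha)**n;
    result = I1 * shadow + I3, with shadow playing the role of I2."""
    L = np.asarray(light_dir, float); L /= np.linalg.norm(L)
    N = np.asarray(normal, float);    N /= np.linalg.norm(N)
    V = np.asarray(view_dir, float);  V /= np.linalg.norm(V)

    cos_theta = max(float(np.dot(L, N)), 0.0)   # reference virtual illumination direction
    i1 = k_d * i_light * cos_theta              # diffuse (reference intensity) term

    R = 2.0 * float(np.dot(L, N)) * N - L       # reflected light, by the law of reflection
    cos_alpha = max(float(np.dot(R, V)), 0.0)
    i3 = k_s * i_light * cos_alpha ** n         # highlight term

    return i1 * shadow + i3                     # shadow factor stands in for I2
```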
It should be noted that the reference virtual illumination direction information determines the light received by different surfaces of the virtual element and creates the basis for light-dark contrast; the reference virtual illumination intensity information establishes the base brightness; the reference virtual illumination shadow information models three-dimensionality and spatial relationships; and the reference virtual lighting highlight information conveys glossiness and texture. Integrating all of this illumination-related information completely characterizes the appearance of the lighting atmosphere virtual element, which can then be generated by graphics rendering and similar techniques. When merged into the reference projection image, the lighting atmosphere virtual element can thus match the environment in lighting effect, achieving a realistic reality augmentation projection effect and giving viewers the visual experience of virtual elements fusing naturally with the real lighting environment.
The humidity atmosphere virtual element image information may be determined based on the current humidity atmosphere state information and the reference projection image information, and describes reference virtual elements used to create humidity-related visual effects in the reference projection image and enhance the sense of environmental humidity. The humidity atmosphere virtual element image mainly reflects the humidity-related atmosphere of the current environment by simulating phenomena common in high-humidity environments, such as water vapor, water drops and fog, or visual manifestations of low-humidity environments, such as dryness and raised dust.
Specifically, for a projection image corresponding to a relatively humid environment, the humidity atmosphere virtual element image information may be determined according to the current humidity atmosphere state information and the reference projection image information, and may include image information of virtual water vapor elements and/or virtual fog elements. Optionally, taking a virtual fog element as an example, the humidity atmosphere virtual element image information may include the fog's density (thick fog or thin mist, representable by pixel transparency and color depth), its range (the area covered and its boundary) and its dynamic effect (such as drifting direction and speed, simulating how fog flows in a breeze), so that the projected image appears to be set in a humid environment; for instance, adding fog elements to a virtual valley scene creates the atmosphere of a humid, mist-wreathed early morning.
Specifically, for a projection image corresponding to a low-humidity environment, the humidity atmosphere virtual element image information may be determined according to the current humidity atmosphere state information and the reference projection image information, and may include image information of virtual raised-dust elements and/or virtual dry textures. Optionally, taking a virtual raised-dust element as an example, fine dust particles kicked up when wind blows across the ground may be added (for example, dust particle images with a certain transparency and dynamic effect, with parameters such as flight trajectory and speed); or, taking a country road as the reference projection image, fine drying cracks and similar texture changes may be added to its soil surface, visually reinforcing the dry atmosphere of a low-humidity environment.
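As a small illustration of the humidity-driven effects above, the sketch below overlays a white fog layer whose opacity grows with relative humidity; the 60% onset and 0.5 maximum opacity are invented for the example.

```python
import numpy as np

def add_fog_layer(frame: np.ndarray, relative_humidity: float) -> np.ndarray:
    """Blend a uniform white 'fog' layer into an 8-bit image; opacity rises
    from 0 at 60% RH to 0.5 at 100% RH (mapping is illustrative)."""
    alpha = float(np.clip((relative_humidity - 0.6) / 0.4, 0.0, 1.0)) * 0.5
    fog = np.full_like(frame, 255)  # uniform white fog color
    blended = frame.astype(np.float32) * (1.0 - alpha) + fog.astype(np.float32) * alpha
    return blended.astype(np.uint8)
```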
Optionally, a sound sensor may be used to collect background sound in the current environment, such as wind or rain, so that the background sound can be obtained from the current environmental atmosphere state information and further used during the projection display. For example, if the current environmental atmosphere state information indicates a quiet, slightly gloomy rainy night, the virtual projection elements may include darker lighting effects, the sound of pattering rain, and the like; the application is not limited in this respect.
The current environment physical state information may be related attribute information of an object actually existing in the current environment. Specifically, the geometric shape, the material, the position, the spatial relationship among the objects and the like of the objects actually existing in the current environment can be obtained by means of manual detection or computer vision technology in advance. The current environment physical state information can be used as a basis to determine how the reference virtual element interacts and fuses with the object in the current environment, so that the coordination of the virtual element with the current environment in terms of space layout and function realization is ensured.
The interactive virtual element image information may be determined after integrating the reference atmosphere effect information, the reference shape and size information, the reference material texture information and the reference projection position information, and describes the appearance of the interactive virtual element in the projection scene (including shape, size, texture, light and shadow effects, and the like) and its presentation state in the projection image. It provides a concrete image basis for subsequent operations such as fusing the interactive virtual element with the reference projection image, so that the reference virtual element can be merged into the reference projection image more naturally and the reality augmentation effect is better achieved.
In step S34, determining the interactive virtual element image information according to the reference projection image information, the current environmental atmosphere state information and the current environment physical state information may include the following steps:
S341, acquiring first geometric shape information, first size information, first texture information and first spatial position information of a first reference physical object from the current environment physical state information;
S342, determining reference shape and size information of the interactive virtual element according to the first geometric shape information and the first size information;
S343, determining reference material texture information of the interactive virtual element according to the first texture information;
S344, determining reference projection position information of the interactive virtual element according to the first spatial position information;
S345, performing light and shadow effect processing according to the reference projection image information, the current environmental atmosphere state information, the reference shape and size information, the reference material texture information and the reference projection position information to obtain reference atmosphere effect information of the interactive virtual element;
and S346, determining the interactive virtual element image information according to the reference atmosphere effect information, the reference shape and size information, the reference material texture information and the reference projection position information.
The current environment physical state information may include state information of one or more reference physical objects. Optionally, taking a current environment including a first reference physical object as an example, the current environment physical state information may include, but is not limited to, first geometric shape information, first size information, first texture information, first spatial position information and the like of the first reference physical object; the application is not limited in this respect.
The first geometric shape information can describe in detail the external shape characteristics of the first reference physical object in the current environment. Specifically, for regular shapes, such as cuboid tables or cylindrical columns, the shape can be determined by parameters such as length, width, height and radius; for irregular shapes, such as naturally grown trees or uniquely shaped sculptures, the contour and surface form can be accurately captured by means of point cloud data, three-dimensional models and the like.
The first size information may indicate specific dimension values of the first reference physical object, such as length (measured in meters, centimeters or other units of length), width, height, or related parameters such as radius and diameter (for circular or spherical objects). The first size information determines the size proportions of the matching reference virtual element, avoiding spatial mismatch between the reference virtual element and the real object. For example, for a virtual ornament element placed on a real table, the ornament's size needs to be adapted to the table's size to better match actual visual experience.
The first texture information may indicate the material type and corresponding texture features of the surface of the first reference physical object. The material may be wood, metal, plastic, stone, fabric and so on, and the texture information may include visual characteristics such as surface patterns, grain, and roughness or smoothness. For example, a wood material may include wood grain texture, and a metal material may include metallic luster or specific machining marks. The first texture information helps select matching textures for the virtual element, bringing it visually closer to the real object and enhancing the realism of the fusion.
The first spatial location information may be used to indicate a specific coordinate location of the first reference object in the entire environmental space, and a relative positional relationship of the first reference object with other objects. It may be generally expressed in three-dimensional space coordinates (such as x, y, z coordinate values under a preset space coordinate system), and may also include azimuth relationships (such as adjacent, up-down, front-back, etc.) and distance information between objects.
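For illustration only, the per-object attributes just described (geometry, size, material texture, spatial position and relations) could be bundled in a structure such as the following; every field name here is an assumption, not the patent's terminology.

```python
from dataclasses import dataclass, field

@dataclass
class ReferencePhysicalObject:
    """Hypothetical container for the first reference object's state."""
    geometry: str                 # e.g. "cuboid", "cylinder", or a mesh reference
    size_m: tuple                 # (length, width, height) in meters
    material: str                 # e.g. "wood", "metal", with grain/luster implied
    position: tuple               # (x, y, z) in the preset space coordinate system
    relations: dict = field(default_factory=dict)  # e.g. {"table_01": "adjacent, 0.5 m"}
```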
The reference shape and size information of the interactive virtual element can be determined based on the geometric shape information and size information of the first reference physical object in the current environment, and describes the shape characteristics and corresponding size range of the interactive virtual element in the projection scene. The reference shape and size information allows the reference virtual element to match existing objects in shape and size, laying a foundation for subsequent fusion and interaction operations.
Specifically, if the first reference physical object is a cuboid, the reference virtual element can be designed to fit against or correspond to it depending on the application scene, for example a virtual cuboid box placed on top of it. If the first reference physical object is an irregularly shaped tree, the reference virtual element can be designed as a virtual vine element climbing along the trunk, with the vine's shape determined by contour features such as the direction of the trunk.
The reference material texture information of the interactive virtual element may be determined according to the texture information of the first reference physical object in the current environment, and describes the type of material the interactive virtual element should adopt in the projection scene and the corresponding texture features. The reference material texture information allows the reference virtual element to correspond to the real object in material texture, further enhancing the realism and cohesion of the virtual element in the projection image.
Specifically, if the first reference physical object is wooden, the interactive virtual element may preferentially adopt a wood material or one with a similar visual effect, so that the reference virtual element appears to be made of a material similar to the real object's. After the material is determined, texture mapping is performed with the corresponding texture features, such as applying suitable texture patterns and grain to the virtual element's surface, giving it a surface texture similar to the real object's. For a first reference physical object made of metal, if the reference virtual element also adopts a metal material, the corresponding metallic luster and machining marks can be accurately mapped onto its surface, so that its optical behavior under illumination, such as reflection and scattering, is more consistent with a real metal object. Through such material texture matching, the reference virtual element blends better into the real environment, improving the overall projection effect.
The reference projection position information of the interactive virtual element may be determined according to the spatial position information of the first reference physical object in the current environment, and describes the specific position where the interactive virtual element should be placed in the projection scene, together with the corresponding spatial coordinates, pose and other information. It helps ensure that virtual elements match the real environment in spatial layout and conform to people's ordinary understanding of objects' positions in space, so that interaction between virtual elements and real objects can be realized in subsequent steps.
In particular, the projected position of the reference virtual element can generally be determined based on the spatial position of the first reference physical object. For example, if the first reference physical object is a flowerpot in a corner of the real environment and a virtual flower element is to be added to it, the flower's position may be set to the corresponding spatial position inside the flowerpot. Optionally, fine-tuning may be applied according to factors such as the overall layout of the projected scene, visual aesthetics, and interaction requirements with other virtual or real elements. For example, to avoid occlusion conflicts between the virtual element and other objects in the projected image, or to create a better spatial hierarchy, the virtual element may be slightly translated or rotated by a certain angle to determine its final projection position.
The reference atmosphere effect information of the interactive virtual element may be obtained by integrating the reference projection image information, the current environmental atmosphere state information, the reference shape and size information, the reference material texture information and the reference projection position information and performing light and shadow effect processing; it describes the illumination, shadow, highlight and overall light-and-shadow atmosphere presented by the interactive virtual element under the environmental atmosphere of the current projection scene. The reference atmosphere effect information allows the reference virtual element to better match the reference projection image and the real environment in its light and shadow presentation, further enhancing the virtual element's realism and immersion.
Specifically, the shadow cast by the virtual element (including its shape, size and position) can be calculated from the light source's position and illumination direction together with the virtual element's shape and position, keeping it consistent with the shadows of objects in the real environment. Meanwhile, the highlight position, intensity and distribution range on the virtual element's surface can be determined from the material's specular reflection characteristics (the reference material texture information), the observer's line-of-sight direction and other factors, producing a gloss effect that matches the material's texture. Processing these light and shadow effects together lets the virtual element present a vivid light-and-shadow atmosphere in the projection scene and blend naturally with its surroundings.
The reference shape and size information determines the basic geometric appearance of the reference virtual element, giving it a definite size and shape in space; the reference material texture information gives its surface texture and visual character, making it more lifelike; the reference projection position information governs its placement in the projection scene so that it follows the spatial layout logic; and the reference atmosphere effect information matches the virtual element to the current environment's atmosphere (such as lighting and shadow), creating a vivid visual effect. Integrating this information fully characterizes the appearance of the interactive virtual element in the projection scene, and the corresponding interactive virtual element image can be generated by graphics rendering and similar techniques, achieving a better reality augmentation effect in both appearance and interaction with the environment and thereby improving the realism and immersion of the whole projection scene.
S40, performing reality augmentation processing on the reference virtual element image information and the reference projection image information to obtain target projection image information.
The target projection image information may be a projection image with a stronger sense of immersion, obtained after the reference virtual element image and the reference projection image are subjected to reality augmentation processing. Specifically, the target projection image information may include position information of the reference virtual element after being accurately registered in space with the reference projection image, ensuring that the placement of the reference virtual element in the reference projection image conforms to real-space logic and visual aesthetics; it may include pixel fusion effects processed by transparency fusion and similar means, so that the reference virtual element and the reference projection image transition naturally into one another; and it may include light and shadow effects in which the overall lighting is coordinated, so that the reference virtual element and the scene in the reference projection image appear consistent under illumination. Through the reality augmentation fusion processing of the above aspects, projection effects in which the reference virtual element and the reference projection image blend into and complement each other can be presented more completely, finally yielding the target projection image information.
It should be understood that performing reality augmentation processing on the reference virtual element image information and the reference projection image information to obtain the target projection image information describes the process by which the target projection image information is obtained. Specifically, step S40, namely performing reality augmentation processing on the reference virtual element image information and the reference projection image information to obtain target projection image information, includes the following steps:
S41, acquiring a first feature point from the reference virtual element image information, and acquiring a second feature point corresponding to the first feature point from the reference projection image information;
S42, performing spatial registration processing according to the first feature point and the second feature point to obtain spatial information of the target projection image;
S43, performing transparency fusion processing on the reference virtual element image information and the reference projection image information to obtain transparency information of the target projection image;
S44, determining the target projection image information according to the spatial information of the target projection image and the transparency information of the target projection image.
The first feature point may be a representative key point extracted from the reference virtual element image information and capable of helping image matching and subsequent processing. The first feature point may have a unique visual feature in the reference virtual element image, for example, may be a corner point of an object in the reference virtual element image, a turning point of an edge, a point with obvious texture change, and the like. The first feature point can be used as a feature identifier to find the corresponding relation between the reference virtual element image and the reference projection image, so that more accurate fusion processing can be realized.
Specifically, the Harris corner detection algorithm may be adopted; based on the gray-level variation of local image regions, it computes the eigenvalues of the autocorrelation matrix to identify corners whose gray level changes significantly in multiple directions. Alternatively, the Scale-Invariant Feature Transform (SIFT) algorithm may be employed to detect feature points across different scale spaces and generate a feature descriptor for each feature point, describing the image features of the local region around it and facilitating subsequent matching operations.
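For illustration, both detectors are available in OpenCV; the sketch below (file name assumed) extracts Harris responses and SIFT keypoints with descriptors from a rendered virtual-element image. This is one possible realization, not the only one contemplated by the method.

```python
import cv2

# Grayscale render of the reference virtual element image (file name assumed).
virtual_gray = cv2.imread("virtual_element.png", cv2.IMREAD_GRAYSCALE)

# Harris response map: large values mark corners whose gray level changes
# significantly in multiple directions.
harris_response = cv2.cornerHarris(virtual_gray, blockSize=2, ksize=3, k=0.04)

# SIFT keypoints detected across scale space, each with a 128-dimensional
# descriptor of its local neighborhood, used for matching in the next step.
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(virtual_gray, None)
```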
The second feature point may be a key point extracted from the reference projection image information that has a matching relationship with the first feature point; it may likewise be a representative point in the reference projection image. Specifically, the second feature point corresponding to the first feature point can be found through a matching algorithm, so that the transformation relationship between the reference projection image and the reference virtual element image can be determined from the spatial relationship between the second feature point and the first feature point, enabling operations such as spatial registration and ensuring that the virtual element is accurately fused to the proper position in the projection image.
Specifically, the correspondence may be determined by calculating the similarity between the feature descriptors of two feature points (such as the feature descriptors generated by the SIFT algorithm described above), for example by computing the Euclidean distance between them. For each first feature point, the Euclidean distances between its feature descriptor and the feature descriptors of all candidate feature points in the reference projection image may be computed; the candidate with the smallest distance that also satisfies a preset threshold condition may be regarded as the corresponding second feature point.
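A minimal sketch of this nearest-neighbour matching follows; the distance threshold is an illustrative assumption (the text only requires "a preset threshold condition"), and the descriptors are assumed to come from the SIFT step above.

```python
import numpy as np

def match_feature_points(desc_virtual, desc_projection, max_distance=250.0):
    """For each first feature point, find the candidate in the reference
    projection image with the smallest Euclidean descriptor distance, and
    accept it as the second feature point if it passes the threshold."""
    matches = []
    for i, d in enumerate(desc_virtual):
        dists = np.linalg.norm(desc_projection - d, axis=1)
        j = int(np.argmin(dists))
        if dists[j] < max_distance:
            matches.append((i, j))  # (first feature point, second feature point)
    return matches
```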
A spatial transformation relationship between the reference virtual element image and the reference projection image is determined from the first feature points and the second feature points. After spatial registration is performed according to this transformation relationship, the reference virtual element can be accurately placed at the corresponding spatial position in the reference projection image, achieving spatial alignment between the two and yielding the spatial information of the target projection image; this ensures that the virtual element conforms to real-space logic and visual aesthetics when fused into the projection image.
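As one way to realize this registration (assuming the transformation can be approximated by a planar homography, which the text does not mandate), the matched point pairs can be fed to a robust estimator:

```python
import cv2
import numpy as np

def estimate_registration(pts_virtual, pts_projection):
    """Fit the transformation mapping first feature points onto their second
    feature points; RANSAC discards mismatched pairs. Needs >= 4 point pairs."""
    src = np.float32(pts_virtual).reshape(-1, 1, 2)
    dst = np.float32(pts_projection).reshape(-1, 1, 2)
    homography, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return homography

# The estimated transformation can then place the virtual element image into
# the coordinate frame of the reference projection image, e.g.:
# aligned = cv2.warpPerspective(virtual_image, homography, (width, height))
```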
Further, the reference virtual element image information and the reference projection image information are blended according to a preset transparency rule to obtain the transparency information of the target projection image, which describes the degree of transparency of each pixel in the target projection image. In this way, the reference virtual element can exhibit varying degrees of transparency when fused into the reference projection image, so that the fusion transition is more natural and the sense of integration between the reference virtual element and the reference projection image is enhanced.
Specifically, the above transparency fusion processing may be performed based on alpha-channel (transparency channel) blending. By way of example, for a reference virtual element image with an alpha channel, the alpha value of each pixel represents the transparency of the reference virtual element at that pixel position, with values ranging from 0 to 1, where 0 represents complete transparency and 1 represents complete opacity.
The blending can be expressed per pixel and per color channel as:

R_aim = α·R_v + (1 − α)·R_m
G_aim = α·G_v + (1 − α)·G_m
B_aim = α·B_v + (1 − α)·B_m

where R_aim, G_aim and B_aim represent the transparency information of the target projection image on the Red, Green and Blue channels respectively; R_m, G_m and B_m represent the color values of the pixel in the reference projection image on the R, G and B channels; α represents the alpha value of the corresponding pixel of the reference virtual element image in the reference virtual element image information; and R_v, G_v and B_v represent the color values of the pixel in the reference virtual element image on the R, G and B channels.
By performing this calculation pixel by pixel, the transparency information of the whole target projection image can be obtained, so that the reference virtual element blends naturally into the reference projection image; for example, semi-transparent effects in the reference virtual element (such as virtual translucent smoke or virtual translucent light and shadow) can be displayed more plausibly in the reference projection image through the transparency fusion processing.
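The per-pixel calculation can be vectorized; the sketch below assumes 8-bit images, an RGBA virtual-element image and an RGB reference projection image of the same size, and directly implements the formulas above.

```python
import numpy as np

def transparency_fusion(virtual_rgba, projection_rgb):
    """Per-pixel alpha blending of the reference virtual element image over
    the reference projection image (8-bit inputs, alpha 0 = fully transparent)."""
    alpha = virtual_rgba[..., 3:4].astype(np.float32) / 255.0
    virtual_rgb = virtual_rgba[..., :3].astype(np.float32)
    fused = alpha * virtual_rgb + (1.0 - alpha) * projection_rgb.astype(np.float32)
    return fused.astype(np.uint8)

# Example: translucent virtual smoke blended into the projection image.
# target_image = transparency_fusion(smoke_rgba, reference_projection_rgb)
```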
S50, performing projection display of the target projection image information in the current projection space and under the current environment state.
After the target projection image information is obtained, it can be projected into the current projection space accurately and faithfully by means of a projection device (such as a high-definition projector and/or a holographic projector), in accordance with the current projection space and the current environmental state.
Specifically, based on the spatial layout, pixel colors, light and shadow effects, transparency and other data contained in the target projection image information, the projection device can present the target projection image fused with the reference virtual element completely and clearly in the current projection space, under existing environmental conditions such as the current illumination, temperature, humidity and real objects. Viewers on site can thus see the seamless fusion of the virtual elements with the real environment more intuitively, and become better immersed in the unique visual experience produced by the reality augmentation processing.
It can be understood that the target projection image information is projection image information matched to the current projection space and the current environment state. When the current projection space changes, for example when the projection device moves to another projection space, the projection display image information can be re-determined to better match the changed projection space information; likewise, when the current environment state changes, for example from a sunny day to a rainy day, the projection display image information can be re-determined to better match the changed environment state information. The application is not limited in this respect.
In the above scheme, by acquiring the current projection space information, the current environment state information and the optical related information to be projected, the reference projection image information can be determined according to the current projection space information and the optical related information to be projected, and the reference virtual element image information can be determined according to the current environment state information and the reference projection image information. The reference virtual element image information and the reference projection image information can then be subjected to reality augmentation processing to obtain the target projection image information, which is projection-displayed in the current projection space and the current environment state. In this way, the effectiveness of projection display can be improved, helping to achieve more effective, richer and more interesting projection display.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present invention.
In an embodiment, a projection image processing device based on reality augmentation is provided, and the device corresponds one-to-one to the projection image processing method based on reality augmentation in the above embodiments. As shown in fig. 2, the projection image processing apparatus based on reality augmentation includes an acquisition module 101, a first determining module 102, a second determining module 103, a processing module 104 and a display module 105. The functional modules are described in detail as follows:
The acquisition module 101 is configured to acquire current projection space information, current environmental state information and optical related information to be projected;
The first determining module 102 is configured to determine reference projection image information according to the current projection space information and the optical related information to be projected;
The second determining module 103 is configured to determine reference virtual element image information according to the current environmental state information and the reference projection image information;
The processing module 104 is configured to perform reality augmentation processing on the reference virtual element image information and the reference projection image information to obtain target projection image information;
The display module 105 is configured to project and display the target projection image information in the current projection space and under the current environment state.
In an embodiment, the second determining module 103 is configured to determine reference virtual element image information according to the current environmental state information and the reference projection image information, and is specifically configured to:
Acquiring current environment atmosphere state information from the current environment state information, and acquiring current environment physical state information from the current environment state information;
determining lighting atmosphere virtual element image information according to the current lighting atmosphere state information in the current environment atmosphere state information and the reference projection image information;
determining humidity atmosphere virtual element image information according to the current humidity atmosphere state information in the current environment atmosphere state information and the reference projection image information;
Determining interactive virtual element image information according to the reference projection image information, the current environment atmosphere state information and the current environment physical state information;
and determining reference virtual element image information according to the illumination atmosphere virtual element image information, the humidity atmosphere virtual element image information and the interactive virtual element image information.
In an embodiment, the second determining module 103 is configured to determine, according to the current lighting atmosphere state information and the reference projection image information in the current environmental atmosphere state information, lighting atmosphere virtual element image information, specifically configured to:
Acquiring current light source illumination intensity information, current light source illumination direction information and a current light source visual angle depth map from the current illumination atmosphere state information;
determining reference virtual illumination direction information of the illumination atmosphere virtual element according to the current light source illumination direction information and preset virtual position information in the reference projection image information;
determining reference virtual illumination intensity information of the illumination atmosphere virtual element according to the current light source illumination intensity information, the reference virtual illumination direction information and the diffuse reflection coefficient;
Determining reference virtual lighting shadow information of the lighting atmosphere virtual element according to the current light source visual angle depth map;
Determining reference virtual lighting highlight information of the lighting atmosphere virtual element according to the specular reflection coefficient, the specular exponent, the line-of-sight direction, the current light source lighting direction information and the preset virtual position information;
And determining the virtual element image information of the lighting atmosphere according to the reference virtual lighting direction information, the reference virtual lighting intensity information, the reference virtual lighting shadow information and the reference virtual lighting highlight information.
In an embodiment, the second determining module 103 is configured to determine the interactive virtual element image information according to the reference projection image information, the current environmental atmosphere state information, and the current environmental physical state information, and is specifically configured to:
acquiring first geometric shape information, first size information, first texture information and first spatial position information of a first reference object from the current environment physical state information;
determining reference shape size information of the interactive virtual element according to the first geometric shape information and the first size information;
Determining reference texture information of the interactive virtual element according to the first texture information;
determining the reference projection position information of the interactive virtual element according to the first spatial position information;
Performing light and shadow effect processing according to the reference projection image information, the current environmental atmosphere state information, the reference shape and size information, the reference material texture information and the reference projection position information to obtain reference atmosphere effect information of the interactive virtual element;
And determining interactive virtual element image information according to the reference atmosphere effect information, the reference shape and size information, the reference texture information and the reference projection position information.
In an embodiment, the processing module 104 is configured to perform a reality augmentation process on the reference virtual element image information and the reference projection image information to obtain target projection image information, and is specifically configured to:
Acquiring a first feature point from the reference virtual element image information, and acquiring a second feature point corresponding to the first feature point from the reference projection image information;
Performing spatial registration processing according to the first feature point and the second feature point to obtain spatial information of the target projection image;
Performing transparency fusion processing on the reference virtual element image information and the reference projection image information to obtain transparency information of the target projection image;
Determining the target projection image information according to the spatial information of the target projection image and the transparency information of the target projection image.
The invention provides a projection image processing device based on reality augmentation. By acquiring the current projection space information, the current environment state information and the optical related information to be projected, the device can determine reference projection image information according to the current projection space information and the optical related information to be projected, and determine reference virtual element image information according to the current environment state information and the reference projection image information; the reference virtual element image information and the reference projection image information can then be subjected to reality augmentation processing to obtain target projection image information, which is projection-displayed in the current projection space and the current environment state, thereby improving the effectiveness of projection display and helping to achieve more effective, richer and more interesting projection display.
For specific limitations of the projection image processing apparatus based on reality augmentation, reference may be made to the above limitations of the projection image processing method based on reality augmentation, which are not repeated here. The respective modules in the above projection image processing apparatus based on reality augmentation may be implemented in whole or in part by software, hardware, or combinations thereof. The above modules may be embedded in, or independent of, the processor of the computer device in hardware form, or may be stored in the memory of the computer device in software form, so that the processor can invoke and execute the operations corresponding to each of the above modules.
In one embodiment, a computer device is provided, which may be a server, and whose internal structure may be as shown in fig. 3. The computer device includes a processor, a memory, a network interface and a database connected by a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile and/or volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program and a database. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external client via a network connection. The computer program, when executed by the processor, implements the functions or steps on the server side of the projection image processing method based on reality augmentation.
In one embodiment, a computer device is provided, which may be a client, and whose internal structure may be as shown in fig. 4. The computer device includes a processor, a memory, a network interface, a display screen and an input device connected by a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external server via a network connection. The computer program, when executed by the processor, implements the functions or steps on the client side of the projection image processing method based on reality augmentation.
In one embodiment, a computer device is provided, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the following steps when executing the computer program:
Acquiring current projection space information, current environment state information and optical related information to be projected;
Determining reference projection image information according to the current projection space information and the optical related information to be projected;
determining reference virtual element image information according to the current environment state information and the reference projection image information;
Performing reality augmentation processing on the reference virtual element image information and the reference projection image information to obtain target projection image information;
and under the current projection space and the current environment state, the target projection image information is subjected to projection display.
The invention provides a computer device. By acquiring the current projection space information, the current environment state information and the optical related information to be projected, the device can determine reference projection image information according to the current projection space information and the optical related information to be projected, and determine reference virtual element image information according to the current environment state information and the reference projection image information; the reference virtual element image information and the reference projection image information can then be subjected to reality augmentation processing to obtain target projection image information, which is projection-displayed in the current projection space and the current environment state, thereby improving the effectiveness of projection display and helping to achieve more effective, richer and more interesting projection display.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored; when executed by a processor, the computer program implements the following steps:
Acquiring current projection space information, current environment state information and optical related information to be projected;
Determining reference projection image information according to the current projection space information and the optical related information to be projected;
determining reference virtual element image information according to the current environment state information and the reference projection image information;
Performing reality augmentation processing on the reference virtual element image information and the reference projection image information to obtain target projection image information;
and under the current projection space and the current environment state, the target projection image information is subjected to projection display.
The invention provides a computer-readable storage medium. By acquiring the current projection space information, the current environment state information and the optical related information to be projected, the stored program can determine reference projection image information according to the current projection space information and the optical related information to be projected, and determine reference virtual element image information according to the current environment state information and the reference projection image information; the reference virtual element image information and the reference projection image information can then be subjected to reality augmentation processing to obtain target projection image information, which is projection-displayed in the current projection space and the current environment state, thereby improving the effectiveness of projection display and helping to achieve more effective, richer and more interesting projection display.
It should be noted that the functions or steps implemented by the computer-readable storage medium or the computer device correspond to the descriptions of the server side and the client side in the foregoing method embodiments, and are not repeated here.
Those skilled in the art will appreciate that implementing all or part of the above-described methods may be accomplished by a computer program stored on a non-transitory computer-readable storage medium which, when executed, may include the procedures of the embodiments of the methods described above. Any reference to memory, storage, database or other media used in the embodiments provided herein may include non-volatile and/or volatile memory. The non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. The volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM) and Rambus dynamic RAM (RDRAM), among others.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions.
The foregoing embodiments are merely illustrative of the technical solutions of the present invention, and not restrictive, and although the present invention has been described in detail with reference to the foregoing embodiments, it should be understood by those skilled in the art that modifications may still be made to the technical solutions described in the foregoing embodiments or equivalent substitutions of some technical features thereof, and that such modifications or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.
Claims (10)
1. A projection image processing method based on reality augmentation, the method comprising:
Acquiring current projection space information, current environment state information and optical related information to be projected;
Determining reference projection image information according to the current projection space information and the optical related information to be projected;
determining reference virtual element image information according to the current environment state information and the reference projection image information;
Performing reality augmentation processing on the reference virtual element image information and the reference projection image information to obtain target projection image information;
and under the current projection space and the current environment state, the target projection image information is subjected to projection display.
2. The reality-augmentation-based projection image processing method of claim 1, wherein the determining reference virtual element image information from the current environmental state information and the reference projection image information comprises:
Acquiring current environment atmosphere state information from the current environment state information, and acquiring current environment physical state information from the current environment state information;
determining lighting atmosphere virtual element image information according to the current lighting atmosphere state information in the current environment atmosphere state information and the reference projection image information;
determining humidity atmosphere virtual element image information according to the current humidity atmosphere state information in the current environment atmosphere state information and the reference projection image information;
Determining interactive virtual element image information according to the reference projection image information, the current environment atmosphere state information and the current environment physical state information;
and determining reference virtual element image information according to the illumination atmosphere virtual element image information, the humidity atmosphere virtual element image information and the interactive virtual element image information.
3. The method according to claim 2, wherein determining the lighting atmosphere virtual element image information according to the current lighting atmosphere state information in the current environmental atmosphere state information and the reference projection image information comprises:
Acquiring current light source illumination intensity information, current light source illumination direction information and a current light source visual angle depth map from the current illumination atmosphere state information;
determining reference virtual illumination direction information of the illumination atmosphere virtual element according to the current light source illumination direction information and preset virtual position information in the reference projection image information;
determining reference virtual illumination intensity information of the illumination atmosphere virtual element according to the current light source illumination intensity information, the reference virtual illumination direction information and the diffuse reflection coefficient;
Determining reference virtual lighting shadow information of the lighting atmosphere virtual element according to the current light source visual angle depth map;
Determining reference virtual lighting highlight information of the lighting atmosphere virtual element according to the specular reflection coefficient, the specular exponent, the line-of-sight direction, the current light source lighting direction information and the preset virtual position information;
And determining the virtual element image information of the lighting atmosphere according to the reference virtual lighting direction information, the reference virtual lighting intensity information, the reference virtual lighting shadow information and the reference virtual lighting highlight information.
4. A projection image processing method based on reality augmentation as claimed in claim 3, wherein said determining interactive virtual element image information from said reference projection image information, said current environmental atmosphere state information and said current environment physical state information comprises:
acquiring first geometric shape information, first size information, first texture information and first spatial position information of a first reference object from the current environment physical state information;
determining reference shape size information of the interactive virtual element according to the first geometric shape information and the first size information;
Determining reference texture information of the interactive virtual element according to the first texture information;
determining the reference projection position information of the interactive virtual element according to the first spatial position information;
Performing light and shadow effect processing according to the reference projection image information, the current environmental atmosphere state information, the reference shape and size information, the reference material texture information and the reference projection position information to obtain reference atmosphere effect information of the interactive virtual element;
And determining interactive virtual element image information according to the reference atmosphere effect information, the reference shape and size information, the reference texture information and the reference projection position information.
5. The method according to any one of claims 1 to 4, wherein performing the reality augmentation processing on the reference virtual element image information and the reference projection image information to obtain target projection image information includes:
Acquiring a first feature point from the reference virtual element image information, and acquiring a second feature point corresponding to the first feature point from the reference projection image information;
Performing spatial registration processing according to the first feature point and the second feature point to obtain spatial information of a target projection image;
carrying out transparency fusion processing on the reference virtual element image information and the reference projection image information to obtain transparency information of a target projection image;
And determining target projection image information according to the space information of the target projection image and the transparency information of the target projection image.
6. A projection image processing apparatus based on reality augmentation, characterized in that the projection image processing apparatus based on reality augmentation comprises:
The acquisition module is used for acquiring current projection space information, current environment state information and optical related information to be projected;
The first determining module is used for determining reference projection image information according to the current projection space information and the optical related information to be projected;
The second determining module is used for determining reference virtual element image information according to the current environment state information and the reference projection image information;
The processing module is used for carrying out reality enhancement processing on the reference virtual element image information and the reference projection image information to obtain target projection image information;
And the display module is used for carrying out projection display on the target projection image information under the current projection space and the current environment state.
7. The projection image processing apparatus based on reality augmentation as claimed in claim 6, wherein the second determining module is configured to determine reference virtual element image information based on the current environmental state information and the reference projection image information, in particular for:
Acquiring current environment atmosphere state information from the current environment state information, and acquiring current environment physical state information from the current environment state information;
determining lighting atmosphere virtual element image information according to the current lighting atmosphere state information in the current environment atmosphere state information and the reference projection image information;
determining humidity atmosphere virtual element image information according to the current humidity atmosphere state information in the current environment atmosphere state information and the reference projection image information;
Determining interactive virtual element image information according to the reference projection image information, the current environment atmosphere state information and the current environment physical state information;
and determining reference virtual element image information according to the illumination atmosphere virtual element image information, the humidity atmosphere virtual element image information and the interactive virtual element image information.
8. The projection image processing apparatus based on reality augmentation as claimed in claim 7, wherein the second determining module is configured to determine the lighting atmosphere virtual element image information according to the current lighting atmosphere state information in the current environment atmosphere state information and the reference projection image information, specifically configured to:
Acquiring current light source illumination intensity information, current light source illumination direction information and a current light source visual angle depth map from the current illumination atmosphere state information;
determining reference virtual illumination direction information of the illumination atmosphere virtual element according to the current light source illumination direction information and preset virtual position information in the reference projection image information;
determining reference virtual illumination intensity information of the illumination atmosphere virtual element according to the current light source illumination intensity information, the reference virtual illumination direction information and the diffuse reflection coefficient;
Determining reference virtual lighting shadow information of the lighting atmosphere virtual element according to the current light source visual angle depth map;
Determining reference virtual lighting highlight information of the lighting atmosphere virtual element according to the specular reflection coefficient, the specular exponent, the line-of-sight direction, the current light source lighting direction information and the preset virtual position information;
And determining the virtual element image information of the lighting atmosphere according to the reference virtual lighting direction information, the reference virtual lighting intensity information, the reference virtual lighting shadow information and the reference virtual lighting highlight information.
9. A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the reality-augmentation-based projection image processing method of any one of claims 1 to 5 when the computer program is executed.
10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the reality-augmentation-based projection image processing method of any one of claims 1 to 5.