CN116351046B - Object display method and device in virtual scene - Google Patents
Object display method and device in virtual scene
- Publication number
- CN116351046B (application CN202111624890.XA)
- Authority
- CN
- China
- Prior art keywords
- pixel
- data
- action
- target object
- channel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
Classifications
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/50—Controlling the output signals based on the game progress
- A63F13/52—Controlling the output signals based on the game progress involving aspects of the displayed game scene
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/55—Controlling game characters or game objects based on the game progress
- A63F13/57—Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game
- A63F13/577—Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game using determination of contact between game characters or objects, e.g. to avoid collision between virtual racing cars
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/04—Texture mapping
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A20/00—Water conservation; Efficient water supply; Efficient water use
- Y02A20/152—Water filtration
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Computer Graphics (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Processing Or Creating Images (AREA)
Abstract
The invention relates to the field of virtual reality and discloses an object display method and device in a virtual scene. The method comprises: obtaining azimuth information of a target object in a fluid area; reading an environmental channel map set for the fluid region and determining the pixel map data, stored in the environmental channel map, of the target pixel corresponding to the azimuth information; and determining an action adjustment parameter according to the pixel map data of the target pixel and displaying the virtual action of the target object according to that parameter. The method enables the virtual action of the target object to adapt to scene changes, improves the realism of object display, and avoids the drawback of a fixed character animation model for a given virtual character.
Description
Technical Field
The embodiment of the invention relates to the field of virtual reality, in particular to an object display method and device in a virtual scene.
Background
With the development of virtual reality technology, the variety and number of virtual objects that can be displayed in a virtual scene are increasing. For example, in a game-like virtual scene, a virtual object corresponding to a game user can be presented, and a virtual object corresponding to a virtual character such as a monster or superman can be presented.
In the related art, in order to facilitate the presentation of each virtual character, a corresponding character animation model is preset for each virtual character, and the motion information of the virtual character is presented through the preset character animation model.
However, in the course of implementing the present invention, the inventors found that the existing approach has at least the following defect: the character animation model of a given virtual character is a preset, fixed model, so the actions of the virtual character cannot change dynamically as the scene changes.
Disclosure of Invention
In view of the foregoing, the present invention provides an object display method and apparatus in a virtual scene that overcome, or at least partially solve, the foregoing problems.
According to an aspect of the present invention, there is provided an object display method in a virtual scene, including:
acquiring azimuth information of a target object in a fluid area;
Reading an environmental channel map set for the fluid region, and determining pixel map data of target pixels corresponding to the azimuth information stored in the environmental channel map;
And determining an action adjustment parameter according to the pixel map data of the target pixel, and displaying the virtual action of the target object according to the action adjustment parameter.
Optionally, before the azimuth information of the target object in the fluid area is obtained, the method further comprises the step of performing prebaking operation on the fluid area to obtain the environmental channel map;
and the reading of the environmental channel map set for the fluid region comprises: reading, by a graphics processor, the environmental channel map set for the fluid region.
Optionally, the performing a prebaking operation on the fluid region to obtain the environmental channel map includes:
Dividing the fluid area into a plurality of pixels, respectively generating pixel map data of each pixel, and storing the pixel map data of each pixel through the environment channel map.
Optionally, the pixel map data of the pixel includes at least one of collision attribute data corresponding to a first channel, pixel status data corresponding to a second channel, and motion impact data corresponding to a third channel.
Optionally, the collision attribute data comprises collision type data or non-collision type data, and the action adjustment parameters comprise collision response type adjustment parameters or non-collision response type adjustment parameters;
The pixel state data comprises fluid state data or solid state data, and the action adjustment parameters comprise fluid type adjustment parameters or solid type adjustment parameters;
The action influence data includes distance class influence data related to a distance value from a horizontal plane, element class influence data related to an associated element in water, and water quality class influence data related to a water quality attribute, and the action adjustment parameters include a distance class adjustment parameter, an element class adjustment parameter, and a water quality class adjustment parameter.
Optionally, the generating pixel map data of each pixel includes:
respectively acquiring first original pixel data of each pixel and second original pixel data of an associated pixel associated with the pixel for each pixel;
and carrying out preset operation on the first original pixel data and the second original pixel data, and generating pixel mapping data of the pixel according to an operation result.
Optionally, the first original pixel data of the pixel comprises first original pixel data corresponding to an original data channel, and the second original pixel data of the associated pixel associated with the pixel comprises second original pixel data corresponding to the original data channel;
the pixel map data of the pixel generated according to the operation result includes corrected pixel data corresponding to the corrected data channel.
Optionally, the virtual action of the target object comprises an interaction action generated by the interaction operation triggered by the target object;
the displaying the virtual action of the target object according to the action adjustment parameter comprises:
And under the condition that the interactive operation triggered by the target object is detected, acquiring the action adjustment parameter, and displaying the interactive action generated by the interactive operation triggered by the target object according to the action adjustment parameter.
Optionally, the virtual actions of the target object include object visual actions presented by a visual model of the target object;
the displaying the virtual action of the target object according to the action adjustment parameter comprises:
and transmitting the motion adjustment parameters to the visual model of the target object so that the visual model of the target object can display the visual motion of the object according to the motion adjustment parameters.
According to still another aspect of the present invention, there is provided an object display apparatus in a virtual scene, including:
The acquisition module is suitable for acquiring the azimuth information of the target object in the fluid area;
a determining module adapted to read an environmental channel map set for the fluid region, determine pixel map data of target pixels corresponding to the orientation information stored in the environmental channel map;
And the display module is suitable for determining an action adjustment parameter according to the pixel mapping data of the target pixel and displaying the virtual action of the target object according to the action adjustment parameter.
Optionally, the acquisition module is further adapted to perform a prebaking operation on the fluid region to obtain the environmental channel map;
And the determination module is specifically adapted to read, by a graphics processor, the ambient channel map set for the fluid region.
Optionally, the acquiring module is specifically adapted to:
Dividing the fluid area into a plurality of pixels, respectively generating pixel map data of each pixel, and storing the pixel map data of each pixel through the environment channel map.
Optionally, the pixel map data of the pixel includes at least one of collision attribute data corresponding to a first channel, pixel status data corresponding to a second channel, and motion impact data corresponding to a third channel.
Optionally, the collision attribute data comprises collision type data or non-collision type data, and the action adjustment parameters comprise collision response type adjustment parameters or non-collision response type adjustment parameters;
The pixel state data comprises fluid state data or solid state data, and the action adjustment parameters comprise fluid type adjustment parameters or solid type adjustment parameters;
The action influence data includes distance class influence data related to a distance value from a horizontal plane, element class influence data related to an associated element in water, and water quality class influence data related to a water quality attribute, and the action adjustment parameters include a distance class adjustment parameter, an element class adjustment parameter, and a water quality class adjustment parameter.
Optionally, the acquiring module is specifically adapted to:
respectively acquiring first original pixel data of each pixel and second original pixel data of an associated pixel associated with the pixel for each pixel;
and carrying out preset operation on the first original pixel data and the second original pixel data, and generating pixel mapping data of the pixel according to an operation result.
Optionally, the first original pixel data of the pixel comprises first original pixel data corresponding to an original data channel, and the second original pixel data of the associated pixel associated with the pixel comprises second original pixel data corresponding to the original data channel;
The acquisition module is specifically adapted to generate, according to the operation result, pixel map data comprising corrected pixel data corresponding to a corrected data channel.
Optionally, the virtual action of the target object comprises an interaction action generated by the interaction operation triggered by the target object;
the display module is specifically adapted to:
And under the condition that the interactive operation triggered by the target object is detected, acquiring the action adjustment parameter, and displaying the interactive action generated by the interactive operation triggered by the target object according to the action adjustment parameter.
Optionally, the virtual actions of the target object include object visual actions presented by a visual model of the target object;
the display module is specifically adapted to:
and transmitting the motion adjustment parameters to the visual model of the target object so that the visual model of the target object can display the visual motion of the object according to the motion adjustment parameters.
According to still another aspect of the present invention, there is provided an electronic device including a processor, a memory, a communication interface, and a communication bus through which the processor, the memory, and the communication interface communicate with each other;
The memory is configured to store at least one executable instruction, where the executable instruction causes the processor to execute an operation corresponding to the object display method in the virtual scene.
According to still another aspect of the embodiments of the present invention, there is provided a computer storage medium having at least one executable instruction stored therein, the executable instruction causing a processor to perform operations corresponding to an object display method in a virtual scene as described above.
In the method and the device for displaying the object in the virtual scene, firstly, the azimuth information of the target object in the fluid area is acquired, then, the pixel map data of the target pixel corresponding to the azimuth information stored in the environment channel map set for the fluid area is determined, and finally, the motion adjustment parameter is determined according to the pixel map data of the target pixel, and the virtual motion of the target object is displayed according to the motion adjustment parameter. Therefore, according to the method, the action adjustment parameters corresponding to the pixel map data of the target pixels can be obtained according to the azimuth of the target object in the fluid region and the environment channel map set for the fluid region, so that the virtual action of the target object is adjusted according to the action adjustment parameters, the virtual action of the target object can be adaptively adjusted along with scene changes, the simulation degree of object display is improved, and the defect that the character animation model of the same virtual character is fixed is avoided.
The foregoing is merely an overview of the technical solution of the present invention. Specific embodiments are set forth below so that the technical means of the invention may be understood more clearly and implemented according to this specification, and so that the above and other objects, features, and advantages of the invention become more readily apparent.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to designate like parts throughout the figures. In the drawings:
FIG. 1 is a flow chart of a method for displaying objects in a virtual scene according to an embodiment of the present invention;
FIG. 2 is a flowchart of a method for displaying objects in a virtual scene according to another embodiment of the present invention;
fig. 3 is a block diagram showing an object display device in a virtual scene according to still another embodiment of the present invention;
fig. 4 is a schematic structural diagram of an electronic device according to another embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Fig. 1 is a flowchart of an object display method in a virtual scene according to an embodiment of the present invention. As shown in fig. 1, the method includes:
step S110, acquiring the azimuth information of the target object in the fluid area.
The target object is a virtual object in the virtual scene whose display mode needs to be adjusted dynamically as the scene changes. In a specific implementation, a preset designated object, or a virtual object entering a designated area, may be determined as the target object. For example, the virtual object corresponding to a game user may be set as the target object; likewise, a virtual object entering a designated fluid area or an underwater area containing aquatic weeds may be set as the target object. The present invention does not limit how the target object is determined.
The fluid region is a region composed of a flowable liquid element, wherein the liquid element may be a fresh water element, a brine element, or the like. When the target object is detected to enter the fluid region, object coordinate data of the target object are acquired, and then azimuth information of the target object in the fluid region is determined according to the object coordinate data. The azimuth information comprises various information capable of representing the direction and the position of the target object, such as a distance value from a horizontal plane, a distance value from the water bottom, a distance value from a river bank and the like.
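As a minimal sketch of how the kinds of azimuth information listed above (distance to the horizontal plane, to the water bottom, to the bank) might be derived from object coordinate data — the region description and all field names here are illustrative, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class FluidRegion:
    surface_y: float   # world-space height of the water surface
    bottom_y: float    # world-space height of the river bed
    bank_x: float      # world-space x coordinate of the nearest bank

def azimuth_info(obj_pos, region):
    """Derive azimuth information from object coordinates.

    Returns distances to the surface, the bottom, and the bank; a negative
    surface distance would mean the object is above the water.
    """
    x, y, _z = obj_pos
    return {
        "depth_below_surface": region.surface_y - y,
        "height_above_bottom": y - region.bottom_y,
        "distance_to_bank": abs(x - region.bank_x),
    }

# Object 2 units underwater in a 10-unit-deep river, 5 units from the bank:
info = azimuth_info((3.0, -2.0, 0.0),
                    FluidRegion(surface_y=0.0, bottom_y=-10.0, bank_x=8.0))
```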
Step S120, reading an environmental channel map set for the fluid region, and determining pixel map data of a target pixel corresponding to the azimuth information stored in the environmental channel map.
In the present embodiment, an environmental channel map for storing various types of data relating to environmental information of a fluid region in a mapped manner is set in advance for the fluid region. Wherein the ambient channel map is used to store pixel map data for each pixel. Wherein each pixel corresponds to a designated area within the fluid region. Correspondingly, according to the azimuth information obtained in the previous step, the pixel map data of the target pixel corresponding to the azimuth information is obtained from the environment channel map.
Step S130, determining an action adjustment parameter according to the pixel map data of the target pixel, and displaying the virtual action of the target object according to the action adjustment parameter.
The pixel map data of the target pixel is used to represent the environmental attribute of the location area represented by the target pixel, and since the environmental attribute can affect the virtual motion of the target object, there is a certain correspondence between the pixel map data and the motion adjustment parameter, and based on the correspondence, the motion adjustment parameter corresponding to the pixel map data can be determined, so that the virtual motion of the target object can be displayed according to the motion adjustment parameter.
The motion adjustment parameters are used for adjusting information such as motion type, motion scale, motion frequency and the like of the target object so that virtual motion of the target object can be dynamically changed along with scene changes.
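The three steps above (S110–S130) can be sketched end to end as follows. The channel-map contents, the quantization scheme, and the parameter table are hypothetical stand-ins for engine data, chosen only to show the lookup-then-adjust flow:

```python
# Hypothetical environment channel map: (row, col) -> per-channel pixel data.
CHANNEL_MAP = {
    (0, 0): {"collision": 0, "state": "fluid", "influence": 0.2},
    (0, 1): {"collision": 1, "state": "solid", "influence": 0.0},
}

def target_pixel(orientation, cell_size=1.0):
    # S110/S120: quantize the object's position in the fluid region to a pixel.
    x, y = orientation
    return (int(y // cell_size), int(x // cell_size))

def action_adjustment(pixel_data):
    # S130: derive an action adjustment parameter from the pixel map data.
    if pixel_data["collision"]:
        return {"response": "collision", "speed_scale": 0.0}
    scale = 1.0 - pixel_data["influence"]   # stronger influence -> slower action
    return {"response": "float" if pixel_data["state"] == "fluid" else "stand",
            "speed_scale": scale}

pixel = target_pixel((0.5, 0.5))            # -> (0, 0)
params = action_adjustment(CHANNEL_MAP[pixel])
```

The display layer would then drive the animation at `speed_scale` times its base rate; that hand-off is engine-specific and not shown.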
Therefore, according to the method, the action adjustment parameters corresponding to the pixel map data of the target pixels can be obtained according to the azimuth of the target object in the fluid region and the environment channel map set for the fluid region, so that the virtual action of the target object is adjusted according to the action adjustment parameters, the virtual action of the target object can be adaptively adjusted along with scene changes, the simulation degree of object display is improved, and the defect that the character animation model of the same virtual character is fixed is avoided.
Fig. 2 is a flowchart of an object display method in a virtual scene according to still another embodiment of the present invention. As shown in fig. 2, the method includes:
Step S210, presetting an environment channel map of the fluid area.
In 3D technology, attribute data such as an object's material can be controlled by mapping. A map may contain multiple channels, each storing attribute data of a different dimension; common dimensions include the object's surface pattern, its highlight information, its transparency information, and its bump (relief) information. In short, attribute data of multiple dimensions can be stored efficiently through the multiple channels of a map. This mapping-based storage greatly reduces the data size of a 3D model and significantly improves data access efficiency.
In the present embodiment, an environmental channel map, which refers to a map for storing environmental attribute information in a multi-channel manner, is set in advance for a fluid region. In specific implementation, a prebaking operation is performed on the fluid region to obtain the environmental channel map. First, the fluid region is divided into a plurality of pixels, pixel map data of each pixel is generated, and then, the pixel map data of each pixel is stored through the environmental channel map. The fluid region may be a two-dimensional region corresponding to a cross section of a river bed, or may be a three-dimensional region corresponding to the entire river region, and the region range and the region form of the fluid region are not limited in the present invention. Wherein the fluid region is divided into a plurality of sub-regions, each sub-region being described by a pixel. Wherein, the pixel size of each pixel in the fluid area can be flexibly set according to actual conditions.
It can be seen that each pixel in the ambient channel map corresponds to a sub-region in the fluid region, with the stored pixel map data describing the region properties of that sub-region. The region attribute may be a multi-dimensional attribute and thus the pixel map data includes sets of data corresponding to respective channels, each channel for storing region attribute data of one dimension.
For example, in one implementation, the pixel map data of a pixel includes at least one of collision attribute data corresponding to a first channel, pixel state data corresponding to a second channel, and motion impact data corresponding to a third channel. The collision attribute data stored in the first channel comprises collision-class data or non-collision-class data describing the collision behavior of the sub-region. For example, if the sub-region is a riverbed area, the collision attribute data is collision-class data, indicating that a virtual object collides at that position; if the sub-region is a river-water area, the collision attribute data is non-collision-class data, indicating that a virtual object does not collide at that position. The pixel state data stored in the second channel includes fluid-state data or solid-state data indicating whether the corresponding sub-region is fluid or solid. In addition, the pixel state data in the second channel may further include information such as the biological state and water-quality state of the fluid, which the present invention does not limit. The motion impact data stored in the third channel includes various types of data capable of affecting the action of a virtual object, for example at least one of: distance-class impact data related to the distance value from the horizontal plane, element-class impact data related to associated elements in the water, and water-quality-class impact data related to water-quality attributes.
Optionally, in one implementation, in order to increase the speed of subsequent computation and reduce the complexity of computation, the pixel map data stored in the environmental channel map is pre-computed, and through a pre-computing process, intermediate data can be obtained in advance, and result data can be obtained by computing, so that the result data is directly stored in the environmental channel map, so as to reduce the time consumption of subsequent computation.
To facilitate improved downstream efficiency through pre-calculation, the pixel map data of each pixel is generated as follows: for each pixel, first obtain the first original pixel data of that pixel and the second original pixel data of an associated pixel associated with it; then perform a preset operation on the first and second original pixel data and generate the pixel map data of the pixel according to the operation result. The first original pixel data is the original pixel map data stored in the current pixel; the second original pixel data is the original pixel map data stored in an associated pixel having an association relationship with the current pixel. The pixel map data of the current pixel is thus determined jointly with its associated pixels. In a specific implementation, an extra channel may be added to store the pixel map data obtained after pre-calculation. Correspondingly, the first original pixel data of the pixel comprises first original pixel data corresponding to an original data channel; the second original pixel data of the associated pixel comprises second original pixel data corresponding to the original data channel; and the pixel map data generated according to the operation result comprises corrected pixel data corresponding to a corrected data channel. A channel storing the original data on which the pre-calculation depends is called an original data channel and may be the first, second, or third channel mentioned above; a channel storing the pre-calculated corrected data is called a corrected data channel and may be, for example, an added fourth channel.
For example, assume that a first pixel, a second pixel, and a third pixel correspond in order to a fluid-top pixel, a fluid-middle pixel, and a fluid-bottom pixel, and that the pixel map data of each pixel includes collision attribute data corresponding to the first channel, pixel state data corresponding to the second channel, and motion impact data corresponding to the third channel. The motion impact data in the third channel may be further affected by neighboring pixels; for example, the motion impact data of a lower pixel may be affected by the pixels above it. Correspondingly, in this example the third channel is taken as the original data channel, and a fourth channel is added as the corrected data channel. The original pixel data of the third pixel then includes motion impact data 1 corresponding to the third channel of the third pixel, motion impact data 2 corresponding to the third channel of the first associated pixel (i.e., the second pixel), and motion impact data 3 corresponding to the third channel of the second associated pixel (i.e., the first pixel). By performing a weighting operation on motion impact data 1, 2, and 3, the corrected motion impact data stored in the fourth channel of the third pixel is obtained.
The above description is merely exemplary, and in practical situations, the data channel and the specific data on which the pre-calculation depends may be flexibly determined, so long as the subsequent calculation efficiency can be improved. The invention is not limited in this regard. Or in other implementations, the original data on which the pre-calculation depends may be separately stored in an original data table, the original pixel data of each pixel is stored through the original data table, and correspondingly, the pixel map data of each pixel stored in the environmental channel map is pre-calculated according to the original pixel data of the associated pixel associated with the current pixel in the original data table. In short, the present invention is not limited to a specific storage mode, as long as the purpose of reducing the subsequent operation amount can be achieved by pre-calculation.
Step S220, acquiring the azimuth information of the target object in the fluid area.
The target object is a virtual object in the virtual scene whose display mode needs to be adjusted dynamically as the scene changes. In a specific implementation, a preset designated object, or a virtual object entering a designated area, may be determined as the target object. For example, the virtual object corresponding to a game user may be set as the target object; likewise, a virtual object entering a fluid setting area may be determined as the target object. The present invention does not limit how the target object is determined. The fluid setting area may be the whole fluid area or a partial fluid area, for example an area containing a water monster or an area with complex water-quality conditions. Correspondingly, when the target object is detected entering the fluid region, its azimuth information in the fluid region is acquired; this can be realized by ray detection or by collision-box detection.
Step S230, reading the environmental channel map set for the fluid region, and determining the pixel map data of the target pixel corresponding to the azimuth information stored in the environmental channel map.
Wherein, the environmental channel map set for the fluid region is read by a graphics processor. Because the graphics processor provides multiple processing channels, the data of each channel in the environmental channel map can be read efficiently in a multi-channel parallel manner, significantly improving the data-reading rate. In a specific implementation, the target pixel matching the azimuth information is first determined according to the azimuth information, and then the pixel map data stored in the target pixel is obtained. Since the pixel map data includes data of multiple channels, the pixel map data of each channel is read separately.
Step S240, determining motion adjustment parameters according to the pixel map data of the target pixel, and displaying the virtual motion of the target object according to the motion adjustment parameters.
The motion adjustment parameters may be of various types, for example a speed adjustment parameter, an amplitude adjustment parameter, and/or a type adjustment parameter. Accordingly, when the virtual motion of the target object is displayed according to the motion adjustment parameters, the motion speed, motion amplitude, and/or motion type are determined from the corresponding parameters, and a virtual motion matching that speed, amplitude, and/or type is displayed.
Specifically, the motion adjustment parameter may be determined according to the type of the pixel map data:
When the collision attribute data includes collision class data or non-collision class data, the action adjustment parameters include collision response class adjustment parameters or non-collision response class adjustment parameters. The collision response type adjustment parameters are used for controlling virtual actions of the target object to be in collision response states, such as collision states of breakage, rebound and the like.
When the pixel state data includes fluid state data or solid state data, the motion adjustment parameter includes a fluid type adjustment parameter or a solid type adjustment parameter. Wherein the fluid class adjustment parameter is used to control the virtual action of the target object to appear as a floating state in the fluid.
When the action influence data includes distance class influence data related to the distance value from the horizontal plane, element class influence data related to associated elements in the water, and water quality class influence data related to water quality attributes, the action adjustment parameters include a distance class adjustment parameter, an element class adjustment parameter, and a water quality class adjustment parameter. The distance class adjustment parameter reflects the magnitude of the underwater pressure: the greater the distance value from the horizontal plane, the greater the underwater pressure and, accordingly, the lower the action frequency of the target object's virtual action (movement becomes laborious as resistance increases). The element class adjustment parameter reflects the influence of associated elements in the water (including biological or non-biological elements) on the target object, and the water quality class adjustment parameter reflects the influence of the water quality (including impurity content, pollution degree, etc.) on the target object.
In short, the pixel map data and the motion adjustment parameters have a preset mapping relation, and the motion adjustment parameters currently adapted to the target object can be determined based on the mapping relation, so that the virtual motion of the target object can be flexibly adjusted.
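The preset mapping relation described above can be sketched as a simple lookup from the three channel values to adjustment parameters. The category names, the state encoding (1 = fluid), and the frequency formula are illustrative assumptions only:

```python
# Hypothetical sketch of the preset mapping relation between pixel map data
# (collision, state, depth-influence) and action adjustment parameters.

def map_to_adjustment(pixel_data):
    collision, state, depth_influence = pixel_data
    params = {}
    # First channel: collision attribute -> collision-response class parameter
    params["response"] = "collision" if collision else "non_collision"
    # Second channel: pixel state -> fluid/solid class parameter
    params["medium"] = "fluid" if state == 1 else "solid"
    # Third channel: greater depth influence -> lower action frequency,
    # modeling the "laborious movement under pressure" behavior in the text
    params["action_frequency"] = max(0.1, 1.0 - (depth_influence - 100) / 100.0)
    return params

p = map_to_adjustment((0, 1, 150))
```

Any monotone decreasing function of the depth influence value would serve equally well for the frequency term; the linear form is just the simplest choice.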
In the present embodiment, the virtual actions of the target object include various types of actions, for example, at least the following two types of virtual actions:
The first type of virtual action is an interactive action generated by an interactive operation triggered by the target object. Correspondingly, under the condition that the interactive operation triggered by the target object is detected, the action adjustment parameter is acquired, and the interactive action generated by the interactive operation triggered by the target object is displayed according to the action adjustment parameter.
The second type of virtual motion is an object visual motion presented by a visual model of the target object, and correspondingly, motion adjustment parameters are transmitted to the visual model of the target object so that the visual model of the target object can display the object visual motion according to the motion adjustment parameters.
In summary, the method can acquire the motion adjustment parameters corresponding to the pixel map data of the target pixel according to the position of the target object within the fluid region and the environmental channel map set for that region, and then adjust the virtual motion of the target object accordingly. The virtual motion of the target object is thus adapted to scene changes, improving the realism of object display and avoiding the drawback of a fixed character animation model for the same virtual character. In addition, the map storage mode offers high reading speed and low processing latency, which helps keep the display interface smooth and avoids interface stuttering. Moreover, pre-computed data can be stored in the environmental channel map, further reducing the amount of computation in subsequent processing and improving display speed. Because information such as water depth, water quality, and influence values for the whole river area is stored in advance, the animation can subsequently be influenced directly according to position; and since the structure of the whole river is stored in advance, collisions can be computed directly without collision-box detection, improving collision detection efficiency.
In addition, in one specific example, a channel map is baked for the water area in advance, and the action adjustment parameter is determined mainly from the depth information of the current target object. For example, the depth value may be determined from the Alpha value of the water body itself: the larger the Alpha value, the greater the depth value (i.e., the greater the distance of the target object from the horizontal plane). The action influence data corresponding to the third channel may then be determined from the depth value. Besides the depth value, the action influence data may further be determined from water quality information, where the water quality information is composed of material information and environmental information, and the material information is the in-water element information. Accordingly, action influence data (i.e., the depth influence value) = depth value × water quality information, where water quality information = in-water element information × environment-related information. It follows that, in this specific example, the action influence data is determined mainly from the depth value (i.e., the distance value from the horizontal plane), the in-water element information, and the environment-related information. The in-water element information includes ice elements, sea monster elements, and the like; the environment-related information is determined from the flow rate, temperature, etc., of the water.
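The two product relations above can be expressed directly in code. The factor values below (depth 8, ice-element factor 1.5, environment factor 1.2) are illustrative assumptions; only the multiplicative structure comes from the text:

```python
# Sketch of: depth influence value = depth value x water quality information,
# where water quality information = in-water element info x environment info.

def water_quality(element_factor, environment_factor):
    return element_factor * environment_factor

def depth_influence(depth_value, element_factor, environment_factor):
    return depth_value * water_quality(element_factor, environment_factor)

# e.g. depth 8 from the Alpha value, an ice element present, a cold slow flow
value = depth_influence(8.0, 1.5, 1.2)
```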
In addition, the pixel map data in this example specifically includes collision attribute data corresponding to the R channel (such as riverbed edge information and collision data), pixel state data corresponding to the G channel (such as 1 for a water pixel state and 2 for a non-water pixel state), and action influence data corresponding to the B channel (i.e., the depth influence value, which may range from 100 to 199). Besides storage through RGB channels, storage through UVW channels is also possible, and pixel map data of different dimensions may be stored using fewer than three or more than three channels.
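The RGB layout of this example can be sketched as a pack/unpack pair. The clamping of the B channel to the stated 100-199 range and the byte masking are illustrative assumptions:

```python
# Hypothetical packing of the example layout: R = collision data,
# G = pixel state (1 = water, 2 = non-water), B = depth influence in [100, 199].

def pack_pixel(collision, state, depth_influence):
    b = min(199, max(100, int(depth_influence)))  # clamp B into [100, 199]
    return (collision & 0xFF, state & 0xFF, b)

def unpack_pixel(rgb):
    r, g, b = rgb
    return {"collision": r, "is_water": g == 1, "depth_influence": b}

pixel = pack_pixel(1, 1, 250)   # out-of-range depth influence is clamped to 199
info = unpack_pixel(pixel)
```

Swapping RGB for UVW channels, or widening to four or more channels, changes only the tuple layout, not the scheme.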
Example III
Fig. 3 is a schematic structural diagram of an object display device in a virtual scene according to a third embodiment of the present invention, where the structure includes:
An acquisition module 41 adapted to acquire positional information of the target object within the fluid region;
A determining module 42 adapted to read an ambient channel map set for the fluid region, determine pixel map data of target pixels corresponding to the orientation information stored in the ambient channel map;
The display module 43 is adapted to determine motion adjustment parameters according to the pixel map data of the target pixel, and display the virtual motion of the target object according to the motion adjustment parameters.
Optionally, the acquisition module is further adapted to perform a prebaking operation on the fluid region to obtain the environmental channel map;
And the determination module is specifically adapted to read, by a graphics processor, the ambient channel map set for the fluid region.
Optionally, the acquiring module is specifically adapted to:
Dividing the fluid area into a plurality of pixels, respectively generating pixel map data of each pixel, and storing the pixel map data of each pixel through the environment channel map.
Optionally, the pixel map data of the pixel includes at least one of collision attribute data corresponding to a first channel, pixel status data corresponding to a second channel, and motion impact data corresponding to a third channel.
Optionally, the collision attribute data comprises collision type data or non-collision type data, and the action adjustment parameters comprise collision response type adjustment parameters or non-collision response type adjustment parameters;
The pixel state data comprises fluid state data or solid state data, and the action adjustment parameters comprise fluid type adjustment parameters or solid type adjustment parameters;
The action influence data includes distance class influence data related to a distance value from a horizontal plane, element class influence data related to an associated element in water, and water quality class influence data related to a water quality attribute, and the action adjustment parameters include a distance class adjustment parameter, an element class adjustment parameter, and a water quality class adjustment parameter.
Optionally, the acquiring module is specifically adapted to:
respectively acquiring first original pixel data of each pixel and second original pixel data of an associated pixel associated with the pixel for each pixel;
and carrying out preset operation on the first original pixel data and the second original pixel data, and generating pixel mapping data of the pixel according to an operation result.
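The preset operation on the first and second original pixel data can be sketched as a neighbor blend. Averaging is an illustrative choice only; the patent does not fix which operation is used:

```python
# Hypothetical "preset operation": combine a pixel's first original pixel data
# with the second original pixel data of its associated (e.g. neighboring)
# pixels by averaging, yielding smoothed pixel map data.

def blend_pixel(first, neighbors):
    """Average a pixel's value with its associated pixels' values."""
    values = [first] + list(neighbors)
    return sum(values) / len(values)

blended = blend_pixel(120, [100, 140, 140])
```

Such a blend smooths hard transitions between adjacent pixels, e.g. at the boundary between water and riverbed.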
Optionally, the first original pixel data of the pixel comprises first original pixel data corresponding to an original data channel, and the second original pixel data of the associated pixel associated with the pixel comprises second original pixel data corresponding to the original data channel;
The acquisition module is specifically adapted such that the generated pixel map data of the pixel includes corrected pixel data corresponding to the correction data channel.
Optionally, the virtual action of the target object comprises an interaction action generated by the interaction operation triggered by the target object;
the display module is specifically adapted to:
And under the condition that the interactive operation triggered by the target object is detected, acquiring the action adjustment parameter, and displaying the interactive action generated by the interactive operation triggered by the target object according to the action adjustment parameter.
Optionally, the virtual actions of the target object include object visual actions presented by a visual model of the target object;
the display module is specifically adapted to:
and transmitting the motion adjustment parameters to the visual model of the target object so that the visual model of the target object can display the visual motion of the object according to the motion adjustment parameters.
The specific structure and working principle of each module may refer to the description of the corresponding parts of the method embodiment, and are not repeated here.
Yet another embodiment of the present application provides a non-volatile computer storage medium storing at least one executable instruction for performing the object display method in a virtual scene of any of the above method embodiments. The executable instruction may specifically be configured to cause a processor to perform the operations corresponding to the method embodiments described above.
Fig. 4 shows a schematic structural diagram of an electronic device according to another embodiment of the present invention, and the specific embodiment of the present invention is not limited to the specific implementation of the electronic device.
As shown in FIG. 4, the electronic device may include a processor 502, a communication interface (Communications Interface) 506, a memory 504, and a communication bus 508.
Wherein:
Processor 502, communication interface 506, and memory 504 communicate with each other via communication bus 508.
A communication interface 506 for communicating with network elements of other devices, such as clients or other servers.
The processor 502 is configured to execute the program 510, and may specifically perform the relevant steps in the embodiment of the method for displaying an object in a virtual scene.
In particular, program 510 may include program code including computer-operating instructions.
The processor 502 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention. The one or more processors included in the electronic device may be of the same type, such as one or more CPUs, or of different types, such as one or more CPUs and one or more ASICs.
Memory 504 is used for storing the program 510. The memory 504 may comprise high-speed RAM memory and may further comprise non-volatile memory, such as at least one disk memory.
The program 510 may be specifically configured to cause the processor 502 to perform the respective operations corresponding to the above-described method embodiments.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose devices may also be used with the teachings herein. The required structure for the construction of such devices is apparent from the description above. In addition, the present invention is not directed to any particular programming language. It will be appreciated that the teachings of the present invention described herein may be implemented in a variety of programming languages, and the above description of specific languages is provided for disclosure of enablement and best mode of the present invention.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim.
Those skilled in the art will appreciate that the modules in the apparatus of the embodiments may be adaptively changed and disposed in one or more apparatuses different from the embodiments. The modules or units or components of the embodiments may be combined into one module or unit or component and, furthermore, they may be divided into a plurality of sub-modules or sub-units or sub-components. Any combination of all features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or units of any method or apparatus so disclosed, may be used in combination, except insofar as at least some of such features and/or processes or units are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features but not others included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, any of the claimed embodiments can be used in any combination.
Various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that some or all of the functions of some or all of the components in an apparatus according to embodiments of the present invention may be implemented in practice using a microprocessor or Digital Signal Processor (DSP). The present invention can also be implemented as an apparatus or device program (e.g., a computer program and a computer program product) for performing a portion or all of the methods described herein. Such a program embodying the present invention may be stored on a computer readable medium, or may have the form of one or more signals. Such signals may be downloaded from an internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc., does not denote any order; these words may be interpreted as names.
Claims (11)
1. An object display method in a virtual scene, comprising:
Acquiring azimuth information of a target object in a fluid area, wherein the fluid area is an area formed by flowable liquid elements;
Reading an environmental channel map set for the fluid region, and determining pixel map data of target pixels corresponding to the azimuth information stored in the environmental channel map;
Determining motion adjustment parameters according to the pixel map data of the target pixel, and displaying virtual motion of the target object according to the motion adjustment parameters;
The pixel map data comprises at least one of collision attribute data corresponding to a first channel, pixel state data corresponding to a second channel and action influence data corresponding to a third channel, wherein the action influence data comprises at least one of distance class influence data related to a distance value from a horizontal plane, element class influence data related to associated elements in water and water quality class influence data related to water quality attributes.
2. The method of claim 1, wherein the acquiring the orientation information of the target object in the fluid region is preceded by performing a prebaking operation on the fluid region to obtain the environmental channel map;
And the reading the environmental channel map set for the fluid region includes reading, by a graphics processor, the environmental channel map set for the fluid region.
3. The method of claim 2, wherein the pre-baking the fluid region to obtain the environmental channel map comprises:
Dividing the fluid area into a plurality of pixels, respectively generating pixel map data of each pixel, and storing the pixel map data of each pixel through the environment channel map.
4. The method of claim 3, wherein the collision attribute data comprises collision class data or non-collision class data, and the action-adjustment parameters comprise collision response class adjustment parameters or non-collision response class adjustment parameters;
The pixel state data comprises fluid state data or solid state data, and the action adjustment parameters comprise fluid type adjustment parameters or solid type adjustment parameters;
The action adjustment parameters further include at least one of a distance class adjustment parameter, an element class adjustment parameter, and a water quality class adjustment parameter.
5. The method of claim 4, wherein the separately generating pixel map data for each pixel comprises:
respectively acquiring first original pixel data of each pixel and second original pixel data of an associated pixel associated with the pixel for each pixel;
and carrying out preset operation on the first original pixel data and the second original pixel data, and generating pixel mapping data of the pixel according to an operation result.
6. The method of claim 5, wherein the first raw pixel data for the pixel comprises first raw pixel data corresponding to a raw data channel, and the second raw pixel data for the associated pixel associated with the pixel comprises second raw pixel data corresponding to the raw data channel;
the pixel map data of the pixel generated according to the operation result includes corrected pixel data corresponding to the corrected data channel.
7. The method of any of claims 1-6, wherein the virtual action of the target object comprises an interaction action resulting from an interaction operation triggered by the target object;
the displaying the virtual action of the target object according to the action adjustment parameter comprises:
And under the condition that the interactive operation triggered by the target object is detected, acquiring the action adjustment parameter, and displaying the interactive action generated by the interactive operation triggered by the target object according to the action adjustment parameter.
8. The method of any of claims 1-6, wherein the virtual actions of the target object include object visual actions presented by a visual model of the target object;
the displaying the virtual action of the target object according to the action adjustment parameter comprises:
and transmitting the motion adjustment parameters to the visual model of the target object so that the visual model of the target object can display the visual motion of the object according to the motion adjustment parameters.
9. An object display device in a virtual scene, comprising:
The acquisition module is suitable for acquiring azimuth information of a target object in a fluid area, wherein the fluid area is an area formed by flowable liquid elements;
a determining module adapted to read an environmental channel map set for the fluid region, determine pixel map data of target pixels corresponding to the orientation information stored in the environmental channel map;
The display module is suitable for determining action adjustment parameters according to the pixel mapping data of the target pixels and displaying virtual actions of the target objects according to the action adjustment parameters;
The pixel map data comprises at least one of collision attribute data corresponding to a first channel, pixel state data corresponding to a second channel and action influence data corresponding to a third channel, wherein the action influence data comprises at least one of distance class influence data related to a distance value from a horizontal plane, element class influence data related to associated elements in water and water quality class influence data related to water quality attributes.
10. An electronic device, comprising a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface communicate with each other through the communication bus;
the memory is configured to store at least one executable instruction, where the executable instruction causes the processor to perform the operations corresponding to the method for displaying objects in a virtual scene according to any one of claims 1 to 8.
11. A computer storage medium having stored therein at least one executable instruction for causing a processor to perform operations corresponding to the method for displaying objects in a virtual scene as claimed in any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111624890.XA CN116351046B (en) | 2021-12-28 | 2021-12-28 | Object display method and device in virtual scene |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116351046A CN116351046A (en) | 2023-06-30 |
CN116351046B true CN116351046B (en) | 2025-04-15 |
Family
ID=86910567
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111624890.XA Active CN116351046B (en) | 2021-12-28 | 2021-12-28 | Object display method and device in virtual scene |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116351046B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109598777A (en) * | 2018-12-07 | 2019-04-09 | 腾讯科技(深圳)有限公司 | Image rendering method, device, equipment and storage medium |
CN111784789A (en) * | 2020-06-22 | 2020-10-16 | 上海米哈游天命科技有限公司 | Landform generation method and device, computer equipment and storage medium |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002200343A (en) * | 2000-10-30 | 2002-07-16 | Sony Computer Entertainment Inc | Storage medium, program, method, program executing system, and program executing device |
JP2003091738A (en) * | 2001-09-17 | 2003-03-28 | Namco Ltd | Image generation system, program, and information storage medium |
US8353767B1 (en) * | 2007-07-13 | 2013-01-15 | Ganz | System and method for a virtual character in a virtual world to interact with a user |
CN112562050B (en) * | 2020-11-27 | 2023-07-18 | 成都完美时空网络技术有限公司 | Virtual object wind animation generation method and device, storage medium and terminal |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||