CN119206028A - A method for generating a WebGPU real-time rendering pipeline - Google Patents
A method for generating a WebGPU real-time rendering pipeline
- Publication number
- CN119206028A (application CN202411737607.8A)
- Authority
- CN
- China
- Prior art keywords
- rendering
- webgpu
- pipeline
- matrix
- obtaining
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/10—Geometric effects
- G06T15/20—Perspective computation
- G06T15/205—Image-based rendering
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/005—General purpose rendering architectures
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Graphics (AREA)
- General Physics & Mathematics (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Computing Systems (AREA)
- Image Generation (AREA)
- Processing Or Creating Images (AREA)
Abstract
The invention relates to a method for generating a WebGPU real-time rendering pipeline, which comprises: obtaining the camera view frustum of a 3D object to be rendered, transmitting it to an octree container object, and performing level-by-level traversal and category distinction to obtain rendering objects and store them in a rendering queue; obtaining the global data and the MVP matrix of each rendering object and uploading each to the GPU in a single pass; traversing each rendering object, obtaining its corresponding material, and determining the specific rendering parameters, material name, material state values, texture maps and macro definition values of the rendering object according to the material, the specific rendering parameters being uploaded to the GPU; and judging from the Cache whether the rendering pipeline has been created: if so, creation is complete and the rendering pipeline is called directly to render; if no Cache exists, a rendering pipeline is created and used to render.
Description
Technical Field
The invention relates to the technical field of computer graphics, in particular to a WebGPU real-time rendering pipeline generation method.
Background
In the prior art, WebGPU engines are few. Most of them predefine the data binding slots and the rendering pipeline of a given material and call the low-level API provided by the browser to produce the different data binding logics (BindGroup) and rendering pipelines (RenderPipeline), encapsulating additional rendering logic for each different rendering condition. This is inflexible, and in multi-batch scene rendering the rendering pipeline is switched continuously, so rendering efficiency is low.
Therefore, there is a need for a method for generating a WebGPU real-time rendering pipeline.
Disclosure of Invention
(I) Technical problem to be solved
In view of the above-mentioned drawbacks and shortcomings of the prior art, the present invention provides a method for generating a WebGPU real-time rendering pipeline, which solves the technical problems of low rendering efficiency and low resource reuse in the prior art.
(II) Technical scheme
In order to achieve the above purpose, the main technical scheme adopted by the invention is as follows:
An embodiment of the invention provides a method for generating a WebGPU real-time rendering pipeline, which comprises the following steps:
S100, acquiring the camera view frustum of a 3D object to be rendered, transmitting the camera view frustum to an octree container object, performing level-by-level traversal and category distinction, obtaining transparent rendering objects and opaque rendering objects, and storing them in a rendering queue;
the camera view frustum is obtained from viewpoint information provided by a camera;
S200, acquiring global data of the rendering queue and the model matrix of each rendering object in the rendering queue, and obtaining the MVP matrix of each rendering object from the global data of the rendering queue and the model matrix of each rendering object;
S300, traversing each object in the rendering queue, obtaining its corresponding material, determining the specific rendering parameters, material name, material state values, texture maps and macro definition values of the rendering object according to the material, and uploading the specific rendering parameters of the rendering object to the GPU in a single pass;
S400, judging from the Cache whether the rendering pipeline has been created; if so, directly calling the rendering pipeline to render; if not, creating the rendering pipeline; then calling the rendering pipeline to render each rendering object in the rendering queue, and obtaining a 2D image corresponding to the 3D object to be rendered;
the Cache is set at the time the rendering pipeline is created.
Optionally, S100 includes:
S110, traversing the octree container object level by level: starting from the root node, checking layer by layer downwards whether the bounding box of each octree container object node intersects the camera view frustum;
S120, if it intersects, checking whether the bounding box of each three-dimensional object in the node intersects the camera view frustum; if an object's bounding box intersects, marking it as a rendering object and continuing the traversal; if it does not intersect, continuing the traversal downwards without marking it;
S130, if the bounding box of an octree container object node does not intersect the camera view frustum, skipping the node directly and continuing the traversal.
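A minimal TypeScript sketch of this traversal is given below; the type names, the inward-facing plane convention and the positive-vertex test are illustrative assumptions rather than the patent's own code.

```typescript
type Vec3 = [number, number, number];
interface AABB { min: Vec3; max: Vec3; }
// Plane as (nx, ny, nz, d) with inward-facing normals: a point p lies on the
// inner side when nx*px + ny*py + nz*pz + d >= 0.
type Plane = [number, number, number, number];
interface Frustum { planes: Plane[]; } // six planes

function aabbOutsidePlane(box: AABB, [nx, ny, nz, d]: Plane): boolean {
  // Test the AABB corner furthest along the plane normal (the "positive
  // vertex"); if even that corner is outside, the whole box is outside.
  const px = nx >= 0 ? box.max[0] : box.min[0];
  const py = ny >= 0 ? box.max[1] : box.min[1];
  const pz = nz >= 0 ? box.max[2] : box.min[2];
  return nx * px + ny * py + nz * pz + d < 0;
}

function frustumIntersectsAABB(frustum: Frustum, box: AABB): boolean {
  return !frustum.planes.some((p) => aabbOutsidePlane(box, p));
}

interface RenderObject { bounds: AABB; }
interface OctreeNode { bounds: AABB; objects: RenderObject[]; children: OctreeNode[]; }

// S110-S130: start from the root and work layer by layer downwards.
function collectVisible(node: OctreeNode, frustum: Frustum, out: RenderObject[]): void {
  // S130: the node's bounding box misses the frustum, so prune this subtree
  // and continue the traversal with the node's siblings.
  if (!frustumIntersectsAABB(frustum, node.bounds)) return;
  // S120: the node intersects; test each contained object's bounding box and
  // mark intersecting objects as rendering objects.
  for (const obj of node.objects) {
    if (frustumIntersectsAABB(frustum, obj.bounds)) out.push(obj);
  }
  for (const child of node.children) collectVisible(child, frustum, out);
}
```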
Optionally, S100 further includes:
S140, dividing the rendering objects into opaque rendering objects and transparent rendering objects according to the renderOrder attribute of each rendering object in the rendering queue;
the renderOrder attribute of a transparent rendering object is greater than a set value, and the renderOrder attribute of an opaque rendering object is less than the set value.
Optionally, in S200, the global data specifically includes:
the projection matrix of the camera, the view matrix of the camera, the screen resolution, the mouse position, the time stamp, and the exposure.
Optionally, in S200, obtaining the MVP matrix of each rendering object according to the global data of the rendering queue and the model matrix of each rendering object includes:
Inputting the projection matrix and the view matrix from the global data and the model matrix of the rendering object into the following formula to obtain the MVP matrix:
MVP matrix = projection matrix × view matrix × model matrix.
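As a sketch, with 4×4 matrices stored column-major (the convention used by WGSL), the formula can be evaluated directly; the helper names below are illustrative:

```typescript
// Column-major 4x4 matrix multiply, applied as in the formula above:
// mvp = projection * view * model.
type Mat4 = Float32Array; // 16 elements, column-major

function mat4Multiply(a: Mat4, b: Mat4): Mat4 {
  const out = new Float32Array(16);
  for (let col = 0; col < 4; col++) {
    for (let row = 0; row < 4; row++) {
      let sum = 0;
      for (let k = 0; k < 4; k++) sum += a[k * 4 + row] * b[col * 4 + k];
      out[col * 4 + row] = sum;
    }
  }
  return out;
}

function computeMVP(projection: Mat4, view: Mat4, model: Mat4): Mat4 {
  return mat4Multiply(mat4Multiply(projection, view), model);
}
```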
Optionally, in S400, creating the rendering pipeline includes:
S410, acquiring the Shader Module cache;
S420, constructing the sampler and texture slot information parameters required by the BindGroupLayout according to the shader code, and generating the BindGroupLayout;
S430, generating a Cache Key from the material state values and the macro definition values, checking whether a reusable pipeline exists, and if not, calling the WebGPU low-level interface to create a rendering pipeline, passing in the created BindGroupLayout, the rendering state contained in the material and the Shader Module, and outputting the rendering pipeline.
Optionally, S400 further includes:
S440, sorting the rendering queue according to a pre-set sorting algorithm, obtaining the sorted rendering queue, and rendering the rendering objects in that order;
The pre-set sorting algorithm is as follows:
opaque rendering objects are rendered first, and transparent rendering objects are rendered afterwards;
for transparent rendering objects, the larger the renderOrder attribute, the earlier the object is rendered; if the value is 0, objects are ordered by default in the order in which they were added to the rendering queue;
for opaque rendering objects, the ordering is the reverse of that for transparent rendering objects: the larger the value, the later the object is rendered.
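A sketch of this sort as a comparator follows; the transparent flag and insertionIndex field are assumed bookkeeping, and the exact renderOrder directions are read from the rules above, which the patent states only loosely:

```typescript
interface Queued { renderOrder: number; transparent: boolean; insertionIndex: number; }

function compareForRender(a: Queued, b: Queued): number {
  // Opaque objects render before transparent ones.
  if (a.transparent !== b.transparent) return a.transparent ? 1 : -1;
  if (a.renderOrder !== b.renderOrder) {
    // Transparent: larger renderOrder renders earlier; opaque: the reverse.
    return a.transparent
      ? b.renderOrder - a.renderOrder
      : a.renderOrder - b.renderOrder;
  }
  // renderOrder of 0 or equal values: keep the order of addition to the queue.
  return a.insertionIndex - b.insertionIndex;
}

// Usage: renderQueue.sort(compareForRender);
```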
Optionally, the step S410 specifically includes:
generating the Shader Module Cache Key from the name of the shader code, the macro variables and the specific shader code, searching whether a compiled Shader Module already exists, and creating the Shader Module if it does not.
Optionally, before the step S100, the method further includes:
S000, initializing the engine, acquiring the 3D object to be rendered, performing octree division on the 3D object, and obtaining the octree container object;
Initializing the engine specifically comprises the following steps:
initializing the WebGPU context, initializing the shader system, initializing the global binding system, initializing the global Render Target, initializing the resource manager, and initializing the input system.
Optionally, the method further comprises:
when the rendering state of a three-dimensional object changes and no other rendering object is using the current rendering pipeline, destroying the current rendering pipeline in an asynchronous manner;
the asynchronous manner schedules destruction frame by frame in fixed batches.
(III) Beneficial effects
In the method for generating a WebGPU real-time rendering pipeline of the present invention, the level-by-level traversal and intersection-test optimization of the octree avoids repeated calculation and repeated rendering; transmitting batched data in a single pass reduces the number of uploads and improves the efficiency of data interaction; and setting different Cache Keys allows resources to be multiplexed, which increases resource utilization while ensuring that resource occupation is minimized.
Drawings
FIG. 1 is a flow chart of a method for generating a WebGPU real-time rendering pipeline according to an embodiment of the present invention;
FIG. 2 is a flow chart of a method for generating a WebGPU real-time rendering pipeline in embodiment 2 of the present invention;
FIG. 3 is a flow diagram of initializing a rendering pipeline in embodiment 2 of the present invention;
FIG. 4 is a block diagram of the rendering flow of the entire engine in embodiment 2 of the present invention;
FIG. 5 is a schematic diagram of an octree structure in accordance with an embodiment of the present invention;
FIG. 6 is a schematic view of a camera view frustum according to an embodiment of the invention;
FIG. 7 is a schematic view of a scene in embodiment 3 of the present invention.
Detailed Description
The invention will be better explained by the following detailed description of the embodiments with reference to the drawings.
View frustum: the view frustum is a core concept in computer graphics, especially in three-dimensional rendering and geometric culling techniques. It describes the three-dimensional region of space that can be seen from an observer (e.g., a camera or the player's viewpoint). This region is shaped like a rectangular pyramid with its apex cut off, narrowing towards the viewer, as shown in FIG. 6.
Octree: as shown in FIG. 5, an octree is a tree data structure for organizing three-dimensional spatial data. It recursively divides space into eight equal sub-regions (similar to a quadtree in two dimensions), and each sub-region can continue to divide until some stopping condition is met (e.g., a depth limit is reached, or each sub-region contains at most one data point). Each node of the octree represents a spatial region; leaf nodes typically represent regions that actually contain data, while non-leaf nodes aid spatial partitioning and searching.
Cache Key: a caching keyword; in the engine, reusable functional modules are hit (looked up) through these keywords.
RenderOrder: the rendering order number of an object.
Shader: a program in a language that runs on the GPU; in the present embodiment the shader language is WGSL.
Shader Module: the compiled module generated by compiling shader code.
Shader Module Cache Key: the cache key of a Shader Module.
BindGroupLayout: a piece of configuration passed to the GPU that specifies the data layout of the buffer data inside the GPU.
The method for generating a WebGPU real-time rendering pipeline provided by the embodiments of the invention aims to solve the technical problems of low rendering efficiency and poor reuse rate and flexibility in the prior art. It adopts level-by-level traversal and intersection-test optimization of the octree to avoid repeated calculation and repeated rendering, and transmits batched data in a single pass to reduce the number of uploads and improve the efficiency of data interaction. In addition, the embodiments of the invention set different Cache Keys to allow multiplexing of resources, increasing resource utilization while ensuring minimal resource occupation.
In order that the above-described aspects may be better understood, exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present invention are shown in the drawings, it should be understood that the present invention may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
Example 1
Referring to FIG. 1, the method for generating a WebGPU real-time rendering pipeline according to this embodiment includes:
S100, acquiring the camera view frustum of a 3D object to be rendered, transmitting the camera view frustum to an octree container object, performing level-by-level traversal and category distinction, obtaining transparent rendering objects and opaque rendering objects, and storing them in a rendering queue;
the camera view frustum is obtained from viewpoint information provided by a camera;
S200, acquiring global data of the rendering queue and the model matrix of each rendering object in the rendering queue, and obtaining the MVP matrix of each rendering object from the global data of the rendering queue and the model matrix of each rendering object;
S300, traversing each object in the rendering queue, obtaining its corresponding material, determining the specific rendering parameters, material name, material state values, texture maps and macro definition values of the rendering object according to the material, and uploading the specific rendering parameters of the rendering object to the GPU in a single pass;
S400, judging from the Cache whether the rendering pipeline has been created; if so, directly calling the rendering pipeline to render; if not, creating the rendering pipeline; then calling the rendering pipeline to render each rendering object in the rendering queue, and obtaining a 2D image corresponding to the 3D object to be rendered;
the Cache is set at the time the rendering pipeline is created.
In a specific implementation, the specific rendering parameters include color, direction, and the like. These parameters are all numeric: for example, the color is a vector of 4 floating-point numbers and the direction is a vector of 3 floating-point numbers, so the engine defines a structure for the numeric material data in the shader to store them. Further, the specific rendering parameters of a rendering object are all submitted to the GPU in a single pass; parameters such as the material name, the material state values and the texture maps need not be uploaded to the GPU in this way, but are transferred from the CPU to the GPU through a dedicated interface when the GPU needs the data.
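A sketch of such a numeric material structure and its single-pass upload follows; the WGSL struct, field names and buffer layout are illustrative assumptions, while writeBuffer is the actual WebGPU call:

```typescript
// The numeric material parameters packed into one buffer and submitted in a
// single call. std140-style alignment pads the vec3 to 16 bytes.
const materialWGSL = /* wgsl */ `
struct MaterialParams {
  color     : vec4<f32>,  // 4 floats
  direction : vec3<f32>,  // 3 floats, padded to 16 bytes
}
@group(1) @binding(0) var<uniform> material : MaterialParams;
`;

function uploadMaterialParams(device: GPUDevice, color: number[], direction: number[]): GPUBuffer {
  const data = new Float32Array(8);          // vec4 + padded vec3
  data.set(color, 0);                        // offsets follow WGSL alignment rules
  data.set(direction, 4);
  const buffer = device.createBuffer({
    size: data.byteLength,
    usage: GPUBufferUsage.UNIFORM | GPUBufferUsage.COPY_DST,
  });
  device.queue.writeBuffer(buffer, 0, data); // one submission for all parameters
  return buffer;
}
```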
In this embodiment, the viewpoint information provided by the camera includes the position, direction, field of view, near clipping plane and far clipping plane of the camera.
The position determines the origin of the camera's view frustum; the direction is the viewing direction of the camera and determines the forward direction of the frustum; the field of view defines how wide the frustum is and determines its shape; and the near and far clipping planes define the front and rear boundaries of the frustum.
This embodiment completes the rendering of three-dimensional objects more efficiently: the WebGPU low-level proxy interface is provided directly by the browser, and calling this low-level rendering interface gives the developer stronger management, organization and scheduling capability, exploiting the GPU's capability to the greatest extent.
Example 2
Referring to FIG. 2, a method for generating a WebGPU real-time rendering pipeline according to an embodiment of the present invention includes:
Step S100, acquiring the camera view frustum of a 3D object to be rendered, transmitting the camera view frustum to an octree container object, performing level-by-level traversal and category distinction, obtaining transparent rendering objects and opaque rendering objects, and storing them in a rendering queue;
the camera view frustum is obtained from viewpoint information provided by a camera;
Step S200, acquiring global data of the rendering queue and the model matrix of each rendering object in the rendering queue, and obtaining the MVP matrix of each rendering object from the global data of the rendering queue and the model matrix of each rendering object;
Step S300, traversing each object in the rendering queue, obtaining its corresponding material, determining the specific rendering parameters, material name, material state values, texture maps and macro definition values of the rendering object according to the material, and uploading the specific rendering parameters of the rendering object to the GPU in a single pass;
Step S400, judging from the Cache whether the rendering pipeline has been created; if the Cache exists, the rendering pipeline has already been created and is called directly to render; if the Cache does not exist, creating the rendering pipeline; then calling the rendering pipeline to render each rendering object in the rendering queue, and obtaining a 2D image corresponding to the 3D object to be rendered;
the Cache is set at the time the rendering pipeline is created.
In this embodiment, step S100 includes:
Step S110, traversing the octree container object level by level: starting from the root node, checking layer by layer downwards whether the bounding box of each octree container object node intersects the camera view frustum;
Step S120, if it intersects, checking whether the bounding box of each three-dimensional object in the node intersects the camera view frustum; if an object's bounding box intersects, marking it as a rendering object and continuing the traversal; if it does not intersect, continuing the traversal downwards without marking it;
Step S130, if the bounding box of an octree container object node does not intersect the camera view frustum, skipping the node directly and continuing the traversal.
Step S140, dividing the rendering objects into opaque rendering objects and transparent rendering objects according to the renderOrder attribute of each rendering object in the rendering queue;
the renderOrder attribute of a transparent rendering object is greater than a set value, and the renderOrder attribute of an opaque rendering object is less than the set value.
The set value is chosen according to the actual situation. For example, the set value may be 3000: a renderOrder attribute greater than 3000 indicates a transparent rendering object, and one less than 3000 indicates an opaque rendering object.
In a specific implementation process, the screening principle of the octree is as follows:
Objects that are not inside the octree are all stored in the root node of the octree (level 0). An object is placed in a level-1 node if that node fully contains it; an object that spans several subspaces below level 1 is stored in their common parent. For example, if an object occupies both space 3-1 and space 3-2, it is stored in their parent node at level 2. A change in an object's position or bounding box triggers recalculation of its octree position and cleanup of the original position.
Under this rule, level 0 holds objects that are not inside the octree, level 1 holds objects that the octree can contain (possibly with only part of the three-dimensional object inside a subspace), and levels 2 and below are filled analogously. This is a common method: a large number of level-0 rendering objects can be pushed down to level 1, and level 1 comprises eight nodes that divide the objects into eight subspaces, so level-1 filtering alone can cull a large number of objects during rendering, achieving high performance.
The global data specifically includes:
the projection matrix of the camera, the view matrix of the camera, the screen resolution, the mouse position, the time stamp, and the exposure.
In this embodiment, in step S200, obtaining the MVP matrix of each rendering object from the global data of the rendering queue and the model matrix of each rendering object includes:
inputting the projection matrix and the view matrix from the global data and the model matrix of the rendering object into the following formula to obtain the MVP matrix:
MVP matrix = projection matrix × view matrix × model matrix.
In a specific implementation, a model matrix is determined for each rendering object; the model matrix represents the position, rotation and scaling of the object in the world coordinate system.
The model matrix is obtained by initializing an identity matrix, applying a translation transformation according to the position information of the rendering object, applying a rotation transformation according to its rotation information, applying a scaling transformation according to its scaling information, and multiplying these transformation matrices in sequence to obtain the final model matrix.
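A sketch of this composition in column-major form (rotation is restricted to the Y axis for brevity; all helper names are illustrative):

```typescript
type Mat4 = Float32Array; // 16 elements, column-major

const mul = (a: Mat4, b: Mat4): Mat4 => {
  const o = new Float32Array(16);
  for (let c = 0; c < 4; c++)
    for (let r = 0; r < 4; r++)
      for (let k = 0; k < 4; k++) o[c * 4 + r] += a[k * 4 + r] * b[c * 4 + k];
  return o;
};

const translation = (x: number, y: number, z: number): Mat4 =>
  new Float32Array([1,0,0,0, 0,1,0,0, 0,0,1,0, x,y,z,1]);

const rotationY = (rad: number): Mat4 => {
  const c = Math.cos(rad), s = Math.sin(rad);
  return new Float32Array([c,0,-s,0, 0,1,0,0, s,0,c,0, 0,0,0,1]);
};

const scaling = (x: number, y: number, z: number): Mat4 =>
  new Float32Array([x,0,0,0, 0,y,0,0, 0,0,z,0, 0,0,0,1]);

// Multiplying the transforms in sequence: model = T * R * S, so the vertex
// is scaled first, then rotated, then translated.
const modelMatrix = mul(mul(translation(1, 0, 0), rotationY(Math.PI / 4)), scaling(2, 2, 2));
```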
Referring to FIG. 3, in step S400, creating the rendering pipeline includes:
Step S410, acquiring the Shader Module cache;
Step S420, constructing the sampler and texture slot information parameters required by the BindGroupLayout according to the shader code, and generating the BindGroupLayout;
Step S430, generating a Cache Key from the material state values and the macro definition values, checking whether a reusable pipeline exists, and if not, calling the WebGPU low-level interface to create a rendering pipeline, passing in the created BindGroupLayout, the rendering state contained in the material and the Shader Module, and outputting the rendering pipeline.
Step S440, sorting the rendering queue according to a pre-set sorting algorithm, obtaining the sorted rendering queue, and rendering the rendering objects in that order.
Step S420 further includes using the name of a texture as its Cache Key: if the texture already exists in the cache, it is multiplexed directly without re-creation, and the reference count of each texture is recorded; when the reference count reaches 0, the texture is no longer used by any material and can therefore be destroyed safely;
similarly, samplers use the same caching mechanism, with the sampler state as the Cache Key: if a sampler is already in the cache, it is multiplexed directly.
This caching mechanism reduces the cost of repeatedly creating textures and samplers, lowering GPU memory occupation and reducing unnecessary resource creation and destruction operations, which improves rendering performance. At the same time, multiplexing resources shortens addressing time, further improving rendering efficiency.
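A sketch of this reference-counted cache follows; the Map-based layout and method names are assumptions, and the same pattern applies to samplers with the serialized sampler state as the key:

```typescript
class TextureCache {
  private entries = new Map<string, { texture: GPUTexture; refCount: number }>();

  acquire(name: string, create: () => GPUTexture): GPUTexture {
    let e = this.entries.get(name);       // the texture name is the Cache Key
    if (!e) {
      e = { texture: create(), refCount: 0 };
      this.entries.set(name, e);          // first use: create and cache
    }
    e.refCount++;                         // one more referencing material
    return e.texture;                     // cache hit: reuse without re-creation
  }

  release(name: string): void {
    const e = this.entries.get(name);
    if (!e) return;
    if (--e.refCount === 0) {             // no material uses it any more
      e.texture.destroy();                // safe to destroy
      this.entries.delete(name);
    }
  }
}

// Samplers would use a state-derived key, e.g.
// JSON.stringify({ magFilter: "linear", addressModeU: "repeat" }).
```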
The pre-set sorting algorithm is as follows:
opaque rendering objects are rendered first, and transparent rendering objects are rendered afterwards;
for transparent rendering objects, the larger the renderOrder attribute, the earlier the object is rendered; if the value is 0, objects are ordered by default in the order in which they were added to the rendering queue;
for opaque rendering objects, the ordering is the reverse of that for transparent rendering objects: the larger the value, the later the object is rendered.
In a specific implementation process, step S410 specifically includes:
generating the Shader Module Cache Key from the name of the shader code, the macro variables and the specific shader code, searching whether a compiled Shader Module already exists, and creating the Shader Module if it does not.
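A sketch of this lookup; the key format and cache structure are assumptions, and only device.createShaderModule() is the actual WebGPU call:

```typescript
const shaderModuleCache = new Map<string, GPUShaderModule>();

function getShaderModule(
  device: GPUDevice,
  name: string,                              // name of the shader code
  macros: Record<string, string | number>,   // macro variables
  source: string,                            // the specific (macro-expanded) shader code
): GPUShaderModule {
  const key = `${name}|${JSON.stringify(macros)}|${source}`;
  let module = shaderModuleCache.get(key);
  if (!module) {
    module = device.createShaderModule({ code: source }); // compile on miss
    shaderModuleCache.set(key, module);
  }
  return module;
}
```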
In this embodiment, before the step S100, the method further includes:
Step S000, initializing the engine, acquiring the 3D object to be rendered, performing octree division on the 3D object, and obtaining the octree container object;
Initializing the engine specifically comprises the following steps:
initializing the WebGPU context, initializing the shader system, initializing the global binding system, initializing the global Render Target, initializing the resource manager, and initializing the input system.
Specifically, initializing the WebGPU context includes initializing the Canvas window and creating a WebGPU Adapter, through which a WebGPU Device object is requested for executing the actual graphics or compute commands.
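The standard WebGPU bootstrap matching this description; navigator.gpu, requestAdapter(), requestDevice() and configure() are the real browser API, while the function name is illustrative:

```typescript
async function initWebGPUContext(canvas: HTMLCanvasElement) {
  const adapter = await navigator.gpu?.requestAdapter(); // WebGPU Adapter
  if (!adapter) throw new Error("WebGPU is not available in this browser");
  const device = await adapter.requestDevice();          // executes graphics/compute commands
  const context = canvas.getContext("webgpu") as GPUCanvasContext;
  context.configure({ device, format: navigator.gpu.getPreferredCanvasFormat() });
  return { adapter, device, context };
}
```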
When the shader system is initialized, a series of shader code files preset in the engine are loaded, with macro definitions added to the code to improve code reusability. For example, different material types may select which code segments to enable by changing macro definition values. In addition, the binding points (Bindings) in the shader code are set to auto-allocate, meaning that their specific locations are decided at compile time.
When the global binding system is initialized, a memory space is created and initialized according to the global variables; it stores global variables such as the camera projection matrix, the screen size, the mouse position and the time stamp. These contents are stored in one Buffer (the global Buffer), and setting up this Buffer allows frequently used parameters to be transferred to the GPU efficiently.
Initializing the global Render Target: a Render Target is a buffer that receives the image data output, specifically a memory region, typically a texture (Texture), a set of textures, or a frame buffer, used to store pixel data after shader processing. If the rendered result needs post-processing, such as applying image effects, it must be stored in a defined texture. For this reason, initializing the global Render Target sets up several textures as targets, storing color, depth, normal information and material properties separately, which makes them convenient to reuse in subsequent post-processing.
Initializing the resource manager: the resource manager is responsible for loading and caching various types of resource files, such as texture maps and 3D models. Once a resource has been loaded and parsed, it is kept in memory, and the next request for it is served directly from the cache, which significantly reduces the performance cost of repeated loading.
Further, initializing the input system binds the initialization operations of the mouse and keyboard and provides input-device event scheduling for the whole engine.
In this embodiment, the method for generating a WebGPU real-time rendering pipeline further includes:
during the rendering of each frame, if part of the rendering state of a three-dimensional object is detected to have changed, a macro definition value has changed, material parameters have changed, or the geometry has changed, re-initialization of the rendering pipeline is triggered and the rendering pipeline is regenerated; if no other rendering object is using the originally used pipeline, it is destroyed.
If a user sets the material rendering state of a three-dimensional object to Bundle mode, the rendering pipeline records the rendering state and a series of related information, so the CPU only needs to submit to the GPU once and subsequent rendering keeps this mode, reducing CPU-GPU communication time. Using this mode can greatly improve the rendering performance of some three-dimensional objects; the engine integrates the code so that efficient rendering in this mode is achieved simply by changing the rendering state of the material.
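A sketch of this mode using WebGPU render bundles; the encoder API is the real WebGPU interface, while the pipeline, bind group and buffer arguments are assumed to have been created elsewhere:

```typescript
// Draw commands are recorded once and replayed each frame.
function recordBundle(device: GPUDevice, pipeline: GPURenderPipeline,
                      bindGroup: GPUBindGroup, vertexBuffer: GPUBuffer,
                      format: GPUTextureFormat): GPURenderBundle {
  const encoder = device.createRenderBundleEncoder({ colorFormats: [format] });
  encoder.setPipeline(pipeline);
  encoder.setBindGroup(0, bindGroup);
  encoder.setVertexBuffer(0, vertexBuffer);
  encoder.draw(36);                 // e.g. a cube: 12 triangles, 36 vertices
  return encoder.finish();          // recorded once ...
}

// ... and replayed in every subsequent frame without re-encoding:
//   renderPass.executeBundles([bundle]);
```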
In this embodiment, as shown in FIG. 4, the method further includes:
when the rendering state of a three-dimensional object changes and no other rendering object is using the current rendering pipeline, destroying the current rendering pipeline in an asynchronous manner.
The asynchronous manner schedules destruction frame by frame in fixed batches.
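A sketch of this frame-by-frame batched destruction; the queue layout, batch size and bookkeeping helper are assumptions, and note that WebGPU exposes no explicit destroy() on GPURenderPipeline, so releasing references is what lets the browser reclaim it:

```typescript
const destroyQueue: GPURenderPipeline[] = [];
const DESTROY_BATCH_SIZE = 4; // fixed batch per frame (assumed value)

function scheduleDestroy(pipeline: GPURenderPipeline): void {
  destroyQueue.push(pipeline);   // defer: do not stall the current frame
}

function flushDestroyQueue(): void { // called once per rendered frame
  const batch = destroyQueue.splice(0, DESTROY_BATCH_SIZE);
  for (const pipeline of batch) {
    // Dropping the last JS reference lets the browser reclaim the pipeline;
    // here we only perform engine-side reference bookkeeping.
    releaseRef(pipeline);
  }
}

function releaseRef(_p: GPURenderPipeline): void { /* engine-specific bookkeeping */ }
```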
The method for generating a WebGPU real-time rendering pipeline adopts level-by-level traversal and intersection-test optimization of the octree, avoiding repeated calculation and repeated rendering; it transmits batched data in a single pass, reducing the number of uploads and improving the efficiency of data interaction; and it sets different Cache Keys to allow multiplexing of resources, increasing resource utilization while ensuring minimal resource occupation.
Example 3
Taking the rendering of a cube and a sphere as an example, this embodiment explains in detail the engine initialization and the asset generation and creation that take place before the method of embodiment 1 is carried out:
In the first step, engine initialization mainly comprises:
WebGPU context initialization;
The initialization of the low-level WebGPU mainly comprises initializing the Canvas window, creating the WebGPU Adapter, initializing the WebGPU Device, and obtaining the functional features supported by the browser and the user's computer.
Initializing the shader system;
The whole engine system is preset with many lines of shader code to which macro definitions have been added; the macro definitions are not pure WGSL, and they enable code multiplexing. The binding slots are set to auto values, and the compiled results are stored in a cache keyed by material name, which facilitates engine calls.
Initializing a global binding system;
Global binding initialization opens up a memory space according to the global variables, and the initial value of each variable in the global Buffer is uploaded to the GPU. The global Buffer mainly contains variables including the projection matrix of the camera, the view matrix of the camera, the screen size, the mouse position and the time stamp; these contents are stored in one Buffer and uploaded to the GPU every frame.
Global Render Target initialization;
If the rendering result needs post-processing, it must be stored in a defined texture Buffer. This initialization stores the rendered color, vertex position, normal and material properties in four different textures, which makes them convenient to reuse in subsequent post-processing.
Initializing a resource manager;
This includes initializing the resource cache: all loading and parsing results are stored in the cache, so a second load hits the cache, and on reuse the parsed resource is obtained directly from the cache. The resource manager provides a series of resource acquisition methods for texture maps, models, and the like.
Initializing an input system;
Binding the initialization operations of the mouse and keyboard provides input-device event scheduling for the whole engine.
Secondly, asset generation and creation:
The user creates multiple views, see FIG. 7. A view contains a camera and a scene; the scene itself contains the background sky, and three-dimensional objects can be contained within the scene. The cube and sphere to be created are the three-dimensional objects to be rendered. Each three-dimensional object to be rendered contains two parts: geometry and material. The geometry represents the shape of the object and is composed of a series of triangular faces. The geometric information representing the triangular faces typically requires vertices, normals, UVs, vertex colors, tangents and indices; this information is typically generated by automated engine calculations or comes from model files (model files are usually created with model editing tools such as 3ds Max, Cinema 4D or Blender). Each triangle has three vertices, and each vertex is represented by three numbers [x, y, z]. A cube needs 8 vertices, and its six square faces need 12 triangles; the eight vertices are combined into the 12 triangles by indexing. Each triangular face has a corresponding direction; the direction vector is the normal, likewise represented as [x, y, z]. UVs are texture coordinates: if we want to paste a picture onto a face, they map a position on the picture to a position on the triangular face. UV coordinates generally lie between 0 and 1; values greater or smaller are equivalent to repeating the texture.
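An illustrative encoding of the cube geometry just described, with 8 vertices and 12 index-defined triangles (the coordinates and winding are arbitrary choices):

```typescript
// 8 cube corners, each vertex as [x, y, z].
const cubeVertices = new Float32Array([
  -1, -1, -1,   1, -1, -1,   1,  1, -1,  -1,  1, -1,  // back face corners
  -1, -1,  1,   1, -1,  1,   1,  1,  1,  -1,  1,  1,  // front face corners
]);

// 12 triangles (two per face), built from the 8 vertices by indexing.
const cubeIndices = new Uint16Array([
  0, 2, 1,  0, 3, 2,   // back
  4, 5, 6,  4, 6, 7,   // front
  0, 1, 5,  0, 5, 4,   // bottom
  3, 7, 6,  3, 6, 2,   // top
  0, 4, 7,  0, 7, 3,   // left
  1, 2, 6,  1, 6, 5,   // right
]);
```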
The material can be regarded as a set of parameters that instruct the rendering engine how to generate the rendering pipeline. The parameters of the material include rendering-related state values, macro definition values, rendering parameters, texture maps and shader code. The rendering-related state values indicate the rules used by the GPU in its internal calculations, such as whether depth testing is enabled, how comparisons are made, and how blending is calculated for semi-transparent objects. Macro definitions are not supported by the WebGPU interface itself, so the engine provides macro definition keywords through a custom shader-code parsing tool and then reuses the relevant code in a C++-like manner; this adds precompilation logic and removes surplus shader code that does not belong to the rendering pipeline, leaving only the usable code.
For different rendering situations, only the macro definitions need to be changed. For example, if a vertex color is added to each vertex in the vertex attributes of a geometry, the vertices can be colored; since this functionally conflicts with the colors in the material, the material colors and vertex colors can be used dynamically according to the user's macro settings.
The rendering parameters include color, roughness, metalness, ambient light reflection intensity, and a series of texture maps; a texture map can be understood as a picture that is rendered onto the surface of the object through the UV coordinate values of the geometry;
The shader code is the code executed inside the GPU; WebGPU uses WGSL syntax internally. The shader code is divided into two parts: a vertex shader used to compute vertices, and a fragment shader used to shade the geometry. Different code is written for different materials, and the shader code is generally defined per material type, so a user only needs to pay attention to the state values and rendering parameters of the material.
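A minimal WGSL shader pair of this kind, embedded as a string (the struct names and group/binding indices are illustrative assumptions):

```typescript
const shaderCode = /* wgsl */ `
struct Globals { mvp : mat4x4<f32> }
@group(0) @binding(0) var<uniform> globals : Globals;

struct Material { color : vec4<f32> }
@group(1) @binding(0) var<uniform> material : Material;

// Vertex stage: computes the clip-space position from the MVP matrix.
@vertex
fn vs_main(@location(0) position : vec3<f32>) -> @builtin(position) vec4<f32> {
  return globals.mvp * vec4<f32>(position, 1.0);
}

// Fragment stage: shades the geometry with the material color.
@fragment
fn fs_main() -> @location(0) vec4<f32> {
  return material.color;
}
`;
```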
In the description of the present invention, it should be understood that the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. In the description of the present invention, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
In the present invention, unless explicitly specified and limited otherwise, the terms "mounted," "connected," "secured," and the like are to be construed broadly, and may be, for example, fixedly connected, detachably connected, or integrally formed, mechanically connected, electrically connected, directly connected, indirectly connected via an intervening medium, or in communication between two elements or in an interaction relationship between two elements. The specific meaning of the above terms in the present invention can be understood by those of ordinary skill in the art according to the specific circumstances.
In the present invention, unless expressly stated or limited otherwise, a first feature is "on" or "under" a second feature, which may be in direct contact with the first and second features, or in indirect contact with the first and second features via an intervening medium. Moreover, a first feature "above," "over" and "on" a second feature may be a first feature directly above or obliquely above the second feature, or simply indicate that the first feature is higher in level than the second feature. The first feature being "under", "below" and "beneath" the second feature may be the first feature being directly under or obliquely below the second feature, or simply indicating that the first feature is level lower than the second feature.
In the description of the present specification, the terms "one embodiment," "some embodiments," "examples," "particular examples," or "some examples," etc., refer to particular features, structures, materials, or characteristics described in connection with the embodiment or example as being included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
While embodiments of the present invention have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the invention, and that alterations, modifications, substitutions and variations may be made in the above embodiments by those skilled in the art within the scope of the invention.
Claims (10)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202411737607.8A CN119206028B (en) | 2024-11-29 | 2024-11-29 | Generation method of WebGPU real-time rendering pipeline |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202411737607.8A CN119206028B (en) | 2024-11-29 | 2024-11-29 | Generation method of WebGPU real-time rendering pipeline |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN119206028A true CN119206028A (en) | 2024-12-27 |
| CN119206028B CN119206028B (en) | 2025-05-16 |
Family
ID=94062876
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202411737607.8A Active CN119206028B (en) | 2024-11-29 | 2024-11-29 | Generation method of WebGPU real-time rendering pipeline |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN119206028B (en) |
- 2024-11-29 CN CN202411737607.8A patent/CN119206028B/en active Active
Patent Citations (15)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR20050064580A (en) * | 2003-12-24 | 2005-06-29 | 한국전자통신연구원 | 3d graphic plug-in system and the method using hardware shader |
| US20080238919A1 (en) * | 2007-03-27 | 2008-10-02 | Utah State University | System and method for rendering of texel imagery |
| US20100091018A1 (en) * | 2008-07-11 | 2010-04-15 | Advanced Micro Devices, Inc. | Rendering Detailed Animated Three Dimensional Characters with Coarse Mesh Instancing and Determining Tesselation Levels for Varying Character Crowd Density |
| CN105741341A (en) * | 2016-01-27 | 2016-07-06 | 桂林长海发展有限责任公司 | Three-dimensional space environment imaging system and method |
| CN110309458A (en) * | 2018-03-30 | 2019-10-08 | 北京东晨工元科技发展有限公司 | BIM model based on WebGL is shown and rendering method |
| CN112347546A (en) * | 2020-11-30 | 2021-02-09 | 久瓴(江苏)数字智能科技有限公司 | BIM rendering method, device and computer-readable storage medium based on lightweight device |
| GB202114030D0 (en) * | 2021-09-30 | 2021-11-17 | Imagination Tech Ltd | Rendering an image of a 3-D scene |
| CN114330689A (en) * | 2021-12-29 | 2022-04-12 | 北京字跳网络技术有限公司 | Data processing method, device, electronic device and storage medium |
| WO2024145067A1 (en) * | 2022-12-27 | 2024-07-04 | Schlumberger Technology Corporation | Seismic imaging framework |
| CN118411397A (en) * | 2023-01-28 | 2024-07-30 | 华为技术有限公司 | Image processing method and related equipment |
| CN116152039A (en) * | 2023-04-18 | 2023-05-23 | 北京渲光科技有限公司 | Image rendering method |
| CN116956603A (en) * | 2023-07-26 | 2023-10-27 | 深圳潮向数字科技有限公司 | Wind power plant model construction method, device, equipment and storage medium |
| CN118052921A (en) * | 2024-02-06 | 2024-05-17 | 西南石油大学 | Image rendering method, device, storage medium and product |
| CN118505873A (en) * | 2024-05-21 | 2024-08-16 | 珠海莫界科技有限公司 | 3D rendering method, device, computer equipment and storage medium |
| CN118606396A (en) * | 2024-06-19 | 2024-09-06 | 北京山海础石信息技术有限公司 | A method, system, device and storage medium for high-dimensional data visualization rendering |
Non-Patent Citations (4)
| Title |
|---|
| ROBERT KONRAD ET AL.: "A Cross-Platform Graphics API Solution for Modern and Legacy Development Styles", SERIOUS GAMES (JCSG 2023), 14 October 2023 (2023-10-14) * |
| MENG Xiaoning; WANG Baohua: "Research on the GPU Fixed Rendering Pipeline in Graphics Processing" (in Chinese), Integrated Circuit Applications, no. 02, 10 February 2018 (2018-02-10) * |
| WANG Fang; QIN Leihua: "Real-Time Global Illumination Rendering Based on BRDF and GPU Parallel Computing" (in Chinese), Journal of Graphics, no. 05, 15 October 2016 (2016-10-15) * |
| BO Haiguang; WU Lixin; YU Jieqing; XIE Lei: "GPU-Accelerated Parallel Visualization Experiments on SDOG" (in Chinese), Geography and Geo-Information Science, no. 04, 15 July 2013 (2013-07-15) * |
Also Published As
| Publication number | Publication date |
|---|---|
| CN119206028B (en) | 2025-05-16 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US12243151B2 (en) | Reduced acceleration structures for ray tracing systems | |
| CN113178014B (en) | Scene model rendering method and device, electronic equipment and storage medium | |
| US7164420B2 (en) | Ray tracing hierarchy | |
| US8725466B2 (en) | System and method for hybrid solid and surface modeling for computer-aided design environments | |
| CN113822788B (en) | Early release of resources in ray tracing hardware | |
| CN113593028B (en) | A method for constructing a three-dimensional digital earth for avionics display and control | |
| US20050151734A1 (en) | Method and apparatus for rendering, storing and editing voxel objects | |
| CN119206028B (en) | Generation method of WebGPU real-time rendering pipeline | |
| CN116993894B (en) | Virtual picture generation method, device, equipment, storage medium and program product | |
| Kaiser | Efficient Rendering of Earth Surface for Air Traffic Visualization | |
| CN115457189B (en) | PBD (skeletal driven software) simulation system and method based on cluster coloring | |
| US20250299421A1 (en) | Data processing systems | |
| CN120388118A (en) | Efficient rendering method and system for 3D geological models based on grid shading pipeline | |
| CN119206100A (en) | A digital twin method based on real-time rendering | |
| CN119273825A (en) | Three-dimensional data rendering method, device and electronic equipment | |
| CN119295639A (en) | A large scene hidden layer fading algorithm based on heterogeneous computing power acceleration | |
| Kaiser | Efficient rendering of the Earth's surface for air traffic visualization (original in Czech) |
| Koca et al. | Implementation Details of the Sample Application Using the Proposed Hybrid Terrain Representation | |
| Daungklang et al. | Create a process for manually editing Rapid 3D Mapping Datasets | |
| SERRAYE | Design and realization of a voxelization method via shaders | |
| Zlabinger | A project completed as part of the requirements for the BSc (Hons) Computer Studies entitled Fast Real-Time Terrain Visualisation Algorithms | |
| Schubert | Flexible and Efficient View Dependent Simplification |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |