CN117635803A - Parameter configuration method and device for shading model, computer equipment and storage medium - Google Patents
Parameter configuration method and device for shading model, computer equipment and storage medium
- Publication number
- CN117635803A (Application CN202211060155.5A)
- Authority
- CN
- China
- Prior art keywords
- coloring
- model
- initial
- rendering
- shading
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/50—Lighting effects
- G06T15/506—Illumination models
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/005—General purpose rendering architectures
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/04—Texture mapping
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
Landscapes
- Engineering & Computer Science (AREA)
- Computer Graphics (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Architecture (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Image Generation (AREA)
Abstract
The application discloses a parameter configuration method, device, computer equipment and storage medium for a shading model, belonging to the technical field of image processing. A reference image of a target object is rendered with a second, already-tuned shading model, and an initial image of the same object is rendered with a first shading model that is to be tuned. The machine automatically back-propagates the difference between the reference image and the initial image onto the shading parameters fed to the first shading model, and iteratively adjusts those parameters so that the difference shrinks step by step; when iteration stops, the rendering effect of the target image output by the first shading model is approximately consistent with the reference image. The optimal shading parameters are thus found automatically by iterative search, with no manual parameter tuning by artists, which greatly reduces the labor cost of shading-model migration and improves migration efficiency.
Description
Technical Field
The present invention relates to the field of image processing, and in particular to a parameter configuration method and apparatus for a shading model, a computer device, and a storage medium.
Background
In graphics-rendering research, after a three-dimensional model of an object is created, the model is shaded with a shading model so that the object can be rendered on a display screen. Different shading models differ in their shading process, shading parameters, texture maps, and so on, yet in application scenarios where a shading model is upgraded, or where shading models are tiered across platforms or device configurations, there is a general need to migrate a three-dimensional model from one shading model to another.
At present, during shading-model migration a technician manually draws a texture map suited to the new model based on the existing rendering result of the old model, and manually tunes the shading parameters of the new model, so migration incurs high labor cost and low efficiency.
Disclosure of Invention
The embodiments of the present application provide a parameter configuration method and apparatus for a shading model, a computer device, and a storage medium, which can reduce the labor cost of shading-model migration and improve migration efficiency. The technical scheme is as follows:
In one aspect, a parameter configuration method for a shading model is provided, the method comprising:
rendering a target object with initial shading parameters based on a first shading model to obtain an initial image;
adjusting the initial shading parameters based on the difference between the initial image and a reference image to obtain adjusted shading parameters, wherein the reference image is obtained by rendering the target object based on a second shading model, and the second shading model and the first shading model are used for rendering objects in the same virtual environment;
rendering the target object with the adjusted shading parameters based on the first shading model to obtain the initial image of the next iteration; and
iteratively performing the operations of adjusting the shading parameters and rendering the initial image, and outputting the adjusted target shading parameters, wherein a target image obtained by rendering the target object with the target shading parameters based on the first shading model meets a similarity condition with the reference image.
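The iterative scheme above can be sketched as a plain gradient-descent loop. Everything here is illustrative: `render_fn`, the parameter dictionary, and the scalar-parameter assumption are hypothetical stand-ins for the differentiable shader the application describes, not its actual implementation.

```python
import numpy as np

def fit_shading_params(render_fn, reference, params, lr=0.05, steps=200, tol=1e-4):
    """Hypothetical sketch of the claimed iteration.
    render_fn(params) must return (image, grads), where grads maps each
    scalar parameter name to d(image)/d(param), as a differentiable
    shader would provide.
    """
    for _ in range(steps):
        image, grads = render_fn(params)
        error = image - reference                 # difference between images
        loss = float(np.mean(error ** 2))
        if loss < tol:                            # similarity/convergence condition
            break
        for name, dimg_dp in grads.items():       # back-propagate the difference
            # chain rule: dL/dp = mean(2 * error * d(image)/d(param))
            params[name] = params[name] - lr * float(np.mean(2.0 * error * dimg_dp))
    return params
```

Run on a toy linear "shader", the loop recovers the parameters that produced the reference image, mirroring how the difference is gradually reduced until the target image matches.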
In one aspect, a parameter configuration apparatus for a shading model is provided, the apparatus comprising:
a rendering module, configured to render a target object with initial shading parameters based on a first shading model to obtain an initial image;
a shading parameter adjustment module, configured to adjust the initial shading parameters based on the difference between the initial image and a reference image to obtain adjusted shading parameters, wherein the reference image is obtained by rendering the target object based on a second shading model, and the second shading model and the first shading model are used for rendering objects in the same virtual environment;
the rendering module being further configured to render the target object with the adjusted shading parameters based on the first shading model to obtain the initial image of the next iteration; and
an iteration output module, configured to iteratively perform the operations of adjusting the shading parameters and rendering the initial image, and to output the adjusted target shading parameters, wherein a target image obtained by rendering the target object with the target shading parameters based on the first shading model meets a similarity condition with the reference image.
In some embodiments, the initial shading parameters include an initial texture map, an initial light-intensity coefficient, and an initial light-intensity offset;
the rendering module is configured to:
obtain the texture coordinates output for each model vertex of the target object;
sample the initial texture map at the texture coordinates of each model vertex to obtain a texel for each model vertex;
shade the model vertices of the target object based on the texel of each model vertex to obtain a pixel shading result; and
render the pixel shading result with the initial light-intensity coefficient and the initial light-intensity offset based on the first shading model to obtain the initial image.
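As a rough illustration of those four steps, the sketch below renders an "initial image" for a set of vertices with nearest-neighbour texture sampling. The function name, argument layout, and the per-vertex `shade` term are assumptions for illustration only; the actual pipeline runs in GPU shaders.

```python
import numpy as np

def render_initial_image(texture, uvs, k, b, shade):
    """texture: (H, W, 3) initial texture map; uvs: (N, 2) per-vertex texture
    coordinates in [0, 1]; k / b: initial light-intensity coefficient / offset;
    shade: assumed per-vertex shading term (e.g. a Lambert N.L factor), (N,).
    """
    h, w, _ = texture.shape
    # step 2: nearest-neighbour sample a texel at each vertex's texture coordinates
    px = np.clip((uvs[:, 0] * (w - 1)).astype(int), 0, w - 1)
    py = np.clip((uvs[:, 1] * (h - 1)).astype(int), 0, h - 1)
    texels = texture[py, px]                       # (N, 3)
    # step 3: per-vertex pixel shading result
    shaded = texels * shade[:, None]
    # step 4: apply the light-intensity coefficient and offset
    return k * shaded + b
```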
In some embodiments, the apparatus further comprises:
a first configuration module, configured to configure the vertex shaders of the first shading model and the second shading model so that the vertex shaders output the texture coordinates of each model vertex of the target object.
In some embodiments, the apparatus further comprises:
a second configuration module, configured to configure the pixel shader of the first shading model so that the pixel shader outputs the pixel shading result and the differential of each initial shading parameter, where the differential characterizes the change in the shading parameter required to produce a given change in the initial image.
In some embodiments, the shading parameter adjustment module comprises:
a differential acquisition unit, configured to acquire the differential of each initial shading parameter, the differential characterizing the change in the shading parameter required to produce a given change in the initial image;
an error acquisition unit, configured to acquire the error between the initial image and the reference image; and
a shading parameter adjustment unit, configured to adjust the initial shading parameters based on the error and the differentials to obtain the adjusted shading parameters.
In some embodiments, the initial shading parameters include an initial texture map, an initial light-intensity coefficient, and an initial light-intensity offset;
the differential acquisition unit is configured to:
acquire a first differential for the initial texture map, a second differential for the initial light-intensity coefficient, and a third differential for the initial light-intensity offset.
In some embodiments, the shading parameter adjustment unit is configured to:
adjust the initial texture map based on the error and the first differential to obtain an adjusted texture map;
adjust the initial light-intensity coefficient based on the error and the second differential to obtain an adjusted light-intensity coefficient; and
adjust the initial light-intensity offset based on the error and the third differential to obtain an adjusted light-intensity offset.
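The three adjustments can be illustrated as one gradient-descent update per parameter. The update form (parameter minus learning rate times error times differential) is an assumed standard formulation, since the exact rule is not fixed at this point in the text; the texture map is updated per texel, while the coefficient and offset are scalars.

```python
import numpy as np

def update_params(error, d_tex, d_k, d_b, tex, k, b, lr=0.1):
    """Assumed gradient-descent form of the three adjustments.
    error, d_tex: per-texel arrays; d_k, d_b: per-texel differentials of the
    image w.r.t. the scalar coefficient k and offset b.
    """
    tex_new = tex - lr * error * d_tex                 # adjusted texture map (per texel)
    k_new = k - lr * float(np.sum(error * d_k))        # adjusted light-intensity coefficient
    b_new = b - lr * float(np.sum(error * d_b))        # adjusted light-intensity offset
    return tex_new, k_new, b_new
```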
In some embodiments, where the first shading model is a Lambert shading model, the differential acquisition unit is further configured to:
acquire the normal vector of each pixel in texture space;
acquire the light direction and light color based on attribute information of the virtual environment; and
acquire the first differential based on the normal vector of each pixel, the light direction, the light color, the initial light-intensity coefficient, and the initial light-intensity offset.
In some embodiments, where the first shading model is a Lambert shading model, the differential acquisition unit is further configured to:
acquire the mapping parameter of each pixel and the normal vector of each pixel in texture space;
acquire the light direction and light color based on attribute information of the virtual environment; and
acquire the second differential based on the mapping parameter of each pixel, the normal vector of each pixel, the light direction, and the light color.
In some embodiments, where the first shading model is a Lambert shading model, the differential acquisition unit is further configured to:
acquire the third differential based on the mapping parameters of each pixel in texture space.
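For a Lambert model of the assumed form C = k · albedo · light_color · max(0, N·L) + b, the three differentials can be written in closed form. The formula and function below are an illustrative reconstruction consistent with the inputs listed in these embodiments, not equations stated in the text.

```python
import numpy as np

def lambert_differentials(albedo, normal, light_dir, light_color, k):
    """Analytic differentials of C = k * albedo * light_color * max(0, N.L) + b
    (assumed Lambert form) w.r.t. the texture map (albedo), the intensity
    coefficient k, and the intensity offset b.
    """
    ndotl = np.maximum(0.0, np.dot(normal, light_dir))  # per-pixel N.L term
    d_albedo = k * light_color * ndotl                  # first differential (dC/d albedo)
    d_k = albedo * light_color * ndotl                  # second differential (dC/dk)
    d_b = np.ones_like(albedo)                          # third differential (dC/db)
    return d_albedo, d_k, d_b
```

Note how the inputs match the three embodiments: the first differential needs the normal, light direction/color, and the coefficient; the second also needs the texel ("mapping parameter"); the third is constant per pixel.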
In some embodiments, the iteration output module is configured to:
output the target shading parameters when the number of iteration steps reaches a set number; or
output the target shading parameters when the error between the initial image and the reference image meets a convergence condition.
In one aspect, a computer device is provided, comprising one or more processors and one or more memories in which at least one computer program is stored, the at least one computer program being loaded and executed by the one or more processors to implement the parameter configuration method for a shading model described above.
In one aspect, a computer-readable storage medium is provided, in which at least one computer program is stored, the at least one computer program being loaded and executed by a processor to implement the parameter configuration method for a shading model described above.
In one aspect, a computer program product is provided, comprising one or more computer programs stored in a computer-readable storage medium. One or more processors of a computer device can read the one or more computer programs from the computer-readable storage medium and execute them, so that the computer device can perform the parameter configuration method for a shading model described above.
The technical solutions provided by the embodiments of the present application bring at least the following beneficial effects:
A reference image is obtained by rendering the target object with the already-tuned second shading model, and an initial image is obtained by rendering the target object with the first shading model to be tuned. The machine can automatically back-propagate the difference between the reference image and the initial image onto the shading parameters fed to the first shading model, and then gradually reduce that difference by iteratively adjusting the shading parameters, so that when iteration stops the rendering effect of the target image output by the first shading model is approximately consistent with the reference image. During shading-model migration, the machine thus iteratively searches for the optimal shading parameters automatically, without artists manually tuning them, which greatly reduces the labor cost of shading-model migration and improves its efficiency.
Drawings
In order to describe the technical solutions of the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present application; a person skilled in the art can derive other drawings from them without inventive effort.
FIG. 1 is a schematic view of an implementation environment of a parameter configuration method for a shading model according to an embodiment of the present application;
FIG. 2 is a flowchart of a parameter configuration method for a shading model according to an embodiment of the present application;
FIG. 3 is a flowchart of a parameter configuration method for a shading model according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a configuration interface for a rendering scene according to an embodiment of the present application;
FIG. 5 is a schematic diagram of another configuration interface for a rendering scene according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a rendering environment according to an embodiment of the present application;
FIG. 7 is a schematic diagram of an operation interface of a rendering engine according to an embodiment of the present application;
FIG. 8 is a flowchart of a conventional shading-model migration according to an embodiment of the present application;
FIG. 9 is a parameter training flowchart of a texture-space-based shading model according to an embodiment of the present application;
FIG. 10 is a rendering flowchart of an initial image according to an embodiment of the present application;
FIG. 11 is a comparison of rendering effects before and after rewriting the vertex shader according to an embodiment of the present application;
FIG. 12 is a comparison of the albedo map and the coordinate space after rewriting the vertex shader according to an embodiment of the present application;
FIG. 13 is a schematic diagram of a reference image rendered by a PBR shading model according to an embodiment of the present application;
FIG. 14 is a schematic diagram of an initial image rendered by a Lambert shading model according to an embodiment of the present application;
FIG. 15 is a technical framework diagram of a shading-model migration scheme based on texture-space differentiable rendering according to an embodiment of the present application;
FIG. 16 is a schematic diagram of a progress bar for migration training of a shading model according to an embodiment of the present application;
FIG. 17 is a rendering effect diagram of a shading model after migration training of its shading parameters according to an embodiment of the present application;
FIG. 18 is a schematic diagram of a migration flow of a shading model according to an embodiment of the present application;
FIG. 19 is a graph of test results for a shading model under different test conditions according to an embodiment of the present application;
FIG. 20 is a schematic structural diagram of a parameter configuration apparatus for a shading model according to an embodiment of the present application;
FIG. 21 is a schematic structural diagram of a computer device according to an embodiment of the present application;
FIG. 22 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
The terms "first", "second", and the like in this application are used to distinguish between identical or similar items that have substantially the same function and purpose. It should be understood that there is no logical or chronological dependency among "first", "second", and "nth", and that they do not limit the number of items or the order of execution.
The term "at least one" in this application means one or more, and "a plurality of" means two or more; for example, a plurality of shading parameters means two or more shading parameters.
The phrase "comprising at least one of A or B" in this application covers the following cases: only A, only B, and both A and B.
When the methods of the embodiments of this application are applied to a specific product or technology, any user-related information (including, but not limited to, user equipment information, personal information, and behavioral information), data (including, but not limited to, data for analysis, stored data, and presented data), and signals referred to in this application are used with user approval, consent, or authorization, or with full authorization by all parties, and the collection, use, and processing of such information, data, and signals must comply with the relevant laws, regulations, and standards of the relevant countries and regions. For example, the model data of the target object referred to in this application are all acquired with full authorization.
Hereinafter, terms related to embodiments of the present application will be explained.
Virtual environment: the virtual environment that an application displays (or provides) while running on a terminal. The virtual environment may be a simulation of the real world, a semi-simulated and semi-fictional environment, or a purely fictional environment. It may be two-dimensional, 2.5-dimensional, or three-dimensional; the embodiments of this application do not limit its dimensionality. For example, the virtual environment may include sky, land, and sea, and the land may include environmental elements such as deserts and cities; the user may control a virtual object to move in the virtual environment. Optionally, the virtual environment may also be used for battles between at least two virtual objects, with virtual resources available to them in the virtual environment.
Virtual object: an object that occupies part of the space in a virtual environment, such as a virtual character, virtual animal, virtual plant, cartoon character, or item, e.g., the people, animals, plants, oil drums, walls, and stones displayed in a virtual environment. The virtual environment may include multiple virtual objects, each with its own shape and volume, occupying part of the space in the virtual environment. Optionally, when the virtual environment is three-dimensional, the virtual object is a three-dimensional stereoscopic model; when the virtual scene is 2.5-dimensional or 2-dimensional, the virtual object is implemented with a 2.5-dimensional or 2-dimensional model, which the embodiments of this application do not limit.
Game engine: the core component of an editable computer game system or interactive real-time graphics application. A game engine provides game designers with the tools required to write games, so that they can produce game programs easily and quickly without starting from scratch. A game engine typically comprises the following systems: a rendering engine (i.e., the "renderer", including two-dimensional and three-dimensional image engines), a physics engine, collision detection, sound effects, a script engine, computer animation, artificial intelligence, a network engine, and scene management.
Rendering engine: in the field of image technology, a rendering engine renders the three-dimensional model built for a virtual object into a two-dimensional image in which the stereoscopic effect of the model is preserved. In the game field in particular, after the model data of a three-dimensional model is imported into the rendering engine, the engine drives the rendering pipeline in the GPU (Graphics Processing Unit) to render the virtual scene of the game and all virtual objects in it, so that the objects described by the three-dimensional models are presented visually on the display screen of the game device.
Rendering pipeline: the graphics rendering flow that runs in the GPU. Image rendering typically involves the following pipeline stages: vertex shader, rasterization, and pixel shader; by writing shader code, a developer can control how the GPU draws the rendered components.
Vertex shader: a programmable stage of the GPU rendering pipeline. The program processes each model vertex of the three-dimensional model one by one according to its code and outputs the result to the next stage.
Rasterization: a non-programmable (fixed-function) stage of the GPU rendering pipeline. The hardware automatically assembles the output of the vertex shader or geometry shader into triangles, rasterizes the triangles into discrete pixels according to the configuration, and outputs them to the pixel shader.
Pixel shader: a mandatory stage of the GPU rendering pipeline. The program performs shading calculations on the rasterized pixels according to its code and, after the pixels pass the tests, outputs them to the frame buffer, completing one pass through the rendering pipeline.
Shading model: the rendering program framework that runs on the GPU's pixel shader when rendering the three-dimensional model of an object.
Shading model migration: in graphics-rendering research and industrial applications, the process of migrating a three-dimensional model or scene from one shading model to another.
PBR (Physically Based Rendering): a shading model based on physical optics. Images rendered with a PBR shading model look more realistic, but the model's performance cost is higher, so it is generally suited to games on high-performance devices such as PCs (Personal Computers).
Lambert: the Lambert shading model, an empirical rendering method. Its performance is good but the visual expressiveness of the rendered image is limited; that is, its performance overhead is lower than that of a PBR shading model, but the visual effect of the rendered image is not as good.
Coordinate transformation: rendering a three-dimensional model usually involves several coordinate systems for the same model. Coordinate transformation is the process of converting coordinates from one spatial coordinate system into another. The transformation between coordinate systems is usually represented by a matrix: left-multiplying a coordinate vector by the matrix yields the transformed coordinates.
Normalized Device Coordinates (NDC): a three-dimensional coordinate system independent of any specific device platform, obtained by performing perspective division on the four-dimensional clip-space coordinates.
Texture (UV) space: a coordinate space obtained by unfolding the surface of a three-dimensional model onto a two-dimensional plane, generally generated by the UV-unwrapping function of modeling software.
Shading-model migration is usually involved when a shading model is upgraded, or when shading models are tiered for multiple platforms or device configurations, for the three-dimensional model of a target object. For example, when a new shading model is rolled out, the old shading model must be migrated to the new one; or, when the same game deploys different shading models on the PC side and the mobile side, the PC-side shading model must be migrated to the mobile side, or vice versa.
Because different shading models differ in their shading process and shading parameters (such as lighting parameters and texture maps), and because shading-model migration should not be easily noticed by users, the rendering results before and after migration are always required to look the same.
In a conventional game development process, a new shading model is usually developed by a graphics programmer or technical artist. An artist then draws a texture map suited to the new shading model based on the existing rendering result of the old shading model on the target object. With the texture map fixed, initial lighting parameters are chosen, the target object is rendered with the new shading model, the newly drawn texture map, and the chosen lighting parameters to obtain a new rendering result, and the artist repeatedly tunes the lighting parameters against this result until it is visually consistent with the existing one, thereby obtaining the texture map and lighting parameters adapted to the new shading model.
This process depends heavily on artists manually drawing texture maps and tuning lighting parameters for the new shading model, and the time they must spend on shading-model migration becomes an efficiency bottleneck in game development. In view of this, embodiments of the present application provide a shading-model migration scheme based on differentiable rendering in texture space, using the theory of differentiable rendering.
In differentiable rendering, the rendering pipeline is rewritten to be differentiable, so that the difference between rendering results can be used continuously during rendering: the difference is back-propagated to the rendering parameters according to differential information, and the input rendering parameters are inferred from the rendering results by gradient descent. Shading-model migration, however, involves triangle rasterization, whose differentiable form cannot run in the GPU; a differentiable-rendering framework is complex, and implementing one on the CPU (Central Processing Unit) yields low rendering performance, accuracy, and usability.
In the embodiments of the present application, building on differentiable-rendering theory, the model vertices of the three-dimensional model are unfolded directly into texture space for differentiable rendering and gradient descent, which is equivalent to skipping the differentiation of triangle rasterization. On the basis of the rendering result (i.e., the reference image) of the existing second shading model in texture space, the machine can differentiably render the first shading model to be tuned, thereby migrating from the second shading model to the first. The whole rendering process remains differentiable throughout the migration, and the automatic parameter configuration of the shading model can be carried out with the GPU rendering pipeline.
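The texture-space trick can be made concrete with a tiny sketch: a rewritten vertex shader simply places each vertex at its UV coordinate remapped from [0, 1] into NDC [-1, 1], so the model renders "unfolded" in texture space and the rasterization step no longer needs to be differentiated. The Python function below mimics that remap; it is an illustration, not actual shader code from this application.

```python
import numpy as np

def uv_to_ndc(uvs):
    """Remap per-vertex UV coordinates in [0, 1] to NDC in [-1, 1], the
    position a texture-space vertex shader would emit instead of the usual
    camera-projected position.
    """
    return uvs * 2.0 - 1.0
```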
The following describes the environment in which embodiments of the present application are implemented.
Fig. 1 is a schematic diagram of an implementation environment of a parameter configuration method for a shading model according to an embodiment of the present application. Referring to Fig. 1, the implementation environment includes a terminal 101 and a server 102.
The terminal 101 installs and runs an application that supports a virtual environment. Optionally, the application is a MOBA (Multiplayer Online Battle Arena) game, a shooting game, an MMORPG (Massively Multiplayer Online Role-Playing Game), a virtual reality application, a three-dimensional map program, or a multiplayer survival game; alternatively, the application may be a cloud gaming application.
In some embodiments, the terminal 101 is used by a user. When the terminal 101 runs a game application and the user triggers an open operation in it, the terminal 101 loads the virtual environment data and the three-dimensional model data of each virtual object in the virtual environment and passes them to the rendering engine; the rendering engine drives the GPU rendering pipeline to render the virtual environment and each virtual object in it, obtaining a virtual environment picture that presents the virtual environment and its virtual objects, and the terminal 101 then displays this picture on its screen.
The terminal 101 is directly or indirectly connected to the server 102 through wired or wireless communication, which is not limited herein.
Server 102 includes at least one of a server, a plurality of servers, a cloud computing platform, or a virtualization center. The server 102 is used to provide background services for applications that support virtual environments. Optionally, the server 102 takes on primary image rendering work and the terminal 101 takes on secondary image rendering work; alternatively, the server 102 performs the secondary image rendering work, and the terminal 101 performs the primary image rendering work; alternatively, a distributed computing architecture is used between the server 102 and the terminal 101 for collaborative image rendering.
Optionally, the server 102 is a stand-alone physical server, or a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs (Content Delivery Network, content delivery networks), and basic cloud computing services such as big data and artificial intelligence platforms.
In some embodiments, assuming that the second shading model is an existing shading model, a technician may debug the shading parameters of the second shading model for a target object, so that the reference image obtained after the second shading model renders the target object achieves a good visual effect. In the shading-model migration process, the new first shading model is known, but the shading parameters that should be input to the first shading model are unknown.
In a coloring model upgrading scene, a first coloring model refers to a new coloring model after upgrading, and a second coloring model refers to an old coloring model to be upgraded; in a shading model hierarchical scenario for multiple platforms or multiple different device configurations, a first shading model is a shading model of a certain platform or device configuration that has not been debugged, a second shading model is a shading model of another platform or device configuration that has been debugged, e.g., the first shading model is a PC-side shading model, the second shading model is a mobile-side shading model, or the first shading model is a mobile-side shading model, and the second shading model is a PC-side shading model.
Note that the device types of the terminal 101 include at least one of: a smart phone, a tablet computer, a smart speaker, a smart watch, a handheld game console, a portable game device, a vehicle-mounted terminal, a laptop portable computer, and a desktop computer, but are not limited thereto. For example, the terminal 101 is a smart phone or another handheld portable game device. The following embodiments are illustrated with the terminal 101 being a smart phone.
Those skilled in the art will recognize that the number of terminals described above may be greater or fewer; for example, there may be only one terminal, or tens or hundreds of terminals, or more. The number of terminals and the device types are not limited in the embodiments of the present application.
Fig. 2 is a flowchart of a method for configuring parameters of a coloring model according to an embodiment of the present application. Referring to fig. 2, this embodiment is performed by a computer device, which is exemplified by a server, and includes the steps of:
201. the server renders the target object with initial rendering parameters based on the first rendering model to obtain an initial image.
In some embodiments, the server determines a first shading model of the parameter to be debugged and a second shading model of the parameter after debugging, in other words, during migration of the shading models, the first shading model is a shading model after migration, and the second shading model is a shading model before migration.
Illustratively, in a shading model upgrade scenario, the first shading model refers to the updated new shading model, and the second shading model refers to the old shading model to be upgraded.
Illustratively, in a shading model hierarchical scenario for multiple platforms or multiple different device configurations, a first shading model is a shading model of a certain platform or device configuration that has not been debugged, and a second shading model is a shading model of another platform or device configuration that has been debugged. For example, the first shading model is a PC-side shading model, the second shading model is a mobile-side shading model, or the first shading model is a mobile-side shading model, and the second shading model is a PC-side shading model.
It should be noted that it is only necessary to ensure that the first shading model and the second shading model are different shading models; for example, the first shading model is a Lambert shading model and the second shading model is a PBR shading model. The embodiment of the present application does not specifically limit which shading models the first shading model and the second shading model are.
In some embodiments, after the first shading model is determined, the shading parameters of the first shading model are configured through initialization to obtain initial shading parameters. Further, the target object targeted by the shading-model migration process is determined, model data of the three-dimensional model of the target object is loaded and imported into a rendering engine, and the rendering engine drives a rendering pipeline in the GPU, which performs vertex shading, rasterization, and pixel shading on the three-dimensional model of the target object. In the pixel shading stage, the three-dimensional model of the target object is shaded using the first shading model configured with the initial shading parameters, so as to obtain an initial image.
202. The server adjusts the initial coloring parameters based on the difference between the initial image and the reference image to obtain adjusted coloring parameters, the reference image is obtained by rendering the target object based on a second coloring model, and the second coloring model and the first coloring model are used for rendering the object in the same virtual environment.
In some embodiments, after the second shading model is determined in step 201, since the second shading model is a shading model whose parameters have already been debugged, the rendering result of the second shading model on the target object may be used as the reference image of the first shading model in the differentiable-rendering process. In other words, because the rendering effect needs to stay consistent before and after the shading model is migrated, the reference image represents the rendering effect expected to be achieved after the shading parameters of the first shading model are debugged.
In some embodiments, after model data of the three-dimensional model of the target object is imported to a rendering engine, a rendering pipeline in the GPU is driven by the rendering engine, vertex shading, rasterization and pixel shading are respectively performed on the three-dimensional model of the target object in the rendering pipeline, and in a pixel shading stage, the three-dimensional model of the target object is shaded by using a second shading model with parameters being debugged, so as to obtain a reference image.
After the reference image and the initial image in step 201 are obtained, the difference between the initial image and the reference image can be determined by comparing the initial image with the reference image, and then, according to the difference between the initial image and the reference image, the initial coloring parameters of the original first coloring model can be adjusted, so as to obtain the adjusted coloring parameters.
In some embodiments, under differentiable-rendering theory, the initial shading parameters are adjusted based on a gradient descent method: the difference between the initial image and the reference image is back-propagated to the initial shading parameters to obtain the adjusted shading parameters. In this way, the entire configuration process of the shading parameters is performed automatically by a machine, and the whole shading-model migration flow is converted into a differentiable rendering flow.
203. And the server renders the target object with the adjusted coloring parameters based on the first coloring model to obtain an initial image of the next iteration.
In some embodiments, the server drives the rendering pipeline in the GPU again through the rendering engine in a manner similar to step 201, and vertex shading, rasterizing, and pixel shading are performed again on the three-dimensional model of the target object in the rendering pipeline, wherein in the pixel shading stage, the three-dimensional model of the target object is shaded with the first shading model configured with the adjusted shading parameters, resulting in an initial image for the next iteration.
204. The server iteratively executes operations of adjusting the coloring parameters and rendering the initial image, and outputs the adjusted target coloring parameters, wherein a similar condition is met between a target image obtained by rendering the target object with the target coloring parameters based on the first coloring model and the reference image.
In some embodiments, after obtaining the coloring parameter of the next iteration, the server calculates the difference between the reference image and the initial image of the next iteration in a manner similar to step 202, and further continues to adjust the coloring parameter of the first coloring model according to the difference calculated in the next iteration, in other words, iteratively performs the operations of adjusting the coloring parameter and rendering the initial image until the initial image rendered in a certain iteration meets the similarity condition with the reference image, at this time, the iteration is stopped, the initial image obtained in the last iteration is output as the final rendered target image, and the coloring parameter used in the last iteration is output as the target coloring parameter.
In other words, after the first shading model renders the three-dimensional model of the target object with the target shading parameters, the obtained target image and the reference image satisfy the similarity condition; that is, the shading parameters of the first shading model are considered adjusted so that the target image rendered by the first shading model is approximately consistent with the reference image rendered by the second shading model. This meets the business requirement of keeping the rendering effect consistent before and after the shading model is migrated. After the target shading parameters of the first shading model are output, the server can save them for subsequent access by other devices.
After the target shading parameters are obtained through the adjustment, when any terminal needs to display the target object in the virtual scene, the terminal can request from the server the target shading parameters of the target object under the first shading model. The server returns the adjusted target shading parameters to the terminal, so that the GPU on the terminal side can render the three-dimensional model of the target object with the target shading parameters based on the first shading model to obtain the target image. After the target object is migrated from the second shading model to the first shading model, its rendering result is converted from the reference image to the target image; because the similarity condition is satisfied between the target image and the reference image, the rendering effect of the target object stays approximately consistent, or differs only by a small error.
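The iterative loop of steps 201-204 can be sketched with a toy, fully differentiable stand-in for the first shading model; the `render` function, the per-pixel `coeff`/`offset` parameters, and the learning rate below are all illustrative assumptions, not the patent's actual shaders:

```python
import numpy as np

# Hypothetical stand-in for the first shading model: one "image" value per
# pixel, controlled by two shading parameters (coeff, offset).
def render(base, coeff, offset):
    return coeff * base + offset

base = np.array([0.2, 0.5, 0.8])               # per-pixel base colors
reference = render(base, 1.5, 0.1)             # reference image (second model)

coeff, offset = 1.0, 0.0                       # initial shading parameters
lr = 0.05                                      # learning rate
for _ in range(5000):
    image = render(base, coeff, offset)        # steps 201 / 203: render
    diff = image - reference                   # step 202: difference
    if np.max(np.abs(diff)) < 1e-6:            # step 204: similarity condition
        break
    # Back-propagate the squared-error difference to each parameter.
    coeff -= lr * np.sum(2 * diff * base)
    offset -= lr * np.sum(2 * diff)

print(coeff, offset)                           # converges toward 1.5 and 0.1
```

The loop stops as soon as the rendered image matches the reference within tolerance, and the final `(coeff, offset)` plays the role of the target shading parameters.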
All the above optional solutions can be combined to form an optional embodiment of the present disclosure, which is not described in detail herein.
According to the method provided by the embodiment of the application, the reference image is rendered for the target object by the debugged second shading model, and the initial image is rendered for the target object by the first shading model to be debugged. Based on the difference between the reference image and the initial image, the machine can automatically back-propagate the difference to the shading parameters input to the first shading model, and gradually reduce the difference by iteratively adjusting the shading parameters, so that when iteration stops, the rendering effect of the target image output by the first shading model is approximately consistent with that of the reference image. In the shading-model migration process, the machine automatically iterates to find the optimal shading parameters without an artist manually debugging them, which greatly reduces the labor cost of shading-model migration and improves its efficiency.
The above embodiment briefly introduced how a machine automatically configures the shading parameters of the first shading model in an iterative manner given a reference image. This embodiment describes how differentiable rendering is performed in texture space to implement the shading-model migration process: since differentiable triangle rasterization cannot be performed in the GPU, transferring the differentiable rendering into texture space skips the differentiation step of triangle rasterization, so that the whole differentiable-rendering process can be executed in the rendering pipeline of the GPU, greatly improving the migration performance of the shading model. This is described below.
Fig. 3 is a flowchart of a method for configuring parameters of a coloring model according to an embodiment of the present application. Referring to fig. 3, this embodiment is performed by a computer device, which is exemplified by a server, and includes the steps of:
301. the server configures a rendering environment that migrates from the second shading model to the first shading model.
Wherein the second shading model is used to render objects in the same virtual environment as the first shading model.
In some embodiments, after determining the first shading model and the second shading model, the server configures a rendering environment of the shading model migration process, optionally, the rendering environment is a predefined virtual environment for training shading parameters, or the rendering environment is a virtual environment where the target object is actually applied, for example, when the target object is a game prop, the rendering environment is a virtual environment predefined by a technician and dedicated to training the shading parameters, or the rendering environment is a game environment where the game prop is actually located, which is not particularly limited in the embodiments of the present application.
In some embodiments, taking a rendering environment as a predefined rendering environment for training the rendering parameters as an example, a technician may create a rendering scene in the rendering engine, and take the rendering scene as a rendering environment for migration training of the rendering model.
In one exemplary scene, a rendering scene is newly built in a rendering engine, a configuration interface of the rendering scene is displayed in the rendering engine, and a technician can configure various scene parameters of the rendering scene in the configuration interface, such as configuring camera projection parameters, camera background color, scene depth parameters, and the like, which is not particularly limited in the embodiment of the present application.
As shown in fig. 4, one possible configuration interface 400 for the rendering scene is shown. In the configuration interface 400, the camera projection parameters may be modified to an orthogonal projection of size 1, and the camera background color may be set to solid black.
As shown in fig. 5, another possible configuration interface 500 for rendering a scene is shown, in which configuration interface 500 a setting menu 502 for the screen output image size is expanded by a drop down option 501, in which setting menu 502 a plurality of selectable image sizes are displayed. Illustratively, a size configuration window 510 is also displayed in the configuration interface 500, in which a technician can customize the width and height of the image size, and after the configuration of the width and height of the image size is completed, for example, the screen output image size is modified to 2048×2048, and clicking the confirm option 511 completes the configuration of the screen output image size, or clicking the cancel option 512 returns the original configuration of the screen output image size.
In some embodiments, after the rendering environment is configured, model data of the three-dimensional model of the target object is imported into the rendering environment. Since different shading models correspond to different shading styles (in other words, the first shading model and the second shading model have different shading styles), in order to conveniently compare the rendering effects before and after training, the target object can be rendered in the rendering environment using the debugged second shading model to obtain a reference rendering map (i.e., the reference image), while the target object is rendered using the first shading model that has not been debugged to obtain a pre-training rendering map (i.e., the initial image).
As shown in fig. 6, a possible rendering environment 600 is shown, taking the target object as a virtual puppy as an example. In the rendering environment 600, a reference rendering graph 601, obtained by rendering the virtual puppy with the second shading model whose shading parameters have been debugged (for example, a PBR model), is placed at the origin (0, 0) of the world coordinates. A pre-training rendering graph 602, obtained by rendering the virtual puppy with the first shading model whose shading parameters are to be debugged (for example, a Lambert shading model), is placed at position (0, 2) of the world coordinates. Since the initial shading parameters of the first shading model are automatically learned by the machine, they can be set to any value.
302. The server configures the vertex shaders of both the first shading model and the second shading model such that the vertex shaders output texture coordinates for each model vertex of the target object.
In some embodiments, the vertex shaders of both the first shading model and the second shading model need to be configured so that the NDC coordinates (i.e., normalized device coordinates) output by the native vertex shader are replaced with UV coordinates (i.e., texture coordinates). In other words, before the configuration, the vertex shader parses the three-dimensional model of the target object and outputs the NDC coordinates of each model vertex; after the configuration, it outputs the UV coordinates of each model vertex.
In one exemplary scenario, the vertex shader is modified in the first shading model or the second shading model, and the following code is modified at the shader coordinate transformation such that the vertex shader outputs UV coordinates (instead of NDC coordinates):
v2f o;
// replace the normal vertex transform: output the UV-space position instead
o.pos.xy = v.texcoord.xy * 2 - 1;
o.pos.z = 0;
o.pos.w = 1;
return o;
Through the above code, for any model vertex o in the three-dimensional model of the target object, the two-dimensional UV coordinates (x, y) of the vertex in UV space (i.e., texture space) are remapped from the range [0, 1] to [-1, 1] and output as the position, the depth coordinate z of each model vertex is uniformly set to 0, and the homogeneous coordinate w of each model vertex is uniformly set to 1.
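As a sketch, the coordinate remapping performed by this modified vertex shader can be reproduced in Python (NumPy is used as a stand-in for shader arithmetic; the function name is illustrative):

```python
import numpy as np

def uv_to_output_position(uv):
    """Remap a vertex's UV coordinates from [0, 1] to the [-1, 1] x/y range,
    with depth z fixed to 0 and homogeneous w fixed to 1, mirroring the
    modified vertex shader above."""
    x, y = np.asarray(uv, dtype=float) * 2.0 - 1.0
    return np.array([x, y, 0.0, 1.0])

print(uv_to_output_position([0.0, 0.0]))   # texture corner -> x/y = (-1, -1)
print(uv_to_output_position([0.5, 0.5]))   # texture center -> x/y = (0, 0)
```

Because the output position depends only on the UV coordinates, every vertex lands in the plane z = 0, which is what unfolds the model into texture space for the later differentiable rendering.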
303. The server configures a pixel shader of the first shading model to output the pixel shading result and a differential amount of each initial shading parameter.
Wherein the differential quantity characterizes how the pixel shading result changes with respect to each initial shading parameter, that is, the change in a shading parameter required to produce a given change in the initial image.
In some embodiments, the vertex shaders of both the first shading model and the second shading model need to be configured, but the pixel shader only needs to be configured for the first shading model to be debugged; the pixel shader can be made differentiable by rewriting its code in the first shading model.
In some embodiments, the first shading model is illustrated as a Lambert shading model, which involves three initial shading parameters: an initial texture map (denoted X1), an initial light intensity coefficient (denoted X2), and an initial light intensity offset (denoted X3). These three initial shading parameters are input as texture parameters to the pixel shader, so that the pixel shader shades the target object according to the input texture parameters.
It should be noted that, the initial light intensity coefficient X2 and the initial light intensity offset X3 are both constant parameters (but the constant parameters are to be adjusted), and the initial texture map X1 is the main color map of the three-dimensional model of the target object, in other words, the initial texture map X1 is an unknown (i.e. not yet debugged) model map, and the initial light intensity coefficient X2 and the initial light intensity offset X3 are both constant parameters which are unknown (i.e. not yet debugged).
In the stage of modifying the pixel shader for the Lambert model, the output definition of the pixel shader is modified so that the differential amount of each initial shading parameter is attached to the output. Thus, in the rendering stage, the pixel shader outputs not only the texel of each model vertex in the three-dimensional model of the target object, but also the differential amount of each initial shading parameter. Different shading models have different types and numbers of shading parameters, and therefore output different differential amounts.
In an exemplary scenario, with the first shading model being a Lambert shading model, the output definition of the pixel shader is accompanied by the differential amounts of the initial texture map X1, the initial light intensity coefficient X2, and the initial light intensity offset X3, and can be modified as follows:
wherein, the floating-point variable y refers to the texture pixel of each model vertex, the floating-point variable dydx1 refers to the first differential amount of the initial texture map X1, the floating-point variable dydx2 refers to the second differential amount of the initial light intensity coefficient X2, and the floating-point variable dydx3 refers to the third differential amount of the initial light intensity offset X3.
On the basis of modifying the above definition of the output for the pixel shader, the differential calculation results of each of the three initial shading parameters can be attached to the output of the pixel shader, as shown in the following code:
from the above codes, it can be seen that, for any model vertex o in the three-dimensional model of the target object, after being processed by the pixel shader, the following 4 data are output: the texel y of the model vertex o, the first differential dydx1 of the initial texture map X1 in this iteration, the second differential dydx2 of the initial light intensity coefficient X2 in this iteration, and the third differential dydx3 of the initial light intensity offset X3 in this iteration.
After the output definition and output result modification for the pixel shader are completed, the pixel shader is configured, and the process proceeds to the training process of steps 304-309 described below. In some embodiments, the automated training process for the shading parameters of the first shading model is triggered by style migration options provided in a menu bar of the rendering engine.
As shown in fig. 7, an operation interface 700 of a rendering engine is shown, a style migration option 701 is provided in the operation interface 700, after clicking the style migration option 701, a trainable first coloring model 702 is displayed in the operation interface 700, taking the first coloring model 702 as a lambert coloring model as an example, clicking the trainable first coloring model 702, a training option 703 for the first coloring model 702 is displayed in the operation interface 700, for example, a word of "start training" is displayed on the training option 703, and clicking the training option 703 can trigger a training process of a subsequent automatic coloring parameter.
304. The server renders the target object with given rendering parameters based on the second rendering model to obtain a reference image.
In some embodiments, since the second shading model is a shading model whose parameters have already been debugged, its shading parameters are given shading parameters that require no training during the shading-model migration process. The rendering result of the second shading model on the target object may therefore serve as the reference image of the first shading model in the differentiable-rendering process; because the rendering effect must stay consistent before and after migration, the reference image represents the rendering effect expected after the shading parameters of the first shading model are debugged.
In some embodiments, after model data of the three-dimensional model of the target object is imported to the rendering engine, a rendering pipeline in the GPU is driven by the rendering engine, vertex shading, rasterizing, and pixel shading are performed on the three-dimensional model of the target object in the rendering pipeline, respectively, and in a pixel shading stage, the three-dimensional model of the target object is shaded using a second shading model based on given shading parameters, resulting in a reference image.
305. The server renders the target object with initial rendering parameters based on the first rendering model to obtain an initial image.
In some embodiments, after the first coloring model is determined, the coloring parameters of the first coloring model are configured in an initialized manner to obtain initial coloring parameters. Further, after model data of the three-dimensional model of the target object is imported into the rendering engine, driving a rendering pipeline in the GPU through the rendering engine, and respectively performing vertex coloring, rasterization and pixel coloring on the three-dimensional model of the target object in the rendering pipeline, wherein in a pixel coloring stage, the three-dimensional model of the target object is colored by using a first coloring model configured with initial coloring parameters, so as to obtain an initial image in the iteration.
In some embodiments, taking the first shading model as a lambert shading model as an example, the rendering process of the target object by the first shading model is described, and the lambert shading model involves three initial shading parameters: an initial texture map X1, an initial intensity coefficient X2, and an initial intensity offset X3.
Next, the differentiable-rendering principle of the Lambert shading model will be explained.
For the differentiable rendering pipeline of the GPU, define its input as a high-dimensional vector X, the rendering process as f(X), and the output rendering result as Y, so that Y = f(X) holds. The rendering process f(X) includes the three stages of vertex shading, rasterization, and pixel shading, and the rendering result Y is the result the shading model displays in screen space, that is, a two-dimensional image shown on the display screen. Given an expected rendering result Y' (here, the reference image), if we wish to solve in reverse for the shading parameters X' that should be input to the shading model (here, the shading parameters to be adjusted for the first shading model), it would suffice to find the inverse function f⁻¹ and substitute Y' to obtain X' = f⁻¹(Y'). However, since the inverse function f⁻¹ does not in fact exist, a more accurate numerical solution for the shading parameters X' is instead obtained by iterative progressive optimization using a gradient descent method.
Illustratively, any coloring parameter to be adjusted is represented by x, a rendering process of the coloring model based on the coloring parameter x is represented by f (x), an initial image output by each iteration is represented by Y, and a reference image is represented by Y0.
Assuming that the derivative function f'(x) of f(x) has been found, the gradient descent iteration can be constructed as follows.

For the rendering process Y = f(x), it is desirable that the error amount Δ between Y and Y0 be as small as possible. A measure of the error amount is defined as:

Δ² = [f(x) − Y0]²

Differentiating Δ² with respect to x yields the following differential quantity:

dΔ²/dx = 2[f(x) − Y0]·f'(x) = 2Δ·f'(x)

For the shading parameter x, an initial value x₀ is selected as the initial shading parameter and substituted into f(x) to obtain f(x₀). Let Δ₀ = f(x₀) − Y0; the differential quantity dΔ²/dx is the gradient of the current error amount with respect to x. The update rule is then:

xₙ₊₁ = xₙ − t · 2Δₙ · f'(xₙ)
where x characterizes any one of the shading parameters of the shading model; for the Lambert shading model, x may be the initial texture map X1, the initial light intensity coefficient X2, or the initial light intensity offset X3.
Wherein Δ represents the error amount between the rendering result f(x) rendered by the shading model based on the shading parameter x and the reference image Y0; the error amount measures the difference between the initial image Y = f(x) and the reference image Y0.
Wherein the differential quantity dΔ²/dx characterizes the derivative of the squared error amount Δ² with respect to x.
Where f '(x) characterizes the derivative function of rendering process f (x) of the shading model, i.e. f' (x) represents the first derivative of f (x).
Wherein t represents the learning rate of the shading-parameter training process; in engineering, the learning rate t takes a value in the range 0.001 to 0.01.
Wherein the subscript of each variable represents the number of iteration steps, e.g., x 1 Characterization of the coloring parameters, delta, for iteration 1 1 The error amount of the 1 st iteration is characterized.
As can be seen from the above formula, the gradient descent learning process can be iteratively started only by obtaining the differential function f' (x) of f (x). The coloring parameters of each iteration can be obtained by solving the coloring parameters of the previous iteration, and similarly, a series of coloring parameters { x ] can be obtained by solving the iteration 0 ,x 1 ,x 2 ,…,x n Assuming that the solution is stopped after the nth iteration, the approximate solution obtained for the coloring parameter is x n 。
In some embodiments, the iteration is stopped when n reaches a set number of steps, completing the gradient-descent learning process. Alternatively, when the approximate solution xₙ makes the error amount Δₙ = f(xₙ) − Y0 approach a preset minimum value, the convergence condition is met and the iteration stops. Alternatively, when the error amount of the approximate solution xₙ no longer decreases for a certain time or number of steps, the convergence condition is considered met and the iteration stops, completing the gradient-descent learning process.
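The gradient-descent loop described above can be sketched as follows. This is a minimal 1-D illustration only: the toy rendering function f(x) = 2x, its derivative, and all names are assumptions for demonstration, not from the patent.

```python
# Minimal sketch of the gradient-descent iteration
# x_{n+1} = x_n - 2*t*delta_n*f'(x_n), with a 1-D toy "rendering"
# f(x) = 2*x so that f'(x) = 2 (illustrative, not the real pipeline).

def gradient_descent(f, f_prime, x0, y_ref, t=0.01, max_steps=1000, eps=1e-6):
    """Iterate until the error amount converges or the step budget runs out."""
    x = x0
    for n in range(max_steps):
        delta = f(x) - y_ref               # error amount of this iteration
        if abs(delta) < eps:               # convergence condition
            break
        x = x - 2.0 * t * delta * f_prime(x)  # back-propagate the error
    return x

# Toy example: find x such that f(x) = 2x matches the "reference" value 1.0.
x_n = gradient_descent(lambda x: 2 * x, lambda x: 2.0, x0=0.0, y_ref=1.0)
```

With learning rate t = 0.01 the error shrinks by a constant factor each step, so the loop converges long before the step budget is exhausted.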
Next, a solution concept of the differential function f' (x) will be described.
For the rendering process f(x) of the rendering pipeline in the GPU, the information in the coloring parameter x undergoes discrete sampling during rasterization, and culling introduces piecewise functions into the formula of f(x); for these reasons the complete rendering process f(x) is not differentiable, and this non-differentiability means the differential function f′(x) cannot be solved. In differentiable rendering, the rasterization process is modified through boundary sampling, probability density functions, or the like, so that the rasterization in f(x) becomes differentiable; however, once modified to be differentiable, the rasterization is no longer compatible with the GPU rendering pipeline and can only run on the CPU. A whole rendering framework running on the CPU is quite complex, which affects the consistency and efficiency of rendering the same target object with different coloring models.
As shown in fig. 8, in a conventional coloring-model migration process, the rendering process f(x) involves the following stages: vertex coloring 801, rasterization 802, pixel coloring 803, and output 804. The geometric model and camera parameters in the coloring parameter x are input from local space into the vertex shader, which executes the vertex coloring 801 process and outputs the NDC coordinates of each model vertex, realizing the transformation from local space to normalized device coordinate space. Next, the rasterization 802 process is performed on the triangles of the three-dimensional model to obtain the screen coordinates of each model vertex, realizing the transformation from normalized device coordinate space to screen space. The screen coordinates of each model vertex are then input into the pixel shader, and the material parameters and illumination parameters in the coloring parameter x are input into the pixel shader from texture space. The pixel shader samples the texel of each model vertex in texture space, colors each model vertex according to its texel, and outputs to screen space, yielding the two-dimensional image finally output in screen space (i.e. the initial image of this iteration).
Considering that the geometric model and camera parameters of the target object are unchanged during the migration of the coloring model, the coloring parameters X to be trained are only the material parameters, which are concentrated in the pixel coloring stage. For the lambert coloring model, the material parameters comprise an initial texture map X1, an initial light intensity coefficient X2, and an initial light intensity offset X3. The initial light intensity coefficient X2 and the initial light intensity offset X3 are constant parameters, usually a one-dimensional scalar or a vector of two to four dimensions (whose specific values need to be learned through training); the initial texture map X1 is a map to be trained, and the map sampling process is performed in texture space.
As shown in fig. 9, a training flow based on texture space is shown. By configuring the vertex shader to output texture coordinates instead of NDC coordinates, the vertex coloring stage 901 is converted from normalized device coordinate space to texture space, and the rasterization 902 also outputs to texture space. At the same time, the pixel shader is configured to output, besides the pixel coloring result, the differential amount of each coloring parameter, so the pixel coloring 903 process is likewise implemented in texture space. Only the pixel coloring 903 and output 904 processes need to be rendered and differentiated, and all the calculations involved in rendering are performed in texture space, making the coloring-model migration process differentiably renderable. Since the pixel shader is rewritten, pixel coloring can be converted into a differentiable process, so the differential function f′(x) of this process can be solved.
Guided by the gradient-descent method and the transfer of the rasterization process to texture space, the initial-image rendering process in step 305 is described below. As shown in fig. 10, the initial-image rendering process involves the following substeps 3051-3054:
3051. The server obtains texture coordinates output by each model vertex of the target object.
In some embodiments, the vertex shader is configured in step 302 so that it outputs the texture coordinates of each model vertex in the three-dimensional model of the target object; the texture coordinates characterize from which position in the texture map a pixel needs to be sampled when coloring the corresponding model vertex.
In some embodiments, the vertex shader does not output the original texture coordinates, but rather outputs transformed texture coordinates resulting from linear transformation of the original texture coordinates.
Comparing fig. 8 and fig. 9, the vertex data of the geometric model includes vertex coordinates and texture coordinates: the vertex coordinates indicate the position of a model vertex, and the texture coordinates indicate its color. In a conventional rendering process, the vertex coordinates are transformed from local space to normalized device coordinate space to obtain NDC coordinates. In this embodiment of the present application, by rewriting the vertex shader, the vertex coordinates are transformed from local space to texture space to obtain texture coordinates, and the texture coordinates are then linearly transformed to obtain transformed texture coordinates. Because the transformed texture coordinates only involve the two xy dimensions, those xy coordinates are assigned to the xy components of the original NDC coordinates, and the z and w components are set uniformly, so that the texture coordinates, after linear transformation, are output to screen space as screen coordinates and then passed to the pixel shader.
Schematically, the vertex shader outputs the transformed texture coordinates of each model vertex o through the following code:
v2f o;
// vertex coloring
o.pos.xy = v.texcoord.xy * 2 - 1;
o.pos.z = 0;
o.pos.w = 1;
return o;
Here v is the input information of the model vertex, and v.texcoord is the two-dimensional texture coordinate of the model vertex, with values ranging between 0 and 1. The xy coordinates of the model vertex in texture space are taken and the following linear transformation is performed: the original xy coordinates are multiplied by 2 and then 1 is subtracted, transforming the value range of the new xy coordinates to between −1 and 1. The new xy coordinates are assigned to the xy components of the NDC coordinates, the depth component z of the NDC coordinates is uniformly set to 0, and the perspective component w is set to 1, so that the four-dimensional coordinates (v.texcoord.x*2−1, v.texcoord.y*2−1, 0, 1) assigned by the above code serve as the output of the vertex shader, realizing the texture-space coordinate transformation.
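The same transformation can be sketched outside the shader as an ordinary function, which may help clarify the mapping; the function name and tuple layout are illustrative assumptions.

```python
# Sketch of the texture-space vertex transform described above:
# xy texture coordinates in [0, 1] are mapped to [-1, 1] and written
# into the NDC position, with z fixed to 0 and w fixed to 1.

def vertex_shader_texture_space(texcoord):
    """texcoord: (u, v) in [0, 1] -> 4-D NDC-style position in texture space."""
    u, v = texcoord
    return (u * 2 - 1, v * 2 - 1, 0.0, 1.0)

pos = vertex_shader_texture_space((0.25, 0.75))
# (0.25, 0.75) maps to (-0.5, 0.5, 0.0, 1.0)
```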
3052. The server samples from the initial texture map based on the texture coordinates of each model vertex, resulting in a texel for each model vertex.
In some embodiments, after processing by the vertex shader, the texture coordinates of each model vertex are output. Since the xy components of the texture coordinates of each model vertex are transformed into [−1, 1], and each pixel in the initial texture map corresponds to a value in [−1, 1], a uniquely corresponding texel can be sampled from the initial texture map according to the texture coordinates of each model vertex, indicating the color of that model vertex. For example, if the texture coordinates of a model vertex are (−0.5, 0.5), the color vector of the texel at coordinates (−0.5, 0.5) is extracted from the initial texture map; this color vector is the color with which the pixel shader colors the model vertex in step 3053 described below.
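The correspondence between [−1, 1] coordinates and texels can be sketched with a nearest-neighbour lookup; real pipelines use filtered texture sampling and their own row/column conventions, so this is an illustrative assumption only.

```python
# Hypothetical nearest-neighbour sampler mapping coordinates in [-1, 1]
# to pixel indices of an H x W texture stored as a 2-D list.

def sample_texel(texture, coord):
    """texture: 2-D list [row][col]; coord: (x, y), each in [-1, 1]."""
    h, w = len(texture), len(texture[0])
    x, y = coord
    col = min(w - 1, int((x + 1) / 2 * w))   # [-1, 1] -> [0, w)
    row = min(h - 1, int((y + 1) / 2 * h))   # [-1, 1] -> [0, h)
    return texture[row][col]

# A 4x4 toy texture whose "color" at [r][c] is simply the pair (r, c).
tex = [[(r, c) for c in range(4)] for r in range(4)]
corner = sample_texel(tex, (-1.0, -1.0))   # maps to texel (0, 0)
```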
3053. The server colors the model vertexes of the target object based on the texture pixels of each model vertex to obtain a pixel coloring result.
In some embodiments, through step 3052 the texel of each model vertex is sampled from the initial texture map, optionally as a color vector, such as an RGB (red green blue) vector or a color vector indicated by another color model. Each model vertex is then colored according to the color vector of its texel, and repeating the above operations yields the pixel coloring result of the target object.
3054. And the server renders the pixel coloring result according to the initial light intensity coefficient and the initial light intensity offset based on the first coloring model to obtain the initial image.
In some embodiments, after the pixel shader colors each model vertex to obtain the pixel coloring result, the illumination algorithm of the first coloring model further applies illumination baking to the pixel coloring result using the initial light intensity coefficient and the initial light intensity offset, and finally outputs the initial image.
It should be noted that, since the vertex coloring, rasterization and pixel coloring processes are all converted into texture space, the initial image finally output is a UV-unwrapped image in texture space. As shown in fig. 11, the left part 1101 shows the rendering result output by the conventional rendering flow, presenting the stereoscopic rendering effect of the target object in screen space; the right part 1102 shows the rendering result output after the vertex shader is rewritten, presenting the two-dimensional UV-plane unwrapped image of the target object in texture space.
As shown in fig. 12, taking the lambert coloring model as the first coloring model and the PBR coloring model as the second coloring model as an example, the PBR coloring model may reference an albedo map generated along with the geometric model during rendering. For the target object of a virtual puppy holding a sword and shield, the sword-and-shield model can be reused by other virtual objects holding swords and shields, so geometric models are built separately for the virtual puppy and the sword and shield. It can be seen that the albedo map 1201 of the sword and shield matches the coordinate space 1202 output for the sword and shield after the vertex shader is rewritten in this embodiment of the application; similarly, the albedo map 1203 of the virtual puppy matches the coordinate space 1204 output for the virtual puppy after the vertex shader is rewritten.
306. The server obtains the differential amount of the initial shading parameter.
In some embodiments, following the configuration of step 303, the differentiable pixel shader outputs, in addition to the pixel coloring result involved in step 3053, a respective differential amount for each initial coloring parameter. Illustratively, after the configuration of step 302, the vertex shader outputs the linearly transformed texture coordinates of each model vertex. In other words, after rasterization in texture space, the inputs of the pixel shader are the pixel attribute i of each pixel in texture space, the constant parameter K, the illumination parameter L, and the map parameter T(i); the calculation process is the corresponding coloring model f(·), with the variable x substituted by the pixel attribute i, and the calculation result is the color value f(x, K, L, T(x)) of each point in texture space. The pixel attribute i is the uniquely corresponding texel sampled from the texture map for each model vertex, the constant parameter K varies with the coloring model, the illumination parameter L is determined by the rendering environment configured in step 301, and the map parameter T(i) is the texture map of the coloring model.
Taking the first coloring model being a lambert coloring model as an example, the initial coloring parameters involved in the lambert coloring model include: an initial texture map X1, an initial light intensity coefficient X2, and an initial light intensity offset X3. The coloring process of the lambert coloring model is defined as follows:
f(i.normal, L.dir, L.color, K.scale, K.offset, T(i.uv))
= [(i.normal · L.dir) × L.color × K.scale + K.offset] × T(i.uv)
where i.normal is the normal vector of pixel i, i.uv is the texture coordinate of pixel i, L.dir is the illumination direction of the current rendering environment, L.color is the illumination color of the current rendering environment, K.scale is the initial light intensity coefficient X2 among the initial coloring parameters, K.offset is the initial light intensity offset X3, and T(i.uv) is the initial texture map X1.
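The lambert coloring process above can be sketched for a single pixel as follows; colors and map values are reduced to scalars for brevity (real shaders operate on RGB vectors per texel), and the function name is an illustrative assumption.

```python
# Sketch of the lambert coloring formula
# f = [(i.normal . L.dir) * L.color * K.scale + K.offset] * T(i.uv)
# for a single pixel, with scalar colour and map values.

def lambert(normal, l_dir, l_color, k_scale, k_offset, t_uv):
    n_dot_l = sum(a * b for a, b in zip(normal, l_dir))  # i.normal . L.dir
    return (n_dot_l * l_color * k_scale + k_offset) * t_uv

# Surface facing the light, so i.normal . L.dir = 1; with the
# 0th-iteration values K.scale = K.offset = T(i.uv) = 0.5:
c = lambert((0.0, 0.0, 1.0), (0.0, 0.0, 1.0),
            l_color=1.0, k_scale=0.5, k_offset=0.5, t_uv=0.5)
# c = (1*1*0.5 + 0.5) * 0.5 = 0.5
```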
It can be seen that the objects to be solved by gradient descent are: two constant parameters, K.scale (i.e. the initial light intensity coefficient X2) and K.offset (i.e. the initial light intensity offset X3), and one map parameter, T(i.uv), i.e. the initial texture map X1.
In other words, in the above step 306, the differential amounts of the three initial coloring parameters are solved: the first differential amount of the initial texture map X1 (the map parameter T(i.uv)), the second differential amount of the initial light intensity coefficient X2 (the constant parameter K.scale), and the third differential amount of the initial light intensity offset X3 (the constant parameter K.offset). The processes of acquiring these three differential amounts are described in detail below.
(A) Acquisition procedure of first differential quantity
In some embodiments, taking the rendering environment being the virtual environment where the target object is located as an example, the normal vector i.normal of each pixel in texture space is obtained; then, based on the attribute information of the virtual environment, the illumination direction L.dir and the illumination color L.color are obtained; and then, based on the normal vector i.normal of each pixel, the illumination direction L.dir, the illumination color L.color, the initial light intensity coefficient K.scale, and the initial light intensity offset K.offset, the first differential amount of the map parameter T(i.uv) is obtained.
Schematically, let ∂f/∂T(i.uv) characterize the first differential amount obtained by partial differentiation with respect to the map parameter T(i.uv); the first differential amount can be obtained by the following formula:

∂f/∂T(i.uv) = (i.normal · L.dir) × L.color × K.scale + K.offset
(B) Acquisition process of second differential quantity
In some embodiments, taking the rendering environment being the virtual environment where the target object is located as an example, the map parameter T(i.uv) of each pixel and the normal vector i.normal of each pixel in texture space are obtained; then, based on the attribute information of the virtual environment, the illumination direction L.dir and the illumination color L.color are obtained; and then, based on the map parameter T(i.uv) of each pixel, the normal vector i.normal, the illumination direction L.dir, and the illumination color L.color, the second differential amount of the constant parameter K.scale is obtained.
Schematically, let ∂f/∂K.scale characterize the second differential amount obtained by partial differentiation with respect to the constant parameter K.scale; the second differential amount can be obtained by the following formula:

∂f/∂K.scale = (i.normal · L.dir) × L.color × T(i.uv)
(C) Acquisition process of third differential quantity
In some embodiments, a mapping parameter T (i.uv) is obtained for each pixel in the texture space, and a third differential amount of the constant parameter k.offset is obtained based on the mapping parameter T (i.uv) for each pixel in the texture space. For example, the map parameter T (i.uv) is directly assigned as the third differential amount.
Schematically, let ∂f/∂K.offset characterize the third differential amount obtained by partial differentiation with respect to the constant parameter K.offset; the third differential amount can be obtained by the following formula:

∂f/∂K.offset = T(i.uv)
In summary, for the lambert coloring model, the partial derivatives with respect to the three coloring parameters are solved by:

∂f/∂T(i.uv) = (i.normal · L.dir) × L.color × K.scale + K.offset
∂f/∂K.scale = (i.normal · L.dir) × L.color × T(i.uv)
∂f/∂K.offset = T(i.uv)
307. the server obtains an amount of error between the initial image and the reference image.
In some embodiments, the server may take the difference between the initial image obtained in step 305 and the reference image obtained in step 304 to obtain a difference image; the difference image represents the error amount between the initial image and the reference image, illustratively characterized by the symbol Δ.
Schematically, assuming the second coloring model is a PBR coloring model, rendering the target object after rewriting the vertex shader of the PBR coloring model yields a rendering result Y′ in texture space. As shown in fig. 13, the reference image Y′ is the rendering result obtained by the PBR coloring model after its vertex shader is rewritten, and is therefore a UV-unwrapped image.
Schematically, assuming the first coloring model is a lambert coloring model, rendering the target object after rewriting the vertex shader of the lambert coloring model yields a rendering result Y₀ in texture space. As shown in fig. 14, the initial image Y₀ is the rendering result obtained by the lambert coloring model after its vertex shader is rewritten; the initial image Y₀ is a UV-unwrapped image representing the initial image of the 0th iteration (i.e. rendered with the randomly initialized coloring parameters), where the initial values of the material parameters are configured as: K.scale = 0.5, K.offset = 0.5, T(i.uv) = 0.5.
Alternatively, the reference image Y′ shown in fig. 13 and the initial image Y₀ of the 0th iteration shown in fig. 14 are differenced to obtain the error amount Δ₀ of the 0th iteration.
308. The server adjusts the initial shading parameter based on the error amount and the differential amount to obtain an adjusted shading parameter.
In some embodiments, based on the error amount of the current iteration calculated in step 307 and the differential amount of each initial coloring parameter output by the pixel shader in step 306, the value of each initial coloring parameter for the next iteration can be calculated using the gradient-descent method introduced in step 305. That is, based on the error amount and the corresponding differential amount, the error amount can be back-propagated to the coloring parameters of the next iteration through the gradient-descent algorithm, so that migration training of the coloring model proceeds once the differential equation is obtained. The optimal coloring parameter values of the first coloring model are solved by iteratively minimizing the error amount during migration training, ensuring that the coloring parameters obtained in the final iteration minimize the difference between the initial image and the reference image.
In some embodiments, x₀ characterizes the initial coloring parameter of the first coloring model in the 0th iteration. The partial derivative of the initial coloring parameter, i.e. the differential amount f′(x₀) of the 0th iteration, can be obtained through step 306; the initial image Y₀ of the 0th iteration can be obtained through step 305; the reference image Y′ obtained in step 304 is differenced with the initial image Y₀ of the 0th iteration to obtain the error amount Δ₀ of the 0th iteration; then the error amount Δ₀ is back-propagated to the initial coloring parameter x₁ of the next iteration by:

x₁ = x₀ − 2t · Δ₀ · f′(x₀)
Next, the initial coloring parameter x₁ of the 1st iteration is passed into the GPU rendering pipeline for the next rendering to obtain the initial image Y₁ of the 1st iteration. The reference image Y′ and the initial image Y₁ of the 1st iteration are then differenced to obtain the error amount Δ₁ of the 1st iteration, and the error amount Δ₁ is back-propagated by the gradient-descent method to the initial coloring parameter x₂ of the 2nd iteration. By analogy, the operations of adjusting the coloring parameters and rendering the initial image are repeated until the error amount Δ of some iteration meets the convergence condition or the number of iteration steps reaches the set number, at which point the iteration stops and training exits.
Illustratively, taking the first coloring model being a lambert coloring model as an example, the initial coloring parameters involved in the lambert coloring model include: the initial texture map X1, the initial light intensity coefficient X2, and the initial light intensity offset X3. In each iteration, these three different initial coloring parameters are respectively substituted for x₀ in the above formula and calculated by the gradient-descent algorithm, yielding the three adjusted coloring parameters x₁ of the next iteration.
For the nth iteration, based on the error amount Δₙ₋₁ of the (n−1)th iteration and the first differential amount f′(T(i.uv)ₙ₋₁) of the texture map T(i.uv)ₙ₋₁ of the (n−1)th iteration, the texture map T(i.uv)ₙ₋₁ of the (n−1)th iteration is adjusted to obtain the nth texture map T(i.uv)ₙ, where n is an integer greater than or equal to 1.
For the nth iteration, based on the error amount Δₙ₋₁ of the (n−1)th iteration and the second differential amount f′(K.scaleₙ₋₁) of the light intensity coefficient K.scaleₙ₋₁ of the (n−1)th iteration, the light intensity coefficient K.scaleₙ₋₁ of the (n−1)th iteration is adjusted to obtain the nth light intensity coefficient K.scaleₙ, where n is an integer greater than or equal to 1.
For the nth iteration, based on the error amount Δₙ₋₁ of the (n−1)th iteration and the third differential amount f′(K.offsetₙ₋₁) of the light intensity offset K.offsetₙ₋₁ of the (n−1)th iteration, the light intensity offset K.offsetₙ₋₁ of the (n−1)th iteration is adjusted to obtain the nth light intensity offset K.offsetₙ, where n is an integer greater than or equal to 1.
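One update step for the three lambert parameters can be sketched as follows. This is a scalar toy sketch: per-pixel maps are reduced to a single scalar, and the function name, target value, and loop length are illustrative assumptions.

```python
# Scalar sketch of one gradient-descent update for the three lambert
# parameters, following x_n = x_{n-1} - 2*t*delta_{n-1}*f'(x_{n-1}).

def lambert_step(params, n_dot_l, l_color, y_ref, t=0.01):
    t_uv, k_scale, k_offset = params
    y = (n_dot_l * l_color * k_scale + k_offset) * t_uv   # render
    delta = y - y_ref                                     # error amount
    # Partial derivatives of f with respect to each coloring parameter:
    d_t      = n_dot_l * l_color * k_scale + k_offset     # w.r.t. T(i.uv)
    d_scale  = n_dot_l * l_color * t_uv                   # w.r.t. K.scale
    d_offset = t_uv                                       # w.r.t. K.offset
    return (t_uv     - 2 * t * delta * d_t,
            k_scale  - 2 * t * delta * d_scale,
            k_offset - 2 * t * delta * d_offset)

params = (0.5, 0.5, 0.5)   # the 0th-iteration values from the text
for _ in range(2000):      # iterate until the error amount is tiny
    params = lambert_step(params, n_dot_l=1.0, l_color=1.0, y_ref=0.8)
```

After the loop, re-rendering with the adjusted parameters reproduces the toy reference value 0.8 to high accuracy.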
The above steps 306-308 adjust the initial coloring parameters based on the difference between the initial image and the reference image to obtain the adjusted coloring parameters. Taking the lambert coloring model as an example, the respective adjustment of each initial coloring parameter (the initial texture map X1, the initial light intensity coefficient X2, and the initial light intensity offset X3) is described: parameter adjustment is performed based on the error amount and the differential amount, the adjusted coloring parameters serve as the coloring parameters of the next iteration, and the above operations are performed iteratively, so that the error amount is continuously optimized and gradually reduced.
309. And the server renders the target object with the adjusted coloring parameters based on the first coloring model to obtain an initial image of the next iteration.
Step 309 is the same as step 305, and will not be described here.
310. The server iteratively executes 306-309 above, and outputs the adjusted target rendering parameters and the rendered target image when the initial image and the reference image satisfy the similarity condition.
In some embodiments, when the number of iteration steps reaches a set number, for example 100 or 1000 steps (not specifically limited here), the initial image and the reference image are considered to meet the similarity condition, and the target coloring parameters are output.
In some embodiments, in the event that the amount of error between the initial image and the reference image meets a convergence condition, the similarity condition is considered to be met between the initial image and the reference image, thereby outputting the target shading parameter.
Illustratively, the convergence condition includes: the absolute value of the error amount between the initial image and the reference image is smaller than a preset error, where the preset error is any value greater than or equal to 0. If the absolute value of the error amount is smaller than the preset error, the difference between the initial image and the reference image is small, so the rendering effect of the trained first coloring model is approximately consistent with that of the second coloring model; the error amount is considered to have converged to within the preset error, and the convergence condition is met.
Illustratively, the convergence condition includes: the change of the error amount between the initial image and the reference image over two adjacent iterations is smaller than a preset change amount, where the change is the difference between the error amounts of two adjacent iterations and the preset change amount is any value, for example 0.01. If the change of the error amount over two adjacent iterations is smaller than the preset change amount, the error amount tends to converge and no longer decreases, and the convergence condition can be considered met.
Illustratively, the convergence condition includes: the change of the error amount between the initial image and the reference image within a preset duration is smaller than a preset change amount, where the preset duration is any value greater than or equal to 0, for example 30 seconds or 1 minute, the change is the difference between the error amounts of adjacent iterations, and the preset change amount is any value, for example 0.01. If the change of the error amount within the set duration is smaller than the preset change amount, the error amount has converged stably over a period of time and no longer decreases, and the convergence condition can be considered met.
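The stopping rules described above can be combined into a single check, sketched below; the thresholds and function name are illustrative assumptions.

```python
# Sketch of the stopping rules: a step budget, an error-magnitude
# convergence test, and a no-longer-decreasing test between iterations.

def should_stop(deltas, max_steps=1000, eps=1e-3, min_change=0.01):
    """deltas: error amounts recorded so far, one per finished iteration."""
    if len(deltas) >= max_steps:                 # set number of steps reached
        return True
    if deltas and abs(deltas[-1]) < eps:         # error below preset error
        return True
    if len(deltas) >= 2 and abs(deltas[-1] - deltas[-2]) < min_change:
        return True                              # error no longer decreasing
    return False
```

For example, `should_stop([0.5, 0.2])` is false (still improving), while `should_stop([0.5, 0.4999])` is true because the change between iterations fell below the preset change amount.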
In this embodiment of the present application, operations of adjusting a coloring parameter and rendering an initial image are performed iteratively, and finally, when the initial image and a reference image meet the similarity condition, the adjusted target coloring parameter is output, and at the same time, a target image obtained by rendering the target object with the target coloring parameter based on the first coloring model may be output, in other words, the target image and the reference image meet the similarity condition. The target coloring parameter refers to a coloring parameter used in the last iteration, and the target image refers to an initial image rendered in the last iteration.
It should be noted that, after the iteration stops, the configuration of the vertex shader in step 302 may be reverted, i.e. the vertex shader is restored to output NDC coordinates; at the same time, the configuration of the pixel shader in step 303 is reverted, i.e. the pixel shader is restored to output only the pixel coloring result and no differential amounts. The restored first coloring model then renders the three-dimensional model of the target object according to the trained target coloring parameters, obtaining a three-dimensional rendering result in screen space (instead of the UV-unwrapped view in texture space).
It should be noted that, although both the illumination parameters and the camera parameters are configured as fixed values in this embodiment of the application, this is only for ease of explaining the differentiable rendering process. This embodiment is equally applicable to scenes in which the illumination parameters and camera parameters change dynamically; in that case the embodiment can also adapt dynamically and solve for the target coloring parameters optimally adapted to the first coloring model. After training, the error amount of the last iteration is the error between the rendering result of the first coloring model and the rendering result of the second coloring model.
As shown in fig. 15, the technical framework of the coloring-model migration scheme based on texture-space differentiable rendering in this embodiment of the application is shown. This scheme iteratively obtains, by machine, the target coloring parameters of the first coloring model to be debugged during coloring-model migration, without an artist manually participating in the parameter-tuning process. Through the differentiable rendering pipeline, texture-space coordinate transformation, the differentiable pixel shader, and the coloring-model migration training process, the differentiable rendering pipeline is optimized: the rendering parameters are restricted to texture space, the non-differentiability introduced by the triangle rasterization process is avoided, and the differentiable rendering process can be migrated from the CPU into the GPU, achieving differentiable rendering on the GPU rendering pipeline. In application, a technician only needs to configure the vertex shader of the GPU rendering pipeline and rewrite the pixel shader to be differentiable, and the migration training process of the coloring model is then carried out automatically by the machine in the background.
As shown in fig. 16, after the training option 703 in the rendering engine shown in fig. 7 is clicked, a progress bar 1600 for coloring-model migration training may be displayed on the server, and the training progress of the current coloring parameters is updated in real time in the progress bar 1600. When the progress bar 1600 reaches full progress, it indicates that the coloring parameters of the first coloring model have been automatically adjusted, and the final target coloring parameters can be output and stored.
Taking the target object being the virtual puppy as an example, fig. 17 shows the virtual puppy 1701 rendered by the first coloring model after its coloring parameters are adjusted (i.e. the target image) and the virtual puppy 1702 rendered by the second coloring model (i.e. the reference image). Compared with the first coloring model before training shown in fig. 6, it can be seen that the rendering effects of the virtual puppies 1701 and 1702 are approximately consistent.
All the above optional solutions can be combined to form an optional embodiment of the present disclosure, which is not described in detail herein.
According to the method provided by the embodiment of the present application, the reference image is rendered for the target object by the debugged second coloring model, and the initial image is rendered for the target object by the first coloring model to be debugged. According to the difference between the reference image and the initial image, the machine can automatically back-propagate the difference to the coloring parameters input to the first coloring model, and then gradually reduce the difference between the reference image and the initial image by iteratively adjusting the coloring parameters, so that when iteration stops, the rendering effect of the target image output by the first coloring model is approximately consistent with that of the reference image. In the migration process of the coloring model, the machine automatically iterates to find the optimal coloring parameters without an artist manually debugging the coloring parameters, which greatly reduces the labor cost of coloring model migration and improves its migration efficiency.
Hereinafter, the migration flow of the coloring model according to the embodiment of the present application is briefly summarized with reference to fig. 18. As shown in fig. 18, in the coloring model migration scheme based on texture-space differentiable rendering, the product side involves the following steps: configuring the rendering environment; configuring the vertex shader; rewriting the pixel shader to be differentiable; and calling a system function to perform migration training of the coloring model. It can be seen that a technician only needs to configure the rendering environment, the vertex shader and the pixel shader, and the migration flow of the coloring model can then be carried out with one click, without attending to the background's iterative adjustment of the coloring parameters. The migration scheme of the coloring model is therefore lighter in design, easier to deploy, and high in migration efficiency (i.e., training efficiency) and accuracy, and can steadily improve the development efficiency and quality of game products during game development.
As shown in fig. 19, taking the migration from the PBR coloring model (i.e., the second coloring model) to the Lambert coloring model (i.e., the first coloring model) as an example, in the rendering environment 1900, when the albedo map of the PBR coloring model is directly used as the texture map and the illumination parameters of the PBR coloring model are directly migrated, rendering the virtual puppy with the Lambert coloring model yields a rendering result 1901, and the error amount between the rendering result 1901 of the Lambert coloring model and the reference image of the PBR coloring model is equal to 0.202.
Next, consider two cases in the rendering environment: the illumination direction is unchanged, and the illumination direction changes dynamically.
Under the condition that the illumination direction is unchanged, by applying the coloring model migration scheme of the embodiment of the present application, the coloring parameters of the Lambert coloring model (including the texture map, the light intensity coefficient and the light intensity offset) are iteratively adjusted based on a gradient descent algorithm. After adjustment is finished, a rendering result 1902 for the static illumination direction is obtained. At this point, the error amount between the rendering result 1902 of the Lambert coloring model and the reference image of the PBR coloring model is equal to 0, i.e., the loss of the migrated coloring model is close to 0, so that the rendering result 1902 is virtually identical to the reference image rendered by the PBR coloring model.
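The gradient-descent parameter adjustment described above can be sketched end to end. The sketch below assumes a simplified Lambert model of the form albedo · lightColor · max(N·L, 0) · k + b, with k the light intensity coefficient and b the light intensity offset; the analytic differentials follow from this assumed form, and all function and variable names are hypothetical rather than taken from the patent.

```python
import numpy as np

def lambert(albedo, k, b, n_dot_l, light_color):
    # Assumed Lambert form: albedo * lightColor * max(N.L, 0) * k + b
    return albedo * light_color * np.maximum(n_dot_l, 0.0) * k + b

def migrate(reference, albedo, k, b, n_dot_l, light_color, lr=0.1, steps=500):
    """Iteratively adjust (texture map, intensity coefficient, offset)
    so that the Lambert render approaches the PBR reference image."""
    for _ in range(steps):
        image = lambert(albedo, k, b, n_dot_l, light_color)
        err = image - reference                        # per-texel error
        # Analytic differentials of the assumed Lambert form
        d_albedo = light_color * np.maximum(n_dot_l, 0.0) * k
        d_k = albedo * light_color * np.maximum(n_dot_l, 0.0)
        # Back-propagate the error into each coloring parameter
        albedo = albedo - lr * err * d_albedo          # per-texel map update
        k = k - lr * float(np.mean(err * d_k))         # shared coefficient
        b = b - lr * float(np.mean(err))               # d/db = 1 per pixel
    return albedo, k, b
```

With a fixed illumination direction, the per-texel albedo can absorb the residual, which matches the error amount of the static case converging to 0; with a dynamically changing direction the parameters are shared across lighting conditions, so a nonzero residual such as the 0.106 above remains.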
Under the condition that the illumination direction changes dynamically, the coloring model migration scheme of the embodiment of the present application is likewise applied: the coloring parameters of the Lambert coloring model (including the texture map, the light intensity coefficient and the light intensity offset) are iteratively adjusted based on a gradient descent algorithm, and after adjustment is finished a rendering result 1903 for the dynamic illumination direction is obtained. At this point, the error amount between the rendering result 1903 of the Lambert coloring model and the reference image of the PBR coloring model converges to 0.106, i.e., roughly half the error of the untrained case.
It can be seen that after the coloring parameters are adjusted by the coloring model migration scheme of the embodiment of the present application, whether the illumination direction is unchanged or changes dynamically, the rendering effects of the PBR coloring model and the Lambert coloring model are largely consistent, and the visual appearance of the rendered image is close to the reference image of the original PBR coloring model.
According to the coloring model migration scheme based on texture-space differentiable rendering, the texture coordinates of the model vertices are assigned to the xy components of the NDC coordinates as the output of the vertex shader, so that the GPU rendering pipeline operates in texture space. Further, the pixel shader is rewritten to be differentiable, so that the coloring parameters of the rendering process can be iteratively updated through rendering, differencing and back propagation in texture space until they converge to the optimal coloring parameters, i.e., the target coloring parameters. The differentiable rendering process can run on the GPU rendering pipeline, which greatly improves training efficiency; and no inverse transformation from normalized device space back to texture space is needed in the process, which improves training accuracy. Moreover, the migration scheme of the coloring model can be adapted to Unreal, Unity or other rendering engines, platforms or products, and has extremely high universality.
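The texture-space trick above — emitting texture coordinates as the vertex shader's NDC xy output — can be illustrated with a minimal sketch. The z/w components and any v-axis flip depend on the graphics API and are omitted; the function name is hypothetical.

```python
def texture_space_position(uv):
    """Map a vertex's texture coordinates in [0, 1] to NDC xy in [-1, 1].

    Writing this as the vertex shader's clip-space xy output makes the
    GPU rasterize the mesh in texture space, so every texel of the
    parameter maps is shaded exactly once and no perspective
    foreshortening from ordinary triangle rasterization enters the
    gradients.
    """
    u, v = uv
    return (u * 2.0 - 1.0, v * 2.0 - 1.0)
```

Because the mapping is a fixed affine transform, no inverse lookup from device space back to texture space is needed when back-propagating.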
Fig. 20 is a schematic structural diagram of a parameter configuration apparatus for a coloring model according to an embodiment of the present application. As shown in fig. 20, the apparatus includes:
a rendering module 2001, configured to render, based on the first rendering model, the target object with an initial rendering parameter, so as to obtain an initial image;
the shading parameter adjusting module 2002 is configured to adjust the initial shading parameter based on a difference between the initial image and a reference image, to obtain an adjusted shading parameter, where the reference image is obtained by rendering the target object based on a second shading model, and the second shading model and the first shading model are used for rendering objects in the same virtual environment;
the rendering module 2001 is further configured to render the target object with the adjusted rendering parameter based on the first rendering model, to obtain an initial image of a next iteration;
and an iteration output module 2003, configured to iteratively perform operations of adjusting the rendering parameters and rendering the initial image, and output the adjusted target rendering parameters, where a similarity condition is met between a target image obtained by rendering the target object with the target rendering parameters based on the first rendering model and the reference image.
According to the apparatus provided by the embodiment of the present application, the reference image is rendered for the target object by the debugged second coloring model, and the initial image is rendered for the target object by the first coloring model to be debugged. According to the difference between the reference image and the initial image, the machine can automatically back-propagate the difference to the coloring parameters input to the first coloring model, and then gradually reduce the difference between the reference image and the initial image by iteratively adjusting the coloring parameters, so that when iteration stops, the rendering effect of the target image output by the first coloring model is approximately consistent with that of the reference image. In the migration process of the coloring model, the machine automatically iterates to find the optimal coloring parameters without an artist manually debugging the coloring parameters, which greatly reduces the labor cost of coloring model migration and improves its migration efficiency.
In some embodiments, the initial shading parameters include an initial texture map, an initial light intensity coefficient, and an initial light intensity offset; the rendering module 2001 is configured to: obtaining texture coordinates output by each model vertex of the target object; sampling from the initial texture map based on texture coordinates of each model vertex to obtain texture pixels of each model vertex; coloring the model vertexes of the target object based on the texture pixels of each model vertex to obtain a pixel coloring result; and rendering the pixel coloring result according to the initial light intensity coefficient and the initial light intensity offset based on the first coloring model to obtain the initial image.
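The four rendering steps above (obtain texture coordinates, sample the texture map, shade, then apply the intensity coefficient and offset) can be sketched as follows. Nearest-neighbour sampling and the Lambert combination rule are assumptions, and all names are hypothetical.

```python
import numpy as np

def sample_texture(texture, uv):
    # Nearest-neighbour lookup of the texel addressed by uv in [0, 1].
    h, w = texture.shape[:2]
    x = min(int(uv[0] * w), w - 1)
    y = min(int(uv[1] * h), h - 1)
    return texture[y, x]

def shade_texel(texture, uv, n_dot_l, light_color, k, b):
    # Sample the initial texture map at the texture coordinates output by
    # the vertex shader, shade with an assumed Lambert term, then apply
    # the initial light intensity coefficient k and offset b.
    albedo = sample_texture(texture, uv)
    return albedo * light_color * max(n_dot_l, 0.0) * k + b
```

In the actual pipeline this per-texel computation is what the rewritten, differentiable pixel shader performs on the GPU.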
In some embodiments, based on the apparatus composition of fig. 20, the apparatus further includes: a first configuration module, configured to configure the vertex shaders of the first coloring model and the second coloring model so that the vertex shaders output the texture coordinates of each model vertex of the target object.
In some embodiments, based on the apparatus composition of fig. 20, the apparatus further includes: a second configuration module, configured to configure the pixel shader of the first coloring model so that the pixel shader outputs the pixel coloring result and a differential amount of each initial coloring parameter, where the differential amount characterizes the amount of adaptive adjustment of the coloring parameter required by the amount of change of the initial image.
In some embodiments, based on the apparatus composition of fig. 20, the coloring parameter adjustment module 2002 includes: a differential amount acquisition unit, configured to acquire a differential amount of the initial coloring parameter, the differential amount characterizing the amount of adaptive adjustment of the coloring parameter required by the amount of change of the initial image; an error amount acquisition unit, configured to acquire an error amount between the initial image and the reference image; and a coloring parameter adjustment unit, configured to adjust the initial coloring parameter based on the error amount and the differential amount to obtain the adjusted coloring parameter.
In some embodiments, the initial shading parameters include an initial texture map, an initial light intensity coefficient, and an initial light intensity offset; the differential-amount acquisition unit is configured to: a first differential amount of the initial texture map, a second differential amount of the initial light intensity coefficient, and a third differential amount of the initial light intensity offset are obtained.
In some embodiments, the color parameter adjustment unit is configured to: based on the error amount and the first differential amount, adjusting the initial texture map to obtain an adjusted texture map; based on the error amount and the second differential amount, adjusting the initial light intensity coefficient to obtain an adjusted light intensity coefficient; and adjusting the initial light intensity offset based on the error amount and the third differential amount to obtain an adjusted light intensity offset.
In some embodiments, in case the first shading model is a lambert shading model, the differential quantity obtaining unit is further configured to: acquiring a normal vector of each pixel point in a texture space; acquiring an illumination direction and an illumination color based on the attribute information of the virtual environment; the first differential amount is obtained based on the normal vector of each pixel, the illumination direction, the illumination color, the initial light intensity coefficient and the initial light intensity offset.
In some embodiments, in case the first shading model is a lambert shading model, the differential quantity obtaining unit is further configured to: obtaining mapping parameters of each pixel point and normal vectors of each pixel point in a texture space; acquiring an illumination direction and an illumination color based on the attribute information of the virtual environment; and acquiring the second differential amount based on the mapping parameter of each pixel point, the normal vector of each pixel point, the illumination direction and the illumination color.
In some embodiments, in case the first shading model is a lambert shading model, the differential quantity obtaining unit is further configured to: the third differential amount is obtained based on the mapping parameters of each pixel point in the texture space.
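For the assumed simplified Lambert form albedo · lightColor · max(N·L, 0) · k + b, the three differential amounts described above reduce to closed forms. The sketch below is illustrative only: the patent's actual pixel shader may involve additional mapping parameters, and all names here are hypothetical.

```python
import numpy as np

def d_texture_map(n_dot_l, light_color, k):
    # First differential amount: derivative w.r.t. the texel albedo, built
    # from the per-pixel normal-dot-light term, the illumination colour
    # and the light intensity coefficient k.
    return light_color * np.maximum(n_dot_l, 0.0) * k

def d_intensity_coefficient(albedo, n_dot_l, light_color):
    # Second differential amount: derivative w.r.t. the light intensity
    # coefficient, built from the sampled texel ("mapping parameter"),
    # the per-pixel normal and the illumination direction/colour.
    return albedo * light_color * np.maximum(n_dot_l, 0.0)

def d_intensity_offset(shape):
    # Third differential amount: derivative w.r.t. the light intensity
    # offset, a constant 1 per pixel under this assumed model.
    return np.ones(shape)
```

Note that wherever max(N·L, 0) clamps to zero, the first two differentials vanish, so back-facing texels receive no update for the map and coefficient.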
In some embodiments, the iterative output module 2003 is configured to: outputting the target coloring parameter under the condition that the iteration step number reaches the set step number; or, in the case where the error amount between the initial image and the reference image meets the convergence condition, outputting the target coloring parameter.
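The two stopping conditions above can be combined into a single predicate; the threshold values below are assumed placeholders, not figures from the patent.

```python
def should_stop(step, error, max_steps=2000, tol=1e-4):
    # Two alternative stopping rules from the embodiment:
    # (1) the iteration count reaches a set number of steps, or
    # (2) the error amount between the initial image and the reference
    #     image has converged below a tolerance.
    return step >= max_steps or error <= tol
```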
All the above optional solutions can be combined to form an optional embodiment of the present disclosure, which is not described in detail herein.
It should be noted that the parameter configuration apparatus for a coloring model provided in the above embodiment is illustrated, when automatically debugging coloring parameters, only by the division of the above functional modules. In practical applications, the above functions can be allocated to different functional modules as needed, i.e., the internal structure of the apparatus is divided into different functional modules to complete all or part of the functions described above. In addition, the parameter configuration apparatus for a coloring model provided in the above embodiment belongs to the same concept as the embodiments of the parameter configuration method for a coloring model; its detailed implementation process is described in the method embodiments and is not repeated here.
Fig. 21 is a schematic structural diagram of a computer device provided in an embodiment of the present application. As shown in fig. 21, the parameter configuration method of the coloring model provided in the embodiments of the present application is performed by a computer device, which may be provided as a terminal 2100. Optionally, the device types of the terminal 2100 include: smart phone, tablet computer, notebook computer or desktop computer. The terminal 2100 may also be referred to by other names such as user device, portable terminal, laptop terminal or desktop terminal.
In general, the terminal 2100 includes: a processor 2101 and a memory 2102.
Optionally, the processor 2101 includes one or more processing cores, for example a 4-core or 8-core processor. The processor 2101 is optionally implemented in hardware as at least one of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array) and a PLA (Programmable Logic Array). In some embodiments, the processor 2101 includes a main processor and a coprocessor: the main processor is a processor for processing data in the awake state, also referred to as a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 2101 is integrated with a GPU (Graphics Processing Unit) responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 2101 further includes an AI (Artificial Intelligence) processor for handling computing operations related to machine learning.
In some embodiments, memory 2102 includes one or more computer-readable storage media, which are optionally non-transitory. Optionally, memory 2102 also includes high speed random access memory, as well as non-volatile memory, such as one or more disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 2102 is used to store at least one computer program for execution by processor 2101 to implement the parameter configuration methods of the shading model provided by the various embodiments herein.
In some embodiments, terminal 2100 may further optionally include: a peripheral interface 2103 and at least one peripheral. The processor 2101, the memory 2102, and the peripheral interface 2103 can be connected by a bus or signal lines. The individual peripheral devices can be connected to the peripheral device interface 2103 via buses, signal lines or a circuit board. In particular, the peripheral devices include a display screen 2104.
The peripheral interface 2103 may be used to connect at least one Input/Output (I/O) related peripheral device to the processor 2101 and the memory 2102. In some embodiments, the processor 2101, memory 2102, and peripheral interface 2103 are integrated on the same chip or circuit board; in some other embodiments, any one or both of the processor 2101, the memory 2102, and the peripheral interface 2103 are implemented on separate chips or circuit boards, which are not limited in this embodiment.
The display screen 2104 is used to display a UI (User Interface). Optionally, the UI includes graphics, text, icons, video, and any combination thereof. When the display screen 2104 is a touch screen, it can also collect touch signals at or above its surface, which can be input to the processor 2101 as control signals for processing. Optionally, the display screen 2104 also provides virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there is one display screen 2104, provided on the front panel of the terminal 2100; in other embodiments, there are at least two display screens 2104, respectively disposed on different surfaces of the terminal 2100 or in a folded design; in still other embodiments, the display screen 2104 is a flexible display disposed on a curved or folded surface of the terminal 2100. The display screen 2104 can even be arranged in a non-rectangular irregular pattern, i.e., a shaped screen. Optionally, the display screen 2104 is made of materials such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
In some embodiments, terminal 2100 can further include one or more sensors 2110. The one or more sensors 2110 include, but are not limited to: an acceleration sensor 2111, a gyro sensor 2112, and an optical sensor 2113.
In some embodiments, the acceleration sensor 2111 detects the magnitude of acceleration on three coordinate axes of the coordinate system established with the terminal 2100. For example, the acceleration sensor 2111 is used to detect components of gravitational acceleration on three coordinate axes. Optionally, the processor 2101 controls the display screen 2104 to display a user interface in a lateral view or a longitudinal view according to gravitational acceleration signals acquired by the acceleration sensor 2111. The acceleration sensor 2111 is also used for acquisition of motion data of a game or a user.
In some embodiments, the gyro sensor 2112 detects a body direction and a rotation angle of the terminal 2100, and the gyro sensor 2112 and the acceleration sensor 2111 cooperate to collect a 3D motion of the user on the terminal 2100. The processor 2101 performs the following functions based on the data collected by the gyro sensor 2112: motion sensing (e.g., changing UI according to a tilting operation by a user), image stabilization at shooting, game control, and inertial navigation.
The optical sensor 2113 is used to collect the ambient light intensity. In one embodiment, the processor 2101 controls the display brightness of the display screen 2104 based on the intensity of ambient light collected by the optical sensor 2113. Specifically, when the intensity of the ambient light is high, the display brightness of the display screen 2104 is turned up; when the ambient light intensity is low, the display brightness of the display screen 2104 is turned down.
It will be appreciated by those skilled in the art that the structure shown in fig. 21 does not constitute a limitation of the terminal 2100, and more or less components than those illustrated can be included, or some components can be combined, or a different arrangement of components can be employed.
Fig. 22 is a schematic structural diagram of a computer device according to an embodiment of the present application. The computer device 2200 may vary considerably in configuration and performance, and includes one or more processors (Central Processing Unit, CPU) 2201 and one or more memories 2202, where at least one computer program is stored in the memories 2202 and is loaded and executed by the one or more processors 2201 to implement the parameter configuration method of the coloring model provided by the above embodiments. Optionally, the computer device 2200 further includes components such as a wired or wireless network interface, a keyboard and an input/output interface, which are not described herein.
In an exemplary embodiment, a computer readable storage medium is also provided, for example a memory comprising at least one computer program executable by a processor in a computer device to perform the parameter configuration method of the shading model in the various embodiments described above. For example, the computer readable storage medium includes ROM (Read-Only Memory), RAM (Random-Access Memory), CD-ROM (Compact Disc Read-Only Memory), magnetic tape, floppy disk, optical data storage device, and the like.
In an exemplary embodiment, a computer program product is also provided, comprising one or more computer programs, the one or more computer programs stored in a computer readable storage medium. The one or more processors of the computer device are capable of reading the one or more computer programs from the computer-readable storage medium, and executing the one or more computer programs, so that the computer device is capable of executing to complete the parameter configuration method of the coloring model in the above embodiment.
Those of ordinary skill in the art will appreciate that all or a portion of the steps implementing the above-described embodiments can be implemented by hardware, or can be implemented by a program instructing the relevant hardware, optionally stored in a computer readable storage medium, optionally a read-only memory, a magnetic disk or an optical disk, etc.
The foregoing is merely a description of preferred embodiments of the present application and is not intended to limit it; any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present application shall fall within its scope of protection.
Claims (15)
1. A method for configuring parameters of a coloring model, the method comprising:
rendering the target object with initial coloring parameters based on the first coloring model to obtain an initial image;
adjusting the initial coloring parameters based on the difference between the initial image and a reference image to obtain adjusted coloring parameters, wherein the reference image is obtained by rendering the target object based on a second coloring model, and the second coloring model and the first coloring model are used for rendering the object in the same virtual environment;
rendering the target object with the adjusted coloring parameters based on the first coloring model to obtain an initial image of the next iteration;
and iteratively executing operations of adjusting the coloring parameters and rendering the initial image, and outputting the adjusted target coloring parameters, wherein a similar condition is met between a target image obtained by rendering the target object by the target coloring parameters based on the first coloring model and the reference image.
2. The method of claim 1, wherein the initial shading parameters comprise an initial texture map, an initial light intensity coefficient, and an initial light intensity offset;
rendering the target object with initial rendering parameters based on the first rendering model to obtain an initial image, wherein the step of obtaining the initial image comprises the following steps:
obtaining texture coordinates output by each model vertex of the target object;
sampling from the initial texture map based on texture coordinates of each model vertex to obtain texture pixels of each model vertex;
coloring the model vertexes of the target object based on the texture pixels of each model vertex to obtain a pixel coloring result;
and rendering the pixel coloring result according to the initial light intensity coefficient and the initial light intensity offset based on the first coloring model to obtain the initial image.
3. The method of claim 2, wherein the obtaining texture coordinates output by each model vertex of the target object comprises:
the vertex shaders of the first and second shading models are each configured such that the vertex shaders output texture coordinates for each model vertex of the target object.
4. The method of claim 2, wherein the rendering the model vertices of the target object based on the texels of each model vertex, resulting in a pixel rendering result comprises:
the pixel shader of the first shading model is configured such that the pixel shader outputs the pixel shading result and a differential amount of each initial shading parameter that characterizes an amount of change in shading parameter adaptation required for the amount of change of the initial image.
5. The method of claim 1, wherein the adjusting the initial shading parameters based on the difference between the initial image and the reference image to obtain the adjusted shading parameters comprises:
acquiring differential quantities of the initial coloring parameters, wherein the differential quantities characterize the amount of adaptive adjustment of the coloring parameters required by the amount of change of the initial image;
acquiring an error amount between the initial image and the reference image;
and adjusting the initial coloring parameter based on the error amount and the differential amount to obtain the adjusted coloring parameter.
6. The method of claim 5, wherein the initial shading parameters comprise an initial texture map, an initial light intensity coefficient, and an initial light intensity offset;
the obtaining the differential amount of the initial coloring parameter comprises:
acquiring a first differential amount of the initial texture map, a second differential amount of the initial light intensity coefficient and a third differential amount of the initial light intensity offset.
7. The method of claim 6, wherein the adjusting the initial shading parameter based on the error amount and the differential amount to obtain the adjusted shading parameter comprises:
adjusting the initial texture map based on the error amount and the first differential amount to obtain an adjusted texture map;
based on the error amount and the second differential amount, adjusting the initial light intensity coefficient to obtain an adjusted light intensity coefficient;
and adjusting the initial light intensity offset based on the error amount and the third differential amount to obtain an adjusted light intensity offset.
8. The method of claim 6, wherein, in the case where the first shading model is a Lambert shading model, the obtaining the first differential amount of the initial texture map comprises:
acquiring a normal vector of each pixel point in a texture space;
acquiring an illumination direction and an illumination color based on the attribute information of the virtual environment;
and acquiring the first differential amount based on the normal vector of each pixel point, the illumination direction, the illumination color, the initial light intensity coefficient and the initial light intensity offset.
9. The method of claim 6, wherein, in the case where the first coloring model is a lambert coloring model, the obtaining the second differential quantity of the initial light intensity coefficient comprises:
obtaining mapping parameters of each pixel point and normal vectors of each pixel point in a texture space;
acquiring an illumination direction and an illumination color based on the attribute information of the virtual environment;
and acquiring the second differential amount based on the mapping parameter of each pixel point, the normal vector of each pixel point, the illumination direction and the illumination color.
10. The method of claim 6, wherein, in the case where the first coloring model is a lambert coloring model, the obtaining the third differential amount of the initial light intensity offset includes:
and acquiring the third differential amount based on the mapping parameters of each pixel point in the texture space.
11. The method of claim 1, wherein the iteratively performing the operations of adjusting the shading parameters and rendering the initial image and outputting the adjusted target shading parameters comprises:
outputting the target coloring parameter under the condition that the iteration step number reaches the set step number; or,
the target coloring parameter is output in a case where an error amount between the initial image and the reference image meets a convergence condition.
12. A parameter configuration apparatus for a coloring model, the apparatus comprising:
the rendering module is used for rendering the target object according to the initial coloring parameters based on the first coloring model to obtain an initial image;
the coloring parameter adjustment module is used for adjusting the initial coloring parameters based on the difference between the initial image and the reference image to obtain adjusted coloring parameters, the reference image is obtained by rendering the target object based on a second coloring model, and the second coloring model and the first coloring model are used for rendering the object in the same virtual environment;
the rendering module is further configured to render the target object with the adjusted rendering parameter based on the first rendering model, to obtain an initial image of a next iteration;
and the iteration output module is used for iteratively executing operations of adjusting the coloring parameters and rendering the initial image and outputting the adjusted target coloring parameters, wherein the target image rendered by the target object based on the first coloring model and the target coloring parameters accords with the similarity condition with the reference image.
13. A computer device, comprising one or more processors and one or more memories, wherein the one or more memories store at least one computer program that is loaded and executed by the one or more processors to implement the parameter configuration method for a shading model according to any one of claims 1 to 11.
14. A computer-readable storage medium, wherein at least one computer program is stored in the computer-readable storage medium, and the computer program is loaded and executed by a processor to implement the parameter configuration method for a shading model according to any one of claims 1 to 11.
15. A computer program product, comprising at least one computer program that is loaded and executed by a processor to implement the parameter configuration method for a shading model according to any one of claims 1 to 11.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202211060155.5A CN117635803A (en) | 2022-08-30 | 2022-08-30 | Parameter configuration method and device for coloring model, computer equipment and storage medium |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN117635803A true CN117635803A (en) | 2024-03-01 |
Family
ID=90020491
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202211060155.5A Pending CN117635803A (en) | 2022-08-30 | 2022-08-30 | Parameter configuration method and device for coloring model, computer equipment and storage medium |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN117635803A (en) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2026016624A1 (en) * | 2024-07-16 | 2026-01-22 | 腾讯科技(深圳)有限公司 | Image processing method and apparatus, electronic device, storage medium, and program product |
- 2022-08-30: Application CN202211060155.5A filed (CN); publication CN117635803A; status: Pending
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| EP4120199B1 (en) | | Image rendering method and apparatus, and electronic device and storage medium |
| EP1594091B1 (en) | | System and method for providing an enhanced graphics pipeline |
| US8466919B1 (en) | | Re-rendering a portion of an image |
| CN116228943B (en) | | Virtual object face reconstruction method, face reconstruction network training method and device |
| Conlan | | The Blender Python API |
| US20200098168A1 (en) | | High-quality object-space dynamic ambient occlusion |
| CN117173371A (en) | | Method and system for generating polygonal mesh approximating curved surface using root finding and iteration for mesh vertex position |
| EP4394713A1 (en) | | Image rendering method and apparatus, electronic device, computer-readable storage medium, and computer program product |
| CN117101127A (en) | | Image rendering methods, devices, electronic equipment and storage media in virtual scenes |
| CN117078824A (en) | | Parameter fitting methods, devices, equipment, storage media and program products |
| US20030218610A1 (en) | | System and method for implementing shadows using pre-computed textures |
| US20120194535A1 (en) | | Two-dimensional vector fills using topological recipes |
| CN116664422B (en) | | Image highlight processing method, device, electronic device and readable storage medium |
| CN117635803A (en) | | Parameter configuration method and device for coloring model, computer equipment and storage medium |
| Calabuig-Barbero et al. | | Computational model for hyper-realistic image generation using uniform shaders in 3D environments |
| McMullen et al. | | Graphics on web platforms for complex systems modelling and simulation |
| US12064688B2 (en) | | Methods and systems for determining decal projections intersecting spatial units in a frame of a game space |
| CN115272558A (en) | | WebGL-based jewelry rendering method and device, terminal equipment and storage medium |
| Inzerillo et al. | | Optimization of cultural heritage virtual environments for gaming applications |
| CN115457189B (en) | | PBD (position-based dynamics) skeleton-driven soft-body simulation system and method based on cluster coloring |
| CN114357340B (en) | | WebGL-based network graph processing method and device and electronic equipment |
| Apers et al. | | Interactive Light Map and Irradiance Volume Preview in Frostbite |
| US20250139878A1 (en) | | Compressed representation for digital assets |
| CN121366239A (en) | | Virtual scene texture processing methods, devices, electronic devices, computer-readable storage media, and computer program products |
| Kavalans | | Exploring Modern Shader Technologies and Their Capabilities with WebGPU |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||