CN114612579B - Image rendering method, device, computer equipment and readable storage medium - Google Patents
Image rendering method, device, computer equipment and readable storage medium
- Publication number: CN114612579B (application CN202210187253.9A)
- Authority: CN (China)
- Prior art keywords: rendering; target scene; channel; data; sub
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
- G06T15/00—3D [Three Dimensional] image rendering
Abstract
The application discloses an image rendering method, an image rendering device, computer equipment and a readable storage medium, relating to the technical field of image processing. A plurality of rendering commands for rendering a target scene are packaged to obtain a rendering command instruction set containing a calling sequence identifier. The rendering command instruction set, together with target scene rendering channel data and target scene frame cache data obtained in a preset mode, is sent to a graphics processor. The graphics processor obtains target scene rendering data by sequentially calling the plurality of rendering commands in the rendering command instruction set according to the target scene rendering channel data and the target scene frame cache data, and the target scene rendering data is sent to a memory or a video memory. The application can effectively reduce the interaction workload between the CPU and the GPU, thereby effectively reducing rendering power consumption.
Description
The present application is a divisional application of Chinese patent application No. 202011508323.3, entitled "Image rendering method, device, computer equipment and readable storage medium", filed with the China National Intellectual Property Administration on December 18, 2020.
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image rendering method, an image rendering device, a computer device, and a readable storage medium.
Background
On mobile devices using the Android platform, the mainstream schemes for designing a multisampling antialiasing rendering flow mainly fall into two categories. In the first scheme, the rendering flow is implemented with OpenGL ES, and rendering of the three-dimensional scene is realized by calling the function interfaces provided by OpenGL ES; if multisampling antialiasing is to be added to the rendering flow, it must be implemented through the OpenGL ES extension functions glFramebufferTexture2DMultisampleEXT or glFramebufferTexture2DMultisampleIMG. In the second scheme, scene rendering is implemented with Vulkan. When rendering with multisampling antialiasing, three steps are required: first, the opaque objects of the three-dimensional scene are rendered; then, multisample blending is performed on the depth to obtain a single-sample depth map; finally, the transparent objects are rendered. Each step must record and submit its rendering commands separately.
In the related art, the applicant found at least the following problems. For the first scheme, which renders with OpenGL ES, OpenGL ES has weaker rendering performance and caching capability than Vulkan, and implementing the multisampling antialiasing technique with it is more difficult. For the second scheme, which renders the scene in three steps, the CPU records and submits rendering commands to the GPU at every step, so the CPU and the GPU interact more frequently, rendering power consumption is higher, and the utilization of the on-chip cache of a mobile-platform GPU is lower.
Disclosure of Invention
In view of this, the present application provides an image rendering method, apparatus, computer device and readable storage medium, aiming to solve the technical problems that existing scene rendering with OpenGL ES suffers from weak rendering performance and caching capability, in particular a high difficulty in implementing the multisampling antialiasing technique, and that existing scene rendering with Vulkan involves frequent CPU-GPU interaction and high rendering power consumption.
According to an aspect of the present application, there is provided an image rendering method including:
packaging a plurality of rendering commands for rendering a target scene to obtain a rendering command instruction set containing a calling sequence identifier, wherein the calling sequence identifier is used for representing the calling sequence of each rendering command;
sending the rendering command instruction set, together with target scene rendering channel data and target scene frame cache data obtained in a preset mode, to a graphics processor;
the graphics processor obtaining target scene rendering data by sequentially calling the plurality of rendering commands in the rendering command instruction set according to the target scene rendering channel data and the target scene frame cache data;
and sending the target scene rendering data to a memory or a video memory.
According to another aspect of the present application, there is provided an image rendering apparatus including:
The packaging module is configured to package a plurality of rendering commands for rendering the target scene to obtain a rendering command instruction set containing a calling sequence identifier, wherein the calling sequence identifier is used for representing the calling sequence of each rendering command;
The first sending module is configured to send the rendering command instruction set, together with the target scene rendering channel data and the target scene frame cache data obtained in a preset mode, to the graphics processor;
The rendering module is configured to enable the graphics processor to obtain target scene rendering data by sequentially calling the plurality of rendering commands in the rendering command instruction set according to the target scene rendering channel data and the target scene frame cache data;
And the second sending module is configured to send the target scene rendering data to a memory or a video memory.
According to a further aspect of the present application, there is provided a computer device comprising a memory storing a computer program and a processor that implements the steps of the above image rendering method when executing the computer program.
According to still another aspect of the present application, there is provided a readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the above-described image rendering method.
By means of the above technical solution, the image rendering method, apparatus, computer device and readable storage medium package a plurality of rendering commands for rendering a target scene to obtain a rendering command instruction set containing a calling sequence identifier, where the calling sequence identifier represents the calling sequence of each rendering command. The rendering command instruction set, together with target scene rendering channel data and target scene frame cache data obtained in a preset mode, is then sent to a graphics processor, so that the graphics processor can obtain target scene rendering data by sequentially calling the plurality of rendering commands in the rendering command instruction set according to the target scene rendering channel data and the target scene frame cache data, and the target scene rendering data is sent to a memory or a video memory. Compared with the existing ways of scene rendering with OpenGL ES or Vulkan, this improves on the existing approach in which the CPU submits the rendering commands recorded at each step to the GPU: the CPU obtains the rendering command instruction set through packaging and sends the plurality of rendering commands for rendering the target scene, the target scene rendering channel data and the target scene frame cache data to the GPU at one time, so that the GPU can render the target scene according to these inputs, obtain the target scene rendering data, and send it to a memory or a video memory. Given that OpenGL ES-based scene rendering has weak rendering performance and caching capability, optimizing the rendering engine architecture in this way effectively reduces the interaction workload between the CPU and the GPU, thereby effectively reducing rendering power consumption.
The foregoing is only an overview of the technical solution of the present application; it is provided so that the technical means of the application can be more clearly understood and implemented in accordance with its teachings, and so that the above and other objects, features and advantages of the present application will be more readily apparent.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the application. Also, like reference numerals are used to designate like parts throughout the figures. In the drawings:
Fig. 1 is a flowchart of an image rendering method according to an embodiment of the present application;
Fig. 2 is a schematic flow chart of another image rendering method according to an embodiment of the present application;
Fig. 3 is a schematic structural diagram of an image rendering device according to an embodiment of the present application;
Fig. 4 is a schematic structural diagram of another image rendering device according to an embodiment of the present application.
Detailed Description
Exemplary embodiments of the present application will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present application are shown in the drawings, it should be understood that the present application may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the application to those skilled in the art.
To address the technical problems that existing scene rendering with OpenGL ES suffers from weak rendering performance and caching capability, and that existing scene rendering with Vulkan involves frequent CPU-GPU interaction and high rendering power consumption, this embodiment provides an image rendering method. By optimizing the architecture of the rendering engine, the method can effectively reduce the interaction workload between the CPU and the GPU on the Android platform, thereby effectively reducing rendering power consumption. As shown in Fig. 1, the method includes:
101. Packaging the plurality of rendering commands for rendering the target scene to obtain a rendering command instruction set containing the calling sequence identifier.
In this embodiment, the CPU records the plurality of rendering commands for rendering the target scene in one Command Buffer, and obtains a rendering command instruction set containing a calling sequence identifier, which characterizes the calling sequence of each rendering command, by packaging the plurality of rendering commands in that Command Buffer. The CPU thus records all rendering commands for the target scene in a single Command Buffer so that they can be packaged and sent to the GPU together. This differs from the prior art, in which the CPU stores the recorded rendering commands for the target scene in different Command Buffers and sends the commands in each buffer to the GPU in turn during rendering to execute each rendering step, which makes the information-interaction workload between the CPU and the GPU larger. Packaging the plurality of rendering commands can effectively reduce this workload: under the optimized rendering engine architecture, rendering efficiency is improved, the CPU's work is simplified, the running loss of the mobile device is effectively reduced, and the problem of the mobile device heating up too quickly is effectively alleviated.
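As a minimal conceptual sketch of this packing step (not the Vulkan API itself; `RenderCommand`, `packCommands` and `submitOnce` are illustrative names invented here, not names from the application), the idea is that the recorded commands are ordered once by their calling sequence identifier so that a single hand-off can replay them in sequence:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <string>
#include <vector>

// Conceptual model only: each recorded rendering command carries a
// calling sequence identifier, as in the claims above.
struct RenderCommand {
    uint32_t callOrder;   // the "calling sequence identifier"
    std::string name;     // e.g. "drawOpaque", "resolveDepth", "drawTransparent"
};

// Pack: order the recorded commands by their calling sequence identifier,
// producing the "rendering command instruction set" submitted in one batch.
std::vector<RenderCommand> packCommands(std::vector<RenderCommand> recorded) {
    std::sort(recorded.begin(), recorded.end(),
              [](const RenderCommand& a, const RenderCommand& b) {
                  return a.callOrder < b.callOrder;
              });
    return recorded;
}

// Single "submission": replay every packed command in sequence and return
// the order in which they ran.
std::vector<std::string> submitOnce(const std::vector<RenderCommand>& packed) {
    std::vector<std::string> executed;
    for (const auto& cmd : packed) executed.push_back(cmd.name);
    return executed;
}
```

In the actual scheme the replay is performed by the GPU from one Command Buffer; the point of the model is that ordering happens once on the CPU side and only one CPU-to-GPU hand-off occurs.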
102. Sending the rendering command instruction set, together with the target scene rendering channel data and the target scene frame cache data obtained in a preset mode, to the graphics processor.
In this embodiment, image rendering is mainly used for rendering a target scene in one frame of image. The CPU sets the corresponding rendering channel attribute information and frame buffer attribute information by creating a Vulkan render pass (VkRenderPass) used for scene rendering and a Vulkan frame buffer (VkFramebuffer) used for rendering the three-dimensional scene, where VkRenderPass and VkFramebuffer are Vulkan object types. In this way the CPU can manage these resources as abstract logic and send them to the GPU to achieve the corresponding rendering effect.
103. The graphics processor obtains target scene rendering data by sequentially calling the plurality of rendering commands in the rendering command instruction set according to the target scene rendering channel data and the target scene frame cache data.
In this embodiment, the rendering command instruction set contains the plurality of rendering commands for rendering the target scene. According to the rendering command instruction set received at one time, the GPU obtains the target scene rendering data by sequentially calling the corresponding rendering commands in the instruction set while rendering the target scene with the target scene rendering channel data and the target scene frame cache data.
According to the requirements of the actual application scene, the CPU calls the vkCmdBeginRenderPass command, which triggers the start of the Vulkan rendering process, and uses the target scene render pass sceneRenderPass and the target scene frame buffer sceneFrameBuffer to set the Vulkan render pass (VkRenderPass) and Vulkan frame buffer (VkFramebuffer) used during target scene rendering, so that the configured target scene rendering channel data and target scene frame cache data can be sent to the GPU together with the rendering command instruction set to render the target scene. Using the Vulkan multi-render-pass (MultiRenderPass) mechanism, the GPU then sequentially executes, according to all rendering commands of the target scene: a first rendering flow that renders the opaque objects to obtain a first rendering result (comprising a multisampled depth rendering result and antialiased color information); a second rendering flow that performs multisample information fusion on the multisampled depth rendering result in the first rendering result to obtain a second rendering result (comprising antialiased depth information, i.e., a single-sample depth rendering result); and a third rendering flow that renders the transparent objects against the second rendering result to obtain a third rendering result, which serves as the target scene rendering data.
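Under the assumptions above, recording the three rendering flows into a single command buffer and handing them to the GPU in one submission might look like the following sketch (illustrative only, not the application's actual code; the Vulkan device objects and the per-flow helpers drawOpaqueObjects, resolveMultisampledDepth and drawTransparentObjects are hypothetical placeholders):

```cpp
// Sketch: record all three rendering flows into ONE command buffer and hand
// them to the GPU with a single submission. The objects passed in (cmd,
// queue, sceneRenderPass, sceneFrameBuffer) are assumed to exist already;
// the helper names are illustrative, not from the original application.
void recordAndSubmitScene(VkCommandBuffer cmd, VkQueue queue,
                          VkRenderPass sceneRenderPass,
                          VkFramebuffer sceneFrameBuffer,
                          uint32_t OriginW, uint32_t OriginH) {
    VkCommandBufferBeginInfo beginInfo{VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO};
    vkBeginCommandBuffer(cmd, &beginInfo);

    VkRenderPassBeginInfo rpBegin{VK_STRUCTURE_TYPE_RENDER_PASS_BEGIN_INFO};
    rpBegin.renderPass  = sceneRenderPass;   // the multi-subpass render pass
    rpBegin.framebuffer = sceneFrameBuffer;  // compatible frame buffer
    rpBegin.renderArea  = {{0, 0}, {OriginW, OriginH}};
    vkCmdBeginRenderPass(cmd, &rpBegin, VK_SUBPASS_CONTENTS_INLINE);

    drawOpaqueObjects(cmd);          // subpass 0: first rendering flow
    vkCmdNextSubpass(cmd, VK_SUBPASS_CONTENTS_INLINE);
    resolveMultisampledDepth(cmd);   // subpass 1: depth fusion flow
    vkCmdNextSubpass(cmd, VK_SUBPASS_CONTENTS_INLINE);
    drawTransparentObjects(cmd);     // subpass 2: transparent objects

    vkCmdEndRenderPass(cmd);
    vkEndCommandBuffer(cmd);

    VkSubmitInfo submit{VK_STRUCTURE_TYPE_SUBMIT_INFO};
    submit.commandBufferCount = 1;
    submit.pCommandBuffers    = &cmd;
    vkQueueSubmit(queue, 1, &submit, VK_NULL_HANDLE);  // one CPU-to-GPU hand-off
}
```

Because all three flows live in one render pass of one command buffer, the CPU performs a single vkQueueSubmit rather than one submission per rendering step.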
104. Sending the target scene rendering data to a memory or a video memory.
In this embodiment, the target scene rendering data includes the color rendering information and depth rendering information of the target scene. After target scene rendering based on the multisampling antialiasing technique is completed, the corresponding target scene rendering data is written into the memory or the video memory; that is, the color rendering information of the target scene is stored in the non-multisampled color rendering target resource ColorTarget in memory or video memory, and the depth rendering information of the target scene is stored in the non-multisampled depth rendering target resource DepthTarget.
By applying the technical solution of this embodiment, a rendering command instruction set for rendering a target scene, together with target scene rendering channel data and target scene frame cache data obtained in a preset mode, is sent to the graphics processor; the graphics processor obtains target scene rendering data by sequentially calling the plurality of rendering commands in the rendering command instruction set according to the target scene rendering channel data and the target scene frame cache data; and the target scene rendering data is sent to a memory or a video memory. Compared with the existing ways of scene rendering with OpenGL ES or Vulkan, in this embodiment, on the basis of scene rendering using the multisampling antialiasing technique, the CPU obtains the rendering command instruction set through packaging and sends the plurality of rendering commands for target scene rendering to the GPU at one time, so that the GPU can render the target scene according to the plurality of rendering commands in the instruction set and the CPU's target scene rendering channel data and target scene frame cache data, obtain the target scene rendering data, and send it to the memory or the video memory. Given that OpenGL ES-based scene rendering has weak rendering performance and caching capability and that the multisampling antialiasing technique is difficult to implement with it, implementing multisampling antialiasing on top of Vulkan scene rendering in this way effectively reduces the interaction workload between the CPU and the GPU, thereby effectively reducing rendering power consumption.
Based on the above principle, as a refinement and extension of the specific implementation of the embodiment shown in Fig. 1, this embodiment further provides another image rendering method. As shown in Fig. 2, the method includes:
201. Packaging the plurality of rendering commands for rendering the target scene to obtain a rendering command instruction set containing the calling sequence identifier.
202. Creating, in the on-chip cache of the graphics processor, a cache resource for storing the rendering results generated in the rendering process. The rendering results include a first rendering result output by a first sub-rendering channel for rendering the opaque objects, a second rendering result output by a second sub-rendering channel for performing multisample information fusion on the multisampled depth rendering result in the first rendering result, and a third rendering result output by a third sub-rendering channel for rendering the transparent objects.
In specific implementations, the on-chip cache of the GPU refers to the GPU's own cache. By caching the rendering results generated throughout the target scene rendering process in the created cache resource, requests to the memory controller are filtered, accesses to video memory are reduced, and video memory bandwidth consumption is lowered.
203. The central processing unit creates a Vulkan render pass and a Vulkan frame buffer for setting the target scene rendering channel data and the target scene frame cache data. Specifically: the central processing unit creates the Vulkan render pass according to a preset attachment description array, and creates the Vulkan frame buffer according to the Vulkan render pass and its attachment description array, where the attachment description array corresponds one-to-one with the Vulkan frame buffer formats.
Further, as an alternative, the target scene rendering channel data includes attribute information for setting the attributes of the Vulkan render pass using a multi-render-pass mechanism, and the Vulkan render pass includes a first sub-rendering channel for rendering opaque objects, a second sub-rendering channel for performing multisample information fusion on the multisampled depth rendering result in the first rendering result output by the first sub-rendering channel, and a third sub-rendering channel for rendering transparent objects.
Further, as an alternative, the target scene rendering channel data includes the attachment description array of the Vulkan render pass created in the central processing unit, and an index relationship between the target scene rendering channel data and the first, second and third sub-rendering channels is established according to the element index information in the attachment description array.
In implementations, the CPU creates the Vulkan render pass (VkRenderPass) for target scene rendering using Vulkan functions and the multi-render-pass mechanism. The index relationship includes the index relationships of the attachment elements and their layout attributes; specifically, a render pass for rendering the target scene is created from the attachment description array using Vulkan functions and used as the target scene render pass. Setting the rendering channel data specifically includes creating, in the CPU, the attachment description (VkAttachmentDescription) array of the Vulkan render pass, denoted vkAttachments, which contains 4 elements whose element indices are 0, 1, 2 and 3 respectively. Setting the element index information in vkAttachments specifically includes:
The index relationships of the attachment elements are set so that the sub-rendering channels created later can call the corresponding data information based on the index attribute values of the attachment elements. Specifically, the loadOp and stencilLoadOp members of all 4 elements are set to VK_ATTACHMENT_LOAD_OP_DONT_CARE to set the operation behavior for the pre-rendering data and stencil data in the corresponding attachment: the existing content is undefined, allowing the driver to discard or delete it without saving it. The stencilStoreOp member is likewise set to VK_ATTACHMENT_STORE_OP_DONT_CARE to set the operation behavior for the post-rendering stencil data in the same way. Further, the storeOp member of the index 0 and index 1 elements is set to VK_ATTACHMENT_STORE_OP_STORE so that the content rendered into these attachments is preserved, and their samples attribute is set to a sampling point count of 1. The storeOp member of the index 2 and index 3 elements is set to VK_ATTACHMENT_STORE_OP_DONT_CARE, so that the rendered data in these attachments is undefined afterwards and the driver may discard or delete it without saving it, and their samples attribute is set to the sampling point count n, where n can be 2 or 4.
The index relationships of the layout attributes are set so that the sub-rendering channels created later can call the corresponding data information based on the index attribute values of the layout attributes. Specifically, the initialLayout attribute of the index 0 and index 2 elements is set to VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL, and their format attribute is set to VK_FORMAT_R8G8B8A8_UNORM. The initialLayout attribute of the index 1 and index 3 elements is set to VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL, and their format attribute is set to VK_FORMAT_D32_SFLOAT. The finalLayout attribute of the index 0 and 1 elements is set to VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL, the finalLayout attribute of the index 2 element is set to VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL, and the finalLayout attribute of the index 3 element is set to VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL.
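Assuming the Vulkan SDK is available, the attachment settings described above can be sketched as the following C++ fragment (illustrative only, not the application's actual code; n = 4 is chosen here for the sample count, and the variable name vkAttachments follows the description):

```cpp
// Sketch of the four-element attachment description array: index 0/1 hold
// the resolved single-sample color/depth, index 2/3 the multisampled
// intermediates that may stay in the GPU's on-chip cache.
VkAttachmentDescription vkAttachments[4]{};
for (auto& a : vkAttachments) {
    a.loadOp         = VK_ATTACHMENT_LOAD_OP_DONT_CARE;
    a.stencilLoadOp  = VK_ATTACHMENT_LOAD_OP_DONT_CARE;
    a.stencilStoreOp = VK_ATTACHMENT_STORE_OP_DONT_CARE;
}
// Index 0/1: single-sample results that must survive the render pass.
vkAttachments[0].format        = VK_FORMAT_R8G8B8A8_UNORM;
vkAttachments[0].samples       = VK_SAMPLE_COUNT_1_BIT;
vkAttachments[0].storeOp       = VK_ATTACHMENT_STORE_OP_STORE;
vkAttachments[0].initialLayout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL;
vkAttachments[0].finalLayout   = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;
vkAttachments[1].format        = VK_FORMAT_D32_SFLOAT;
vkAttachments[1].samples       = VK_SAMPLE_COUNT_1_BIT;
vkAttachments[1].storeOp       = VK_ATTACHMENT_STORE_OP_STORE;
vkAttachments[1].initialLayout = VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL;
vkAttachments[1].finalLayout   = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;
// Index 2/3: multisampled intermediates whose contents may be discarded.
vkAttachments[2].format        = VK_FORMAT_R8G8B8A8_UNORM;
vkAttachments[2].samples       = VK_SAMPLE_COUNT_4_BIT;   // n = 4 here
vkAttachments[2].storeOp       = VK_ATTACHMENT_STORE_OP_DONT_CARE;
vkAttachments[2].initialLayout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL;
vkAttachments[2].finalLayout   = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL;
vkAttachments[3].format        = VK_FORMAT_D32_SFLOAT;
vkAttachments[3].samples       = VK_SAMPLE_COUNT_4_BIT;
vkAttachments[3].storeOp       = VK_ATTACHMENT_STORE_OP_DONT_CARE;
vkAttachments[3].initialLayout = VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL;
vkAttachments[3].finalLayout   = VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL;
```

Marking the multisampled attachments with DONT_CARE store ops is what lets a tile-based mobile GPU keep them on chip instead of writing them out to video memory.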
Further, the description information of the three sub-rendering channels (SubRenderPass) required for target scene rendering is created. Specifically, a Vulkan subpass description (VkSubpassDescription) array, denoted vkSubpassDescs, is created, containing 3 elements, each describing one sub-rendering channel. Setting the description information of each sub-rendering channel specifically includes the following:
vkSubpassDescs[0] is the subpass description for rendering the opaque objects of the scene, i.e., the first sub-rendering channel. Specifically, the colorAttachmentCount attribute of vkSubpassDescs[0] is set to 1, and its pColorAttachments attribute contains 1 VkAttachmentReference element whose attachment value is 2, pointing to the attachment at the specified index position (element index 2 in vkAttachments), with the layout attribute set to VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL. The pResolveAttachments attribute of vkSubpassDescs[0], used for multisample antialiasing of the color attachment, is set to contain one VkAttachmentReference element with an attachment value of 0, pointing to the attachment at the specified index position (element index 0 in vkAttachments), with the layout attribute set to VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL. The pDepthStencilAttachment attribute of vkSubpassDescs[0], used for depth and stencil data, is set to contain one VkAttachmentReference element whose attachment value is 3, pointing to the attachment at the specified index position, with the layout attribute set to VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL.
vkSubpassDescs[1] is the subpass description for blending the multisampled depth rendering result (corresponding to the first rendering result output by the first sub-rendering channel) to obtain the single-sample depth result, i.e., the second sub-rendering channel; the multisampled depth rendering result is fused into a single-sample depth rendering result to obtain the second rendering result. Specifically, the inputAttachmentCount attribute value of vkSubpassDescs[1] is set to 1 so that the multisampled depth rendering result can be read from the shader; its pInputAttachments attribute contains one VkAttachmentReference element whose attachment value is 3, pointing to the attachment at the specified index position, with the layout attribute set to VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL. The pDepthStencilAttachment attribute of vkSubpassDescs[1], used for depth and stencil data, is set to contain one VkAttachmentReference element whose attachment value is 1, pointing to the attachment at the specified index position (element index 1 in vkAttachments), with the layout attribute set to VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL.
vkSubpassDescs[2] is the subpass description for rendering the transparent objects of the scene, i.e., the third sub-rendering channel. Specifically, the colorAttachmentCount attribute of vkSubpassDescs[2] is set to 1, and its pColorAttachments attribute contains 1 VkAttachmentReference element whose attachment value is 0, pointing to the attachment at the specified index position (element index 0 in vkAttachments), with the layout attribute set to VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL so that the attachment functions as a color buffer. The pDepthStencilAttachment attribute of vkSubpassDescs[2], used for depth and stencil data, is set to contain one VkAttachmentReference element whose attachment value is 1, pointing to the attachment at the specified index position (element index 1 in vkAttachments), with the layout attribute set to VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL.
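The three subpass descriptions above might be filled in as follows (a sketch under the settings just described; the reference variable names are invented here, and pipelineBindPoint is set explicitly for clarity):

```cpp
// Attachment references used by the three subpasses (indices into vkAttachments).
VkAttachmentReference msColorRef   {2, VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL};
VkAttachmentReference resolveColor {0, VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL};
VkAttachmentReference msDepthRef   {3, VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL};
VkAttachmentReference msDepthInput {3, VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL};
VkAttachmentReference depthRef     {1, VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL};
VkAttachmentReference colorRef     {0, VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL};

VkSubpassDescription vkSubpassDescs[3]{};
// Subpass 0: opaque geometry into the multisampled targets, resolving color.
vkSubpassDescs[0].pipelineBindPoint       = VK_PIPELINE_BIND_POINT_GRAPHICS;
vkSubpassDescs[0].colorAttachmentCount    = 1;
vkSubpassDescs[0].pColorAttachments       = &msColorRef;
vkSubpassDescs[0].pResolveAttachments     = &resolveColor;
vkSubpassDescs[0].pDepthStencilAttachment = &msDepthRef;
// Subpass 1: read the multisampled depth as an input attachment and fuse it
// into the single-sample depth attachment.
vkSubpassDescs[1].pipelineBindPoint       = VK_PIPELINE_BIND_POINT_GRAPHICS;
vkSubpassDescs[1].inputAttachmentCount    = 1;
vkSubpassDescs[1].pInputAttachments       = &msDepthInput;
vkSubpassDescs[1].pDepthStencilAttachment = &depthRef;
// Subpass 2: transparent geometry against the resolved color and depth.
vkSubpassDescs[2].pipelineBindPoint       = VK_PIPELINE_BIND_POINT_GRAPHICS;
vkSubpassDescs[2].colorAttachmentCount    = 1;
vkSubpassDescs[2].pColorAttachments       = &colorRef;
vkSubpassDescs[2].pDepthStencilAttachment = &depthRef;
```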
Further, as an alternative, the method specifically further includes: establishing the rendering order among the first, second and third sub-rendering channels by creating a sub-rendering channel dependency array in the central processor.
In a specific implementation, the resource dependency relationships between the sub-rendering channels can be specified using VkSubpassDependency structures. In this embodiment, the rendering order among the first, second and third sub-rendering channels is specified as sequential: after the first rendering flow corresponding to the first sub-rendering channel completes, the second rendering flow corresponding to the second sub-rendering channel is executed, and after the second rendering flow completes, the third rendering flow corresponding to the third sub-rendering channel is executed. Specifically, a subpass dependency (VkSubpassDependency) array, denoted vkSubDependencies, is created, containing 2 elements.
Setting the VkSubpassDependency array specifically includes the following. srcStageMask and dstStageMask specify which pipeline stages produce and consume the data: the srcStageMask attribute of both elements is set to VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT, specifying the source pipeline stage of the dependency as the color attachment output stage; the dstStageMask attribute is set to VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT, specifying the target pipeline stage as the fragment shader, i.e., the fragment shader must wait for the previous sub-rendering channel to complete its color attachment output stage before proceeding. srcAccessMask and dstAccessMask specify how the source and target sub-rendering channels access the data: the srcAccessMask attribute of both elements is set to VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT and the dstAccessMask attribute to VK_ACCESS_SHADER_READ_BIT, i.e., the shader read operation is performed after the color attachment write operation completes.
Further, the dependencyFlags attribute is set to VK_DEPENDENCY_BY_REGION_BIT, specifying that the dependency applies per region of the frame buffer space. srcSubpass and dstSubpass are indices into the render pass's subpass array: the srcSubpass attribute of vkSubDependencies[0] is set to 0 and its dstSubpass attribute to 1, so that the color attachment is transitioned into an input attachment readable by the shader; the srcSubpass attribute of vkSubDependencies[1] is set to 1 and its dstSubpass attribute to 2.
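A sketch of the two-element dependency array as described (illustrative only; the shared masks are set in a loop since both elements use the same stage and access masks):

```cpp
// Subpass dependencies chaining the three flows: 0 -> 1 -> 2.
VkSubpassDependency vkSubDependencies[2]{};
for (auto& d : vkSubDependencies) {
    d.srcStageMask    = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;
    d.dstStageMask    = VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT;
    d.srcAccessMask   = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT;
    d.dstAccessMask   = VK_ACCESS_SHADER_READ_BIT;
    d.dependencyFlags = VK_DEPENDENCY_BY_REGION_BIT;  // per-region: keeps tile data on chip
}
vkSubDependencies[0].srcSubpass = 0;  // opaque flow ...
vkSubDependencies[0].dstSubpass = 1;  // ... must finish before depth fusion
vkSubDependencies[1].srcSubpass = 1;  // depth fusion ...
vkSubDependencies[1].dstSubpass = 2;  // ... must finish before transparent flow
```

The BY_REGION flag is what allows a tile-based GPU to satisfy the dependency tile by tile rather than flushing intermediate results to video memory between subpasses.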
Further, a rendering channel for rendering the target scene is created according to the attachment description array and serves as the target scene rendering channel. Specifically, a VkRenderPassCreateInfo variable, denoted vkRpInfo, is filled in: its sType attribute value is VK_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO, pNext attribute value is nullptr, attachmentCount attribute value is 4, pAttachments attribute value is vkAttachments, subpassCount attribute value is 3, pSubpasses attribute value is vkSubpassDescs, dependencyCount attribute value is 2, and pDependencies attribute value is vkSubDependencies. A rendering channel VkRenderPass for scene rendering is then created based on the attribute information set in vkRpInfo, and this rendering channel is denoted the target scene rendering channel sceneRenderPass.
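The fill-in of vkRpInfo described above can be sketched as follows; the VkRenderPassCreateInfo layout is a stand-in mirroring the Vulkan headers, and the array arguments are left opaque, since their construction is covered elsewhere in the embodiment.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Stand-in declarations mirroring <vulkan/vulkan.h>; real code would
 * include the Vulkan headers and pass genuinely filled arrays. */
typedef enum { VK_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO = 38 } VkStructureType;
typedef struct VkAttachmentDescription VkAttachmentDescription;
typedef struct VkSubpassDescription    VkSubpassDescription;
typedef struct VkSubpassDependency     VkSubpassDependency;
typedef struct VkRenderPassCreateInfo {
    VkStructureType                sType;
    const void                    *pNext;
    uint32_t                       flags;
    uint32_t                       attachmentCount;
    const VkAttachmentDescription *pAttachments;
    uint32_t                       subpassCount;
    const VkSubpassDescription    *pSubpasses;
    uint32_t                       dependencyCount;
    const VkSubpassDependency     *pDependencies;
} VkRenderPassCreateInfo;

/* Fill vkRpInfo as described: 4 attachments, 3 sub-passes, 2 dependencies. */
static VkRenderPassCreateInfo make_render_pass_info(
        const VkAttachmentDescription *vkAttachments,
        const VkSubpassDescription    *vkSubpassDescs,
        const VkSubpassDependency     *vkSubDependencies) {
    VkRenderPassCreateInfo vkRpInfo;
    vkRpInfo.sType           = VK_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO;
    vkRpInfo.pNext           = NULL;   /* nullptr in the C++ form */
    vkRpInfo.flags           = 0;
    vkRpInfo.attachmentCount = 4;
    vkRpInfo.pAttachments    = vkAttachments;
    vkRpInfo.subpassCount    = 3;
    vkRpInfo.pSubpasses      = vkSubpassDescs;
    vkRpInfo.dependencyCount = 2;
    vkRpInfo.pDependencies   = vkSubDependencies;
    return vkRpInfo;
    /* Real code would then create sceneRenderPass via:
     * vkCreateRenderPass(device, &vkRpInfo, NULL, &sceneRenderPass); */
}
```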
In implementations, vulkan frame buffer VkFramebuffer is created for target scene rendering. Specifically, vkFramebuffer for rendering a three-dimensional scene, that is, a frame buffer VkFramebuffer compatible with the rendering channel RENDERPASS is created, the number and type of its attachments are the same, and the width and height of the size of the target scene to be rendered (typically the screen resolution size of a mobile device) are OriginW and OriginH, respectively, recorded. Specifically, with OriginW wide, originH high, a multi-sample rendering target resource is created, including a color multi-sample rendering target resource MSColorTarget and a depth multi-sample rendering target resource MSDEPTHTARGET, to store color multi-sample rendering results into created MSColorTarget, and depth multi-sample rendering results into created MSDEPTHTARGET. Since the multi-sampling resource is intermediate data generated in the rendering process, the MTLStorageMode parameters corresponding to MSColorTarget and MSDEPTHTARGET are set to memoryless, and the number of sampling points is set to n, where n can be 2 or 4.
Further, non-multi-sample rendering target resources of width OriginW and height OriginH are created, including a non-multi-sample color rendering target resource ColorTarget and a non-multi-sample depth rendering target resource DepthTarget, so that the non-multi-sample color rendering result and the non-multi-sample depth rendering result after the multi-sample antialiasing processing are stored into ColorTarget and DepthTarget, respectively.
Further, a VkImageView array, denoted attachments, is created, containing 4 elements: attachments[0] is the VkImageView of the non-multi-sample color rendering target resource ColorTarget, attachments[1] is the VkImageView of the non-multi-sample depth rendering target resource DepthTarget, attachments[2] is the VkImageView of the multi-sample color rendering target resource MSColorTarget, and attachments[3] is the VkImageView of the multi-sample depth rendering target resource MSDepthTarget.
Further, the Vulkan frame buffer VkFramebuffer for target scene rendering is created from the attachment description array and serves as the target scene frame buffer, wherein the attachment description array corresponds one-to-one with the frame buffer format. Specifically, a VkFramebufferCreateInfo variable, denoted frameBufferInfo, is set: its sType attribute value is VK_STRUCTURE_TYPE_FRAMEBUFFER_CREATE_INFO, pNext attribute value is nullptr, renderPass attribute value is sceneRenderPass, pAttachments attribute value is attachments, attachmentCount attribute value is 4, layers attribute value is 1, width attribute value is OriginW, and height attribute value is OriginH; that is, the attachment index order referenced by the different sub-rendering channels is the index order of the array used when creating the frame buffer Framebuffer. The frame buffer VkFramebuffer is then created based on the information set in frameBufferInfo and denoted the target scene frame buffer sceneFrameBuffer.
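The frameBufferInfo fill-in can be sketched as follows; as before, the struct layout is a stand-in mirroring the Vulkan headers, and the concrete width and height (OriginW, OriginH) are whatever the target device reports.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Stand-in declarations mirroring <vulkan/vulkan.h>; real code would
 * include the Vulkan headers. */
typedef enum { VK_STRUCTURE_TYPE_FRAMEBUFFER_CREATE_INFO = 37 } VkStructureType;
typedef struct VkRenderPass_T *VkRenderPass;
typedef struct VkImageView_T  *VkImageView;
typedef struct VkFramebufferCreateInfo {
    VkStructureType    sType;
    const void        *pNext;
    uint32_t           flags;
    VkRenderPass       renderPass;
    uint32_t           attachmentCount;
    const VkImageView *pAttachments;
    uint32_t           width, height, layers;
} VkFramebufferCreateInfo;

/* Fill frameBufferInfo for the 4-attachment frame buffer compatible with
 * sceneRenderPass, sized to the render target (e.g. the device screen). */
static VkFramebufferCreateInfo make_framebuffer_info(
        VkRenderPass sceneRenderPass, const VkImageView *attachments,
        uint32_t OriginW, uint32_t OriginH) {
    VkFramebufferCreateInfo frameBufferInfo;
    frameBufferInfo.sType           = VK_STRUCTURE_TYPE_FRAMEBUFFER_CREATE_INFO;
    frameBufferInfo.pNext           = NULL;
    frameBufferInfo.flags           = 0;
    frameBufferInfo.renderPass      = sceneRenderPass;
    /* ColorTarget, DepthTarget, MSColorTarget, MSDepthTarget */
    frameBufferInfo.attachmentCount = 4;
    frameBufferInfo.pAttachments    = attachments;
    frameBufferInfo.width           = OriginW;
    frameBufferInfo.height          = OriginH;
    frameBufferInfo.layers          = 1;
    return frameBufferInfo;
    /* Real code would then create sceneFrameBuffer via:
     * vkCreateFramebuffer(device, &frameBufferInfo, NULL, &sceneFrameBuffer); */
}
```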
204. Sending the rendering command instruction set, together with the target scene rendering channel data and target scene frame buffer data obtained in a preset manner, to the graphics processor.
In the above embodiment, as an alternative manner, the plurality of rendering commands includes a first rendering command and a first vkCmdNextSubpass command for rendering the opaque objects corresponding to the first sub-rendering channel, a second rendering command and a second vkCmdNextSubpass command for performing multi-sampling information fusion processing on the multi-sampling depth rendering result corresponding to the second sub-rendering channel, and a third rendering command and a vkCmdEndRenderPass command for rendering the transparent objects corresponding to the third sub-rendering channel.
In a specific implementation, when the CPU invokes the vkCmdBeginRenderPass command, the obtained target scene rendering channel sceneRenderPass and target scene frame buffer sceneFrameBuffer are used to set the VkRenderPass and VkFramebuffer of the rendering process accordingly, completing the preparation for target scene rendering. The recording process of the target scene rendering commands specifically comprises the following steps:
Executing the first sub-rendering process, recording the rendering commands for rendering the opaque objects, and recording a vkCmdNextSubpass command after the first sub-rendering process is completed; executing the second sub-rendering process, which blends the multi-sample depth rendering results into a single-sample depth result by calling the subpassLoad function in the shading-language implementation and setting the sample value at index 0 of the multi-sample depth rendering target resource MSDepthTarget as the blended depth value, and recording a vkCmdNextSubpass command after the second sub-rendering process is completed; executing the third sub-rendering process, recording the rendering commands for rendering the transparent objects, and recording a vkCmdEndRenderPass command after the third sub-rendering process is completed.
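The recording order described above can be sketched as follows. The real calls (vkCmdBeginRenderPass, vkCmdNextSubpass, vkCmdEndRenderPass, plus the draw commands in between) require a live VkCommandBuffer and device, so this sketch only records the call sequence into a log for illustration; the draw-step names are placeholders, not Vulkan entry points.

```c
#include <assert.h>
#include <string.h>

/* Minimal log of the command-recording order for one scene render pass. */
enum { MAX_CMDS = 16 };
typedef struct CmdLog { const char *cmds[MAX_CMDS]; int n; } CmdLog;
static void rec(CmdLog *log, const char *cmd) { log->cmds[log->n++] = cmd; }

static void record_scene_render_pass(CmdLog *log) {
    rec(log, "vkCmdBeginRenderPass");     /* binds sceneRenderPass + sceneFrameBuffer */
    rec(log, "draw_opaque_objects");      /* sub-pass 0: opaque geometry */
    rec(log, "vkCmdNextSubpass");
    rec(log, "resolve_ms_depth");         /* sub-pass 1: subpassLoad-based depth fusion */
    rec(log, "vkCmdNextSubpass");
    rec(log, "draw_transparent_objects"); /* sub-pass 2: transparent geometry */
    rec(log, "vkCmdEndRenderPass");
}
```

Because the whole sequence is recorded into one command buffer and submitted once, the CPU-GPU interaction is a single submission rather than one call per rendering step.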
205. The graphics processor performs opaque object rendering on the target scene using the first sub-rendering channel according to the first rendering command, obtaining a first rendering result.
206. According to the second rendering command and the multi-sampling depth rendering result acquired from the first rendering result in the cache resource, multi-sampling information fusion processing is performed on the target scene using the second sub-rendering channel, obtaining a second rendering result.
207. According to the third rendering command and the second rendering result acquired from the cache resource, transparent object rendering is performed on the target scene using the third sub-rendering channel, and the obtained third rendering result is taken as the target scene rendering data.
208. The graphics processor sends the third rendering result, as the target scene rendering data, to a memory or a video memory.
In a specific implementation, after the GPU finishes rendering the target scene, the color rendering data of the target scene subjected to the multi-sampling antialiasing processing is stored in the non-multi-sample color rendering target resource ColorTarget, and the depth rendering data of the target scene is stored in the non-multi-sample depth rendering target resource DepthTarget.
According to the requirements of the actual application scene, the color rendering data and depth rendering data of the target scene in ColorTarget and DepthTarget can be used as rendering targets of subsequent rendering processes, so that rendering can continue; alternatively, the antialiased color and depth rendering data in ColorTarget and DepthTarget can be read as texture resources in subsequent rendering processes. After all rendering operations are completed, the rendered image stored in ColorTarget is output, for example, displayed on the screen of the mobile device.
Therefore, by utilizing the Vulkan multi-rendering-flow (MultiRenderPass) mechanism, rendering of the target scene can be realized by sequentially calling the corresponding rendering commands and sub-rendering channels in the rendering command instruction set, combined with the intermediate rendering results held in the GPU on-chip cache during the rendering process. This makes full use of the on-chip cache characteristic of mobile-platform GPUs while reducing both the data interaction between the CPU and the GPU and the accesses to the video memory, thereby improving rendering efficiency and reducing bandwidth resource overhead.
By applying the technical scheme of this embodiment, the rendering command instruction set for rendering the target scene is processed together with the target scene rendering channel data and target scene frame buffer data from the CPU, the target scene rendering data is obtained, and it is sent to the memory or the video memory. Compared with the existing manner of performing scene rendering with OpenGL ES or Vulkan, this embodiment performs scene rendering based on the multi-sampling antialiasing technique: the CPU sends the plurality of rendering commands for target scene rendering to the GPU at one time, in the form of a packed rendering command instruction set, so that the GPU can render the target scene according to the plurality of rendering commands in the rendering command instruction set, the target scene rendering channel data, and the target scene frame buffer data from the CPU, obtain the target scene rendering data, and send the target scene rendering data to the memory or the video memory.
Further, as a specific implementation of the method shown in fig. 1, the embodiment provides an image rendering apparatus, as shown in fig. 3, including: the system comprises a packing module 33, a first sending module 34, a rendering module 35 and a second sending module 36.
The packing module 33 may be configured to perform packing processing on a plurality of rendering commands for rendering a target scene, to obtain a rendering command instruction set containing a calling sequence identifier, where the calling sequence identifier is used to characterize a calling sequence of each rendering command.
The first sending module 34 may be configured to send the rendering command instruction set, and the preset target scene rendering channel data and the target scene frame buffer data to a graphics processor.
The rendering module 35 may be configured to obtain, by using the graphics processor, the target scene rendering data by sequentially calling a plurality of rendering commands in the rendering command instruction set according to the target scene rendering channel data and the target scene frame buffer data.
And a second sending module 36, configured to send the target scene rendering data to a memory or a video memory.
In a specific application scenario, as shown in fig. 4, the present apparatus may further include: a caching module 31 and a creating module 32.
In a specific application scene, the target scene rendering channel data includes attribute information for performing attribute setting on Vulkan rendering channels by using a multi-rendering flow mechanism, and the Vulkan rendering channels include a first sub-rendering channel for rendering opaque objects, a second sub-rendering channel for performing multi-sampling information fusion processing on multi-sampling depth rendering results, and a third sub-rendering channel for rendering transparent objects.
In a specific application scenario, the buffer module 31 may be configured to create a buffer resource in an on-chip buffer of a graphics processor for storing a rendering result generated in a rendering process, where the rendering result includes a first rendering result output by a first sub-rendering channel for rendering an opaque object, a second rendering result output by a second sub-rendering channel for performing multi-sampling information fusion processing on a multi-sampling depth rendering result in the first rendering result, and a third rendering result output by a third sub-rendering channel for rendering the transparent object.
The second sending module 36 is specifically configured to send the third rendering result as target scene rendering data to a memory or a video memory by using the graphics processor.
In a specific application scenario, the plurality of rendering commands include a first rendering command and a first vkCmdNextSubpass command for rendering the opaque object corresponding to a first sub-rendering channel, a second rendering command and a second vkCmdNextSubpass command for performing multi-sampling information fusion processing on the multi-sampling depth rendering result corresponding to a second sub-rendering channel, and a third rendering command and a vkCmdEndRenderPass command for rendering the transparent object corresponding to a third sub-rendering channel.
In a specific application scenario, the rendering module 35 includes: a first rendering unit 351, a second rendering unit 352, and a third rendering unit 353.
The first rendering unit 351 may be configured to perform, by using the graphics processor according to the first rendering command, opaque object rendering on the target scene using the first sub-rendering channel, to obtain a first rendering result.
The second rendering unit 352 may be configured to perform multisampling information fusion processing on the target scene by using a second sub-rendering channel according to the second rendering command and the acquired multisampling depth rendering result in the first rendering result in the cache resource, so as to obtain a second rendering result.
And the third rendering unit 353 may be configured to perform transparent object rendering on the target scene by using a third sub-rendering channel according to the third rendering command and the acquired second rendering result in the cache resource, where the obtained third rendering result is used as target scene rendering data.
In a specific application scenario, the creation module 32 may be configured to create Vulkan rendering channels and Vulkan frame buffers for setting target scene rendering channel data and target scene frame buffer data by using a central processor.
In a specific application scenario, the creating module 32 includes: a first creation unit 321, a second creation unit 322.
The first creating unit 321 may be configured to create a Vulkan rendering channel according to a preset attachment description array by using the central processor.
A second creating unit 322, configured to create a Vulkan frame buffer according to the Vulkan rendering channel and the attachment description array thereof; wherein the attachment description array corresponds one-to-one with the Vulkan frame buffer format.
In a specific application scene, the target scene rendering channel data comprises the attachment description array of the Vulkan rendering channel created in the central processor, and an index relationship with the first sub-rendering channel, the second sub-rendering channel, and the third sub-rendering channel is established according to the element index information in the attachment description array.
In a specific application scene, a rendering sequence among the first sub-rendering channel, the second sub-rendering channel and the third sub-rendering channel is established by creating a sub-rendering channel dependency array in a central processor.
It should be noted that, for other corresponding descriptions of each functional unit related to the image rendering device provided by the embodiment of the present application, reference may be made to corresponding descriptions in fig. 1 and fig. 2, and details are not repeated here.
Based on the above-described methods shown in fig. 1 and 2, correspondingly, the embodiment of the present application further provides a storage medium, on which a computer program is stored, which when executed by a processor, implements the above-described image rendering method shown in fig. 1 and 2.
Based on such understanding, the technical solution of the present application may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (may be a CD-ROM, a U-disk, a mobile hard disk, etc.), and includes several instructions for causing a computer device (may be a personal computer, a server, or a network device, etc.) to execute the method described in the respective implementation scenario of the present application.
Based on the methods shown in fig. 1 and fig. 2 and the virtual device embodiment shown in fig. 3, in order to achieve the above objects, the embodiment of the present application further provides a computer device, which may specifically be a personal computer, a server, a network device, etc., where the entity device includes a storage medium and a processor; a storage medium storing a computer program; a processor for executing a computer program to implement the image rendering method as shown in fig. 1 and 2 described above.
Optionally, the computer device may also include a user interface, a network interface, a camera, Radio Frequency (RF) circuitry, sensors, audio circuitry, Wi-Fi modules, and the like. The user interface may include a display screen (Display), an input unit such as a keyboard (Keyboard), etc., and the optional user interface may also include a USB interface, a card reader interface, etc. The network interface may optionally include a standard wired interface, a wireless interface (e.g., a Bluetooth interface or a Wi-Fi interface), etc.
It will be appreciated by those skilled in the art that the architecture of a computer device provided in this embodiment is not limited to this physical device, but may include more or fewer components, or may be combined with certain components, or may be arranged in a different arrangement of components.
The storage medium may also include an operating system, a network communication module. An operating system is a program that manages the hardware and software resources of a computer device, supporting the execution of information handling programs, as well as other software and/or programs. The network communication module is used for realizing communication among all components in the storage medium and communication with other hardware and software in the entity equipment.
From the above description of the embodiments, it will be apparent to those skilled in the art that the present application may be implemented by means of software plus a necessary general hardware platform, or by hardware. By applying the technical scheme of the application, compared with the existing manner of performing scene rendering with OpenGL ES or Vulkan, the CPU obtains a rendering command instruction set through packing and sends the plurality of rendering commands for scene rendering to the GPU at one time, so that the GPU can render the target scene according to the plurality of rendering commands in the rendering command instruction set, the target scene rendering channel data, and the target scene frame buffer data from the CPU, obtain the target scene rendering data, and send it to the memory or the video memory. Given that OpenGL ES-based scene rendering suffers from weak rendering performance and cache capacity, making the multi-sampling antialiasing technique difficult to implement there, this embodiment, by optimizing the rendering engine architecture on the basis of a Vulkan-based implementation of multi-sampling antialiasing for scene rendering, effectively reduces the interactive workload of the CPU and the GPU and thereby effectively reduces rendering power consumption.
Those skilled in the art will appreciate that the drawing is merely a schematic illustration of a preferred implementation scenario and that the modules or flows in the drawing are not necessarily required to practice the application. Those skilled in the art will appreciate that modules in an apparatus in an implementation scenario may be distributed in an apparatus in an implementation scenario according to an implementation scenario description, or that corresponding changes may be located in one or more apparatuses different from the implementation scenario. The modules of the implementation scenario may be combined into one module, or may be further split into a plurality of sub-modules.
The above-mentioned inventive sequence numbers are merely for description and do not represent advantages or disadvantages of the implementation scenario. The foregoing disclosure is merely illustrative of some embodiments of the application, and the application is not limited thereto, as modifications may be made by those skilled in the art without departing from the scope of the application.
Claims (11)
1. An image rendering method, comprising:
Recording a plurality of rendering commands for rendering a target scene in a command buffer, and packaging the plurality of rendering commands for rendering the target scene to obtain a rendering command instruction set containing a calling sequence identifier, wherein the calling sequence identifier is used for representing the calling sequence of each rendering command;
The rendering command instruction set, target scene rendering channel data and target scene frame cache data which are obtained in a preset mode are sent to a graphic processor;
The graphic processor obtains target scene rendering data by sequentially calling a plurality of rendering commands in the rendering command instruction set according to the target scene rendering channel data and the target scene frame cache data;
and sending the target scene rendering data to a memory or a video memory.
2. The method of claim 1, wherein the target scene rendering channel data includes attribute information for attribute setting of Vulkan rendering channels using a multi-rendering flow mechanism, the Vulkan rendering channels including a first sub-rendering channel for rendering opaque objects, a second sub-rendering channel for multi-sample information fusion processing of multi-sample depth rendering results, and a third sub-rendering channel for rendering transparent objects.
3. The method according to claim 1 or 2, wherein a cache resource for storing rendering results generated in a rendering process is created in an on-chip cache of a graphics processor, the rendering results including a first rendering result output by a first sub-rendering channel for rendering an opaque object, a second rendering result output by a second sub-rendering channel for performing a multisampling information fusion process on a multisampling depth rendering result among the first rendering result, and a third rendering result output by a third sub-rendering channel for rendering the transparent object;
Further comprises: and the graphic processor sends the third rendering result to a memory or a video memory as target scene rendering data.
4. A method according to claim 3, wherein the plurality of rendering commands includes a first rendering command and a first vkCmdNextSubpass command for rendering the opaque object corresponding to a first sub-rendering channel, a second rendering command and a second vkCmdNextSubpass command for multi-sample information fusion processing of multi-sample depth rendering results corresponding to a second sub-rendering channel, and a third rendering command and a vkCmdEndRenderPass command for rendering the transparent object corresponding to a third sub-rendering channel.
5. The method of claim 4, wherein the graphics processor obtains the target scene rendering data by sequentially invoking a plurality of rendering commands in the rendering command instruction set based on the target scene rendering channel data and the target scene frame buffer data, comprising:
the graphic processor performs opaque object rendering on the target scene by utilizing a first sub-rendering channel according to the first rendering command to obtain a first rendering result;
According to the second rendering command and the acquired multi-sampling depth rendering result in the first rendering result in the cache resource, performing multi-sampling information fusion processing on the target scene by using a second sub-rendering channel to obtain a second rendering result;
and according to the third rendering command and the acquired second rendering result in the cache resource, performing transparent object rendering on the target scene by utilizing a third sub-rendering channel, and taking the acquired third rendering result as target scene rendering data.
6. The method as recited in claim 2, further comprising: the central processing unit creates Vulkan rendering channels and Vulkan frame caches for setting target scene rendering channel data and target scene frame cache data, and specifically comprises the following steps:
The central processing unit creates a Vulkan rendering channel according to a preset attachment description array;
Creating a Vulkan frame buffer according to the Vulkan rendering channel and the attachment description array thereof;
wherein the attachment description array corresponds one-to-one with the Vulkan frame buffer format.
7. The method of claim 2 or 6, wherein the target scene rendering channel data includes an attachment description array of Vulkan rendering channels created in a central processor, and an index relationship is established with the first, second, and third sub-rendering channels according to element index information in the attachment description array.
8. The method of claim 2 or 6, wherein a rendering order between the first sub-rendering channel, the second sub-rendering channel, the third sub-rendering channel is established by creating a sub-rendering channel dependency array in a central processor.
9. An image rendering apparatus, comprising:
The packaging module is used for recording a plurality of rendering commands for rendering the target scene in a command buffer, and packaging the plurality of rendering commands for rendering the target scene to obtain a rendering command instruction set containing a calling sequence identifier, wherein the calling sequence identifier is used for representing the calling sequence of each rendering command;
The first sending module is used for sending the rendering command instruction set, the target scene rendering channel data and the target scene frame cache data which are obtained in a preset mode to the graphic processor;
the rendering module is used for enabling the graphic processor to obtain target scene rendering data by sequentially calling a plurality of rendering commands in the rendering command instruction set according to the target scene rendering channel data and the target scene frame cache data;
And the second sending module is used for sending the target scene rendering data to a memory or a video memory.
10. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the image rendering method of any one of claims 1 to 8 when the computer program is executed.
11. A readable storage medium having stored thereon a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the image rendering method of any of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210187253.9A CN114612579B (en) | 2020-12-18 | 2020-12-18 | Image rendering method, device, computer equipment and readable storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011508323.3A CN112652025B (en) | 2020-12-18 | 2020-12-18 | Image rendering method and device, computer equipment and readable storage medium |
CN202210187253.9A CN114612579B (en) | 2020-12-18 | 2020-12-18 | Image rendering method, device, computer equipment and readable storage medium |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011508323.3A Division CN112652025B (en) | 2020-12-18 | 2020-12-18 | Image rendering method and device, computer equipment and readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114612579A CN114612579A (en) | 2022-06-10 |
CN114612579B true CN114612579B (en) | 2024-10-15 |
Family
ID=75355349
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011508323.3A Active CN112652025B (en) | 2020-12-18 | 2020-12-18 | Image rendering method and device, computer equipment and readable storage medium |
CN202210187253.9A Active CN114612579B (en) | 2020-12-18 | 2020-12-18 | Image rendering method, device, computer equipment and readable storage medium |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011508323.3A Active CN112652025B (en) | 2020-12-18 | 2020-12-18 | Image rendering method and device, computer equipment and readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (2) | CN112652025B (en) |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113835890A (en) * | 2021-09-24 | 2021-12-24 | 厦门雅基软件有限公司 | Rendering data processing method, device, equipment and storage medium |
CN113934491B (en) * | 2021-09-30 | 2023-08-22 | 阿里云计算有限公司 | Big data processing method and device |
CN114042312B (en) * | 2021-10-27 | 2025-04-11 | 广州三七极耀网络科技有限公司 | Character skin drawing method, system, and electronic device |
CN114760526A (en) * | 2022-03-31 | 2022-07-15 | 北京百度网讯科技有限公司 | Video rendering method and device, electronic equipment and storage medium |
CN117291788B (en) * | 2022-06-17 | 2024-10-18 | 象帝先计算技术(重庆)有限公司 | Graphics processing method, system, device and equipment |
CN115761100A (en) * | 2022-11-25 | 2023-03-07 | 网易(杭州)网络有限公司 | Scene rendering method, device, electronic device and storage medium |
CN115908678B (en) * | 2023-02-25 | 2023-05-30 | 深圳市益玩网络科技有限公司 | Bone model rendering method and device, electronic equipment and storage medium |
CN116185640B (en) * | 2023-04-20 | 2023-08-08 | 上海励驰半导体有限公司 | Image command processing method and device based on multiple GPUs, storage medium and chip |
CN117710183B (en) * | 2023-12-14 | 2025-03-25 | 摩尔线程智能科技(北京)股份有限公司 | Rendering instruction transmission method, operating system, electronic device, and storage medium |
CN118485781B (en) * | 2024-05-24 | 2024-11-01 | 苏州国之威文化科技有限公司 | AI-based virtual reality special effect cinema scene optimization method and system |
CN119127649B (en) * | 2024-09-26 | 2025-07-18 | 浙江大学 | An automatic multi-scene multi-resource rendering performance evaluation and image quality comparison method and system |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101295408A (en) * | 2007-04-27 | 2008-10-29 | 新奥特硅谷视频技术有限责任公司 | 3D videotext rendering method and system |
CN108140234A (en) * | 2015-10-23 | 2018-06-08 | 高通股份有限公司 | GPU operation algorithm selection based on command stream marking |
CN111798365A (en) * | 2020-06-12 | 2020-10-20 | 完美世界(北京)软件科技发展有限公司 | Depth anti-aliasing data reading method, device, equipment and storage medium |
Family Cites Families (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8537166B1 (en) * | 2007-12-06 | 2013-09-17 | Nvidia Corporation | System and method for rendering and displaying high-resolution images |
GB0810311D0 (en) * | 2008-06-05 | 2008-07-09 | Advanced Risc Mach Ltd | Graphics processing systems |
US8675000B2 (en) * | 2008-11-07 | 2014-03-18 | Google, Inc. | Command buffers for web-based graphics rendering |
US8659616B2 (en) * | 2010-02-18 | 2014-02-25 | Nvidia Corporation | System, method, and computer program product for rendering pixels with at least one semi-transparent surface |
WO2012037504A1 (en) * | 2010-09-18 | 2012-03-22 | Ciinow, Inc. | A method and mechanism for delivering applications over a wan |
CN102722861A (en) * | 2011-05-06 | 2012-10-10 | 新奥特(北京)视频技术有限公司 | CPU-based graphics rendering engine and implementation method |
US9547930B2 (en) * | 2011-11-30 | 2017-01-17 | Qualcomm Incorporated | Hardware switching between direct rendering and binning in graphics processing |
CN102810199B (en) * | 2012-06-15 | 2015-03-04 | 成都平行视野科技有限公司 | Image processing method based on GPU (Graphics Processing Unit) |
US9582848B2 (en) * | 2012-12-28 | 2017-02-28 | Apple Inc. | Sprite Graphics rendering system |
CN103106680B (en) * | 2013-02-16 | 2015-05-06 | 赞奇科技发展有限公司 | Implementation method for three-dimensional graphics rendering based on cloud computing architecture, and cloud service system |
CN105023234B (en) * | 2015-06-29 | 2018-02-23 | 嘉兴慧康智能科技有限公司 | Graphics acceleration method based on embedded-system memory optimization |
CN105279253B (en) * | 2015-10-13 | 2018-12-14 | 上海联彤网络通讯技术有限公司 | System and method for improving webpage canvas rendering speed |
US10853118B2 (en) * | 2015-12-21 | 2020-12-01 | Intel Corporation | Apparatus and method for pattern-driven page table shadowing for graphics virtualization |
CA3013624C (en) * | 2017-08-09 | 2024-06-18 | Daniel Herring | Systems and methods for using egl with an opengl api and a vulkan graphics driver |
CN109891388A (en) * | 2017-10-13 | 2019-06-14 | 华为技术有限公司 | Image processing method and apparatus |
CN109669739A (en) * | 2017-10-16 | 2019-04-23 | 阿里巴巴集团控股有限公司 | Interface rendering method, apparatus, terminal device and storage medium |
CN108711182A (en) * | 2018-05-03 | 2018-10-26 | 广州爱九游信息技术有限公司 | Rendering processing method, apparatus and mobile terminal device |
CN110163943B (en) * | 2018-11-21 | 2024-09-10 | 深圳市腾讯信息技术有限公司 | Image rendering method and device, storage medium and electronic device |
CN111400024B (en) * | 2019-01-03 | 2023-10-10 | 百度在线网络技术(北京)有限公司 | Resource calling method and device in rendering process and rendering engine |
CN111508055B (en) * | 2019-01-30 | 2023-04-11 | 华为技术有限公司 | Rendering method and device |
CN110471701B (en) * | 2019-08-12 | 2021-09-10 | Oppo广东移动通信有限公司 | Image rendering method and device, storage medium and electronic equipment |
CN110992462A (en) * | 2019-12-25 | 2020-04-10 | 重庆文理学院 | Batch drawing method for 3D simulation scene images based on edge computing |
CN111798372B (en) * | 2020-06-10 | 2021-07-13 | 完美世界(北京)软件科技发展有限公司 | Image rendering method, device, equipment and readable medium |
2020
- 2020-12-18 CN CN202011508323.3A patent/CN112652025B/en active Active
- 2020-12-18 CN CN202210187253.9A patent/CN114612579B/en active Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101295408A (en) * | 2007-04-27 | 2008-10-29 | 新奥特硅谷视频技术有限责任公司 | 3D videotext rendering method and system |
CN108140234A (en) * | 2015-10-23 | 2018-06-08 | 高通股份有限公司 | GPU operation algorithm selection based on command stream marking |
CN111798365A (en) * | 2020-06-12 | 2020-10-20 | 完美世界(北京)软件科技发展有限公司 | Depth anti-aliasing data reading method, device, equipment and storage medium |
Non-Patent Citations (1)
Title |
---|
"Vulkan填坑学习Day12—渲染通道";沉默的舞台剧;《https://blog.csdn.net/qq_35312463/article/details/103981577》;20200104;第1-4页 * |
Also Published As
Publication number | Publication date |
---|---|
CN112652025B (en) | 2022-03-22 |
CN112652025A (en) | 2021-04-13 |
CN114612579A (en) | 2022-06-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114612579B (en) | Image rendering method, device, computer equipment and readable storage medium | |
KR102695571B1 (en) | Methods and devices for supporting tensor objects in machine learning workloads | |
US8149242B2 (en) | Graphics processing apparatus, graphics library module and graphics processing method | |
JP6073533B1 (en) | Optimized multi-pass rendering on tile-based architecture | |
US8269782B2 (en) | Graphics processing apparatus | |
US9928637B1 (en) | Managing rendering targets for graphics processing units | |
JP6062438B2 (en) | System and method for layering using a tile-by-tile renderer | |
WO2021248705A1 (en) | Image rendering method and apparatus, computer program and readable medium | |
CN112801855B (en) | Method and device for scheduling rendering task based on graphics primitive and storage medium | |
TW201516953A (en) | Graphics processing systems | |
CN114116227B (en) | A display method, device and equipment based on the Wayland protocol without GPU support | |
JP2011529237A (en) | Mapping of graphics instructions to related graphics data in performance analysis | |
CN114669047A (en) | An image processing method, electronic device and storage medium | |
WO2023197762A1 (en) | Image rendering method and apparatus, electronic device, computer-readable storage medium, and computer program product | |
WO2021248706A1 (en) | Depth anti-aliasing data reading method and device, computer program and readable medium | |
WO2022161199A1 (en) | Image editing method and device | |
US8203567B2 (en) | Graphics processing method and apparatus implementing window system | |
US20240233263A1 (en) | Primitive rendering method and apparatus, computer device and storage medium | |
KR102645239B1 (en) | GPU kernel optimization with SIMO approach for downscaling using GPU cache | |
CN111243069B (en) | Scene switching method and system of Unity3D engine | |
KR100441080B1 (en) | operation method of pixel cache architecture in three-dimensional graphic accelerator | |
CN117956220B (en) | Rendering method, device, equipment, computer-readable storage medium, and program product | |
US20250118337A1 (en) | Video processing method and apparatus | |
CN114299210B (en) | Graphics rendering method, device, storage medium and electronic device | |
CN120335878A (en) | Image processing method, related apparatus, medium, and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
EE01 | Entry into force of recordation of patent licensing contract |
Application publication date: 2022-06-10; Assignee: Beijing Xuanguang Technology Co.,Ltd.; Assignor: Perfect world (Beijing) software technology development Co.,Ltd.; Contract record no.: X2022990000514; Denomination of invention: Image rendering method, apparatus, computer device, and readable storage medium; License type: Exclusive License; Record date: 2022-08-17 |
GR01 | Patent grant | ||