CN119722824A - Three-dimensional model processing method, electronic device, storage medium and program product - Google Patents
Three-dimensional model processing method, electronic device, storage medium and program product
- Publication number
- CN119722824A (application CN202411957950.3A)
- Authority
- CN
- China
- Prior art keywords
- texture
- format
- dimensional model
- compressed
- file
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Image Generation (AREA)
- Processing Or Creating Images (AREA)
Abstract
The disclosure provides a three-dimensional model processing method, an electronic device, a readable storage medium, and a computer program product. The processing method first obtains geometric feature information of a three-dimensional model to be processed, the geometric feature information including vertex features, normal features, and UV coordinates of textures of the three-dimensional model; it then compresses the geometric feature information to obtain compressed geometric content; finally, it writes the compressed geometric content into a file for storing compressed model data.
Description
Technical Field
The present disclosure relates to the field of three-dimensional model technology, and in particular, to a three-dimensional model processing method, an electronic device, a readable storage medium, and a computer program product.
Background
Generating three-dimensional models by multi-angle photography is a technology developed in recent years. Models produced this way are usually stored in particular formats that the web side cannot load directly, so they are difficult to apply and display in a browser; they can be used on the web side only after being converted into formats the web side supports.
At present, the three-dimensional model file obtained after format conversion places high demands on rendering performance and incurs high transmission costs, and therefore occupies considerable system resources.
Disclosure of Invention
The present disclosure provides a method of processing a three-dimensional model, an electronic device, a readable storage medium, and a computer program product.
The first aspect of the disclosure provides a processing method of a three-dimensional model, including: obtaining geometric feature information of the three-dimensional model to be processed, the geometric feature information including one or more of vertex features, normal features, and UV coordinates of textures of the three-dimensional model; compressing the geometric feature information to obtain compressed geometric content; and writing the compressed geometric content into a file for storing compressed model data.
According to some embodiments of the present disclosure, the geometric feature information satisfies one or more of the following: the vertex features comprise one or more of a vertex position feature and a vertex color feature; the normal features comprise a normal direction feature; the UV coordinates of the texture comprise one or more of UV coordinates of a front side of the texture and UV coordinates of a back side of the texture; and the geometric feature information further comprises a tangent vector.
According to some embodiments of the disclosure, the method further comprises obtaining vertex index information of the three-dimensional model to be processed, and the geometric feature information is compressed based on the vertex index information.
According to some embodiments of the present disclosure, the geometric feature information includes a plurality of information items, and compressing the geometric feature information includes sequentially compressing the plurality of information items in the geometric feature information.
According to some embodiments of the present disclosure, the method further includes: obtaining texture information of the three-dimensional model to be processed and storing the obtained texture information as an image; performing texture compression on the image to obtain compressed texture content, the texture content being stored in a texture format; and writing the compressed texture content to the file for storing compressed model data.
According to some embodiments of the present disclosure, the image is compressed using a specified texture format, the specified texture format being the ETC1S format or the UASTC format.
According to some embodiments of the present disclosure, KTX2 is used as a compressed container format when the image is compressed.
According to some embodiments of the present disclosure, after the texture content is written to the model file, the data format of the texture content in the model file satisfies one or more of the following: it includes information on the compression format used by the model file; it includes information on the texture format used by the model file; and it includes information on the image resources used for textures.
According to some embodiments of the present disclosure, the format of the file for storing the compressed model data meets the 3D Tiles specification.
A second aspect of the present disclosure proposes an electronic device, including a memory storing execution instructions, and a processor executing the execution instructions stored in the memory, such that the processor performs the method according to any one of the embodiments above.
A third aspect of the present disclosure proposes a readable storage medium having stored therein a computer program which, when executed by a processor, implements the method according to any of the embodiments described above.
A fourth aspect of the disclosure proposes a computer program product comprising a computer program which, when executed by a processor, implements the method according to any of the embodiments described above.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this specification, illustrate exemplary embodiments of the disclosure and together with the description serve to explain the principles of the disclosure.
Fig. 1 illustrates an application scenario diagram of a method for processing a three-dimensional model according to some embodiments of the present disclosure.
Figs. 2-5 illustrate overall flow diagrams of a method M100 for processing a three-dimensional model according to some embodiments of the present disclosure.
Fig. 6 is a schematic block diagram of a structure of a processing apparatus of a three-dimensional model according to an embodiment of the present disclosure.
Fig. 7 is a block diagram illustrating a structure of an electronic device 1000 according to an embodiment of the present disclosure.
Detailed Description
The present disclosure is described in further detail below with reference to the drawings and the embodiments. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant content and not limiting of the present disclosure. It should be further noted that, for convenience of description, only a portion relevant to the present disclosure is shown in the drawings.
In addition, embodiments of the present disclosure and features of the embodiments may be combined with each other without conflict. The technical aspects of the present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Unless otherwise indicated, the exemplary implementations/embodiments shown are to be understood as providing exemplary features of various details of some ways in which the technical concepts of the present disclosure may be practiced. Thus, unless otherwise indicated, features of the various implementations/embodiments may be additionally combined, separated, interchanged, and/or rearranged without departing from the technical concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. Furthermore, when the terms "comprises" and/or "comprising," and variations thereof, are used in the present specification, the presence of stated features, integers, steps, operations, elements, components, and/or groups thereof is described, but the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof is not precluded. It is also noted that, as used herein, the terms "substantially," "about," and other similar terms are used as approximation terms and not as degree terms, and as such, are used to explain the inherent deviations of measured, calculated, and/or provided values that would be recognized by one of ordinary skill in the art.
For convenience of description and to make the technical solution of the embodiments of the present disclosure easier to understand, technical terms related to the embodiments of the present disclosure are introduced before describing the processing method of the three-dimensional model implemented by the present disclosure.
OSGB (OpenSceneGraph Binary) is a file format for storing and loading three-dimensional models.
glTF (GL Transmission Format) is a format for the efficient transmission and loading of 3D scenes and models by engines and applications; glTF can be used as a general standard for exporting 3D models to the Web.
3D Tiles is an open specification for enabling efficient streaming and rendering of large three-dimensional geospatial datasets.
b3dm (Batched 3D Model) is a data format conforming to the 3D Tiles specification.
Multi-angle photography refers to capturing images from a plurality of different angles and constructing a three-dimensional model from those images. For example, oblique photography mounts a plurality of sensors on the same flight platform, simultaneously collects images from five angles (one vertical and four oblique), and generates a visually realistic three-dimensional city model through modeling.
The three-dimensional city model generated by multi-angle imaging is usually stored in OSGB-format files. However, the current Web side cannot directly recognize or load OSGB files, so OSGB files need to be converted into formats that the Web side can apply and identify, such as glTF and b3dm, which meet the 3D Tiles specification.
In the related art, when an OSGB file is converted into a glTF file, the model data is not compressed, so the resulting model file occupies a large amount of storage space. As a result, the model file places high demands on transmission and rendering performance, transmission cost is high, and system-resource usage is also high.
To this end, the present disclosure proposes a method of processing a three-dimensional model.
Fig. 1 illustrates an application scenario diagram of a method for processing a three-dimensional model according to some embodiments of the present disclosure. In this application scenario, a processing end 10 and a service end 20 may be included.
The processing end 10 can communicate with the server end 20 to send and receive data or instructions. In the present disclosure, the processing end 10 and the server end 20 each include at least one processor and at least one memory. The processing end 10 and the server end 20 may be different types of electronic devices, such as a tablet computer, a notebook computer, or a desktop computer. The server end 20 may include a server, which may be a physical server or a cloud server; the type of server is not limited in this disclosure.
Illustratively, the processing end 10 may compress the texture of the three-dimensional model and compress the model body (the geometric features of the model), and store the resulting model file on the server end 20, so that a client can obtain the model file from the server end 20 and display the model with lower transmission cost and lower rendering overhead.
Fig. 2 illustrates an overall flow diagram of a method M100 of processing a three-dimensional model according to some embodiments of the present disclosure. As shown in fig. 2, the processing method M100 of the three-dimensional model includes step S110, step S120, and step S130, which together form the geometric body compression step of the model.
S110, obtaining geometric characteristic information of the three-dimensional model to be processed. Wherein the geometric feature information comprises one or more of vertex features, normal features, and UV coordinates of the texture of the three-dimensional model.
The three-dimensional model to be processed may be stored in a file in the OSGB format or another format. Assuming the three-dimensional model is a residential community model comprising a plurality of building models, step S110 may obtain the geometric vertex features of each building model, the normal features of its different surfaces, and the UV coordinates of the textures on those surfaces. These features serve as the geometric feature information and are the content to be compressed in the subsequent steps.
UV coordinates form the coordinate system used to map a two-dimensional texture onto the surface of a three-dimensional model. A UV coordinate is a two-dimensional coordinate, denoted (u, v), from which the rendering engine determines where each point of the two-dimensional texture image maps onto the three-dimensional model.
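As a concrete illustration, under the common convention that u and v both lie in [0, 1] and the image origin sits at one corner of the texture, a renderer samples the texel at

$$\bigl(x_{\text{tex}},\; y_{\text{tex}}\bigr) = \bigl(u \cdot (W - 1),\; v \cdot (H - 1)\bigr)$$

where W and H are the texture width and height in pixels; the exact origin and the direction of the v axis depend on the graphics API in use.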
S120, compressing the geometric characteristic information to obtain compressed geometric content.
Illustratively, draco compression may be used to do the vertex features, normal features, and UV coordinates described above. Draco is an open source library that can be used to compress and decompress 3D geometric mesh and point cloud data. Draco support compression of multiple data types, including mesh, point cloud, vertex attributes (e.g., position, color, normal, etc.), and support both lossy and lossless compression. By compressing Draco, the storage and transmission efficiency of the 3D graphics can be improved, the loading speed is increased by reducing the file size, the bandwidth consumption is reduced, meanwhile, the method can accord with industry standards and is normally analyzed at the Web end, and the rendering frame rate of the model is kept free from floating.
KHR_draco_mesh_compression is a glTF extension that reduces the file size of the geometric data of a 3D model (e.g., vertices, normals, and texture coordinates) by using the Draco compression algorithm; through this extension, applications can recognize files containing compressed geometric features.
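As an illustrative sketch (not part of the patented method), a Web client could rely on this extension when loading the compressed glTF. The example below assumes the three.js library is available; the decoder path and the file name 'model.glb' are placeholders.

```ts
import { Scene } from 'three';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';
import { DRACOLoader } from 'three/examples/jsm/loaders/DRACOLoader.js';

const scene = new Scene();

// Decoder for meshes compressed with KHR_draco_mesh_compression.
const dracoLoader = new DRACOLoader();
dracoLoader.setDecoderPath('/draco/'); // path to the Draco decoder files (placeholder)

const gltfLoader = new GLTFLoader();
gltfLoader.setDRACOLoader(dracoLoader);

// 'model.glb' stands in for a file produced by the compression steps above.
gltfLoader.load('model.glb', (gltf) => {
  scene.add(gltf.scene); // the Draco-compressed mesh is decoded transparently
});
```

The decoder files ship with the Draco distribution; hosting them next to the page and pointing setDecoderPath at that directory is one common deployment choice.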
And S130, writing the compressed geometric content into a file for storing the compressed model data.
The file for storing the compressed model data may be in a format suitable for use on the Web side, i.e., a format satisfying the 3D Tiles specification, such as the glTF or b3dm format. For example, if the ASTC texture format is adopted during texture compression, an ASTC-format file is obtained after texture compression, and its content is then written into the glTF-format file.
In the original OSGB format, vertex, UV-coordinate, and normal information is stored sequentially as vectors without any compression. For OSGB model files measured in gigabytes (GB), compression is needed to reduce the storage space of the file, increase transmission speed, and lower rendering cost.
According to the processing method of the three-dimensional model in the embodiments of the present disclosure, the geometric information of the model is compressed and the compressed geometric information is written into the model file. This reduces the storage cost of the model file; when the model file is used by a program, the transmission and rendering costs are reduced and the transmission and rendering speeds are improved.
In one test, the uncompressed file size was 123 KB (kilobytes) and the compressed file size was 75 KB, a reduction of about 39%, which effectively lowers the storage and transmission costs of the file.
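The quoted reduction follows directly from the two measured sizes:

$$\frac{123\ \text{KB} - 75\ \text{KB}}{123\ \text{KB}} \approx 0.39 = 39\%.$$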
The geometric feature information may satisfy one or more of the following: (a) the vertex features include one or more of a vertex position feature and a vertex color feature; (b) the normal features include a normal direction feature; (c) the UV coordinates of the texture include one or more of the UV coordinates of the front surface of the texture and the UV coordinates of the back surface of the texture; and (d) the geometric feature information further includes a tangent vector.
Taking glTF as an example, the vertex position feature in item (a) may be defined in the POSITION field, which stores the vertex position information of the mesh; the coordinates of each vertex in space are represented by a three-dimensional vector (x, y, z). The vertex color feature in item (a) may be defined in the COLOR_0 field, which provides vertex color information so that the color of a vertex can be stored directly rather than computed from textures and lighting. The normal direction feature in item (b) may be defined in the NORMAL field, which represents the surface direction at a point and is used to compute lighting. The UV coordinates in item (c) may be defined in the TEXCOORD_0 and TEXCOORD_1 fields, where TEXCOORD_0 corresponds to the coordinate set of the main texture (texture front side) and TEXCOORD_1 corresponds to the coordinate set of the texture back side. The tangent vector in item (d) may be defined in the TANGENT field, which stores the tangent vector at each vertex and is used to compute normal-map lighting in tangent space.
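For reference, a minimal sketch of how these fields appear in a glTF mesh primitive; each attribute value is the index of an accessor in the glTF "accessors" array, and the indices below are placeholders.

```ts
// Sketch of a glTF mesh primitive's "attributes" object.
const primitive = {
  attributes: {
    POSITION: 0,   // vertex positions (x, y, z)
    COLOR_0: 1,    // per-vertex colors
    NORMAL: 2,     // normal directions used for lighting
    TEXCOORD_0: 3, // UV set of the main (front) texture
    TEXCOORD_1: 4, // UV set of the texture back side
    TANGENT: 5,    // tangent vectors for normal mapping
  },
  indices: 6,      // accessor holding the vertex index list
  material: 0,
};
```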
Fig. 3 shows an overall flow diagram of a method M100 for processing a three-dimensional model according to further embodiments of the present disclosure. Referring to fig. 3, the model compressing step may further include step S111. The order of execution between step S111 and step S110 may be arbitrary, for example, step S110 and step S111 may be executed synchronously or sequentially.
S111, vertex index information of the three-dimensional model to be processed is obtained.
The index information (indices) defines the vertices of polygons (e.g., triangles). It contains a series of integer indices that point to specific positions in the vertex array, identifying the vertices that form each polygon.
Accordingly, in step S120, the geometric feature information may be compressed based on the vertex index information.
Index information avoids storing the same vertex data repeatedly: vertex information shared by multiple polygons only needs to be stored once. During rendering, a graphics API (such as OpenGL or Direct3D) determines from the index list how to connect vertices into polygons, which saves memory and improves rendering efficiency.
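A small, self-contained sketch of the idea: two triangles sharing an edge need only four stored vertices when an index list is used (the coordinates below are illustrative).

```ts
// Four vertex positions (x, y, z) laid out as a flat array.
const positions = new Float32Array([
  0, 0, 0, // vertex 0
  1, 0, 0, // vertex 1
  1, 1, 0, // vertex 2
  0, 1, 0, // vertex 3
]);

// Two triangles that share the edge (0, 2); vertices 0 and 2 are stored once
// but referenced twice, which is exactly what the index list makes possible.
const indices = new Uint16Array([
  0, 1, 2, // first triangle
  0, 2, 3, // second triangle
]);
```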
Taking the POSITION field as an example, during geometric feature compression of the OSGB model, the index information (indices) is first determined by traversing the OSGB model; the POSITION data is then located according to those indices, and Draco compression is applied to the POSITION, TEXCOORD_0, and NORMAL attributes of the entire model mesh during processing.
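On the encoding side, one possible way to apply Draco compression to the attributes named above is the glTF-Transform toolchain; this is a sketch under the assumption that the @gltf-transform packages and the draco3dgltf codec are available, and the file paths are placeholders.

```ts
import { NodeIO } from '@gltf-transform/core';
import { KHRONOS_EXTENSIONS } from '@gltf-transform/extensions';
import { draco } from '@gltf-transform/functions';
import draco3d from 'draco3dgltf';

async function compressGeometry(inputPath: string, outputPath: string): Promise<void> {
  // Register the KHR_draco_mesh_compression extension and the Draco codec.
  const io = new NodeIO()
    .registerExtensions(KHRONOS_EXTENSIONS)
    .registerDependencies({
      'draco3d.encoder': await draco3d.createEncoderModule(),
      'draco3d.decoder': await draco3d.createDecoderModule(),
    });

  const document = await io.read(inputPath); // e.g. a glTF previously converted from OSGB
  await document.transform(draco());         // compresses POSITION, NORMAL, TEXCOORD_* and other attributes
  await io.write(outputPath, document);      // writes the Draco-compressed glTF/GLB
}
```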
Fig. 4 shows an overall flow diagram of a method M100 for processing a three-dimensional model according to further embodiments of the present disclosure. Referring to fig. 4, the geometric feature information may include a plurality of information items, and in step S120, the geometric feature information may be compressed in a manner that sequentially compresses the plurality of information items in the geometric feature information.
The compression process of the geometric features may be performed field by field; for example, the POSITION field, the COLOR_0 field, the NORMAL field, the TEXCOORD_0 field, the TEXCOORD_1 field, and the TANGENT field may be compressed sequentially.
Fig. 5 shows an overall flow diagram of a method M100 for processing a three-dimensional model according to further embodiments of the present disclosure. Referring to fig. 5, the processing method M100 of the three-dimensional model may further include step S210, step S220, and step S230. Step S210, step S220 and step S230 belong to the texture compression step of the model. It should be noted that, the execution sequence of the texture compression step and the geometric body compression step may be arbitrary, for example, the geometric body compression step and the texture compression step may be executed in parallel, or may be executed sequentially.
S210, acquiring texture information of the three-dimensional model to be processed, and storing the acquired texture information as an image.
After the texture information of the three-dimensional model has been extracted, it may be stored in a common image format such as JPEG (Joint Photographic Experts Group), PNG (Portable Network Graphics), BMP (Bitmap), or SVG (Scalable Vector Graphics).
The purpose of storing the texture as an image is to meet the input-format requirements of the compression algorithm used in the subsequent texture compression.
S220, compressing the textures of the image to obtain compressed texture content. Wherein the texture content is stored in a texture format.
Texture compression produces a compressed texture format. A typical approach is to divide the texture into fixed-size blocks or tiles and compress each block independently, so that the overall video-memory footprint stays low and the GPU can read and render the data directly without CPU decoding.
The motivation for texture compression is as follows: an ordinary image file stored in a compressed format is expanded into an uncompressed format, such as RGB888, RGBA8888, or RGB565, once it is loaded into memory. A texture-compressed file, by contrast, remains in its compressed format after being loaded into memory and can be rendered directly by the GPU (Graphics Processing Unit).
For example, a 1024×1024 JPEG texture image held in memory in RGBA format occupies about 4 MB to 5.3 MB of video memory. If a texture compression technique is applied, the memory occupied by the resulting compressed texture can drop to about 1.3 MB, greatly reducing the memory footprint.
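The figures above can be reproduced with a simple calculation; the upper bound assumes a full mipmap chain, which adds roughly one third to the base level:

$$1024 \times 1024 \times 4\ \text{bytes} = 4\ \text{MB}, \qquad 4\ \text{MB} \times \tfrac{4}{3} \approx 5.3\ \text{MB}.$$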
The texture format used for texture compression can be chosen according to the scenario and requirements: for example, the ETC (Ericsson Texture Compression) format suited to Android systems, the DXT (DirectX Texture Compression) format suited to Windows systems, the ASTC (Adaptive Scalable Texture Compression) format suited to cross-platform use, or the Basis Universal format. Each texture format corresponds to a texture compression technique; ETC, DXT, ASTC, and Basis Universal were developed by different vendors, and each has its own compression algorithm. Compressing a texture with the corresponding technique yields a file in the corresponding texture format, so the texture format of the compressed texture content is the format that was adopted. Texture compression algorithms reduce the storage space occupied by texture images in computer graphics processing while maintaining, or closely approaching, the quality of the original image; by compressing texture data with different techniques, they improve the efficiency and performance of graphics rendering while preserving image quality.
And S230, writing the compressed texture content into a file for storing the compressed model data.
Since the execution order of the geometric body compression step and the texture compression step may be arbitrary, the start times of step S130 and step S230 may also be arbitrary. If step S230 starts after step S130 has completed, the file for storing the compressed model data already exists, and step S230 writes the compressed texture content directly to that file. If step S130 has not started before step S230 begins, the file for storing the compressed model data may be created in step S230, and the compressed geometric content is then written to the existing file when step S130 is performed.
Steps S210-S230 compress the texture of the model and write the compressed texture content into the model file, which reduces the storage cost of the model file; when the model file is used by a program, the transmission and rendering costs are further reduced and the transmission and rendering speeds are improved.
In step S220, the image may be compressed according to a specified texture format, which may be the ETC1S format or the UASTC format.
ETC1S is a simplified subset of the ETC compression format that improves compression efficiency and file size. It is suited to lower-quality texture compression and yields smaller transmission and memory usage, for example when compressing images, photographs, map data, or albedo/specular and similar textures.
UASTC (Universal ASTC) is a subset of the ASTC compression format. It is suited to high-quality texture compression, providing quality close to lossless, for example when compressing complex normal maps.
Taking texture compression of PNG images as an example, for a 207 KB PNG image, compression in the ETC1S format produces a 50 KB file, while compression in the UASTC format produces a 438 KB file. Thus, compared with UASTC, ETC1S produces smaller compressed files, while UASTC provides higher texture quality.
In step S220, when the image is compressed in the specified texture format, KTX2 may be used as the compressed container format. The container format herein refers to a texture container format. Texture container (Texture Container) refers to a data structure or file format for storing and managing texture data. There are a variety of texture containers that may be used, such as KTX, KTX2, and DDS (DirectDraw Surface).
KTX (Khronos Texture) is a cross-platform texture container format for storing and distributing compressed or uncompressed texture data. Compared with KTX, KTX2 adds support for Basis Universal supercompressed GPU textures; it produces compact textures that can be efficiently transcoded at run time into a variety of GPU compressed texture formats. Basis Universal is an efficient texture compression format and toolset aimed mainly at real-time graphics applications. basisu is the Basis Universal command-line tool, which can convert common image formats (e.g., PNG, JPEG, or TIFF) into Basis Universal formats (e.g., KTX2). In step S220, when KTX2 is used as the container format, the conversion from the OSGB model to the 3D Tiles format may be implemented by building against the KTX2 source code or by building the basisu tool from its source code.
Meanwhile, by using the KHR_texture_basisu extension, glTF/b3dm files can contain KTX2 textures, which reduces the download size, lowers GPU memory usage by using a texture format supported by the device, and improves rendering speed across different devices and platforms. KHR_texture_basisu is a glTF extension that supports textures compressed with Basis Universal inside a glTF file. This extension enables glTF models to contain efficient, cross-platform-compatible compressed textures, thereby optimizing the loading and rendering performance of 3D resources, especially on mobile devices and Web platforms.
By using KTX2 together with the KHR_texture_basisu extension, a glTF model can markedly reduce texture file size, lower network transmission bandwidth and memory usage, and thereby improve loading speed and rendering performance.
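As an illustrative sketch of the rendering side (again assuming three.js; paths and file names are placeholders), a KTX2-aware loader transcodes the Basis Universal texture into whatever GPU-compressed format the device supports:

```ts
import { Scene, WebGLRenderer } from 'three';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';
import { KTX2Loader } from 'three/examples/jsm/loaders/KTX2Loader.js';

const scene = new Scene();
const renderer = new WebGLRenderer();

// Transcoder for KTX2 / Basis Universal textures (KHR_texture_basisu).
const ktx2Loader = new KTX2Loader()
  .setTranscoderPath('/basis/') // path to the Basis transcoder files (placeholder)
  .detectSupport(renderer);     // picks the best compressed format the GPU supports

const gltfLoader = new GLTFLoader();
gltfLoader.setKTX2Loader(ktx2Loader);

gltfLoader.load('model.glb', (gltf) => {
  scene.add(gltf.scene); // KTX2 textures are transcoded on the fly during loading
});
```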
It should be noted that texture format and texture container format are two different concepts. Texture format refers to the coding or storage layout of the texture data itself, which determines how pixels are arranged in memory, how color information is represented, whether compression algorithms are applied, etc., for specifying how texture data is stored and processed in the GPU. Texture formats can affect how textures are represented on the GPU as well as performance and quality at the time of rendering. The texture container format refers to a file format for packaging and organizing texture data. It contains not only the original image data of the texture, but also other metadata (such as size, format, compression type, etc. of the texture) and generally supports more complex structures and functions. Texture container formats are used to transfer, store and manage texture resources, especially in 3D applications.
After the texture content is written into the model file (step S230), the data format of the texture content in the model file satisfies one or more of the following: (1) it includes information on the compression format used by the model file; (2) it includes information on the texture format used by the model file; and (3) it includes information on the image resources used for textures.
Regardless of whether the texture format is the ETC1S format, the UASTC format, or another format, when the KHR_texture_basisu extension is used for texture compression, the JSON (JavaScript Object Notation) data structure written into the glTF file should meet certain format requirements so that the corresponding texture format can be used at rendering time.
The format requirements may include that the written data structure lists all extensions used in the current glTF file, i.e., item (1) above. For example, when the KHR_texture_basisu extension is used to indicate that the file contains textures compressed with Basis Universal, a corresponding entry is required in the JSON code.
The format requirements may also include that the written data structure lists the textures used by the model and the image resource index associated with each texture, i.e., item (2) above.
The format requirements may also include that the written data structure lists the image resources used for the textures, i.e., item (3) above. For example, a mimeType entry may be included; mimeType is a standard way of declaring a file type, specifying the format and encoding of the file when it is transmitted over a network.
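Putting items (1)-(3) together, the texture-related portion of the glTF JSON might look like the following sketch; the indices and the file name are placeholders, and per the KHR_texture_basisu specification the extension's source field points at the KTX2 image.

```ts
// Sketch of the texture-related portion of a glTF document using KHR_texture_basisu.
const gltfTextureJson = {
  extensionsUsed: ['KHR_texture_basisu'],  // item (1): extensions used by the file
  textures: [
    {
      extensions: {
        KHR_texture_basisu: { source: 0 }, // item (2): texture -> image resource index
      },
    },
  ],
  images: [
    {
      uri: 'texture.ktx2',                 // item (3): image resource for the texture
      mimeType: 'image/ktx2',              // declares the KTX2 container format
    },
  ],
};
```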
In experiments with a three-dimensional residential community model, the single-frame rendering time was about 120 milliseconds with JPEG textures and about 31 milliseconds with KTX2 textures, showing that KTX2 renders faster than JPEG.
Based on any one of the embodiments, the present disclosure further provides a processing device for a three-dimensional model. Fig. 6 is a schematic block diagram of a structure of a processing apparatus of a three-dimensional model according to an embodiment of the present disclosure. As shown in fig. 6, the processing apparatus of the three-dimensional model includes a geometric feature acquisition module 110, a geometric feature compression module 120, and a writing module 130.
The geometric feature obtaining module 110 is configured to obtain geometric feature information of the three-dimensional model to be processed. Wherein the geometric feature information comprises one or more of vertex features, normal features, and UV coordinates of the texture of the three-dimensional model.
The geometric feature compression module 120 is configured to compress geometric feature information to obtain compressed geometric content.
The writing module 130 is configured to write the compressed geometric content to a file for storing the compressed model data.
With continued reference to fig. 6, the processing apparatus of the three-dimensional model may further include a texture information acquisition module 210 and a texture information compression module 220.
The texture information acquisition module 210 is configured to acquire texture information of a three-dimensional model to be processed, and store the acquired texture information as an image.
The texture information compression module 220 is configured to compress textures of the image to obtain compressed texture content. Wherein the texture content is stored in a texture format.
The writing module 130 is further configured to write the compressed texture content to a file for storing the compressed model data.
The processing means of the three-dimensional model may be in the form of computer software, and the respective modules of the processing means of the three-dimensional model may be implemented by means of computer software modules. The implementation process of the functions and roles of each module in the above device is specifically shown in the implementation process of the corresponding steps in the above method, and will not be described herein again.
The execution subject of the processing method of the three-dimensional model in the embodiment of the present disclosure may be an electronic device such as a computer or a server.
Accordingly, based on any of the above embodiments, the present disclosure also provides an electronic device that may perform the method for processing a three-dimensional model of any of the above embodiments of the present disclosure.
Fig. 7 is a block diagram illustrating a structure of an electronic device 1000 according to an embodiment of the present disclosure. Referring to fig. 7, the hardware structure of the electronic device 1000 may be implemented using a bus architecture. The bus architecture may include any number of interconnecting buses and bridges depending on the specific application of the hardware and the overall design constraints. Bus 1100 connects together various circuits including one or more processors 1200, memory 1300, and/or hardware modules. Bus 1100 may also connect various other circuits 1400, such as peripherals, voltage regulators, power management circuits, external antennas, and the like.
Bus 1100 may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, or an Extended Industry Standard Architecture (EISA) bus, among others. Buses may be divided into address buses, data buses, and control buses. For ease of illustration, only one connection line is shown in the figure, but this does not mean there is only one bus or one type of bus.
The processor 1200 may be a central processing unit (CPU). The processor 1200 may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or a combination of the above.
The memory 1300 may be implemented as a non-transitory computer-readable storage medium that can be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as program instructions of a computer program in embodiments of the present disclosure. The processor 1200 implements a method of processing a three-dimensional model by running non-transitory software programs, instructions, and modules stored in the memory 1300.
The memory 1300 may include a storage program area that may store an operating system and an application program required for at least one function, and a storage data area that may store data created by the processor 1200, such as model data and texture data. In addition, memory 1300 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, memory 1300 optionally includes memory remotely located relative to processor 1200, which may be connected to processor 1200 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The present disclosure also provides a readable storage medium having stored therein a computer program for implementing the above method when executed by a processor. A "readable storage medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples of a readable storage medium include an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, a portable compact disc read-only memory (CD-ROM), and so forth.
The present disclosure also provides a computer program product, and the methods of the present disclosure may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer programs or instructions. The processes or functions of the present disclosure are performed in whole or in part when the computer program or instructions are loaded and executed. The computer may be a general purpose computer, a special purpose computer, a computer network, a network device, a user device, a core network device, an OAM, or other programmable apparatus.
The computer program or instructions may be stored in a readable storage medium or transmitted from one readable storage medium to another readable storage medium, for example, from one website site, computer, server, or data center to another website site, computer, server, or data center by wired or wireless means. The readable storage medium may be any available medium that can be accessed or a data storage device such as a server, data center, etc. that integrates one or more available media. Usable media may be magnetic media such as floppy disks, hard disks, magnetic tape, optical media such as digital video disks, and semiconductor media such as solid state disks. The computer readable storage medium may be volatile or nonvolatile storage medium, or may include both volatile and nonvolatile types of storage medium.
It will be apparent to those skilled in the art that embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to the disclosure. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In the description of the present specification, reference to the terms "one embodiment/manner," "some embodiments/manner," "example," "specific example," or "some examples," etc., means that a particular feature, structure, or characteristic described in connection with the embodiment/manner or example is included in at least one embodiment/manner or example of the present disclosure. In this specification, the schematic representations of the above terms are not necessarily for the same embodiment/manner or example. Furthermore, the particular features, structures, or characteristics described may be combined in any suitable manner in any one or more embodiments/manners or examples. Furthermore, the various embodiments/modes or examples described in this specification and the features of the various embodiments/modes or examples can be combined and combined by persons skilled in the art without contradiction.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present disclosure, the meaning of "a plurality" is at least two, such as two, three, etc., unless explicitly specified otherwise.
It will be appreciated by those skilled in the art that the above-described embodiments are merely for clarity of illustration of the disclosure, and are not intended to limit the scope of the disclosure. Other variations or modifications will be apparent to persons skilled in the art from the foregoing disclosure, and such variations or modifications are intended to be within the scope of the present disclosure.
Claims (11)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202411957950.3A CN119722824A (en) | 2024-12-27 | 2024-12-27 | Three-dimensional model processing method, electronic device, storage medium and program product |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202411957950.3A CN119722824A (en) | 2024-12-27 | 2024-12-27 | Three-dimensional model processing method, electronic device, storage medium and program product |
Publications (1)
Publication Number | Publication Date |
---|---|
CN119722824A true CN119722824A (en) | 2025-03-28 |
Family
ID=95096845
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202411957950.3A Pending CN119722824A (en) | 2024-12-27 | 2024-12-27 | Three-dimensional model processing method, electronic device, storage medium and program product |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN119722824A (en) |
- 2024-12-27: Application CN202411957950.3A filed in China; published as CN119722824A (status: pending)
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |