WO2023180841A1 - Mesh patch sub-division - Google Patents
- Publication number
- WO2023180841A1 (PCT/IB2023/052104)
- Authority
- WO
- WIPO (PCT)
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T9/00—Image coding
- G06T9/001—Model-based coding, e.g. wire frame
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/44—Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/85—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
Definitions
- the present invention relates to three dimensional graphics. More specifically, the present invention relates to coding of three dimensional graphics.
- Volumetric content, such as point clouds, is addressed by V3C (visual volumetric video-based compression).
- MPEG issued a call for proposals (CfP) for compression of point clouds.
- MPEG is considering two different technologies for point cloud compression: 3D native coding (based on octree and similar coding methods), or 3D-to-2D projection followed by traditional video coding.
- The TMC2 test model software implements the projection-based method, also known as the video-based method, or V-PCC.
- This method has proven to be more efficient than native 3D coding, and is able to achieve competitive bitrates at acceptable quality.
- The standard is expected to include further 3D data, such as 3D meshes, in future versions.
- The current version of the standard is only suitable for the transmission of an unconnected set of points; there is no mechanism to send the connectivity of points, as is required in 3D mesh compression.
- The connectivity is able to be encoded with a mesh compression approach like TFAN or Edgebreaker.
- the limitation of this method is that the original mesh has to be dense, so that the point cloud generated from the vertices is not sparse and can be efficiently encoded after projection.
- The order of the vertices affects the coding of connectivity, and different methods to reorganize the mesh connectivity have been proposed.
- An alternative way to encode a sparse mesh is to use RAW patch data to encode the vertex positions in 3D: RAW patches encode the (x, y, z) coordinates directly, and all the vertices are encoded as RAW data.
- the connectivity is encoded by a similar mesh compression method, as mentioned before.
- the vertices may be sent in any preferred order, so the order generated from connectivity encoding can be used.
- The method can encode sparse point clouds; however, RAW patches are not efficient for encoding 3D data, and further data, such as the attributes of the triangle faces, may be missing from this approach.
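As a rough illustration of the RAW patch idea described above, the sketch below packs vertex coordinates into a flat (x, y, z) buffer and recovers them. The helper names are hypothetical and not from the specification; this only demonstrates why RAW coding is simple but gains no compression from projection.

```python
# Hypothetical sketch of RAW patch packing: (x, y, z) stored directly,
# not projected to a depth image.

def pack_raw_patch(vertices):
    """Flatten a list of (x, y, z) vertex tuples into a RAW buffer."""
    buf = []
    for x, y, z in vertices:
        buf.extend([x, y, z])
    return buf

def unpack_raw_patch(buf):
    """Recover (x, y, z) vertices from a RAW buffer."""
    assert len(buf) % 3 == 0
    return [(buf[i], buf[i + 1], buf[i + 2]) for i in range(0, len(buf), 3)]

verts = [(1, 2, 3), (4, 5, 6)]
assert unpack_raw_patch(pack_raw_patch(verts)) == verts
```

Because the buffer stores each coordinate verbatim, any vertex order works, which is why the order produced by connectivity encoding can be reused.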
- Methods include generating new triangles by splitting the received triangles’ edges according to their size, by inserting new vertices at the triangle’s centroids, by splitting the vertices, and by performing marching cubes in surfaces defined by the geometry images.
- a method programmed in a non-transitory memory of a device comprises receiving a face list and a vertex list, receiving a high resolution depth map, implementing patch mesh subdivision to generate a new face list and a new vertex list and implementing three dimensional reconstruction with the new face list and the new vertex list to generate a decoded mesh object.
- Implementing patch mesh subdivision to generate the new face list and the new vertex list includes: a triangle centroid implementation by placing a point in the middle of each triangle which is used to generate new triangles by drawing lines from the point to each vertex of the triangle. A location of the point is obtained using the high resolution depth map.
- Implementing patch mesh subdivision to generate the new face list and the new vertex list includes: an edge midpoint implementation by placing a point in the middle of each edge of a triangle and drawing a line from each point to another middle point of the triangle to generate four triangles. A three dimensional location of each point is obtained using the high resolution depth map.
- Implementing patch mesh subdivision to generate the new face list and the new vertex list includes: a vertex split implementation by removing edges, adding a vertex and drawing lines from existing vertices to the added vertex. The added vertex is placed based on three dimensional location information from the high resolution depth map.
- Implementing patch mesh subdivision to generate the new face list and the new vertex list includes: a marching cubes implementation by generating points between border triangles using marching cubes. Voxel information in a three dimensional space is obtained from the high resolution depth map.
- an apparatus comprises a non-transitory memory for storing an application, the application for: receiving a face list and a vertex list, receiving a high resolution depth map, implementing patch mesh subdivision to generate a new face list and a new vertex list and implementing three dimensional reconstruction with the new face list and the new vertex list to generate a decoded mesh object and a processor coupled to the memory, the processor configured for processing the application.
- a system comprises an encoder configured for: encoding content including a face list and a vertex list and a decoder configured for: receiving the face list and the vertex list, receiving a high resolution depth map, implementing patch mesh subdivision to generate a new face list and a new vertex list and implementing three dimensional reconstruction with the new face list and the new vertex list to generate a decoded mesh object.
- Figure 1 illustrates a flowchart of decoding content according to some embodiments.
- Figure 2 illustrates a diagram of a triangle centroid implementation according to some embodiments.
- Figure 3 illustrates a diagram of an edge midpoint implementation according to some embodiments.
- Figure 4 illustrates a diagram of a vertex split implementation according to some embodiments.
- Figure 5 illustrates a diagram of a marching cubes implementation according to some embodiments.
- Figure 6 illustrates a block diagram of an exemplary computing device configured to implement the mesh patch sub-division method according to some embodiments.
- Figure 1 illustrates a flowchart of decoding content according to some embodiments.
- a face list and vertex list 110 are received.
- the decoder also receives a high resolution depth map 112.
- the high resolution depth map 112 is able to be used to generate a new face list and new vertex list 114 based on the received face list and vertex list 110 using patch mesh subdivision, in the step 100.
- a decoded mesh object 116 is able to be generated using 3D reconstruction, in the step 102.
- fewer or additional steps are implemented.
- the order of the steps is modified.
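The decoding flow above can be sketched as three plain functions. All names here are illustrative (not from the specification), and the subdivision step is a placeholder for the schemes described in the following figures.

```python
# Sketch of the decoder pipeline: face list + vertex list + high
# resolution depth map -> patch mesh subdivision -> 3D reconstruction.

def patch_mesh_subdivision(faces, vertices, depth_map):
    # Placeholder: a real implementation applies one of the schemes
    # below (centroid, edge midpoint, vertex split, marching cubes),
    # sampling the depth map for the 3D position of each new vertex.
    return list(faces), list(vertices)

def reconstruct_3d(new_faces, new_vertices):
    # Placeholder for reconstruction of the decoded mesh object.
    return {"faces": new_faces, "vertices": new_vertices}

def decode(faces, vertices, depth_map):
    new_faces, new_vertices = patch_mesh_subdivision(faces, vertices, depth_map)
    return reconstruct_3d(new_faces, new_vertices)

mesh = decode([(0, 1, 2)], [(0, 0, 0), (1, 0, 0), (0, 1, 0)], depth_map=None)
assert mesh["faces"] == [(0, 1, 2)]
```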
- Figure 2 illustrates a diagram of a triangle centroid implementation according to some embodiments.
- The triangle list (e.g., the face and vertex lists) is received.
- Points are generated/placed in the triangles (e.g., at the centroid of each triangle) as represented in image 202.
- the point placed in each triangle is able to be used to generate new triangles by generating/ drawing lines from the point to each vertex of the triangle.
- each triangle with a point placed inside will be divided into three new triangles.
- Triangles at the edges do not change (e.g., the centroids are only used for non-edge triangles).
- the 3D locations of each of the points (centroids) are obtained using the high resolution depth map.
- the high resolution depth map is able to be used to determine the depth the centroid is at (e.g., if the point is the same depth or a different depth when compared with the vertices of the triangle).
- the new triangles are used to generate a new face list and vertex list.
- the new face list and vertex list based on the triangle centroid implementation are able to be used to reconstruct the mesh object.
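A minimal sketch of the triangle centroid scheme described above: each triangle receives a new vertex at its centroid and is replaced by three triangles. The `depth_lookup` callback is an assumption standing in for sampling the high resolution depth map; the function name and structure are illustrative.

```python
# Centroid subdivision: one triangle -> three, fanning around a new
# centroid vertex whose depth may be refined from the depth map.

def centroid_subdivide(vertices, faces, depth_lookup=None):
    new_vertices = list(vertices)
    new_faces = []
    for a, b, c in faces:
        va, vb, vc = vertices[a], vertices[b], vertices[c]
        cx = (va[0] + vb[0] + vc[0]) / 3.0
        cy = (va[1] + vb[1] + vc[1]) / 3.0
        cz = (va[2] + vb[2] + vc[2]) / 3.0
        if depth_lookup is not None:
            cz = depth_lookup(cx, cy)  # depth from the depth map (assumed API)
        m = len(new_vertices)
        new_vertices.append((cx, cy, cz))
        # lines drawn from the centroid to each vertex of the triangle
        new_faces.extend([(a, b, m), (b, c, m), (c, a, m)])
    return new_vertices, new_faces

nv, nf = centroid_subdivide([(0.0, 0.0, 0.0), (3.0, 0.0, 0.0), (0.0, 3.0, 0.0)],
                            [(0, 1, 2)])
assert len(nf) == 3 and len(nv) == 4
```

Edge (boundary) triangles would simply be skipped, matching the note above that triangles at the edges do not change.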
- Figure 3 illustrates a diagram of an edge midpoint implementation according to some embodiments.
- the triangle list is received as represented in image 300.
- Points (e.g., vertices) are generated/placed in the middle (or at some other position) of each edge of some of the triangles as represented in image 302.
- Each edge (with some exceptions) is divided in the middle, and lines are generated/drawn between the middle points, which turns each triangle into four triangles.
- Edges at a patch boundary are not allowed to be split.
- the lighter dots 304 represent new vertices after edge division
- the darker dots 306 represent positions where new vertices are not permitted (e.g., on patch-boundary edges).
- the 3D locations of each of the points are obtained using the high resolution depth map.
- the high resolution depth map is able to be used to determine if the point is the same depth or a different depth when compared with the vertices of the triangle.
- the new triangles are used to generate a new face list and vertex list.
- the new face list and vertex list based on the edge midpoint implementation are able to be used to reconstruct the mesh object.
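The edge midpoint scheme can be sketched as below: every edge is split at its midpoint and each triangle becomes four. Midpoints are cached per edge so an edge shared by two triangles yields a single new vertex. The patch-boundary exception and the depth-map lookup for each midpoint's 3D position are noted in comments but not modeled; the function is an illustrative sketch, not the specification's implementation.

```python
# Edge midpoint subdivision: one triangle -> four (three corner
# triangles plus one center triangle).

def midpoint_subdivide(vertices, faces):
    new_vertices = list(vertices)
    midpoint_of = {}  # edge key (i, j), i < j -> index of its midpoint

    def midpoint(i, j):
        # In the full method, edges on a patch boundary are not split,
        # and the midpoint's depth comes from the depth map.
        key = (min(i, j), max(i, j))
        if key not in midpoint_of:
            p, q = vertices[i], vertices[j]
            new_vertices.append(tuple((a + b) / 2.0 for a, b in zip(p, q)))
            midpoint_of[key] = len(new_vertices) - 1
        return midpoint_of[key]

    new_faces = []
    for a, b, c in faces:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        new_faces.extend([(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)])
    return new_vertices, new_faces

nv, nf = midpoint_subdivide([(0, 0, 0), (2, 0, 0), (0, 2, 0)], [(0, 1, 2)])
assert len(nf) == 4 and len(nv) == 6
```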
- Figure 4 illustrates a diagram of a vertex split implementation according to some embodiments.
- the triangle list is received as represented in image 400.
- Edges are split or removed, and then new edges are generated for some of the triangles as represented in image 402.
- two edges 410 of a triangle are removed, and then a new vertex 404 is placed based on 3D location information from the high resolution depth map.
- Edges are generated/ drawn from the initial vertices 406 to the new vertex 404 to generate new triangles.
- the new triangles are used to generate a new face list and vertex list.
- the new face list and vertex list based on the vertex split implementation are able to be used to reconstruct the mesh object.
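The vertex split operation above can be sketched on the simplest case, two triangles sharing an edge: the shared edge is removed, a new vertex is inserted (in the specification its 3D position comes from the high resolution depth map; here it is passed in directly), and edges are drawn from the surrounding vertices to the new vertex, yielding four triangles. The fan ordering below is a simplifying assumption for this two-triangle case.

```python
# Vertex split sketch: remove the edge shared by two triangles, add one
# vertex, and connect the four surrounding vertices to it.

def vertex_split(vertices, faces, shared_edge, new_vertex):
    i, j = shared_edge
    ring = []  # the two vertices opposite the removed edge
    for face in faces:
        for v in face:
            if v not in (i, j):
                ring.append(v)
    a, b = ring
    cycle = [a, i, b, j]  # assumed fan order around the new vertex
    m = len(vertices)
    new_vertices = vertices + [new_vertex]
    new_faces = [(cycle[k], cycle[(k + 1) % 4], m) for k in range(4)]
    return new_vertices, new_faces

verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
nv, nf = vertex_split(verts, [(0, 1, 2), (0, 2, 3)], shared_edge=(0, 2),
                      new_vertex=(0.5, 0.5, 0.2))
assert len(nf) == 4 and len(nv) == 5
```

The new vertex's z value (here 0.2) is exactly the kind of detail the high resolution depth map supplies in the decoder.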
- Figure 5 illustrates a diagram of a marching cubes implementation according to some embodiments.
- Marching cubes is a technique used for surface reconstruction.
- Points between the border triangles are able to be generated using marching cubes as shown in image 502.
- the 3D location information from the high resolution depth map is also used in determining the locations of the new vertices.
- voxel information in the 3D space is obtained from the high resolution depth map.
- the new triangles are used to generate a new face list and vertex list.
- the new face list and vertex list based on the marching cubes implementation are able to be used to reconstruct the mesh object.
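Marching cubes places new vertices where the surface crosses the edges of a voxel. The sketch below shows only that per-edge interpolation step, with corner values standing in for the voxel information the specification obtains from the high resolution depth map (negative = inside the surface, positive = outside); the full 256-case cube lookup table is omitted.

```python
# One building block of marching cubes: interpolate where the scalar
# field crosses the iso level along a single voxel edge.

def edge_crossing(p0, p1, v0, v1, iso=0.0):
    """Return the point between p0 and p1 where the field equals iso."""
    t = (iso - v0) / (v1 - v0)
    return tuple(a + t * (b - a) for a, b in zip(p0, p1))

# Corner at (0,0,0) is inside (v=-1), corner at (1,0,0) outside (v=3),
# so the surface crosses a quarter of the way along the edge.
point = edge_crossing((0, 0, 0), (1, 0, 0), v0=-1.0, v1=3.0)
assert point == (0.25, 0.0, 0.0)
```

Vertices produced this way for each crossed edge of a cube are then joined into triangles according to the cube's corner-occupancy pattern, filling the gaps between border triangles as described above.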
- Figure 6 illustrates a block diagram of an exemplary computing device configured to implement the mesh patch sub-division method according to some embodiments.
- the computing device 600 is able to be used to acquire, store, compute, process, communicate and/or display information such as images and videos including 3D content.
- the computing device 600 is able to implement any of the encoding/decoding aspects.
- a hardware structure suitable for implementing the computing device 600 includes a network interface 602, a memory 604, a processor 606, I/O device(s) 608, a bus 610 and a storage device 612.
- the choice of processor is not critical as long as a suitable processor with sufficient speed is chosen.
- the memory 604 is able to be any conventional computer memory known in the art.
- the storage device 612 is able to include a hard drive, CDROM, CDRW, DVD, DVDRW, High Definition disc/ drive, ultra-HD drive, flash memory card or any other storage device.
- the computing device 600 is able to include one or more network interfaces 602.
- An example of a network interface includes a network card connected to an Ethernet or other type of LAN.
- the I/O device(s) 608 are able to include one or more of the following: keyboard, mouse, monitor, screen, printer, modem, touchscreen, button interface and other devices.
- Mesh patch sub-division application(s) 630 used to implement the mesh patch sub-division implementation are likely to be stored in the storage device 612 and memory 604 and processed as applications are typically processed.
- mesh patch sub-division hardware 620 is included.
- Although the computing device 600 in Figure 6 includes applications 630 and hardware 620 for the mesh patch sub-division implementation, the mesh patch sub-division method is able to be implemented on a computing device in hardware, firmware, software or any combination thereof.
- the mesh patch sub-division applications 630 are programmed in a memory and executed using a processor.
- the mesh patch subdivision hardware 620 is programmed hardware logic including gates specifically designed to implement the mesh patch sub-division method.
- the mesh patch sub-division application(s) 630 include several applications and/or modules. In some embodiments, modules include one or more sub-modules as well. In some embodiments, fewer or additional modules are able to be included.
- suitable computing devices include a personal computer, a laptop computer, a computer workstation, a server, a mainframe computer, a handheld computer, a personal digital assistant, a cellular/mobile telephone, a smart appliance, a gaming console, a digital camera, a digital camcorder, a camera phone, a smart phone, a portable music player, a tablet computer, a mobile device, a video player, a video disc writer/player (e.g., DVD writer/player, high definition disc writer/player, ultra high definition disc writer/player), a television, a home entertainment system, an augmented reality device, a virtual reality device, smart jewelry (e.g., smart watch), a vehicle (e.g., a self-driving vehicle) or any other suitable computing device.
- a device acquires or receives 3D content (e.g., point cloud content).
- the mesh patch sub-division method is able to be implemented with user assistance or automatically without user involvement.
- the mesh patch sub-division method enables more efficient and more accurate 3D content decoding compared to previous implementations.
- a method programmed in a non-transitory memory of a device comprising: receiving a face list and a vertex list; receiving a high resolution depth map; implementing patch mesh subdivision to generate a new face list and a new vertex list; and implementing three dimensional reconstruction with the new face list and the new vertex list to generate a decoded mesh object.
- the method of clause 1 wherein implementing patch mesh subdivision to generate the new face list and the new vertex list includes: a triangle centroid implementation by placing a point in the middle of each triangle which is used to generate new triangles by drawing lines from the point to each vertex of the triangle.
- the method of clause 2 wherein a location of the point is obtained using the high resolution depth map.
- the method of clause 1 wherein implementing patch mesh subdivision to generate the new face list and the new vertex list includes: an edge midpoint implementation by placing a point in the middle of each edge of a triangle and drawing a line from each point to another middle point of the triangle to generate four triangles.
- the method of clause 4 wherein a three dimensional location of each point is obtained using the high resolution depth map.
- the method of clause 1 wherein implementing patch mesh subdivision to generate the new face list and the new vertex list includes: a vertex split implementation by removing edges, adding a vertex and drawing lines from existing vertices to the added vertex.
- the method of clause 6 wherein the added vertex is placed based on three dimensional location information from the high resolution depth map.
- An apparatus comprising: a non-transitory memory for storing an application, the application for: receiving a face list and a vertex list; receiving a high resolution depth map; implementing patch mesh subdivision to generate a new face list and a new vertex list; and implementing three dimensional reconstruction with the new face list and the new vertex list to generate a decoded mesh object; and a processor coupled to the memory, the processor configured for processing the application.
- implementing patch mesh subdivision to generate the new face list and the new vertex list includes: a triangle centroid implementation by placing a point in the middle of each triangle which is used to generate new triangles by drawing lines from the point to each vertex of the triangle.
- implementing patch mesh subdivision to generate the new face list and the new vertex list includes: a vertex split implementation by removing edges, adding a vertex and drawing lines from existing vertices to the added vertex.
- a system comprising: an encoder configured for: encoding content including a face list and a vertex list; and a decoder configured for: receiving the face list and the vertex list; receiving a high resolution depth map; implementing patch mesh subdivision to generate a new face list and a new vertex list; and implementing three dimensional reconstruction with the new face list and the new vertex list to generate a decoded mesh object.
- the system of clause 19 wherein implementing patch mesh subdivision to generate the new face list and the new vertex list includes: a triangle centroid implementation by placing a point in the middle of each triangle which is used to generate new triangles by drawing lines from the point to each vertex of the triangle.
- the system of clause 20 wherein a location of the point is obtained using the high resolution depth map.
- implementing patch mesh subdivision to generate the new face list and the new vertex list includes: an edge midpoint implementation by placing a point in the middle of each edge of a triangle and drawing a line from each point to another middle point of the triangle to generate four triangles.
- a three dimensional location of each point is obtained using the high resolution depth map.
- implementing patch mesh subdivision to generate the new face list and the new vertex list includes: a vertex split implementation by removing edges, adding a vertex and drawing lines from existing vertices to the added vertex.
- the system of clause 24 wherein the added vertex is placed based on three dimensional location information from the high resolution depth map.
Priority Applications (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202380013075.0A CN117730344A (en) | 2022-03-25 | 2023-03-06 | mesh patch subdivision |
| KR1020247030487A KR20240148896A (en) | 2022-03-25 | 2023-03-06 | Mesh Patch Refinement |
| EP23712604.0A EP4463826A1 (en) | 2022-03-25 | 2023-03-06 | Mesh patch sub-division |
| JP2024556711A JP2025510243A (en) | 2022-03-25 | 2023-03-06 | Mesh Patch Subdivision |
Applications Claiming Priority (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202263269906P | 2022-03-25 | 2022-03-25 | |
| US63/269,906 | 2022-03-25 | ||
| US17/987,836 US12315081B2 (en) | 2022-03-25 | 2022-11-15 | Mesh patch sub-division |
| US17/987,836 | 2022-11-15 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2023180841A1 true WO2023180841A1 (en) | 2023-09-28 |
Family
ID=85725007
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/IB2023/052104 Ceased WO2023180841A1 (en) | 2022-03-25 | 2023-03-06 | Mesh patch sub-division |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2023180841A1 (en) |
Citations (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2021116838A1 (en) * | 2019-12-10 | 2021-06-17 | Sony Group Corporation | Mesh compression via point cloud representation |
- 2023-03-06: WO application PCT/IB2023/052104 filed, published as WO2023180841A1 (status: Ceased)
Patent Citations (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2021116838A1 (en) * | 2019-12-10 | 2021-06-17 | Sony Group Corporation | Mesh compression via point cloud representation |
Non-Patent Citations (4)
| Title |
|---|
| "Subdivision Methods for Geometric Design", 1 November 2001, ELSEVIER, ISBN: 978-1-55860-446-9, article WARREN JOE ET AL: "Chapter 7 - Averaging Schemes for Polyhedral Meshes", pages: 198 - 238, XP093044881 * |
| DANILLO B GRAZIOSI (SONY) ET AL: "[V-CG] Sony's Dynamic Mesh Coding Call for Proposal Response", no. m59284, 25 March 2022 (2022-03-25), XP030300727, Retrieved from the Internet <URL:https://dms.mpeg.expert/doc_end_user/documents/138_OnLine/wg11/m59284-v1-m59284.zip> [retrieved on 20220325] * |
| FARAMARZI ESMAEIL ET AL: "Mesh Coding Extensions to MPEG-I V-PCC", 2020 IEEE 22ND INTERNATIONAL WORKSHOP ON MULTIMEDIA SIGNAL PROCESSING (MMSP), 21 September 2020 (2020-09-21), pages 1 - 5, XP055837185, DOI: 10.1109/MMSP48831.2020.9287057 * |
| ROSSIGNAC J: "Compressed progressive meshes", IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS, IEEE, USA, vol. 6, no. 1, 1 January 2000 (2000-01-01), pages 79 - 93, XP008113948, ISSN: 1077-2626, DOI: 10.1109/2945.841122 * |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 23712604; Country of ref document: EP; Kind code of ref document: A1 |
| | WWE | Wipo information: entry into national phase | Ref document number: 202380013075.0; Country of ref document: CN |
| | WWE | Wipo information: entry into national phase | Ref document number: 2023712604; Country of ref document: EP |
| | ENP | Entry into the national phase | Ref document number: 2023712604; Country of ref document: EP; Effective date: 20240816 |
| | ENP | Entry into the national phase | Ref document number: 20247030487; Country of ref document: KR; Kind code of ref document: A |
| | WWE | Wipo information: entry into national phase | Ref document number: 1020247030487; Country of ref document: KR |
| | WWE | Wipo information: entry into national phase | Ref document number: 2024556711; Country of ref document: JP |
| | NENP | Non-entry into the national phase | Ref country code: DE |