
WO2021173489A1 - Apparatus, method, and system for providing a three-dimensional texture using uv representation - Google Patents


Info

Publication number
WO2021173489A1
Authority
WO
WIPO (PCT)
Prior art keywords
map
representation
subject
input image
geometry
Prior art date
Legal status
Ceased
Application number
PCT/US2021/019036
Other languages
French (fr)
Inventor
Peter Oluwanisola FASOGBON
Goutham RANGU
Francesco Cricri
Emre B. Aksu
Current Assignee
Nokia Technologies Oy
Nokia of America Corp
Original Assignee
Nokia Technologies Oy
Nokia of America Corp
Priority date
Filing date
Publication date
Application filed by Nokia Technologies Oy and Nokia of America Corp
Publication of WO2021173489A1


Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/04 Texture mapping

Definitions

  • Three-dimensional (3D) models (e.g., human or object models) can be used for various applications including, but not limited to, virtual reality, video editing, virtual clothes try-on, realistic 3D animations, and/or the like.
  • Historical processes for generating textured 3D models often require considerable effort and resources to produce detailed and realistic models.
  • an apparatus comprises at least one processor, and at least one memory including computer program code for one or more computer programs, the at least one memory and the computer program code configured to, with the at least one processor, cause, at least in part, the apparatus to receive at least one input image depicting a subject.
  • the at least one image comprises a standard representation of the subject.
  • the apparatus is further caused to determine at least one depth representation of the at least one input image.
  • the apparatus further causes, at least in part, a creation of a UV map representation of the subject based on the at least one input image and the at least one depth representation.
  • the UV map representation includes, for instance, a UV map-geometry representation of a three-dimensional shape of the subject and a UV map-visual representation of at least one visual characteristic of the at least one subject.
  • the apparatus further causes, at least in part, a training of a machine learning algorithm to infer the UV map representation, the UV map-geometry representation, the UV map-visual representation, or a combination thereof based on the standard representation.
  • a method comprises receiving at least one input image depicting a subject.
  • the at least one image comprises a standard representation of the subject.
  • the method also comprises determining at least one depth representation of the at least one input image.
  • the method further comprises causing, at least in part, a creation of a UV map representation of the subject based on the at least one input image and the at least one depth representation.
  • the UV map representation includes, for instance, a UV map-geometry representation of a three-dimensional shape of the subject and a UV map-visual representation of at least one visual characteristic of the at least one subject.
  • the method further comprises causing, at least in part, a training of a machine learning algorithm to infer the UV map representation, the UV map-geometry representation, the UV map-visual representation, or a combination thereof based on the standard representation.
  • a non-transitory computer-readable storage medium carries one or more sequences of one or more instructions which, when executed by one or more processors, cause, at least in part, an apparatus to receive at least one input image depicting a subject.
  • the at least one image comprises a standard representation of the subject.
  • the apparatus is further caused to determine at least one depth representation of the at least one input image.
  • the apparatus further causes, at least in part, a creation of a UV map representation of the subject based on the at least one input image and the at least one depth representation.
  • the UV map representation includes, for instance, a UV map-geometry representation of a three-dimensional shape of the subject and a UV map-visual representation of at least one visual characteristic of the at least one subject.
  • an apparatus comprises means for receiving at least one input image depicting a subject.
  • the at least one image comprises a standard representation of the subject.
  • the apparatus also comprises means for determining at least one depth representation of the at least one input image.
  • the apparatus further comprises means for causing, at least in part, a creation of a UV map representation of the subject based on the at least one input image and the at least one depth representation.
  • the UV map representation includes, for instance, a UV map-geometry representation of a three-dimensional shape of the subject and a UV map-visual representation of at least one visual characteristic of the at least one subject.
  • the apparatus further comprises means for causing, at least in part, a training of a machine learning algorithm to infer the UV map representation, the UV map-geometry representation, the UV map-visual representation, or a combination thereof based on the standard representation.
  • a method comprising facilitating a processing of and/or processing (1) data and/or (2) information and/or (3) at least one signal, the (1) data and/or (2) information and/or (3) at least one signal based, at least in part, on (or derived at least in part from) any one or any combination of methods (or processes) disclosed in this application as relevant to any embodiment of the invention.
  • a method comprising facilitating access to at least one interface configured to allow access to at least one service, the at least one service configured to perform any one or any combination of network or service provider methods (or processes) disclosed in this application.
  • a method comprising facilitating creating and/or facilitating modifying (1) at least one device user interface element and/or (2) at least one device user interface functionality, the (1) at least one device user interface element and/or (2) at least one device user interface functionality based, at least in part, on data and/or information resulting from one or any combination of methods or processes disclosed in this application as relevant to any embodiment of the invention, and/or at least one signal resulting from one or any combination of methods (or processes) disclosed in this application as relevant to any embodiment of the invention.
  • a method comprising creating and/or modifying (1) at least one device user interface element and/or (2) at least one device user interface functionality, the (1) at least one device user interface element and/or (2) at least one device user interface functionality based at least in part on data and/or information resulting from one or any combination of methods (or processes) disclosed in this application as relevant to any embodiment of the invention, and/or at least one signal resulting from one or any combination of methods (or processes) disclosed in this application as relevant to any embodiment of the invention.
  • the methods can be accomplished on the service provider side or on the mobile device side or in any shared way between service provider and mobile device with actions being performed on both sides.
  • An apparatus comprising means for performing a method of the claims.
  • FIG. 1 is a diagram of a system capable of providing 3D textures using a UV map representation, according to one example embodiment
  • FIG. 2 is a diagram illustrating an example of a general process for generating a textured model, according to one example embodiment
  • FIGs. 3A and 3B are diagrams illustrating example inaccuracies encountered when generating textured models, according to one example embodiment
  • FIGs. 4A, 4B and 4C are diagrams illustrating examples of a standard map representation and a UV map representation, according to one example embodiment
  • FIG. 5 is a flowchart of a process for training a machine learning algorithm to estimate UV map representations from input images, according to one example embodiment
  • FIG. 6 is a diagram illustrating example UV map representations of a training dataset, according to one example embodiment
  • FIG. 7 is a flowchart of a process for using a trained machine learning algorithm to estimate a UV map representation, according to one example embodiment
  • FIG. 8 is a diagram of hardware that can be used to implement an embodiment
  • FIG. 9 is a diagram of a chip set that can be used to implement an embodiment.
  • FIG. 10 is a diagram of a mobile terminal (e.g., handset or vehicle or part thereof) that can be used to implement an embodiment.
  • FIG. 1 is a diagram of a system capable of providing 3D textures using a UV map representation, according to one example embodiment.
  • the automatic generation of fully textured 3D models of various subjects is important for various applications, including, but not limited to, virtual reality, video editing, virtual clothes try-on, video conferencing, realistic 3D animations, etc.
  • the process of creating a detailed and realistic human texture can be resource intensive and technically challenging.
  • the creation of a textured 3D model generally can be separated into a process for generating the 3D model (e.g., a 3D mesh) and a process for determining the visual characteristics (e.g., textures, bump maps, etc.) that are to be projected onto the 3D model or mesh to produce the fully textured or rendered 3D model.
  • FIG. 2 is a diagram illustrating an example of a general process for generating a textured model, according to one example embodiment.
  • one or more images 201 (e.g., from multiple views) depicting a human subject can be captured as input.
  • a 3D model or mesh representing the human subject can then be extracted from the images 201.
  • the texture or visual characteristic of the human subject can also be extracted from the images 201.
  • the texture can be rendered onto the extracted 3D model or mesh.
  • the extracted texture can then be applied onto the 3D model or mesh to create the textured model 203.
  • the extraction of the 3D mesh and the texture (or other visual characteristics) enables the system 100 to manipulate the resulting textured model 203 in different ways.
  • the textured model 203 can be transformed into different variations by manipulating the applied textures.
  • This enables the rendering of different variations 205a-205d of the textured model 203.
  • Variation 205a for instance, renders a texture depicting different clothes on the 3D mesh of the subject.
  • Variation 205b can render a texture of a completely different subject onto the 3D mesh of the original subject.
  • variations 205c and 205d can render the textured model from different views (e.g., the original textured model 203 is rendered from the front view, while variations 205c and 205d are rendered from respective side views).
  • accurate texture models depend on the accurate extraction of human pose and shape parameters (e.g., extraction of the 3D mesh) of the subject from input images (e.g., extraction of the human pose and shape parameters from the images 201).
  • Inaccurate parameters can result in issues such as, but not limited to, texture bleeding, ghosting, misalignment, spatial incoherence, illumination incoherence, and/or the like.
  • Such inaccuracies can arise, for instance, from low input image resolution (e.g., due to subject distance from the camera).
  • FIGs. 3A and 3B are diagrams illustrating example inaccuracies encountered when generating textured models, according to one example embodiment.
  • inaccuracies 303a-303c result from misalignment between the extracted textures and the underlying 3D mesh, as well as some of the other issues described above, producing a textured model that represents the original subject less accurately.
  • the textured model 311 of FIG. 3B likewise exhibits inaccuracies 313a-313c that result from various texture generation issues.
  • texture generation is an essential task for reconstructing realistic 3D models because the texture represents crucial information, e.g., for describing and/or identifying human or object instances, especially facial detail.
  • Some historical techniques focus on combining texture fragments from different views such as by blending multiple images into textures with various weighted average strategies and cues from body parts segmentation.
  • these methods can be sensitive to noise introduced by the background and by errors in human body/object pose and shape estimation.
  • the output textures can suffer from blurring and ghosting.
  • Other approaches can project images to appropriate vertices and faces. These approaches alleviate the blurring and ghosting problems, but can be vulnerable to texture bleeding.
  • the system 100 of FIG. 1 introduces a capability to provide an end-to-end machine learning-based system to generate fully textured 3D models (e.g., human and/or object models) using coarse 3D shape cues.
  • the system 100 predicts the complete UV map representation 105 of the texture map or map of any other visual characteristics of the subject (e.g., UV map-visual representation 107) with respect to a UV map-geometry 109 of the subject using a supervised network 111.
  • the system 100 can process input images 101 to develop representations in both standard and UV map space.
  • the system 100 trains the network 111 (e.g., an encoder-decoder network) to directly regress the UV map representation 105 from the network inputs (e.g., color image 101 and depth image 103).
  • the use of a UV map representation 105 enables the system 100 to turn the hard 3D inference problem into an image-to-image translation, which is amenable to available neural networks (e.g., convolutional neural networks), by encoding geometry and color on a common UV map representation 105.
  • the predicted UV map-visual representation 107 can be applied to a 3D model 113 to generate a textured model 115.
  • the system 100 provides an end-to-end supervised network 111 (e.g., based on a machine learning algorithm such as but not limited to a neural network or equivalent) to learn a UV map representation 105 of a subject that embeds both the subject’s geometry via a UV map-geometry representation 109 and the subject’s visual characteristics (e.g., texture, bump map, etc.) via a UV map-visual representation 107 (e.g., also referred to as a UV map-color when the visual characteristic being represented is a texture of the subject).
  • the trained supervised network 111 takes a standard representation 117 of the subject (e.g., comprising the color image 101 and depth image 103) to directly infer the UV map-representation 105 of the subject.
  • the system 100 uses the UV map representation 105 to represent both the geometry and the texture (or other visual characteristics) of a full subject (e.g., a full human body or other object).
  • Here, “full” refers to a complete 3D model of the subject (e.g., the entire human body) or a complete 3D model of a part of the subject (e.g., a hand, head, torso, etc. of the human body).
  • the system 100 denotes the standard map representation 117 (also referred to as a standard map) and the UV map representation 105 (also referred to as a UV map) in discussing the embodiments described herein.
  • the standard map 117 contains the input color image 101 and a depth image 103.
  • This depth image 103 can be generated from the shape and pose parameters (e.g., human and/or object shape and pose) learned from the input image 101 using any means known in the art.
  • the system 100 can back-project face visibility of the estimated 3D vertices (e.g., estimated from the depth image 103) to create the UV map representation 105 comprising the UV map-geometry 109 and UV map-visual or map-color 107.
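  • For illustration only, the following is a minimal NumPy sketch (not part of the patent disclosure) of this back-projection step: visible mesh vertices are projected into the input image to sample colors, and the sampled (u, v, w) and (r, g, b) values are written into the UV map-geometry and UV map-color planes at each vertex’s UV coordinate. The orthographic projection, per-vertex splatting, and all function and variable names are assumptions; a full implementation would rasterize mesh faces with the estimated camera model.

```python
import numpy as np

def build_uv_maps(vertices, vertex_uvs, image, visible_mask, uv_size=256):
    """Minimal sketch: splat visible mesh vertices into UV map-geometry and
    UV map-color planes.  Assumes vertices are already in normalized image
    coordinates (x, y in [0, 1]) with z as depth."""
    h, w, _ = image.shape
    uv_geometry = np.zeros((uv_size, uv_size, 3), dtype=np.float32)  # (u, v, w)
    uv_color = np.zeros((uv_size, uv_size, 3), dtype=np.float32)     # (r, g, b)

    # Project vertices to pixel coordinates (orthographic assumption).
    px = np.clip((vertices[:, 0] * (w - 1)).astype(int), 0, w - 1)
    py = np.clip((vertices[:, 1] * (h - 1)).astype(int), 0, h - 1)

    # UV coordinates of each vertex on the atlas.
    tu = np.clip((vertex_uvs[:, 0] * (uv_size - 1)).astype(int), 0, uv_size - 1)
    tv = np.clip((vertex_uvs[:, 1] * (uv_size - 1)).astype(int), 0, uv_size - 1)

    for i in np.flatnonzero(visible_mask):
        # Geometry channels: normalized pixel location (u, v) plus depth (w).
        uv_geometry[tv[i], tu[i]] = (px[i] / (w - 1), py[i] / (h - 1), vertices[i, 2])
        # Color channels: appearance sampled from the input image.
        uv_color[tv[i], tu[i]] = image[py[i], px[i]]

    return uv_geometry, uv_color

# Toy usage with random data (shapes only; not a real mesh).
verts = np.random.rand(6890, 3)          # e.g., an SMPL-sized vertex count
uvs = np.random.rand(6890, 2)            # per-vertex UV coordinates
img = np.random.rand(256, 256, 3)        # input color image 101
vis = np.random.rand(6890) > 0.5         # visibility of each vertex
geo, col = build_uv_maps(verts, uvs, img, vis)
```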
  • the supervised network 111 can incorporate multiple views where input images 101 from the standard map representation 117 are of a subject in motion (e.g., 360 degree motion). This can provide a way to bypass getting accurate pose parameters over different views, and also lighting will not need to be constant over these views.
  • the supervised network 111 can also be trained to infer or predict the UV map representation 105 over time.
  • the supervised network 111 can infer changes in the UV map-geometry representation 109 and/or UV map-visual representation 107 as a sequence over time to effectively infer a movement of the subject over time.
  • the predicted sequences of UV map representations 105 can then be rendered on a 3D model that is morphed to reflect the sequence of geometries and/or textures to create a video or other media depicting the predicted movement of the subject in a realistic manner.
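  • As a hedged sketch of this temporal use case (not the patented implementation), a trained network could simply be run frame by frame over a captured sequence to collect UV map representations over time. The 6-channel layout and the stand-in network below are assumptions for illustration.

```python
import torch

def infer_uv_sequence(network, color_frames, depth_frames):
    """Run a trained network over a sequence of frames to obtain a sequence
    of UV map representations.  Each frame is assumed to be a 3 x H x W
    tensor; the network maps 6-channel standard maps to 6-channel UV maps."""
    uv_maps = []
    network.eval()
    with torch.no_grad():
        for color, depth in zip(color_frames, depth_frames):
            standard_map = torch.cat([color, depth], dim=0).unsqueeze(0)  # 1 x 6 x H x W
            uv_maps.append(network(standard_map).squeeze(0))              # 6 x H x W
    return uv_maps

# Stand-in network and dummy frames, for shape checking only.
net = torch.nn.Conv2d(6, 6, kernel_size=1)
frames = [torch.rand(3, 256, 256) for _ in range(4)]
seq = infer_uv_sequence(net, frames, frames)
```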
  • FIGs. 4A-4C are diagrams illustrating examples of a standard map representation 117 and a UV map representation 105, according to one example embodiment.
  • the standard map 117 includes a color image 101 and a depth image 103 extracted from the color image or from other equivalent means for determining or extracting depth information associated with the color image 101.
  • the color image 101 is a standard image captured using a camera sensor of a user equipment (UE) 119.
  • the system 100 uses this standard map 117 as an input to the supervised network 111 to predict or infer the UV map representation of the subject that embeds both the texture/color (or other visual characteristics) of the subject along with the 3D surface geometry of the subject.
  • the UV map representation 105 is a two-dimensional representation (e.g., based on a U-axis and a V-axis) of the 3D mesh or model of the subject where the 3D mesh has been “unwrapped” from the 3D shape of the subject and flattened on a plane represented by the U and V axes of the UV map representation 105.
  • the system 100 is able to infer the full visibility or complete texture (e.g., UV map-color 107) of the subject from the partial visibility of the standard map representation 117 of the subject.
  • FIGs. 4B and 4C illustrate examples in which different views in the standard map 117 input can still result in corresponding full UV map representations 105.
  • the standard map includes a color image 401 and depth image 403 that depict the subject from a front facing view.
  • the system 100 can use any process known in the art (e.g., the Skinned Multi-Person Linear Model (SMPL)) to extract a 3D mesh 405 from the coarse depth information in the depth image 403 and/or color image 401.
  • the supervised network 111 uses the color image 401 and depth image 403 to infer a UV map representation 105 comprising a UV map-geometry 407 and UV map-visual/map-color 409 that provides complete visibility of the textures from all surfaces of the subject.
  • the supervised network 111 is trained using an adversarial loss to enable accurate inference of the non-visible areas of the subject (e.g., the back).
  • FIG. 4C provides a similar example of the standard map 117 comprising a color image 411 and depth image 413, but in this example, the color image 411 depicts the subject in a side-facing view. Thus, the subject’s right side is visible but not the left side.
  • the 3D mesh extraction can still be used to infer a 3D mesh 415 of the subject.
  • the network 111 can infer the complete UV map 105 with the UV map-geometry 417 and UV map-visual/map-color 419 depicting the right side and the left side of the subject.
  • the embodiments of the UV map representation 105 described herein embed both geometry and texture/color/other visual characteristics.
  • the UV map-geometry 109 encoding the geometry of the subject contains three channels. The first two channels embed the coarse shape representation while the third channel embeds the depth representation.
  • the UV map-color 107 (or UV map-visual 107) encodes the true three channel pixel value corresponding to object or human body in the standard map 117. It is noted that the numbers and types of channels described above are provided by way of illustration and not as limitations.
  • Other numbers and types of channels can be used to represent the geometry or visual characteristics of a subject depending on the types of geometries/coordinate systems used and/or the types of visual characteristics being represented. For example, three color channels can be used to represent a full-color texture pixel, while one channel can be used to represent a surface or bump height of a pixel.
  • There is a mapping between the UV map-color/map-visual 107 and the original 3D vertices so that the resulting UV map-color/map-visual 107 can be applied to fully texture a human or object model.
  • the advantages of using the UV map representation 105 rather than the standard map representation 117 include, but are not limited to: (i) it allows generation of realistic high resolution texture regardless of the resolution of the input color image; and (ii) it allows the system 100 to simplify the problem of human texture generation to “2D-2D” space rather than “3D-2D” space. This is possible thanks to the representation that maps a partial visibility of the input image to a full visibility on the 2D plane represented as the UV map 105.
  • the network 111 is trained to minimize a weighted loss on the UV map 105 and maximize similarity between ground truth UV maps and estimated UV maps.
  • the inputs are the standard map color image 101 and depth image 103 which are used to predict the UV map outputs.
  • the depth image 103 and UV map-geometry 109 help the network 111 to capture global human or object shape and better understand the body or object features while eliminating the influence of background variation.
  • the embodiments described herein are also fast to compute (e.g., enabling higher frame rate textured 3D animation) using only coarse estimates of the input 3D shapes and pose parameters.
  • the system 100 includes one or more components that can perform the various example embodiments of providing 3D textures using UV map representations.
  • a UE 119 can include a texture client 121 to generate 3D textures according to the embodiments described herein.
  • the system 100 can include a texture platform 123 including a machine learning system 125 to generate 3D textures according to the embodiments described herein alone or in combination with the UE 119 and/or texture client 121, for instance, over a communication network 133.
  • the above presented modules and components of the system 100 can be implemented in circuitry, hardware, firmware, software, or a combination thereof. It is contemplated that the functions of these components may be combined or performed by other components of equivalent functionality.
  • the texture client 121 and/or texture platform 123 may be implemented as a module of any other component of the system 100 such as, but not limited to, a services platform 127, one or more services 129a-129n of the services platform 127, and/or content providers 131a-131m that use the UV map representation 105 outputs.
  • the texture client 121 and/or texture platform 123 may be implemented as a cloud-based service, local service, native application, or combination thereof. The functions of these modules are discussed with respect to FIGs. 3-9 below.
  • FIG. 5 is a flowchart of a process for training a machine learning algorithm to estimate UV map representations from input images, according to one example embodiment.
  • the texture client 121 and/or texture platform 123 may perform one or more portions of the process 500 and may be implemented in, for instance, a chip set including a processor and a memory as shown in FIG. 9.
  • texture client 121 and/or texture platform 123 can provide means for accomplishing various parts of the process 500, as well as means for accomplishing embodiments of other processes described herein in conjunction with other components of the system 100.
  • Although the process 500 is illustrated and described as a sequence of steps, it is contemplated that various embodiments of the process 500 may be performed in any order or combination and need not include all of the illustrated steps.
  • the system 100 receives at least one input image (e.g., color image 101) depicting a subject of interest (e.g., the human or object to be modeled and textured).
  • the inputs to the network 111 are the standard map representation 117 inputs: e.g., color images 101 and/or depth images 103.
  • the output of the network 111 is the UV map representation 105 including, e.g., the UV map-geometry representation 109 and/or the UV map-color/map-visual representation 107.
  • It is noted that the system 100 need not output the UV map-geometry 109; the UV map-geometry 109 encodes the shape and geometry of the subject to help the network learn better during training.
  • the system 100 determines at least one depth representation (e.g., a depth image 103) of the at least one input image of step 501.
  • the depth image 103 represents the pixel coordinate values (u, v) and depth (w) of the estimated 3D mesh that has been transformed to align with the input color image 101.
  • The (u, v, w) coordinate system is provided by way of illustration and not as a limitation; it is contemplated that any coordinate system can be used to indicate the depth information.
  • the transformation of the 3D mesh to the input color image 101 is based on 3D model parameters estimated from object-specific models. For example, human subjects can be modeled according to SMPL (Skinned Multi-Person Linear Model) or equivalent.
  • the UV map-geometry 109 of a human subject can be based on the SMPL parameters estimated from image joint fitting.
  • the visible faces of the 3D points from a viewing camera of the input image 101 can be encoded in the depth image 103.
  • This representation ensures that the depth image 103 incorporates both shape and geometry information to help the network better learn the texture even if the human or object depth image silhouette does not perfectly align with the human color image silhouette. Such misalignment between the depth image 103 and color image 101 can otherwise result in the inaccuracies in the textured model described above.
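  • For illustration only, the following is a minimal sketch (not from the patent) of producing a coarse depth image by splatting the vertices of an estimated 3D mesh onto the image plane with a z-buffer so that only the visible (closest) surface is encoded. The orthographic assumption and the function name are hypothetical; a full implementation would rasterize triangles rather than vertices.

```python
import numpy as np

def render_coarse_depth(vertices, image_shape):
    """Splat mesh vertices into a depth image aligned with the color image.
    Vertices are assumed to be in normalized image coordinates
    (x, y in [0, 1]) with z as distance from the camera."""
    h, w = image_shape
    depth = np.full((h, w), np.inf, dtype=np.float32)
    px = np.clip((vertices[:, 0] * (w - 1)).astype(int), 0, w - 1)
    py = np.clip((vertices[:, 1] * (h - 1)).astype(int), 0, h - 1)
    z = vertices[:, 2]
    for x, y, d in zip(px, py, z):
        if d < depth[y, x]:          # keep the closest surface (visible face)
            depth[y, x] = d
    depth[np.isinf(depth)] = 0.0     # background pixels get zero depth
    return depth

coarse_depth = render_coarse_depth(np.random.rand(6890, 3), (256, 256))
```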
  • the system 100 creates a UV map representation of the subject based on the at least one input image and the at least one depth representation.
  • the UV map representation includes, at least in part, a UV map-geometry representation of a three- dimensional shape of the subject and a UV map-visual representation of at least one visual characteristic of the at least one subject.
  • any ground-truth 3D model can be transformed to its corresponding UV map from the standard map input derived earlier.
  • the UV map is created by back-projecting the faces of the estimated 3D vertices onto the input image to interpolate the corresponding color appearance.
  • a pixel on the input depth image 103 can then be represented as P(u, v, w) in the UV map-geometry 109, while the input color image 101 is represented as P(r, g, b) in the UV map-color 107.
  • the combination of these individual UV maps 109 and 107 on both the geometric and the color parts is Pi,j(u, v, w, r, g, b). From this representation, one can see that (r, g, b) encodes the color intensity value from the standard map 117 while (u, v, w) provides the location information in 3D space for each pixel or location i, j.
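  • A minimal sketch of this 6-channel layout (the channel order (u, v, w, r, g, b) and the array names are assumptions for illustration, not the patented format):

```python
import numpy as np

uv_geometry = np.zeros((256, 256, 3), dtype=np.float32)    # P(u, v, w)
uv_color = np.zeros((256, 256, 3), dtype=np.float32)       # P(r, g, b)
uv_map = np.concatenate([uv_geometry, uv_color], axis=-1)  # Pi,j(u, v, w, r, g, b)

color_image = np.zeros((256, 256, 3), dtype=np.float32)    # standard color image 101
depth_image = np.zeros((256, 256, 3), dtype=np.float32)    # (u, v, w) depth image 103
standard_map = np.concatenate([color_image, depth_image], axis=-1)  # network input

assert uv_map.shape == standard_map.shape == (256, 256, 6)
```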
  • the system 100 causes a training of a machine learning algorithm or model (e.g., the supervised network 111) to infer the UV map representation, the UV map-geometry representation, UV map-visual representation, or a combination thereof based on the standard representation.
  • the machine learning system 125 of the texture platform 123 can incorporate a supervised learning model (e.g., a logistic regression model, Random Forest model, and/or any equivalent model).
  • the machine learning system 125 can use a learner module that feeds feature sets (e.g., features extracted from the standard map 117 inputs such as the color image 101 and depth image 103) from the training data set into the machine learning model to compute a predicted matching feature (e.g., UV map representations 105) using an initial set of model parameters.
  • the learner module compares the predicted matching probability and the predicted feature to the ground truth data (e.g., the manually annotated feature labels) in the training data set for each observation (e.g., image) used for training.
  • the learner module then computes an accuracy of the predictions for the initial set of model parameters using one or more loss functions.
  • the learner module incrementally adjusts the model parameters until the model generates predictions at a desired or configured level of accuracy with respect to the manually annotated labels in the training data (e.g., the ground truth data).
  • a “trained” feature prediction model is a classifier with model parameters adjusted to make accurate predictions with respect to the training data set.
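  • As a hedged sketch of the learner-module loop just described (a generic supervised training loop, not the patented implementation; the loader, loss function, and names are assumptions):

```python
import torch

def train_epoch(network, loader, loss_fn, optimizer):
    """Feed standard-map inputs, compare predicted UV maps against ground
    truth, and incrementally adjust the model parameters.  The loader is
    assumed to yield (standard_map, ground_truth_uv_map) batches."""
    network.train()
    for standard_map, gt_uv_map in loader:
        pred_uv_map = network(standard_map)        # predicted UV map representation
        loss = loss_fn(pred_uv_map, gt_uv_map)     # accuracy of the prediction
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# Toy usage with random tensors standing in for a real training dataset.
net = torch.nn.Conv2d(6, 6, kernel_size=1)
data = [(torch.rand(2, 6, 64, 64), torch.rand(2, 6, 64, 64)) for _ in range(3)]
train_epoch(net, data, torch.nn.L1Loss(), torch.optim.Adam(net.parameters(), lr=1e-3))
```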
  • the network 111 of the system 100 adopts an encoder-decoder architecture that maps 256 x 256 x 6 standard map 117 inputs to 256 x 256 x 6 UV map 105 outputs.
  • the architecture described in this embodiment is provided by way of illustration and not as a limitation. It is contemplated that any other equivalent architecture can be used, including using a smaller or larger grid or number of channels.
  • the system 100 can use a neural network (e.g., the first three layers of a VGGNet or equivalent) for the encoder. The number of layers and/or neurons to use in the network can be determined by considering the balance between performance and speed. In one example, the system 100 can use additional layers, such as but not limited to a designated number (e.g., four) of consecutive upsampling and convolutional layers, for the decoder.
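  • For illustration only, a PyTorch sketch of such an encoder-decoder that regresses a 256 x 256 x 6 UV map from a 256 x 256 x 6 standard map. The block widths and the number of stages (four down, four up) are illustrative assumptions and do not reproduce the exact configuration described above.

```python
import torch
import torch.nn as nn

class UVMapNet(nn.Module):
    """Sketch of an encoder-decoder mapping 6-channel standard maps to
    6-channel UV maps (3 geometry + 3 color channels)."""

    def __init__(self):
        super().__init__()
        def down(cin, cout):
            return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                                 nn.ReLU(inplace=True), nn.MaxPool2d(2))
        def up(cin, cout):
            return nn.Sequential(nn.Upsample(scale_factor=2, mode="bilinear",
                                             align_corners=False),
                                 nn.Conv2d(cin, cout, 3, padding=1),
                                 nn.ReLU(inplace=True))
        self.encoder = nn.Sequential(down(6, 64), down(64, 128),
                                     down(128, 256), down(256, 256))
        self.decoder = nn.Sequential(up(256, 256), up(256, 128),
                                     up(128, 64), up(64, 64))
        self.head = nn.Conv2d(64, 6, 1)        # UV map-geometry + UV map-color

    def forward(self, x):                      # x: B x 6 x 256 x 256
        return self.head(self.decoder(self.encoder(x)))

net = UVMapNet()
out = net(torch.randn(1, 6, 256, 256))         # -> torch.Size([1, 6, 256, 256])
```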
  • the system 100 can use a weighted multi-task loss that the network 111 tries to minimize on the UV maps 105 using standard maps 117 as inputs.
  • the weighted multi-task loss can apply different loss functions and/or different weights for the loss functions differentially for the individual UV map-geometry 109 and UV map-visual/map-color 107, or for different parts of the subject (e.g., when the subject is a human, for different parts of the body).
  • the system 100 can apply a loss that favors smoothing for the UV map-geometry 109 to provide smoother 3D mesh models, and then apply a loss that favors maintaining high level detail for the UV map-visual 107 (e.g., to provide for higher detail textures).
  • the system 100 can employ a local weighting approach on different human body parts or parts of the object/subject of interest present on the UV map 105 for either the UV map-geometry 109 and/or UV map-visual 107.
  • the system 100 can employ both local and sample weighting strategies.
  • the samples where the face is clearly visible can be given less weight compared to the ones where the face is partly occluded in the input image. This helps, for instance, the network 111 to learn more from hard samples (e.g., learn to infer missing parts of occluded faces more accurately).
  • the visibility map (or 3D mesh) provided in 405 and 415 for example can be used to tune the network to learn more for missing parts.
  • The total loss Lt is summarized in equations (1-6) below, where Lg is the UV map-geometry loss and Lc is the UV map-color loss.
  • For the geometry, the system 100 can use a weighted L1 loss with a total variation regularizer Lr. Based on this, the overall objective to minimize for the geometry part of the UV map 105 is described in equations (2-4). This minimization is done between the predicted UV map-geometry values (u, v, w) and the corresponding ground truth geometry values (u, v, w).
  • the mask is used to adjust the weight of each 3D point according to the human body or object part to which it belongs in the UV map 105. This can help to ensure that the network 111 does not over-fit to the body/object parts with larger areas relative to other parts. In other words, this allows the system 100 to balance the supervision applied to different body/object parts on the UV map 105.
  • a total variation regularizer Lr can be employed to encourage spatial smoothness of the UV map-geometry 109. For example, given that Rk defines a human body or object part region on the UV map 105, αk adjusts smoothing constraints on different body or object parts.
  • the parameter λ can be set through validation.
  • For the color, the system 100 can use an L1 loss between the predicted and ground truth UV map-color values (r, g, b), as shown in equations (5-6). A sample weight is used to train the loss and can be adapted to the visibility mask representing the human face or other body or object part in the input image 101. Accordingly, it is the sample weight corresponding to the visibility of the face or other designated body or object part present in the training sample. To compute the sample weights, the system 100 can use the visibility of the designated part (e.g., the human face) in the input training images, calculated from the normals of the 3D mesh faces (like 3D mesh 405 or 415 in FIGs. 4B and 4C) from the camera viewpoints.
  • ω is a mask that is used to adjust the weight of each 3D point according to the body or object part to which it belongs in the UV map 105.
  • the mask can give higher weights to the part representing the face or other designated body or object part of interest.
  • the weights can be defined so that loss optimization focuses more on the face and the body compared to the legs and hands (or any other designated body or object parts).
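  • For illustration only, a hedged sketch of such a weighted multi-task loss: a part-weighted L1 term with a total variation regularizer on the geometry channels, plus a sample-weighted L1 term on the color channels. The equations (1-6) referenced above are not reproduced here; the channel layout, `part_mask`, `sample_weight`, and `lam` are illustrative assumptions.

```python
import torch

def total_variation(x):
    """Spatial smoothness term over B x C x H x W geometry channels."""
    return (x[..., :, 1:] - x[..., :, :-1]).abs().mean() + \
           (x[..., 1:, :] - x[..., :-1, :]).abs().mean()

def uv_map_loss(pred, gt, part_mask, sample_weight, lam=0.1):
    """pred/gt: B x 6 x H x W UV maps (geometry first, color last);
    part_mask: B x 1 x H x W per-pixel body/object part weights;
    sample_weight: (B,) per-sample weights reflecting face visibility;
    lam: smoothing weight set through validation."""
    pred_geo, pred_col = pred[:, :3], pred[:, 3:]
    gt_geo, gt_col = gt[:, :3], gt[:, 3:]

    # Geometry loss Lg: part-weighted L1 plus total variation regularizer Lr.
    l_g = (part_mask * (pred_geo - gt_geo).abs()).mean() + lam * total_variation(pred_geo)

    # Color loss Lc: sample-weighted L1 on the UV map-color channels.
    per_sample = (pred_col - gt_col).abs().mean(dim=(1, 2, 3))
    l_c = (sample_weight * per_sample).mean()

    return l_g + l_c   # total loss Lt

pred = torch.rand(2, 6, 64, 64)
gt = torch.rand(2, 6, 64, 64)
loss = uv_map_loss(pred, gt, part_mask=torch.ones(2, 1, 64, 64),
                   sample_weight=torch.tensor([1.0, 2.0]))
```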
  • Because the input color image 101 of the standard map 117 generally depicts only the part of the subject that is in the line of sight of the camera, the color image 101 will not have data representing any portion of the subject that is occluded (e.g., an image 101 depicting a human facing towards the camera will not show the back of the human).
  • the ground truth UV map 105 of the training image 101 will have the full visibility of the subject depicting all surface textures of the subject on a 2D plane.
  • the system 100 can also apply an adversarial loss to improve inferences of non-visible portions of the subjects from an initial input image.
  • This adversarial loss can be based on a generative adversarial network (GAN) that includes a generator network and a discriminator network.
  • the generator network is trained to infer missing, occluded, or non-visible portions of the input image 101, while the discriminator network is trained to evaluate the images generated by the generator network to classify whether they are real or generated images.
  • the adversarial loss can then be used to maximize the photo-realism of the generated image or portions of images by maximizing the discrimination error of the discriminator network (e.g., maximize the generator network’s ability to fool the discriminator network).
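  • A minimal GAN-style sketch of these objectives (for illustration only): the discriminator term separates ground-truth UV map-colors from generated ones, while the generator term rewards predictions scored as real. The discriminator architecture and names are assumptions; the actual networks would exist elsewhere in the pipeline.

```python
import torch
import torch.nn.functional as F

def adversarial_losses(discriminator, real_uv_color, fake_uv_color):
    """Return (discriminator loss, generator loss) for one batch."""
    real_score = discriminator(real_uv_color)
    fake_score = discriminator(fake_uv_color.detach())
    d_loss = F.binary_cross_entropy_with_logits(real_score, torch.ones_like(real_score)) + \
             F.binary_cross_entropy_with_logits(fake_score, torch.zeros_like(fake_score))

    # Generator term: maximize photo-realism by maximizing the discriminator error.
    fake_score_for_g = discriminator(fake_uv_color)
    g_loss = F.binary_cross_entropy_with_logits(fake_score_for_g,
                                                torch.ones_like(fake_score_for_g))
    return d_loss, g_loss

# Toy discriminator and random UV map-colors, for shape checking only.
disc = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 4, stride=4), torch.nn.Flatten(),
                           torch.nn.Linear(8 * 16 * 16, 1))
d_l, g_l = adversarial_losses(disc, torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64))
```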
  • the system 100 can collect a diverse set of images of various subjects (e.g., humans and/or objects).
  • the training set of images can include actual and/or synthetic images.
  • the system 100 can assemble a target number of images from a variety of sources. For example, the system 100 can obtain training datasets that are generated using SURREAL (Synthetic hUmans foR REAL tasks) datasets, Human3.6M datasets, A36pose datasets (e.g., a proprietary dataset created by the inventors and not generally available to the public), and/or any other equivalent dataset.
  • the system 100 can create any number of image frames, e.g., approximately 50,000 frames spanning 100 subjects with various clothing, backgrounds, and poses.
  • the system 100 can add images from more than one source or from a source that provides different features in the images.
  • the system 100 can collect training data from a source such as but not limited to A36pose.
  • The A36pose dataset, for instance, was created by the inventors to improve the diversity of the training images and provides images of subjects from non-rigid registration of SMPL to people in clothing.
  • the system can further obtain images of subjects with different visual appearances or features or identities such as moustache, chunky, bald, hairy etc.
  • the system 100 can render or obtain images with different backgrounds which further increase diversity.
  • Other datasets such as Human3.6M can provide imagery with subjects engaged in action sequences (e.g., to assist in training to infer changes in the UV map-geometry 109 and/or UV map-color 107 over time to generate 3D animation videos). Images from this dataset can also be selected for challenging scenarios such as increased inter-occlusion of body parts and/or other objects.
  • the system 100 can present the occluded portions of images for manual inpainting to ensure complete UV maps 105.
  • FIG. 6 is a diagram illustrating example UV map representations of a training dataset, according to one example embodiment.
  • training data 600 includes a set of training images 601 depicting a diverse set of subjects (e.g., humans).
  • the training data 600 also includes ground truth UV maps 603 that include the UV map-geometries 109 (e.g., 3D meshes) of each of the subjects and the UV map-colors 107 aligned to the respective UV map-geometries 109.
  • the training images 601 were cropped and scaled to 256x256 (or any other designated resolution) with full visibility of the whole subject (e.g., human body or object).
  • the system 100 processes their SMPL parameters (or equivalent modeling parameters) to create respective depth images and UV map representations as previously described.
  • the system 100 can create additional training images with perturbed SMPL parameters so that the depth images do not accurately align with the human body silhouette (e.g., to more accurately reflect the accuracy expected with real world images).
  • the images can be randomly translated, rotated, flipped, color jittered, and/or otherwise manipulated. This is not a trivial augmentation procedure since the corresponding ground-truth UV maps need to be transformed also.
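  • As a hedged sketch of such paired augmentation (not the patented procedure): photometric jitter is applied consistently to the input color image and to the color half of the ground-truth UV map, while a small translation of the standard map inputs is assumed to leave the viewpoint-independent UV map unchanged. Flips or rotations that change the subject’s left/right would also require remapping the UV map and are omitted; the channel layout and names are assumptions.

```python
import numpy as np

def augment_pair(color_img, depth_img, gt_uv_map, rng):
    """Jointly augment a standard map (color + depth) and its ground-truth
    UV map (6 channels: geometry first, color last)."""
    # Shared color jitter (per-channel gain) applied to input and UV map-color.
    gain = rng.uniform(0.8, 1.2, size=3)
    color_img = np.clip(color_img * gain, 0.0, 1.0)
    gt_uv_map = gt_uv_map.copy()
    gt_uv_map[..., 3:] = np.clip(gt_uv_map[..., 3:] * gain, 0.0, 1.0)

    # Random translation of the standard map inputs only.
    shift = tuple(int(s) for s in rng.integers(-8, 9, size=2))
    color_img = np.roll(color_img, shift, axis=(0, 1))
    depth_img = np.roll(depth_img, shift, axis=(0, 1))
    return color_img, depth_img, gt_uv_map

rng = np.random.default_rng(0)
c, d, uv = augment_pair(np.random.rand(256, 256, 3),
                        np.random.rand(256, 256, 3),
                        np.random.rand(256, 256, 6), rng)
```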
  • FIG. 7 is a flowchart of a process for using a trained machine learning algorithm to estimate a UV map representation, according to one example embodiment.
  • the texture client 121 and/or texture platform 123 may perform one or more portions of the process 700 and may be implemented in, for instance, a chip set including a processor and a memory as shown in FIG. 9.
  • texture client 121 and/or texture platform 123 can provide means for accomplishing various parts of the process 700, as well as means for accomplishing embodiments of other processes described herein in conjunction with other components of the system 100.
  • Although the process 700 is illustrated and described as a sequence of steps, it is contemplated that various embodiments of the process 700 may be performed in any order or combination and need not include all of the illustrated steps.
  • the trained network 111 can be instantiated in a device (e.g., the UE 119/texture client 121 and/or the texture platform 123) to begin inferring 3D textures and models.
  • the system 100 can receive an input image or images depicting a subject for which a 3D texture is to be generated.
  • the input image is provided as a standard map representation 117.
  • the standard map representation 117 is a classical image representation.
  • the color image 101 represents the color of a pixel using standard color representation (e.g., R, G, B representation), and the depth image is a (u, v, w) representation in which u, v are image coordinates and w is the depth from coarsely known 3D points.
  • the system 100 processes the input image(s) using the trained network 111 (e.g., a trained machine learning algorithm or model) to infer the UV map representation 105 of the standard representation 117.
  • the UV map representation 105 is an atlas representation in which the UV map-geometry 109 is also expressed as (u, v, w), but here the u, v values encode the shape of the subject (as opposed to the image coordinates of the classical representation), and w encodes the depth (e.g., with reference to a point within the 3D mesh of the subject, such as a central point of the 3D mesh).
  • the system 100 need not crop the input image to obtain a realistic texture (e.g., realistic UV map representation 105).
  • the system 100 can also use coarse depth supervision to avoid background noise from the input image 101.
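  • For illustration only, a minimal inference sketch: the standard map (color image 101 plus depth image 103) is stacked, the trained network regresses the UV map representation 105, and the result is split into its geometry and visual/color halves. The channel layout and the stand-in network are assumptions.

```python
import torch

def infer_uv_map(network, color_image, depth_image):
    """Infer a UV map representation from a single standard map."""
    network.eval()
    standard_map = torch.cat([color_image, depth_image], dim=0).unsqueeze(0)  # 1 x 6 x H x W
    with torch.no_grad():
        uv_map = network(standard_map).squeeze(0)                             # 6 x H x W
    uv_map_geometry, uv_map_color = uv_map[:3], uv_map[3:]
    return uv_map_geometry, uv_map_color   # apply uv_map_color to the 3D mesh for texturing

net = torch.nn.Conv2d(6, 6, kernel_size=1)   # stand-in for the trained network 111
geo, col = infer_uv_map(net, torch.rand(3, 256, 256), torch.rand(3, 256, 256))
```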
  • In step 705, the system 100 can apply the inferred UV map representation 105 (e.g., UV map-visual/map-color 107) onto a 3D mesh or model of the subject, and then render the textured 3D representation in a user interface of an application (step 707).
  • the instantiated and trained network 111 is relatively lightweight in terms of computer resource requirements, thereby enabling its use in more resource-restricted devices (e.g., the UE 119).
  • the trained network 111 can enable real-time animation of textured 3D models in a variety of use cases (e.g., augmented reality, video conferencing, gaming, etc.).
  • the use of the UV map representation 105 enables higher resolution output on the UV map 105 than the input image 101 (e.g., low resolution imagery or when the subject is far from the camera). This effectively enables predictive upscaling from the standard map 117 to the UV map 105.
  • UV map representation 105 advantageously simplifies the search space of the texture generation problem to a transformation from 2D low resolution inputs (e.g., single or multiple images) to higher resolution 2D outputs (e.g., the UV map representation 105).
  • Without the UV map representation 105, the problem would be the more complex transformation of the 2D low resolution inputs to a 3D resolution space and vice versa.
  • the system 100 includes the texture client 121 of the UE 119 and/or the texture platform 123 for providing 3D texture generation using UV map representations according to the various embodiments described herein.
  • the system 100 can include a computer vision system (e.g., associated with the UE 119) configured to use machine learning to detect subjects (e.g., humans and/or objects) depicted in images for generating 3D textures according to the embodiments described herein.
  • the texture platform 123 and/or texture client 121 includes a machine learning system 125 that is used to train and/or use the supervised network 111.
  • the supervised network 111 can be a neural network or other equivalent machine learning model (e.g., Support Vector Machines, Random Forest, etc.).
  • the neural network of the machine learning system 125 is a traditional convolutional neural network consisting of multiple layers of collections of one or more neurons.
  • the texture client 121 and/or texture platform 123 have connectivity over a communication network 133 to the services platform 127 that provides one or more services 129.
  • the services 129 may be third party services and include mapping services, navigation services, travel planning services, notification services, social networking services, content (e.g., audio, video, images, etc.) provisioning services, application services, storage services, contextual information determination services, location based services, information based services (e.g., weather, news, etc.), etc.
  • the services 129 use the output of texture client 121 and/or texture platform 123 to perform one or more functions or operations.
  • the texture client 121 and/or texture platform 123 may be a platform with multiple interconnected components.
  • the texture client 121 and/or texture platform 123 may include multiple servers, intelligent networking devices, computing devices, components and corresponding software for providing 3D texture generation using UV map representations.
  • the texture client 121 and/or texture platform 123 may be a separate entity of the system 100, a part of the one or more services 129, a part of the services platform 127, or included within the UE 119.
  • content providers 131 a- 131 m may provide content or data (e.g., including image data, training data, textures, etc.) to the texture client 121 and/or texture platform 123.
  • the content provided may be any type of content, such as text content, audio content, video content, image content, etc.
  • the content providers may provide content that may aid in the generating 3D textures using a UV map representation according to the embodiments described herein.
  • the content providers may also store content (e.g., textures, training data, trained machine learning models, 3D mesh data, etc.) used or generated by the texture client 121 and/or texture platform 123.
  • the content providers may manage access to a central repository of data, and offer a consistent, standard interface to data.
  • the UE 119 is any type of mobile terminal, fixed terminal, or portable terminal including a built-in navigation system, a personal navigation device, mobile handset, station, unit, device, multimedia computer, multimedia tablet, Internet node, communicator, desktop computer, laptop computer, notebook computer, netbook computer, tablet computer, personal communication system (PCS) device, personal digital assistants (PDAs), audio/video player, digital camera/camcorder, positioning device, fitness device, television receiver, radio broadcast receiver, electronic book device, game device, or any combination thereof, including the accessories and peripherals of these devices, or any combination thereof. It is also contemplated that the UE 119 can support any type of interface to the user (such as “wearable” circuitry, etc.).
  • the communication network 133 of system 100 includes one or more networks such as a data network, a wireless network, a telephony network, or any combination thereof.
  • the data network may be any local area network (LAN), metropolitan area network (MAN), wide area network (WAN), a public data network (e.g., the Internet), short range wireless network, or any other suitable packet-switched network, such as a commercially owned, proprietary packet- switched network, e.g., a proprietary cable or fiber optic network, and the like, or any combination thereof.
  • the wireless network may be, for example, a cellular network and may employ various technologies including enhanced data rates for global evolution (EDGE), general packet radio service (GPRS), global system for mobile communications (GSM), Internet protocol multimedia subsystem (IMS), universal mobile telecommunications system (UMTS), etc., as well as any other suitable wireless medium, e.g., worldwide interoperability for microwave access (WiMAX), Long Term Evolution (LTE) networks, code division multiple access (CDMA), wideband code division multiple access (WCDMA), wireless fidelity (Wi-Fi), wireless LAN (WLAN), Bluetooth®, Internet Protocol (IP) data casting, satellite, mobile ad-hoc network (MANET), 5G New Radio, cloud Radio Access Network (RAN), and the like, or any combination thereof.
  • the texture client 121 and/or texture platform 123 communicate with each other and other components of the system 100 using well known, new or still developing protocols.
  • a protocol includes a set of rules defining how the network nodes within the communication network 133 interact with each other based on information sent over the communication links.
  • the protocols are effective at different layers of operation within each node, from generating and receiving physical signals of various types, to selecting a link for transferring those signals, to the format of information indicated by those signals, to identifying which software application executing on a computer system sends or receives the information.
  • the conceptually different layers of protocols for exchanging information over a network are described in the Open Systems Interconnection (OSI) Reference Model.
  • OSI Open Systems Interconnection
  • Each packet typically comprises (1) header information associated with a particular protocol, and (2) payload information that follows the header information and contains information that may be processed independently of that particular protocol.
  • the packet includes (3) trailer information following the payload and indicating the end of the payload information.
  • the header includes information such as the source of the packet, its destination, the length of the payload, and other properties used by the protocol.
  • the data in the payload for the particular protocol includes a header and payload for a different protocol associated with a different, higher layer of the OSI Reference Model.
  • the header for a particular protocol typically indicates a type for the next protocol contained in its payload.
  • the higher layer protocol is said to be encapsulated in the lower layer protocol.
  • the headers included in a packet traversing multiple heterogeneous networks, such as the Internet typically include a physical (layer 1) header, a data-link (layer 2) header, an internetwork (layer 3) header and a transport (layer 4) header, and various application (layer 5, layer 6 and layer 7) headers as defined by the OSI Reference Model.
  • the processes described herein for providing 3D texture generation using UV map representations may be advantageously implemented via circuitry, software, hardware (e.g., general processor, Digital Signal Processing (DSP) chip, an Application Specific Integrated Circuit (ASIC), Field Programmable Gate Arrays (FPGAs), etc.), firmware or a combination thereof.
  • the term “circuitry” may refer to one or more or all of the following:
  • circuitry also covers an implementation of merely a hardware circuit or processor (or multiple processors) or portion of a hardware circuit or processor and its (or their) accompanying software and/or firmware.
  • circuitry also covers, for example and if applicable to the particular claim element, a baseband integrated circuit or processor integrated circuit for a mobile device or a similar integrated circuit in a server, a cellular network device, or other computing or network device.
  • FIG. 8 illustrates a computer system 800 upon which an embodiment of the invention may be implemented.
  • Computer system 800 is programmed (e.g., via computer program code or instructions) to provide 3D texture generation using UV map representations as described herein and includes a communication mechanism such as a bus 810 for passing information between other internal and external components of the computer system 800.
  • Information (also called data) is represented as a physical expression of a measurable phenomenon, typically electric voltages, but including, in other embodiments, such phenomena as magnetic, electromagnetic, pressure, chemical, biological, molecular, atomic, sub-atomic and quantum interactions. For example, north and south magnetic fields, or a zero and non-zero electric voltage, represent two states (0, 1) of a binary digit (bit). Other phenomena can represent digits of a higher base.
  • a superposition of multiple simultaneous quantum states before measurement represents a quantum bit (qubit).
  • a sequence of one or more digits constitutes digital data that is used to represent a number or code for a character.
  • information called analog data is represented by a near continuum of measurable values within a particular range.
  • a bus 810 includes one or more parallel conductors of information so that information is transferred quickly among devices coupled to the bus 810.
  • One or more processors 802 for processing information are coupled with the bus 810.
  • a processor 802 performs a set of operations on information as specified by computer program code related to providing 3D texture generation using UV map representations.
  • the computer program code is a set of instructions or statements providing instructions for the operation of the processor and/or the computer system to perform specified functions.
  • the code for example, may be written in a computer programming language that is compiled into a native instruction set of the processor. The code may also be written directly using the native instruction set (e.g., machine language).
  • the set of operations include bringing information in from the bus 810 and placing information on the bus 810.
  • the set of operations also typically include comparing two or more units of information, shifting positions of units of information, and combining two or more units of information, such as by addition or multiplication or logical operations like OR, exclusive OR (XOR), and AND.
  • Each operation of the set of operations that can be performed by the processor is represented to the processor by information called instructions, such as an operation code of one or more digits.
  • a sequence of operations to be executed by the processor 802, such as a sequence of operation codes, constitutes processor instructions, also called computer system instructions or, simply, computer instructions.
  • Processors may be implemented as mechanical, electrical, magnetic, optical, chemical or quantum components, among others, alone or in combination.
  • Computer system 800 also includes a memory 804 coupled to bus 810.
  • the memory 804 such as a random access memory (RAM) or other dynamic storage device, stores information including processor instructions for providing 3D texture generation using UV map representations. Dynamic memory allows information stored therein to be changed by the computer system 800. RAM allows a unit of information stored at a location called a memory address to be stored and retrieved independently of information at neighboring addresses.
  • the memory 804 is also used by the processor 802 to store temporary values during execution of processor instructions.
  • the computer system 800 also includes a read only memory (ROM) 806 or other static storage device coupled to the bus 810 for storing static information, including instructions, that is not changed by the computer system 800. Some memory is composed of volatile storage that loses the information stored thereon when power is lost.
  • Information including instructions for providing 3D texture generation using UV map representations, is provided to the bus 810 for use by the processor from an external input device 812, such as a keyboard containing alphanumeric keys operated by a human user, or a sensor.
  • a sensor detects conditions in its vicinity and transforms those detections into physical expression compatible with the measurable phenomenon used to represent information in computer system 800.
  • Other external devices coupled to bus 810 used primarily for interacting with humans, include a display device 814, such as a cathode ray tube (CRT) or a liquid crystal display (LCD), or plasma screen or printer for presenting text or images, and a pointing device 816, such as a mouse or a trackball or cursor direction keys, or motion sensor, for controlling a position of a small cursor image presented on the display 814 and issuing commands associated with graphical elements presented on the display 814.
  • special purpose hardware, such as an application specific integrated circuit (ASIC) 820, may be coupled to bus 810.
  • the special purpose hardware is configured to perform operations not performed by processor 802 quickly enough for special purposes.
  • application specific ICs include graphics accelerator cards for generating images for display 814, cryptographic boards for encrypting and decrypting messages sent over a network, speech recognition, and interfaces to special external devices, such as robotic arms and medical scanning equipment that repeatedly perform some complex sequence of operations that are more efficiently implemented in hardware.
  • Computer system 800 also includes one or more instances of a communications interface 870 coupled to bus 810.
  • Communication interface 870 provides a one-way or two-way communication coupling to a variety of external devices that operate with their own processors, such as printers, scanners and external disks. In general the coupling is with a network link 878 that is connected to a local network 880 to which a variety of external devices with their own processors are connected.
  • communication interface 870 may be a parallel port or a serial port or a universal serial bus (USB) port on a personal computer.
  • communications interface 870 is an integrated services digital network (ISDN) card or a digital subscriber line (DSL) card or a telephone modem that provides an information communication connection to a corresponding type of telephone line.
  • a communication interface 870 is a cable modem that converts signals on bus 810 into signals for a communication connection over a coaxial cable or into optical signals for a communication connection over a fiber optic cable.
  • communications interface 870 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN, such as Ethernet. Wireless links may also be implemented.
  • the communications interface 870 sends or receives or both sends and receives electrical, acoustic or electromagnetic signals, including infrared and optical signals, that carry information streams, such as digital data.
  • the communications interface 870 includes a radio band electromagnetic transmitter and receiver called a radio transceiver.
  • the communications interface 870 enables connection to the communication network 133 for providing 3D texture generation using UV map representations.
  • the term computer-readable medium is used herein to refer to any medium that participates in providing information to processor 802, including instructions (e.g., computer program instructions) for execution.
  • the instructions can cause an apparatus (e.g., processor, computer, device, etc.) to perform one or more steps, functions, operations, etc. specified in the instructions or computer program instructions.
  • a computer program may comprise instructions for causing an apparatus to perform at least any of the steps, functions, operations, etc. specified in the instructions.
  • a computer-readable medium (e.g., transitory or non-transitory) may take many forms, including, but not limited to, non-volatile or non-transitory media, volatile or transitory media, and transmission media.
  • Non-volatile media include, for example, optical or magnetic disks, such as storage device 808.
  • Volatile media include, for example, dynamic memory 804.
  • Transmission media include, for example, coaxial cables, copper wire, fiber optic cables, and carrier waves that travel through space without wires or cables, such as acoustic waves and electromagnetic waves, including radio, optical and infrared waves. Signals include man-made transient variations in amplitude, frequency, phase, polarization or other physical properties transmitted through the transmission media.
  • Computer-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, any other magnetic medium, a CD-ROM, CDRW, DVD, any other optical medium, punch cards, paper tape, optical mark sheets, any other physical medium with patterns of holes or other optically recognizable indicia, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave, or any other medium from which a computer can read.
  • Network link 878 typically provides information communication using transmission media through one or more networks to other devices that use or process the information.
  • network link 878 may provide a connection through local network 880 to a host computer 882 or to equipment 884 operated by an Internet Service Provider (ISP).
  • ISP equipment 884 in turn provides data communication services through the public, world-wide packet-switching communication network of networks now commonly referred to as the Internet 890.
  • a computer called a server host 892 connected to the Internet hosts a process that provides a service in response to information received over the Internet.
  • server host 892 hosts a process that provides information representing video data for presentation at display 814. It is contemplated that the components of system can be deployed in various configurations within other computer systems, e.g., host 882 and server 892.
  • FIG. 9 illustrates a chip set 900 upon which an embodiment of the invention may be implemented.
  • Chip set 900 is programmed to provide 3D texture generation using UV map representations as described herein and includes, for instance, the processor and memory components described with respect to FIG. 8 incorporated in one or more physical packages (e.g., chips).
  • a physical package includes an arrangement of one or more materials, components, and/or wires on a structural assembly (e.g., a baseboard) to provide one or more characteristics such as physical strength, conservation of size, and/or limitation of electrical interaction. It is contemplated that in certain embodiments the chip set can be implemented in a single chip.
  • the chip set 900 includes a communication mechanism such as a bus 901 for passing information among the components of the chip set 900.
  • a processor 903 has connectivity to the bus 901 to execute instructions and process information stored in, for example, a memory 905.
  • the processor 903 may include one or more processing cores with each core configured to perform independently.
  • a multi-core processor enables multiprocessing within a single physical package and may include two, four, eight, or more processing cores.
  • the processor 903 may include one or more microprocessors configured in tandem via the bus 901 to enable independent execution of instructions, pipelining, and multithreading.
  • the processor 903 may also be accompanied with one or more specialized components to perform certain processing functions and tasks such as one or more digital signal processors (DSP) 907, or one or more application- specific integrated circuits (ASIC) 909.
  • a DSP 907 typically is configured to process real-world signals (e.g., sound) in real time independently of the processor 903.
  • an ASIC 909 can be configured to perform specialized functions not easily performed by a general purpose processor.
  • Other specialized components to aid in performing the inventive functions described herein include one or more field programmable gate arrays (FPGA) (not shown), one or more controllers (not shown), or one or more other special-purpose computer chips.
  • the processor 903 and accompanying components have connectivity to the memory 905 via the bus 901.
  • the memory 905 includes both dynamic memory (e.g., RAM, magnetic disk, writable optical disk, etc.) and static memory (e.g., ROM, CD-ROM, etc.) for storing executable instructions that when executed perform the inventive steps described herein to provide 3D texture generation using UV map representations.
  • the memory 905 also stores the data associated with or generated by the execution of the inventive steps.
  • FIG. 10 is a diagram of exemplary components of a mobile terminal (e.g., handset) capable of operating in the system of FIG. 1, according to one embodiment.
  • a radio receiver is often defined in terms of front-end and back-end characteristics.
  • the front-end of the receiver encompasses all of the Radio Frequency (RF) circuitry whereas the back-end encompasses all of the base-band processing circuitry.
  • Pertinent internal components of the telephone include a Main Control Unit (MCU) 1003, a Digital Signal Processor (DSP) 1005, and a receiver/transmitter unit including a microphone gain control unit and a speaker gain control unit.
  • a main display unit 1007 provides a display to the user in support of various applications and mobile station functions that offer automatic contact matching.
  • An audio function circuitry 1009 includes a microphone 1011 and microphone amplifier that amplifies the speech signal output from the microphone 1011. The amplified speech signal output from the microphone 1011 is fed to a coder/decoder (CODEC) 1013.
  • a radio section 1015 amplifies power and converts frequency in order to communicate with a base station, which is included in a mobile communication system, via antenna 1017.
  • the power amplifier (PA) 1019 and the transmitter/modulation circuitry are operationally responsive to the MCU 1003, with an output from the PA 1019 coupled to the duplexer 1021 or circulator or antenna switch, as known in the art.
  • the PA 1019 also couples to a battery interface and power control unit 1020.
  • a user of mobile station 1001 speaks into the microphone 1011 and his or her voice along with any detected background noise is converted into an analog voltage.
  • the analog voltage is then converted into a digital signal through the Analog to Digital Converter (ADC) 1023.
  • the control unit 1003 routes the digital signal into the DSP 1005 for processing therein, such as speech encoding, channel encoding, encrypting, and interleaving.
  • the processed voice signals are encoded, by units not separately shown, using a cellular transmission protocol such as enhanced data rates for global evolution (EDGE), general packet radio service (GPRS), global system for mobile communications (GSM), Internet protocol multimedia subsystem (IMS), universal mobile telecommunications system (UMTS), etc., as well as any other suitable wireless medium, e.g., worldwide interoperability for microwave access (WiMAX), Long Term Evolution (LTE) networks, code division multiple access (CDMA), wireless fidelity (WiFi), satellite, and the like.
  • the encoded signals are then routed to an equalizer 1025 for compensation of any frequency-dependent impairments that occur during transmission through the air such as phase and amplitude distortion.
  • the modulator 1027 combines the signal with a RF signal generated in the RF interface 1029.
  • the modulator 1027 generates a sine wave by way of frequency or phase modulation.
  • an up-converter 1031 combines the sine wave output from the modulator 1027 with another sine wave generated by a synthesizer 1033 to achieve the desired frequency of transmission.
  • the signal is then sent through a PA 1019 to increase the signal to an appropriate power level.
  • the PA 1019 acts as a variable gain amplifier whose gain is controlled by the DSP 1005 from information received from a network base station.
  • the signal is then filtered within the duplexer 1021 and optionally sent to an antenna coupler 1035 to match impedances to provide maximum power transfer. Finally, the signal is transmitted via antenna 1017 to a local base station.
  • An automatic gain control (AGC) can be supplied to control the gain of the final stages of the receiver.
  • the signals may be forwarded from there to a remote telephone which may be another cellular telephone, other mobile phone or a land-line connected to a Public Switched Telephone Network (PSTN), or other telephony networks.
  • Voice signals transmitted to the mobile station 1001 are received via antenna 1017 and immediately amplified by a low noise amplifier (LNA) 1037.
  • a down-converter 1039 lowers the carrier frequency while the demodulator 1041 strips away the RF leaving only a digital bit stream.
  • the signal then goes through the equalizer 1025 and is processed by the DSP 1005.
  • a Digital to Analog Converter (DAC) 1043 converts the signal and the resulting output is transmitted to the user through the speaker 1045, all under control of a Main Control Unit (MCU) 1003, which can be implemented as a Central Processing Unit (CPU) (not shown).
  • the MCU 1003 receives various signals including input signals from the keyboard 1047.
  • the keyboard 1047 and/or the MCU 1003 in combination with other user input components (e.g., the microphone 1011) comprise a user interface circuitry for managing user input.
  • the MCU 1003 runs user interface software to facilitate user control of at least some functions of the mobile station 1001 to provide 3D texture generation using UV map representations.
  • the MCU 1003 also delivers a display command and a switch command to the display 1007 and to the speech output switching controller, respectively.
  • the MCU 1003 exchanges information with the DSP 1005 and can access an optionally incorporated SIM card 1049 and a memory 1051.
  • the MCU 1003 executes various control functions required of the station.
  • the DSP 1005 may, depending upon the implementation, perform any of a variety of conventional digital processing functions on the voice signals. Additionally, DSP 1005 determines the background noise level of the local environment from the signals detected by microphone 1011 and sets the gain of microphone 1011 to a level selected to compensate for the natural tendency of the user of the mobile station 1001.
  • the CODEC 1013 includes the ADC 1023 and DAC 1043.
  • the memory 1051 stores various data including call incoming tone data and is capable of storing other data including music data received via, e.g., the global Internet.
  • the software module could reside in RAM memory, flash memory, registers, or any other form of writable computer-readable storage medium known in the art including non-transitory computer-readable storage medium.
  • the memory device 1051 may be, but not limited to, a single memory, CD, DVD, ROM, RAM, EEPROM, optical storage, or any other non-volatile or non-transitory storage medium capable of storing digital data.
  • An optionally incorporated SIM card 1049 carries, for instance, important information, such as the cellular phone number, the carrier supplying service, subscription details, and security information.
  • the SIM card 1049 serves primarily to identify the mobile station 1001 on a radio network.
  • the card 1049 also contains a memory for storing a personal telephone number registry, text messages, and user specific mobile station settings.

Abstract

An approach is provided for generating three-dimensional (3D) textures using a UV map representation. The approach, for example, involves receiving at least one input image depicting a subject. The at least one image comprises a standard representation of the subject. The approach also involves determining at least one depth representation of the at least one input image. The approach further involves causing, at least in part, a creation of a UV map representation of the subject based on the at least one input image and the at least one depth representation. The UV map representation includes, for instance, a UV map-geometry representation of a three-dimensional shape of the subject and a UV map-visual representation of at least one visual characteristic (e.g., texture) of the at least one subject.

Description

APPARATUS, METHOD, AND SYSTEM FOR PROVIDING A THREE-DIMENSIONAL TEXTURE USING UV REPRESENTATION
BACKGROUND
[0001] The generation of fully textured three-dimensional (3D) models (e.g., human or object models) can be used for various applications including, but not limited to, virtual reality, video editing, virtual clothes try-on, realistic 3D animations, and/or the like. However, historical processes for generating textured 3D models can often require considerable effort and resources to produce detailed and realistic models.
SOME EXAMPLE EMBODIMENTS
[0002] Therefore, there is a need for more efficiently providing 3D textures and/or other representations of visual characteristics of a human or object using a UV representation (or other equivalent two-dimensional (2D) representation of the 3D surface(s) of a model of the human or object).
[0003] According to one example embodiment, an apparatus comprises at least one processor, and at least one memory including computer program code for one or more computer programs, the at least one memory and the computer program code configured to, with the at least one processor, cause, at least in part, the apparatus to receive at least one input image depicting a subject. The at least one image comprises a standard representation of the subject. The apparatus is further caused to determine at least one depth representation of the at least one input image. The apparatus further causes, at least in part, a creation of a UV map representation of the subject based on the at least one input image and the at least one depth representation. The UV map representation includes, for instance, a UV map-geometry representation of a three-dimensional shape of the subject and a UV map-visual representation of at least one visual characteristic of the at least one subject. The apparatus further causes, at least in part, a training of a machine learning algorithm to infer the UV map representation, the UV map-geometry representation, the UV map-visual representation, or a combination thereof based on the standard representation.
[0004] According to another example embodiment, a method comprises receiving at least one input image depicting a subject. The at least one image comprises a standard representation of the subject. The method also comprises determining at least one depth representation of the at least one input image. The method further comprises causing, at least in part, a creation of a UV map representation of the subject based on the at least one input image and the at least one depth representation. The UV map representation includes, for instance, a UV map-geometry representation of a three-dimensional shape of the subject and a UV map-visual representation of at least one visual characteristic of the at least one subject. The method further comprises causing, at least in part, a training of a machine learning algorithm to infer the UV map representation, the UV map-geometry representation, the UV map-visual representation, or a combination thereof based on the standard representation.
[0005] According to another example embodiment, a non-transitory computer-readable storage medium carries one or more sequences of one or more instructions which, when executed by one or more processors, cause, at least in part, an apparatus to receive at least one input image depicting a subject. The at least one image comprises a standard representation of the subject. The apparatus is further caused to determine at least one depth representation of the at least one input image. The apparatus further causes, at least in part, a creation of a UV map representation of the subject based on the at least one input image and the at least one depth representation. The UV map representation includes, for instance, a UV map-geometry representation of a three-dimensional shape of the subject and a UV map-visual representation of at least one visual characteristic of the at least one subject. The apparatus further causes, at least in part, a training of a machine learning algorithm to infer the UV map representation, the UV map-geometry representation, the UV map-visual representation, or a combination thereof based on the standard representation. [0006] According to another example embodiment, an apparatus comprises means for receiving at least one input image depicting a subject. The at least one image comprises a standard representation of the subject. The apparatus also comprises means for determining at least one depth representation of the at least one input image. The apparatus further comprises means for causing, at least in part, a creation of a UV map representation of the subject based on the at least one input image and the at least one depth representation. The UV map representation includes, for instance, a UV map-geometry representation of a three-dimensional shape of the subject and a UV map-visual representation of at least one visual characteristic of the at least one subject. The apparatus further comprises means for causing, at least in part, a training of a machine learning algorithm to infer the UV map representation, the UV map-geometry representation, the UV map-visual representation, or a combination thereof based on the standard representation.
[0007] In addition, for various example embodiments of the invention, the following is applicable: a method comprising facilitating a processing of and/or processing (1) data and/or (2) information and/or (3) at least one signal, the (1) data and/or (2) information and/or (3) at least one signal based, at least in part, on (or derived at least in part from) any one or any combination of methods (or processes) disclosed in this application as relevant to any embodiment of the invention.
[0008] For various example embodiments of the invention, the following is also applicable: a method comprising facilitating access to at least one interface configured to allow access to at least one service, the at least one service configured to perform any one or any combination of network or service provider methods (or processes) disclosed in this application.
[0009] For various example embodiments of the invention, the following is also applicable: a method comprising facilitating creating and/or facilitating modifying (1) at least one device user interface element and/or (2) at least one device user interface functionality, the (1) at least one device user interface element and/or (2) at least one device user interface functionality based, at least in part, on data and/or information resulting from one or any combination of methods or processes disclosed in this application as relevant to any embodiment of the invention, and/or at least one signal resulting from one or any combination of methods (or processes) disclosed in this application as relevant to any embodiment of the invention.
[0010] For various example embodiments of the invention, the following is also applicable: a method comprising creating and/or modifying (1) at least one device user interface element and/or (2) at least one device user interface functionality, the (1) at least one device user interface element and/or (2) at least one device user interface functionality based at least in part on data and/or information resulting from one or any combination of methods (or processes) disclosed in this application as relevant to any embodiment of the invention, and/or at least one signal resulting from one or any combination of methods (or processes) disclosed in this application as relevant to any embodiment of the invention.
[0011] In various example embodiments, the methods (or processes) can be accomplished on the service provider side or on the mobile device side or in any shared way between service provider and mobile device with actions being performed on both sides.
[0012] For various example embodiments, the following is applicable: An apparatus comprising means for performing a method of the claims.
[0013] Still other aspects, features, and advantages of the invention are readily apparent from the following detailed description, simply by illustrating a number of particular embodiments and implementations, including the best mode contemplated for carrying out the invention. The invention is also capable of other and different embodiments, and its several details can be modified in various obvious respects, all without departing from the spirit and scope of the invention. Accordingly, the drawings and description are to be regarded as illustrative in nature, and not as restrictive.
BRIEF DESCRIPTION OF THE DRAWINGS
[0014] The example embodiments of the invention are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings: [0015] FIG. 1 is a diagram of a system capable of providing 3D textures using a UV map representation, according to one example embodiment;
[0016] FIG. 2 is a diagram illustrating an example of a general process for generating a textured model, according to one example embodiment;
[0017] FIGs. 3A and 3B are diagrams illustrating example inaccuracies encountered when generating textured models, according to one example embodiment;
[0018] FIGs. 4A, 4B and 4C are diagrams illustrating examples of a standard map representation and a UV map representation, according to one example embodiment;
[0019] FIG. 5 is a flowchart of a process for training a machine learning algorithm to estimate UV map representations from input images, according to one example embodiment;
[0020] FIG. 6 is a diagram illustrating example UV map representations of a training dataset, according to one example embodiment;
[0021] FIG. 7 is a flowchart of a process for using a trained machine learning algorithm to estimate a UV map representation, according to one example embodiment;
[0022] FIG. 8 is a diagram of hardware that can be used to implement an embodiment;
[0023] FIG. 9 is a diagram of a chip set that can be used to implement an embodiment; and
[0024] FIG. 10 is a diagram of a mobile terminal (e.g., handset or vehicle or part thereof) that can be used to implement an embodiment.
DESCRIPTION OF SOME EMBODIMENTS
[0025] Examples of a method, apparatus, and computer program for providing three-dimensional (3D) textures using a UV map representation are disclosed. In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the example embodiments of the invention. It is apparent, however, to one skilled in the art that the embodiments of the invention may be practiced without these specific details or with an equivalent arrangement. In other instances, structures and devices are shown in block diagram form in order to avoid unnecessarily obscuring the embodiments of the invention.
[0026] FIG. 1 is a diagram of a system capable of providing 3D textures using a UV map representation, according to one example embodiment. The automatic generation of fully textured 3D models of various subjects (e.g., humans or other objects) is important for various applications, including, but not limited to, virtual reality, video editing, virtual clothes try-on, video conferencing, realistic 3D animations, etc. In the case of complex subjects such as humans, the process of creating a detailed and realistic human texture can be resource intensive and technically challenging. For example, the generation of a textured 3D model can generally be separated into a process for generating the 3D model (e.g., a 3D mesh) and then determining the visual characteristics (e.g., textures, bump maps, etc.) that are to be projected on the 3D model or mesh to produce the fully textured or rendered 3D model.
[0027] FIG. 2 is a diagram illustrating an example of a general process for generating a textured model, according to one example embodiment. In the example of FIG. 2, one or more images 201 (e.g., from multiple views) are captured of a human subject that is to be modeled. A 3D model or mesh representing the human subject can then be extracted from the images 201. In addition, the texture or visual characteristics of the human subject can also be extracted from the images 201. The extracted texture can then be applied onto the 3D model or mesh to create the fully textured model 203. The extraction of the 3D mesh and the texture (or other visual characteristics) enables the system 100 to manipulate the resulting textured model 203 in different ways. For example, the textured model 203 can be transformed into different variations by manipulating the applied textures. This enables the rendering of different variations 205a-205d of the textured model 203. Variation 205a, for instance, renders a texture depicting different clothes on the 3D mesh of the subject. Variation 205b renders a texture of a completely different subject onto the 3D mesh of the original subject. Variations 205c and 205d render the textured model from different views (e.g., the original textured model 203 is rendered from the front view, while variations 205c and 205d are rendered from respective side views).
[0028] However, accurately textured models depend on the accurate extraction of the subject's pose and shape parameters (e.g., extraction of the 3D mesh) from the input images (e.g., extraction of the human pose and shape parameters from the images 201). Inaccurate parameters can result in issues such as, but not limited to, texture bleeding, ghosting, misalignment, spatial incoherence, illumination incoherence, and/or the like. In addition, low input image resolution (e.g., due to the subject's distance from the camera) can impact the texture resolution in real applications. These issues of inaccurate pose/shape parameters and/or low input image resolution can result in poor-quality models.
[0029] FIGs. 3A and 3B are diagrams illustrating example inaccuracies encountered when generating textured models, according to one example embodiment. For example, in the textured model 301 of FIG. 3A, inaccuracies 303a-303c result from misalignment between the extracted textures and the underlying 3D mesh, along with some of the other issues described above, producing a textured model that represents the original subject less accurately. Similarly, the textured model 311 of FIG. 3B also exhibits inaccuracies 313a-313c that result from various texture generation issues.
[0030] As a result, there are significant technical challenges to avoiding such inaccuracies or issues when automatically generating textured 3D models, particularly when using a single image as an input for texture generation. This is because, with a single image (or a limited number of images), the occlusion of various areas of the subject (e.g., caused by overlapping parts of the human body or other objects that may block the subject) makes it particularly challenging to get the texture or visual characteristic information from the occluded parts. Moreover, the diversity of subject poses (e.g., human poses) and the background in the image(s) complicate the texture extraction process.
[0031] Moreover, texture generation is an essential task for reconstructing realistic 3D models because the texture represents crucial information, e.g., for describing and/or identifying human or object instances, especially facial detail. Some historical techniques focus on combining texture fragments from different views, such as by blending multiple images into textures with various weighted-average strategies and cues from body part segmentation. However, these methods can be sensitive to noise introduced by the background and by human body/object pose and shape estimation. As a result, the output textures can suffer from blurring and ghosting. Other approaches can project images to appropriate vertices and faces. These approaches alleviate the blurring and ghosting problems but can be vulnerable to texture bleeding.
[0032] Aside from multi-view based texture generation, another challenging problem is generating textures from a single image. However, conventional single-image approaches can be computationally expensive, can suffer from low quality of the generated textures, can depend on the underlying constraint that the estimated human shape and pose parameters are accurate, etc.
[0033] To address these technical challenges, the system 100 of FIG. 1 introduces a capability to provide an end-to-end machine learning-based system to generate fully textured 3D models (e.g., human and/or object models) using coarse 3D shape cues. In one example embodiment, given an input color image 101 (e.g., a single image or multiple images) and a coarse 3D estimate representation (e.g., a depth image 103), the system 100 predicts the complete UV map representation 105 of the texture map or map of any other visual characteristics of the subject (e.g., UV map-visual representation 107) with respect to a UV-geometry 109 of the subject using a supervised network 111. More specifically, the system 100 can process input images 101 to develop representations in both standard and UV map space. The system 100 then trains the network 111 (e.g., an encoder-decoder network) to directly regress the UV map representation 105 from the network inputs (e.g., color image 101 and depth image 103). The use of a UV map representation 105 enables the system 100 to turn the hard 3D inference problem into an image-to-image translation which is amenable to available neural networks (e.g., convolutional neural networks) by encoding geometry and color on a common UV map representation 105. The predicted UV map-visual representation 107 can be applied to a 3D model 113 to generate a textured model 115. [0034] In other words, the system 100 provides an end-to-end supervised network 111 (e.g., based on a machine learning algorithm such as, but not limited to, a neural network or equivalent) to learn a UV map representation 105 of a subject that embeds both the subject's geometry via a UV map-geometry representation 109 and the subject's visual characteristics (e.g., texture, bump map, etc.) via a UV map-visual representation 107 (e.g., also referred to as a UV map-color when the visual characteristic being represented is a texture of the subject). In one example embodiment, the trained supervised network 111 takes a standard representation 117 of the subject (e.g., comprising the color image 101 and depth image 103) to directly infer the UV map representation 105 of the subject. In this way, the system 100 uses the UV map representation 105 to represent both the geometry and the texture (or other visual characteristics) of a full subject (e.g., a full human body or other object). The term "full," for instance, refers to a complete 3D model of the subject (e.g., the entire human body) or a complete 3D model of a part of the subject (e.g., a hand, head, torso, etc. of the human body).
[0035] In one example embodiment, the system 100 denotes the standard map representation 117 (also referred to as a standard map) and the UV map representation 105 (also referred to as a UV map) in discussing the embodiments described herein. By way of example, the standard map 117 contains the input color image 101 and a depth image 103. This depth image 103 can be generated from the shape and pose parameters (e.g., human and/or object shape and pose) learned from the input image 101 using any means known in the art. Thereafter, the system 100 can back-project face visibility of the estimated 3D vertices (e.g., estimated from the depth image 103) to create the UV map representation 105 comprising the UV map-geometry 109 and UV map-visual or map-color 107.
[0036] In one example embodiment, the supervised network 111 can incorporate multiple views, where the input images 101 of the standard map representation 117 are of a subject in motion (e.g., 360-degree motion). This provides a way to bypass obtaining accurate pose parameters over different views, and lighting also need not be constant across these views. In yet another example embodiment, the supervised network 111 can also be trained to infer or predict the UV map representation 105 over time. In other words, the supervised network 111 can infer changes in the UV map-geometry representation 109 and/or UV map-visual representation 107 as a sequence over time to effectively infer a movement of the subject over time. The predicted sequences of UV map representations 105 can then be rendered on a 3D model that is morphed to reflect the sequence of geometries and/or textures to create a video or other media depicting the predicted movement of the subject in a realistic manner.
[0037] FIGs. 4A-4C are diagrams illustrating examples of a standard map representation 117 and a UV map representation 105, according to one example embodiment. As shown in FIG. 4A, the standard map 117 includes a color image 101 and a depth image 103 extracted from the color image or from other equivalent means for determining or extracting depth information associated with the color image 101. In one example embodiment, the color image 101 is a standard image captured using a camera sensor of a user equipment (UE) 119. The system 100 uses this standard map 117 as an input to the supervised network 111 to predict or infer the UV map representation of the subject that embeds both the texture/color (or other visual characteristics) of the subject along with the 3D surface geometry of the subject. In one example embodiment, the UV map representation 105 is a two-dimensional representation (e.g., based on a U-axis and a V-axis) of the 3D mesh or model of the subject where the 3D mesh has been "unwrapped" from the 3D shape of the subject and flattened on a plane represented by the U and V axes of the UV map representation 105. In this way, the system 100 is able to infer the full visibility or complete texture (e.g., UV map-color 107) of the subject from the partial visibility of the standard map representation 117 of the subject. This is because, unlike the color image 101 where the subject is only partially visible (e.g., the back of a front-facing subject is not visible in the standard map representation 117), the UV map 105 is able to represent all surfaces of a 3D shape on a 2D plane. Thus, the embodiments of the system 100 described herein effectively reduce the texture generation to a 2D-2D image inference problem. The resulting UV map 105 can then be applied or "wrapped" back onto a 3D model to texture the model. [0038] FIGs. 4B and 4C illustrate examples in which different views in the standard map 117 input can still result in corresponding full UV map representations 105. In the example of FIG. 4B, the standard map includes a color image 401 and depth image 403 that depict the subject from a front-facing view. The system 100 can use any process known in the art (e.g., the Skinned Multi-Person Linear Model (SMPL)) to extract a 3D mesh from the coarse depth information in the depth image and/or color image to generate a 3D mesh 405. The supervised network 111 uses the color image 401 and depth image 403 to infer a UV map representation 105 comprising a UV map-geometry 407 and UV map-visual/map-color 409 that provides complete visibility of the textures from all surfaces of the subject. The supervised network 111 is trained using an adversarial loss to enable accurate inference of the non-visible areas of the subject (e.g., the back). FIG. 4C provides a similar example of the standard map 117 comprising a color image 411 and depth image 413, but in this example, the color image 411 depicts the subject in a side-facing view. Thus, the subject's right side is visible but not the left side. The 3D mesh extraction can still be used to infer a 3D mesh 415 of the subject. In addition, because of the supervised network 111's training on adversarial loss, the network 111 can infer the complete UV map 105 with the UV map-geometry 417 and UV map-visual/map-color 419 depicting both the right side and the left side of the subject.
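By way of a non-limiting illustration, the following sketch shows how a predicted UV map-color can be "wrapped" back onto a 3D model by sampling per-vertex colors from the 2D UV plane. The function name, array layouts, and the choice of bilinear interpolation are assumptions made for illustration; they are not details taken from the description above.

```python
import numpy as np

def sample_uv_color(uv_map_color, uv_coords):
    """Bilinearly sample per-vertex RGB values from a UV map-color.

    uv_map_color: (H, W, 3) float array, the texture on the 2D UV plane.
    uv_coords:    (N, 2) float array of per-vertex (u, v) in [0, 1].
    Returns an (N, 3) array of vertex colors to render on the 3D mesh.
    """
    H, W, _ = uv_map_color.shape
    # Map normalized UV coordinates to continuous texel positions.
    x = uv_coords[:, 0] * (W - 1)
    y = uv_coords[:, 1] * (H - 1)
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    x1, y1 = np.clip(x0 + 1, 0, W - 1), np.clip(y0 + 1, 0, H - 1)
    wx, wy = (x - x0)[:, None], (y - y0)[:, None]
    # Bilinear interpolation of the four neighboring texels.
    top = uv_map_color[y0, x0] * (1 - wx) + uv_map_color[y0, x1] * wx
    bot = uv_map_color[y1, x0] * (1 - wx) + uv_map_color[y1, x1] * wx
    return top * (1 - wy) + bot * wy
```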
[0039] As noted, the embodiments of the UV map representation 105 described herein embed both geometry and texture/color/other visual characteristics. In one example embodiment, the UV map-geometry 109 encoding the geometry of the subject contains three channels. The first two channels embed the coarse shape representation while the third channel embeds the depth representation. In addition, in a texturing use case, the UV map-color 107 (or UV map-visual 107) encodes the true three-channel pixel values corresponding to the object or human body in the standard map 117. It is noted that the numbers and types of channels described above are provided by way of illustration and not as limitations. It is contemplated that more or fewer channels can be used to represent the geometry or visual characteristics of a subject depending on the types of geometries/coordinate systems used and/or the types of visual characteristics being used. For example, three color channels can be used to represent a full-color texture pixel, while a single channel can be used to represent the surface or bump height of a pixel.
[0040] Finally, there is a mapping between the UV map-color/map-visual 107 and the original 3D vertices so that the resulting UV map-color/map-visual 107 can be applied to fully texture a human or object model. The advantages of using the UV map representation 105 rather than the standard map representation 117 include, but are not limited to: (i) it allows generation of realistic high-resolution textures regardless of the resolution of the input color image; and (ii) it allows the system 100 to simplify the problem of human texture generation to "2D-2D" space rather than "3D-2D" space. This is possible thanks to the representation that maps the partial visibility of the input image to a full visibility on the 2D plane represented as the UV map 105. In one example embodiment, the network 111 is trained to minimize a weighted loss on the UV map 105 and maximize similarity between ground truth UV maps and estimated UV maps. In one example embodiment, during inference, the inputs are the standard map color image 101 and depth image 103, which are used to predict the UV map outputs. The depth image 103 and UV map-geometry 109 help the network 111 to capture global human or object shape and better understand the body or object features while eliminating the influence of background variation. The embodiments described herein are also fast to compute (e.g., enabling higher frame rate textured 3D animation) using only coarse estimates of the input 3D shapes and pose parameters.
[0041] In one example embodiment, the system 100 includes one or more components that can perform the various example embodiments of providing 3D textures using UV map representations. For example, a UE 119 can include a texture client 121 to generate 3D textures according to the embodiments described herein. In addition or alternatively, the system 100 can include a texture platform 123 including a machine learning system 125 to generate 3D textures according to the embodiments described herein, alone or in combination with the UE 119 and/or texture client 121, for instance, over a communication network 133. The above presented modules and components of the system 100 can be implemented in circuitry, hardware, firmware, software, or a combination thereof. It is contemplated that the functions of these components may be combined or performed by other components of equivalent functionality. Though depicted as separate entities in FIG. 1, it is contemplated that the texture client 121 and/or texture platform 123 may be implemented as a module of any other components of the system 100 such as, but not limited to, a services platform 127, one or more services 129a-129n of the services platform 127, and/or content providers 131a-131m that use the UV map representation 105 outputs. In another example embodiment, the texture client 121 and/or texture platform 123 may be implemented as a cloud-based service, local service, native application, or combination thereof. The functions of these modules are discussed with respect to FIGs. 3-9 below.
[0042] FIG. 5 is a flowchart of a process for training a machine learning algorithm to estimate UV map representations from input images, according to one example embodiment. In various embodiments, the texture client 121 and/or texture platform 123 may perform one or more portions of the process 500 and may be implemented in, for instance, a chip set including a processor and a memory as shown in FIG. 9. As such, texture client 121 and/or texture platform 123 can provide means for accomplishing various parts of the process 500, as well as means for accomplishing embodiments of other processes described herein in conjunction with other components of the system 100. Although the process 500 is illustrated and described as a sequence of steps, it is contemplated that various embodiments of the process 500 may be performed in any order or combination and need not include all of the illustrated steps.
[0043] In step 501, the system 100 receives at least one input image (e.g., color image 101) depicting a subject of interest (e.g., the human or object to be modeled and textured). In one embodiment, the inputs to the network 111 are the standard map representation 117 inputs: e.g., color images 101 and/or depth images 103. The output of the network 111 is the UV map representation 105 including, e.g., the UV map-geometry representation 109 and/or the UV map- color/map-visual representation 107. In one example embodiment, during inference, the system 100 need not output the UV map-geometry 109 because the UV map-geometry 109 encodes the shape and geometry of the subject to help the network learn better during training.
[0044] In step 503, the system 100 determines at least one depth representation (e.g., a depth image 103) of the at least one input image of step 501. The depth image 103 represents the pixel coordinate values (u, v) and depth (w) of the estimated 3D mesh that has been transformed to align with the input color image 101. It is noted that the U, V, W coordinate system is provided by way of illustration and not as a limitation; it is contemplated that any coordinate system can be used to indicate the depth information. In one embodiment, the transformation of the 3D mesh to the input color image 101 is based on 3D model parameters estimated based on object-specific models. For example, human subjects can be modeled according to SMPL (Skinned Multi-Person Linear Model) or equivalent. SMPL renders the body mesh by calculating a linear function of pose and shape parameters, which enables the optimization of the SMPL model by learning from massive data. In one example embodiment, the UV map-geometry 109 of a human subject can be based on the SMPL parameters estimated from image joint fitting. For instance, the visible faces of the 3D points from a viewing camera of the input image 101 can be encoded in the depth image 103. This representation ensures that the depth image 103 incorporates both shape and geometry information to help the network learn the texture better even if the human or object depth image silhouette does not perfectly align with the human color image silhouette. This misalignment between the depth image 103 and color image 101 can result in the inaccuracies in the textured model described above.
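As a rough sketch of how such a depth image can be derived from a coarse, camera-aligned 3D mesh, consider the following; the pinhole projection and the per-vertex z-buffer splat (a stand-in for full rasterization of the visible faces) are simplifying assumptions for illustration only.

```python
import numpy as np

def render_depth_image(vertices_cam, K, height, width):
    """Splat mesh vertices into an (H, W) depth image with a simple z-buffer.

    vertices_cam: (N, 3) mesh vertices in the camera frame.
    K:            (3, 3) camera intrinsics aligning the mesh with the
                  input color image.
    """
    depth = np.full((height, width), np.inf)
    z = vertices_cam[:, 2]
    zsafe = np.where(z > 0, z, np.inf)  # avoid divide-by-zero for points behind the camera
    # Pinhole projection to pixel coordinates (u, v); keep depth w = z.
    u = (K[0, 0] * vertices_cam[:, 0] / zsafe + K[0, 2]).astype(int)
    v = (K[1, 1] * vertices_cam[:, 1] / zsafe + K[1, 2]).astype(int)
    ok = (z > 0) & (u >= 0) & (u < width) & (v >= 0) & (v < height)
    for ui, vi, zi in zip(u[ok], v[ok], z[ok]):
        depth[vi, ui] = min(depth[vi, ui], zi)  # keep the nearest surface
    depth[np.isinf(depth)] = 0.0  # background pixels carry no depth
    return depth
```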
[0045] In step 505, the system 100 creates a UV map representation of the subject based on the at least one input image and the at least one depth representation. As described above, the UV map representation includes, at least in part, a UV map-geometry representation of a three-dimensional shape of the subject and a UV map-visual representation of at least one visual characteristic of the at least one subject. During training, any ground-truth 3D model can be transformed to its corresponding UV map from the standard map input derived earlier. In addition, the UV map is created by back-projecting the faces of the estimated 3D vertices onto the input image to interpolate the corresponding color appearance.
[0046] For example, the system 100 can denote i = 1 ... W over the image width and j = 1 ... H over the image height, respectively. A pixel on the input depth image 103 can then be represented as P(u, v, w) in the UV map-geometry 109, while the input color image 101 is represented as P(r, g, b) in the UV map-color 107. The combination of these individual UV maps 109 and 107 on both the geometric and color parts is Pij(u, v, w, r, g, b). From this representation, one can see that (r, g, b) encodes the color intensity value from the standard map 117 while (u, v, w) provides the location information in 3D space of each pixel or location i, j.
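To illustrate the combined representation, the sketch below assembles the six channels Pij(u, v, w, r, g, b) on the UV plane from a standard map input. The per-vertex (rather than per-face) back-projection, the in-range texel assumption, and all helper array names are illustrative assumptions, not the exact procedure of the description.

```python
import numpy as np

def build_uv_map(depth_image, color_image, vertex_uv, vertex_pix, visible):
    """Assemble a six-channel UV map P(u, v, w, r, g, b) texel by texel.

    vertex_uv:  (N, 2) integer texel locations of mesh vertices on the
                UV plane (the fixed unwrapping of the body/object mesh).
    vertex_pix: (N, 2) integer pixel locations of the same vertices in
                the input image, from the coarse 3D estimate.
    visible:    (N,) boolean mask of vertices facing the input camera.
    """
    H, W = 256, 256  # UV plane resolution matching the architecture below
    uv_map = np.zeros((H, W, 6), dtype=np.float32)
    img_h, img_w = color_image.shape[:2]
    for k in np.flatnonzero(visible):
        tu, tv = vertex_uv[k]
        px, py = vertex_pix[k]
        # Channels 0-2: coarse shape (u, v) and depth (w).
        uv_map[tv, tu, 0] = px / img_w
        uv_map[tv, tu, 1] = py / img_h
        uv_map[tv, tu, 2] = depth_image[py, px]
        # Channels 3-5: color (r, g, b) taken from the input image.
        uv_map[tv, tu, 3:6] = color_image[py, px] / 255.0
    return uv_map
```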
[0047] In step 507, the system 100 causes a training of a machine learning algorithm or model (e.g., the supervised network 111) to infer the UV map representation, the UV map-geometry representation, UV map-visual representation, or a combination thereof based on the standard representation. For example, the machine learning system 125 of the texture platform 123 can incorporate a supervised learning model (e.g., a logistic regression model, Random Forest model, and/or any equivalent model). During training, the machine learning system 125 can use a learner module that feeds feature sets (e.g., features extracted from the standard map 117 inputs such as the color image 101 and depth image 103) from the training data set into the machine learning model to compute a predicted matching feature (e.g., UV map representations 105) using an initial set of model parameters. The learner module then compares the predicted matching probability and the predicted feature to the ground truth data (e.g., the manually annotated feature labels) in the training data set for each observation (e.g., image) used for training. The learner module then computes an accuracy of the predictions for the initial set of model parameters using one or more loss functions. If the accuracy or level of performance does not meet a threshold or configured level, the learner module incrementally adjusts the model parameters until the model generates predictions at a desired or configured level of accuracy with respect to the manually annotated labels in the training data (e.g., the ground truth data). In other words, a “trained” feature prediction model is a classifier with model parameters adjusted to make accurate predictions with respect to the training data set.
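In PyTorch-style code, a single supervised update of the kind described above could look like the following generic sketch; the function and argument names are illustrative, and loss_fn stands for the weighted multi-task loss discussed below.

```python
import torch

def train_step(network, optimizer, standard_map, uv_map_gt, loss_fn):
    """One supervised update: regress the UV map from the standard map.

    standard_map: (B, 6, 256, 256) batch of color + depth inputs.
    uv_map_gt:    (B, 6, 256, 256) ground-truth UV maps (geometry + color).
    """
    optimizer.zero_grad()
    uv_map_pred = network(standard_map)      # forward pass
    loss = loss_fn(uv_map_pred, uv_map_gt)   # compare to ground truth
    loss.backward()                          # gradients of the loss
    optimizer.step()                         # incrementally adjust parameters
    return loss.item()
```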
[0048] In one example embodiment, the network 111 of the system 100 adopts an encoder-decoder architecture that maps 256 x 256 x 6 standard map 117 inputs to 256 x 256 x 6 UV map 105 outputs. It is noted that the architecture described in this embodiment is provided by way of illustration and not as a limitation. It is contemplated that any other equivalent architecture can be used, including a smaller or larger grid or number of channels. For the encoder, the system 100 can use a neural network (e.g., the first three layers of a VGGNet or equivalent). The number of layers and/or neurons in the network can be determined by balancing performance against speed. In one example, the system 100 can use additional layers, such as but not limited to a designated number (e.g., four) of consecutive upsampling and convolutional layers, for the decoder.
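A minimal sketch of such an encoder-decoder is given below. The channel widths, the exact truncation of the VGG-style encoder, and the decoder layout are assumptions chosen only so that the 256 x 256 x 6 input and output shapes work out; they are not the verified configuration of the description.

```python
import torch
import torch.nn as nn

class UVMapNet(nn.Module):
    """Encoder-decoder mapping a 256x256x6 standard map to a 256x256x6 UV map."""

    def __init__(self):
        super().__init__()

        def down(cin, cout):  # VGG-style double convolution followed by pooling
            return nn.Sequential(
                nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True),
                nn.MaxPool2d(2))

        def up(cin, cout):  # upsampling followed by convolution
            return nn.Sequential(
                nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False),
                nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True))

        self.encoder = nn.Sequential(down(6, 64), down(64, 128), down(128, 256))
        self.decoder = nn.Sequential(up(256, 128), up(128, 64), up(64, 32),
                                     nn.Conv2d(32, 6, 3, padding=1))

    def forward(self, x):                      # x: (B, 6, 256, 256)
        return self.decoder(self.encoder(x))   # -> (B, 6, 256, 256)

# Shape check on random data:
out = UVMapNet()(torch.randn(1, 6, 256, 256))
assert out.shape == (1, 6, 256, 256)
```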
[0049] In one example embodiment, the system 100 can use a weighted multi-task loss that the network 111 tries to minimize on the UV maps 105 using standard maps 117 as inputs. The weighted multi-task loss can apply different loss functions and/or different weights for the loss functions differentially for the individual UV map-geometry 109 and UV map-visual/map-color 107, or for different parts of the subject (e.g., when the subject is a human, for different parts of the body). For example, the system 100 can apply a loss that favors smoothing for the UV map-geometry 109 to provide smoother 3D mesh models, and then apply a loss that favors maintaining high-level detail for the UV map-visual 107 (e.g., to provide for higher detail textures). In addition, the system 100 can employ a local weighting approach on different human body parts or parts of the object/subject of interest present on the UV map 105 for either the UV map-geometry 109 and/or the UV map-visual 107. However, for the UV map-color 107, the system 100 can employ both local and sample weighting strategies. In a human modeling case, the samples where the face is clearly visible can be given less weight compared to the ones where the face is partly occluded in the input image. This helps, for instance, the network 111 to learn more from hard samples (e.g., learn to infer missing parts of occluded faces more accurately). The visibility map (or 3D mesh), e.g., as provided by 3D meshes 405 and 415, can be used to tune the network to learn more for missing parts.
[0050] One example embodiment of the weighted multi-task loss $L_t$ is summarized in equations (1)-(6) below, where $L_g$ is the UV map-geometry loss and $L_c$ is the UV map-color loss:

$$L_t = L_g + L_c \qquad (1)$$
[0051] In one example embodiment, for the UV map-geometry 109, the system 100 can use a weighted L1 loss with a total variation regularizer Lr. Based on this, the overall objective to minimize for the geometry part of the UV map 105 is described in equations (2-4). This minimization is done between the predicted UV map geometry P̂ij(u, v, w) and the ground truth geometry Pij(u, v, w). In one example embodiment, the mask ωij is used to adjust the weight of each 3D point according to the human body or object part to which it belongs in the UV map 105. This can help to ensure that the network 111 does not over-fit to the body/object parts with larger areas relative to smaller parts. In other words, this allows the system 100 to balance the supervision applied to different body/object parts on the UV map 105.
Lg = Lw + λ · Lr (2)
Lw = Σij ωij · |P̂ij(u, v, w) − Pij(u, v, w)| (3)
Lr = Σk αk · Σ(i,j)∈Rk ( |P̂i+1,j − P̂ij| + |P̂i,j+1 − P̂ij| ) (4)
[0052] In one example embodiment, to encourage spatial smoothness of the UV map-geometry 109, a total variation regularizer Lr can be employed. For example, given that Rk defines a human body or object part region on the UV map 105, αk adjusts the smoothing constraints on different body or object parts. In one embodiment, the parameter λ can be set through validation.
[0053] In one example embodiment, for the UV map-visual/map-color 107, the system 100 can use an L1 loss between the predicted UV map color P̂ij(r, g, b) and the ground truth color Pij(r, g, b), as shown in equations (5-6).

Lc = γ · Σij ωij · |P̂ij(r, g, b) − Pij(r, g, b)| (5)

γ is the sample weight used to train the L1 loss, and can be adapted to the visibility mask representing the human face or other body or object part in the input image 101. Accordingly, it is the sample weight corresponding to the visibility of the face or other designated body or object part present in the training sample. To compute the sample weights, the system 100 can use the visibility of the designated part (e.g., the human face) in the input training images, calculated from the normals of the 3D mesh faces (like 3D mesh 405 or 415 in FIG. 4B and 4C) from the camera viewpoints.

γ = f(v) (6)

where f is a decreasing function of the visibility v of the designated part in the training sample, so that samples in which the part is clearly visible receive less weight.
[0054] ω is a mask that is used to adjust the weight of each 3D point according to the body or object part to which it belongs in the UV map 105. In one example embodiment, the mask can give higher weights to the part representing the face or other designated body or object part of interest. In this way, the weights can be defined so that loss optimization focuses more on the face and the body compared to the legs and hands (or any other designated body or object parts).

[0055] In one example embodiment, it is noted that because the input color image 101 of the standard map 117 generally depicts only the part of the subject that is in the line of sight of the camera, the color image 101 will not have data representing any portion of the subject that is occluded (e.g., an image 101 depicting a human facing towards the camera will not show the back of the human). However, the ground truth UV map 105 of the training image 101 will have the full visibility of the subject, depicting all surface textures of the subject on a 2D plane. As a result, the system 100 can also apply an adversarial loss to improve inferences of non-visible portions of the subjects from an initial input image. This adversarial loss can be based on a generative adversarial network (GAN) that includes a generator network and a discriminator network. The generator network is trained to infer missing, occluded, or non-visible portions of the input image 101, while the discriminator network is trained to evaluate the images generated by the generator network to classify whether they are real images or generated images. The adversarial loss can then be used to maximize the photo-realism of the generated image or portions of images by maximizing the discrimination error of the discriminator network (e.g., maximizing the generator network's ability to fool the discriminator network).
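The weighted multi-task loss described in the preceding paragraphs could be sketched, under several assumptions (the channel ordering of the 6-channel UV map, how the part masks and sample weights are supplied, and the illustrative value of λ), roughly as follows:

```python
# Hedged sketch of the weighted multi-task loss; the mask and sample-weight
# tensors, channel ordering, and the total-variation term are illustrative
# assumptions, not the exact formulation of the disclosure.
import torch

def total_variation(x):
    # Encourages spatial smoothness of the predicted UV map-geometry.
    dh = (x[..., 1:, :] - x[..., :-1, :]).abs().mean()
    dw = (x[..., :, 1:] - x[..., :, :-1]).abs().mean()
    return dh + dw

def multi_task_loss(pred, gt, part_mask_geo, part_mask_col, sample_w, lam=0.1):
    # pred, gt: (N, 6, H, W); first 3 channels (u, v, w), last 3 channels (r, g, b).
    pred_geo, pred_col = pred[:, :3], pred[:, 3:]
    gt_geo, gt_col = gt[:, :3], gt[:, 3:]
    # Geometry: per-part weighted L1 plus a total-variation regularizer
    # (lam is an illustrative value; the description sets it through validation).
    l_geo = (part_mask_geo * (pred_geo - gt_geo).abs()).mean() + lam * total_variation(pred_geo)
    # Color: per-part and per-sample weighted L1, so harder (occluded) samples weigh more.
    l_col = (sample_w.view(-1, 1, 1, 1) * part_mask_col * (pred_col - gt_col).abs()).mean()
    return l_geo + l_col
```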
[0056] In one embodiment, to train the network 111, the system 100 can collect a diverse set of images of various subjects (e.g., humans and/or objects). The training set of images can include actual and/or synthetic images. When using synthetic images, the system 100 can assemble a target number of images from a variety of sources. For example, the system 100 can obtain training datasets that are generated using SURREAL datasets, Human3.6M datasets, A36pose datasets (e.g., a proprietary dataset created by the inventors and not generally available to the public), and/or any other equivalent dataset. SURREAL (Synthetic hUmans foR REAL tasks), for instance, is a large-scale synthetic dataset that supports the SMPL model and provides photorealistic images and corresponding texture UV maps 105 with good resolution and complete-visibility UV map-colors 107. The system 100 can create any number of image frames, e.g., approximately 50,000 frames spanning 100 subjects with various clothing, backgrounds, and poses.

[0057] To increase the diversity of training images, the system 100 can add images from more than one source or from a source that provides different features in the images. For example, the system 100 can collect training data from a source such as, but not limited to, A36pose. The A36pose dataset, for instance, was created by the inventors to improve the diversity of the training images and provides images of subjects from non-rigid registration of SMPL to people in clothing. The system can further obtain images of subjects with different visual appearances, features, or identities such as moustache, chunky, bald, hairy, etc. In addition, the system 100 can render or obtain images with different backgrounds, which further increases diversity.
[0058] Other datasets such as Human3.6M can provide imagery with subjects engaged in action sequences (e.g., to assist in training to infer movement or changes in the UV map-geometry 109 and/or UV map-color 107 over time to generate 3D animation videos). Images from this dataset can also be selected for challenging scenarios such as increased inter-occlusion of body parts and/or other objects. In one example embodiment, for the training dataset, where there are occlusions, the system 100 can present the occluded portions of images for manual inpainting to ensure complete UV maps 105.
[0059] FIG. 6 is a diagram illustrating example UV map representations of a training dataset, according to one example embodiment. In the example of FIG. 6, training data 600 includes a set of training images 601 depicting a diverse set of subjects (e.g., humans). The training data 600 also includes ground truth UV maps 603 that include the UV map-geometries 109 (e.g., 3D meshes) of each of the subjects and the UV map-colors 107 aligned to the respective UV map-geometries 109. In one example embodiment, the training images 601 were cropped and scaled to 256x256 (or any other designated resolution) with full visibility of the whole subject (e.g., human body or object). Thereafter, the system 100 processes their SMPL parameters (or equivalent modeling parameters) to create respective depth images and UV map representations as previously described. In one example embodiment, the system 100 can create additional training images with perturbed SMPL parameters so that the depth images do not accurately align with the human body silhouette (e.g., to more accurately reflect the accuracy expected with real-world images). In addition, the images can be randomly translated, rotated, flipped, color jittered, and/or otherwise manipulated. This is not a trivial augmentation procedure, since the corresponding ground-truth UV maps also need to be transformed.
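As a minimal, assumption-laden sketch of how a 256 x 256 x 6 training pair might be assembled from a color image, a depth image, and the corresponding ground-truth UV map-geometry and UV map-color (file names, value ranges, and channel layouts are hypothetical):

```python
# Hedged data-assembly sketch; file names, value ranges, and channel layouts
# are illustrative assumptions.
import numpy as np
from PIL import Image

def load_standard_map(color_path, depth_path, size=256):
    color = np.asarray(Image.open(color_path).convert("RGB").resize((size, size)), np.float32) / 255.0
    depth = np.asarray(Image.open(depth_path).resize((size, size)), np.float32)
    if depth.ndim == 2:                                  # assumed single-channel depth,
        depth = np.repeat(depth[..., None], 3, axis=2)   # replicated to fill 3 channels
    return np.concatenate([color, depth[..., :3]], axis=2)   # H x W x 6 standard map

def load_ground_truth(uv_geometry_path, uv_color_path, size=256):
    # Assumes the ground-truth UV map-geometry and UV map-color are stored as
    # 3-channel images aligned on the same UV layout.
    geo = np.asarray(Image.open(uv_geometry_path).convert("RGB").resize((size, size)), np.float32) / 255.0
    col = np.asarray(Image.open(uv_color_path).convert("RGB").resize((size, size)), np.float32) / 255.0
    return np.concatenate([geo, col], axis=2)            # H x W x 6 ground-truth UV map
```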
[0060] FIG. 7 is a flowchart of a process for using a trained machine learning algorithm to estimate a UV map representation, according to one example embodiment. In various embodiments, the texture client 121 and/or texture platform 123 may perform one or more portions of the process 700 and may be implemented in, for instance, a chip set including a processor and a memory as shown in FIG. 9. As such, texture client 121 and/or texture platform 123 can provide means for accomplishing various parts of the process 700, as well as means for accomplishing embodiments of other processes described herein in conjunction with other components of the system 100. Although the process 700 is illustrated and described as a sequence of steps, it is contemplated that various embodiments of the process 700 may be performed in any order or combination and need not include all of the illustrated steps.
[0061] After the supervised network 111 is trained (e.g., trained according to the embodiments of the process 500 of FIG. 5), the trained network 111 can be instantiated in a device (e.g., the UE 119/texture client 121 and/or the texture platform 123) to begin inferring 3D textures and models. [0062] In step 701, the system 100 can receive an input image or images depicting a subject for which a 3D texture is to be generated. The input image is provided as a standard map representation 117. As described above, the standard map representation 117 is a classical image representation. For example, the color image 101 represents the color of a pixel using standard color representation (e.g., R, G, B representation), and the depth image is a (u, v, w) representation in which u, v are image coordinates and w is the depth from coarsely known 3D points.
[0063] In step 703, the system 100 processes the input image(s) using the trained network 111 (e.g., a trained machine learning algorithm or model) to infer the UV map representation 105 of the standard representation 117. In one embodiment, the UV map representation 105 is an atlas representation in which the UV map-geometry 109 is also expressed as (u, v, w), in which the u, v values encode the shape of the subject (as opposed to the image coordinates of the classical representation) and w encodes the depth (e.g., with reference to a point within the 3D mesh of the subject such as a central point of the 3D mesh). In one example embodiment, the system 100 need not crop the input image to obtain a realistic texture (e.g., a realistic UV map representation 105). In one embodiment, the system 100 can also use coarse depth supervision to avoid background noise from the input image 101.
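A brief, hypothetical inference sketch following steps 701 and 703 is shown below; it reuses the UVMapNet and load_standard_map placeholders sketched earlier, and the checkpoint and image file names are assumptions:

```python
# Hypothetical inference sketch; UVMapNet and load_standard_map are the
# placeholder helpers sketched earlier, and the file names are assumptions.
import torch

model = UVMapNet()
model.load_state_dict(torch.load("trained_uv_net.pt"))       # hypothetical checkpoint
model.eval()

standard_map = load_standard_map("subject_color.png", "subject_depth.png")   # H x W x 6
x = torch.from_numpy(standard_map).permute(2, 0, 1).unsqueeze(0)             # 1 x 6 x 256 x 256
with torch.no_grad():
    uv_map = model(x)[0]                 # 6 x 256 x 256 inferred UV map representation
uv_map_geometry = uv_map[:3]             # (u, v, w) shape channels
uv_map_color = uv_map[3:]                # (r, g, b) texture channels
```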
[0064] In step 705, the system 100 can apply the inferred UV map representation 105 (e.g., the UV map-visual/map-color 107) onto a 3D mesh or model of the subject, and then render the textured 3D representation in a user interface of an application (step 707).
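As one illustrative (and by no means exclusive) way to carry out this step, the inferred UV map-color can be saved as a texture image and referenced from a Wavefront OBJ/MTL pair that standard renderers can display; all names below are hypothetical:

```python
# Hedged sketch: write a textured OBJ/MTL pair so that a standard renderer can
# display the mesh with the inferred UV map-color as its texture.
def write_textured_obj(path, vertices, uvs, faces, texture_png="uv_map_color.png"):
    # vertices: list of (x, y, z); uvs: list of (u, v); faces: list of
    # ((v1, vt1), (v2, vt2), (v3, vt3)) triangles with 0-based indices.
    mtl_path = path.replace(".obj", ".mtl")
    with open(mtl_path, "w") as m:
        m.write("newmtl textured\nmap_Kd %s\n" % texture_png)
    with open(path, "w") as f:
        f.write("mtllib %s\n" % mtl_path)
        for x, y, z in vertices:
            f.write("v %f %f %f\n" % (x, y, z))
        for u, v in uvs:
            f.write("vt %f %f\n" % (u, v))
        f.write("usemtl textured\n")
        for (a, ta), (b, tb), (c, tc) in faces:
            f.write("f %d/%d %d/%d %d/%d\n" % (a + 1, ta + 1, b + 1, tb + 1, c + 1, tc + 1))
```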
[0065] The instantiated and trained network 111 is relatively lightweight in terms of computer resource requirements, thereby enabling its use in more resource-restricted devices (e.g., the UE 119). In addition, the trained network 111 can enable real-time animation of textured 3D models in a variety of use cases (e.g., augmented reality, video conferencing, gaming, etc.). In addition, the use of the UV map representation 105 enables higher resolution output on the UV map 105 than the input image 101 provides (e.g., when using low resolution imagery or when the subject is far from the camera). This effectively enables predictive upscaling from the standard map 117 to the UV map 105. The use of the UV map representation 105 advantageously simplifies the search space of the texture generation problem to a transformation from 2D low resolution inputs (e.g., single or multiple images) to higher resolution 2D outputs (e.g., the UV map representation 105). In contrast, without the UV map representation according to the embodiments described herein, the problem would involve the more complex transformation of the 2D low resolution inputs to a 3D resolution space and vice versa.
[0066] Returning to FIG. 1, the system 100 includes the texture client 121 of the UE 119 and/or the texture platform 123 for providing 3D texture generation using UV map representations according to the various embodiments described herein. In some use cases, the system 100 can include a computer vision system (e.g., associated with the UE 119) configured to use machine learning to detect subjects (e.g., humans and/or objects) depicted in images for generating 3D textures according to the embodiments described herein. In one embodiment, the texture platform 123 and/or texture client 121 includes a machine learning system 125 that is used to train and/or use the supervised network 111. The supervised network 111, for instance, can be a neural network or other equivalent machine learning model (e.g., Support Vector Machines, Random Forest, etc.). In one embodiment, the neural network of the machine learning system 125 is a traditional convolutional neural network which consists of multiple layers of collections of one or more neurons.
[0067] In one embodiment, the texture client 121 and/or texture platform 123 have connectivity over a communication network 133 to the services platform 127 that provides one or more services 129. By way of example, the services 129 may be third party services and include mapping services, navigation services, travel planning services, notification services, social networking services, content (e.g., audio, video, images, etc.) provisioning services, application services, storage services, contextual information determination services, location based services, information based services (e.g., weather, news, etc.), etc. In one embodiment, the services 129 use the output of texture client 121 and/or texture platform 123 to perform one or more functions or operations.
[0068] In one embodiment, the texture client 121 and/or texture platform 123 may be a platform with multiple interconnected components. The texture client 121 and/or texture platform 123 may include multiple servers, intelligent networking devices, computing devices, components, and corresponding software for providing 3D texture generation using UV map representations. In addition, it is noted that the texture client 121 and/or texture platform 123 may be a separate entity of the system 100, a part of the one or more services 129, a part of the services platform 127, or included within the UE 119.
[0069] In one embodiment, content providers 131a-131m (also referred to as content providers 131) may provide content or data (e.g., including image data, training data, textures, etc.) to the texture client 121 and/or texture platform 123. The content provided may be any type of content, such as text content, audio content, video content, image content, etc. In one embodiment, the content providers may provide content that may aid in generating 3D textures using a UV map representation according to the embodiments described herein. In one embodiment, the content providers may also store content (e.g., textures, training data, trained machine learning models, 3D mesh data, etc.) used or generated by the texture client 121 and/or texture platform 123. In another embodiment, the content providers may manage access to a central repository of data, and offer a consistent, standard interface to data.
[0070] By way of example, the UE 119 is any type of mobile terminal, fixed terminal, or portable terminal including a built-in navigation system, a personal navigation device, mobile handset, station, unit, device, multimedia computer, multimedia tablet, Internet node, communicator, desktop computer, laptop computer, notebook computer, netbook computer, tablet computer, personal communication system (PCS) device, personal digital assistants (PDAs), audio/video player, digital camera/camcorder, positioning device, fitness device, television receiver, radio broadcast receiver, electronic book device, game device, or any combination thereof, including the accessories and peripherals of these devices, or any combination thereof. It is also contemplated that the UE 119 can support any type of interface to the user (such as “wearable” circuitry, etc.).
[0071] In one embodiment, the communication network 133 of system 100 includes one or more networks such as a data network, a wireless network, a telephony network, or any combination thereof. It is contemplated that the data network may be any local area network (LAN), metropolitan area network (MAN), wide area network (WAN), a public data network (e.g., the Internet), short range wireless network, or any other suitable packet-switched network, such as a commercially owned, proprietary packet- switched network, e.g., a proprietary cable or fiber optic network, and the like, or any combination thereof. In addition, the wireless network may be, for example, a cellular network and may employ various technologies including enhanced data rates for global evolution (EDGE), general packet radio service (GPRS), global system for mobile communications (GSM), Internet protocol multimedia subsystem (IMS), universal mobile telecommunications system (UMTS), etc., as well as any other suitable wireless medium, e.g., worldwide interoperability for microwave access (WiMAX), Long Term Evolution (LTE) networks, code division multiple access (CDMA), wideband code division multiple access (WCDMA), wireless fidelity (Wi-Fi), wireless LAN (WLAN), Bluetooth®, Internet Protocol (IP) data casting, satellite, mobile ad-hoc network (MANET), 5G New Radio, cloud Radio Access Network (RAN), and the like, or any combination thereof.
[0072] By way of example, the texture client 121 and/or texture platform 123 communicate with each other and other components of the system 100 using well known, new or still developing protocols. In this context, a protocol includes a set of rules defining how the network nodes within the communication network 133 interact with each other based on information sent over the communication links. The protocols are effective at different layers of operation within each node, from generating and receiving physical signals of various types, to selecting a link for transferring those signals, to the format of information indicated by those signals, to identifying which software application executing on a computer system sends or receives the information. The conceptually different layers of protocols for exchanging information over a network are described in the Open Systems Interconnection (OSI) Reference Model.
[0073] Communications between the network nodes are typically effected by exchanging discrete packets of data. Each packet typically comprises (1) header information associated with a particular protocol, and (2) payload information that follows the header information and contains information that may be processed independently of that particular protocol. In some protocols, the packet includes (3) trailer information following the payload and indicating the end of the payload information. The header includes information such as the source of the packet, its destination, the length of the payload, and other properties used by the protocol. Often, the data in the payload for the particular protocol includes a header and payload for a different protocol associated with a different, higher layer of the OSI Reference Model. The header for a particular protocol typically indicates a type for the next protocol contained in its payload. The higher layer protocol is said to be encapsulated in the lower layer protocol. The headers included in a packet traversing multiple heterogeneous networks, such as the Internet, typically include a physical (layer 1) header, a data-link (layer 2) header, an internetwork (layer 3) header and a transport (layer 4) header, and various application (layer 5, layer 6 and layer 7) headers as defined by the OSI Reference Model. [0074] The processes described herein for providing 3D texture generation using UV map representations may be advantageously implemented via circuitry, software, hardware (e.g., general processor, Digital Signal Processing (DSP) chip, an Application Specific Integrated Circuit (ASIC), Field Programmable Gate Arrays (FPGAs), etc.), firmware or a combination thereof. As used in this application, the term “circuitry” may refer to one or more or all of the following:
(a) hardware-only circuit implementations (such as implementations in only analog and/or digital circuitry) and
(b) combinations of hardware circuits and software, such as (as applicable):
(i) a combination of analog and/or digital hardware circuit(s) with software/firmware and
(ii) any portions of hardware processor(s) with software (including digital signal processor(s)), software, and memory(ies) that work together to cause an apparatus, such as a mobile phone or server, to perform various functions) and
(c) hardware circuit(s) and/or processor(s), such as a microprocessor(s) or a portion of a microprocessor(s), that requires software (e.g., firmware) for operation, but the software may not be present when it is not needed for operation.
[0075] This definition of circuitry applies to all uses of this term in this application, including in any claims. As a further example, as used in this application, the term circuitry also covers an implementation of merely a hardware circuit or processor (or multiple processors) or portion of a hardware circuit or processor and its (or their) accompanying software and/or firmware. The term circuitry also covers, for example and if applicable to the particular claim element, a baseband integrated circuit or processor integrated circuit for a mobile device or a similar integrated circuit in server, a cellular network device, or other computing or network device. Such exemplary hardware for performing the described functions is detailed below. [0076] FIG. 8 illustrates a computer system 800 upon which an embodiment of the invention may be implemented. Computer system 800 is programmed (e.g., via computer program code or instructions) to provide 3D texture generation using UV map representations as described herein and includes a communication mechanism such as a bus 810 for passing information between other internal and external components of the computer system 800. Information (also called data) is represented as a physical expression of a measurable phenomenon, typically electric voltages, but including, in other embodiments, such phenomena as magnetic, electromagnetic, pressure, chemical, biological, molecular, atomic, sub-atomic and quantum interactions. For example, north and south magnetic fields, or a zero and non-zero electric voltage, represent two states (0, 1) of a binary digit (bit). Other phenomena can represent digits of a higher base. A superposition of multiple simultaneous quantum states before measurement represents a quantum bit (qubit). A sequence of one or more digits constitutes digital data that is used to represent a number or code for a character. In some embodiments, information called analog data is represented by a near continuum of measurable values within a particular range.
[0077] A bus 810 includes one or more parallel conductors of information so that information is transferred quickly among devices coupled to the bus 810. One or more processors 802 for processing information are coupled with the bus 810.
[0078] A processor 802 performs a set of operations on information as specified by computer program code related to providing 3D texture generation using UV map representations. The computer program code is a set of instructions or statements providing instructions for the operation of the processor and/or the computer system to perform specified functions. The code, for example, may be written in a computer programming language that is compiled into a native instruction set of the processor. The code may also be written directly using the native instruction set (e.g., machine language). The set of operations include bringing information in from the bus 810 and placing information on the bus 810. The set of operations also typically include comparing two or more units of information, shifting positions of units of information, and combining two or more units of information, such as by addition or multiplication or logical operations like OR, exclusive OR (XOR), and AND. Each operation of the set of operations that can be performed by the processor is represented to the processor by information called instructions, such as an operation code of one or more digits. A sequence of operations to be executed by the processor 802, such as a sequence of operation codes, constitute processor instructions, also called computer system instructions or, simply, computer instructions. Processors may be implemented as mechanical, electrical, magnetic, optical, chemical or quantum components, among others, alone or in combination.
[0079] Computer system 800 also includes a memory 804 coupled to bus 810. The memory 804, such as a random access memory (RAM) or other dynamic storage device, stores information including processor instructions for providing 3D texture generation using UV map representations. Dynamic memory allows information stored therein to be changed by the computer system 800. RAM allows a unit of information stored at a location called a memory address to be stored and retrieved independently of information at neighboring addresses. The memory 804 is also used by the processor 802 to store temporary values during execution of processor instructions. The computer system 800 also includes a read only memory (ROM) 806 or other static storage device coupled to the bus 810 for storing static information, including instructions, that is not changed by the computer system 800. Some memory is composed of volatile storage that loses the information stored thereon when power is lost. Also coupled to bus 810 is a non-volatile (persistent) storage device 808, such as a magnetic disk, optical disk or flash card, for storing information, including instructions, that persists even when the computer system 800 is turned off or otherwise loses power.
[0080] Information, including instructions for providing 3D texture generation using UV map representations, is provided to the bus 810 for use by the processor from an external input device 812, such as a keyboard containing alphanumeric keys operated by a human user, or a sensor. A sensor detects conditions in its vicinity and transforms those detections into physical expression compatible with the measurable phenomenon used to represent information in computer system 800. Other external devices coupled to bus 810, used primarily for interacting with humans, include a display device 814, such as a cathode ray tube (CRT) or a liquid crystal display (LCD), or plasma screen or printer for presenting text or images, and a pointing device 816, such as a mouse or a trackball or cursor direction keys, or motion sensor, for controlling a position of a small cursor image presented on the display 814 and issuing commands associated with graphical elements presented on the display 814. In some embodiments, for example, in embodiments in which the computer system 800 performs all functions automatically without human input, one or more of external input device 812, display device 814 and pointing device 816 is omitted.
[0081] In the illustrated embodiment, special purpose hardware, such as an application specific integrated circuit (ASIC) 820, is coupled to bus 810. The special purpose hardware is configured to perform operations not performed by processor 802 quickly enough for special purposes. Examples of application specific ICs include graphics accelerator cards for generating images for display 814, cryptographic boards for encrypting and decrypting messages sent over a network, speech recognition, and interfaces to special external devices, such as robotic arms and medical scanning equipment that repeatedly perform some complex sequence of operations that are more efficiently implemented in hardware.
[0082] Computer system 800 also includes one or more instances of a communications interface 870 coupled to bus 810. Communication interface 870 provides a one-way or two-way communication coupling to a variety of external devices that operate with their own processors, such as printers, scanners and external disks. In general the coupling is with a network link 878 that is connected to a local network 880 to which a variety of external devices with their own processors are connected. For example, communication interface 870 may be a parallel port or a serial port or a universal serial bus (USB) port on a personal computer. In some embodiments, communications interface 870 is an integrated services digital network (ISDN) card or a digital subscriber line (DSL) card or a telephone modem that provides an information communication connection to a corresponding type of telephone line. In some embodiments, a communication interface 870 is a cable modem that converts signals on bus 810 into signals for a communication connection over a coaxial cable or into optical signals for a communication connection over a fiber optic cable. As another example, communications interface 870 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN, such as Ethernet. Wireless links may also be implemented. For wireless links, the communications interface 870 sends or receives or both sends and receives electrical, acoustic or electromagnetic signals, including infrared and optical signals, that carry information streams, such as digital data. For example, in wireless handheld devices, such as mobile telephones like cell phones, the communications interface 870 includes a radio band electromagnetic transmitter and receiver called a radio transceiver. In certain embodiments, the communications interface 870 enables connection to the communication network 133 for providing 3D texture generation using UV map representations.
[0083] The term computer-readable medium is used herein to refer to any medium that participates in providing information to processor 802, including instructions (e.g., computer program instructions) for execution. For example, the instructions can cause an apparatus (e.g., processor, computer, device, etc.) to perform one or more steps, functions, operations, etc. specified in the instructions or computer program instructions. Accordingly, a computer program may comprise instructions for causing an apparatus to perform at least any of the steps, functions, operations, etc. specified in the instructions. Similarly, a computer-readable medium (e.g., transitory or non-transitory) may comprise instructions (e.g., program instructions, computer program instructions, or equivalent) for causing an apparatus to perform any of the specified steps, functions, operations, etc. Such a medium may take many forms, including, but not limited to, non-volatile or non-transitory media, volatile or transitory media, and transmission media. Non-volatile media include, for example, optical or magnetic disks, such as storage device 808. Volatile media include, for example, dynamic memory 804. Transmission media include, for example, coaxial cables, copper wire, fiber optic cables, and carrier waves that travel through space without wires or cables, such as acoustic waves and electromagnetic waves, including radio, optical and infrared waves. Signals include man-made transient variations in amplitude, frequency, phase, polarization or other physical properties transmitted through the transmission media. Common forms of computer-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, any other magnetic medium, a CD-ROM, CDRW, DVD, any other optical medium, punch cards, paper tape, optical mark sheets, any other physical medium with patterns of holes or other optically recognizable indicia, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave, or any other medium from which a computer can read.
[0084] Network link 878 typically provides information communication using transmission media through one or more networks to other devices that use or process the information. For example, network link 878 may provide a connection through local network 880 to a host computer 882 or to equipment 884 operated by an Internet Service Provider (ISP). ISP equipment 884 in turn provides data communication services through the public, world-wide packet-switching communication network of networks now commonly referred to as the Internet 890.
[0085] A computer called a server host 892 connected to the Internet hosts a process that provides a service in response to information received over the Internet. For example, server host 892 hosts a process that provides information representing video data for presentation at display 814. It is contemplated that the components of the system can be deployed in various configurations within other computer systems, e.g., host 882 and server 892.
[0086] FIG. 9 illustrates a chip set 900 upon which an embodiment of the invention may be implemented. Chip set 900 is programmed to provide 3D texture generation using UV map representations as described herein and includes, for instance, the processor and memory components described with respect to FIG. 8 incorporated in one or more physical packages (e.g., chips). By way of example, a physical package includes an arrangement of one or more materials, components, and/or wires on a structural assembly (e.g., a baseboard) to provide one or more characteristics such as physical strength, conservation of size, and/or limitation of electrical interaction. It is contemplated that in certain embodiments the chip set can be implemented in a single chip.
[0087] In one embodiment, the chip set 900 includes a communication mechanism such as a bus 901 for passing information among the components of the chip set 900. A processor 903 has connectivity to the bus 901 to execute instructions and process information stored in, for example, a memory 905. The processor 903 may include one or more processing cores with each core configured to perform independently. A multi-core processor enables multiprocessing within a single physical package. Examples of a multi-core processor include two, four, eight, or greater numbers of processing cores. Alternatively or in addition, the processor 903 may include one or more microprocessors configured in tandem via the bus 901 to enable independent execution of instructions, pipelining, and multithreading. The processor 903 may also be accompanied with one or more specialized components to perform certain processing functions and tasks such as one or more digital signal processors (DSP) 907, or one or more application-specific integrated circuits (ASIC) 909. A DSP 907 typically is configured to process real-world signals (e.g., sound) in real time independently of the processor 903. Similarly, an ASIC 909 can be configured to perform specialized functions not easily performed by a general purpose processor. Other specialized components to aid in performing the inventive functions described herein include one or more field programmable gate arrays (FPGA) (not shown), one or more controllers (not shown), or one or more other special-purpose computer chips.
[0088] The processor 903 and accompanying components have connectivity to the memory 905 via the bus 901. The memory 905 includes both dynamic memory (e.g., RAM, magnetic disk, writable optical disk, etc.) and static memory (e.g., ROM, CD-ROM, etc.) for storing executable instructions that when executed perform the inventive steps described herein to provide 3D texture generation using UV map representations. The memory 905 also stores the data associated with or generated by the execution of the inventive steps.
[0089] FIG. 10 is a diagram of exemplary components of a mobile terminal (e.g., handset) capable of operating in the system of FIG. 1, according to one embodiment. Generally, a radio receiver is often defined in terms of front-end and back-end characteristics. The front-end of the receiver encompasses all of the Radio Frequency (RF) circuitry whereas the back-end encompasses all of the base-band processing circuitry. Pertinent internal components of the telephone include a Main Control Unit (MCU) 1003, a Digital Signal Processor (DSP) 1005, and a receiver/transmitter unit including a microphone gain control unit and a speaker gain control unit. A main display unit 1007 provides a display to the user in support of various applications and mobile station functions that offer automatic contact matching. An audio function circuitry 1009 includes a microphone 1011 and microphone amplifier that amplifies the speech signal output from the microphone 1011. The amplified speech signal output from the microphone 1011 is fed to a coder/decoder (CODEC) 1013.
[0090] A radio section 1015 amplifies power and converts frequency in order to communicate with a base station, which is included in a mobile communication system, via antenna 1017. The power amplifier (PA) 1019 and the transmitter/modulation circuitry are operationally responsive to the MCU 1003, with an output from the PA 1019 coupled to the duplexer 1021 or circulator or antenna switch, as known in the art. The PA 1019 also couples to a battery interface and power control unit 1020.
[0091] In use, a user of mobile station 1001 speaks into the microphone 1011 and his or her voice along with any detected background noise is converted into an analog voltage. The analog voltage is then converted into a digital signal through the Analog to Digital Converter (ADC) 1023. The control unit 1003 routes the digital signal into the DSP 1005 for processing therein, such as speech encoding, channel encoding, encrypting, and interleaving. In one embodiment, the processed voice signals are encoded, by units not separately shown, using a cellular transmission protocol such as global evolution (EDGE), general packet radio service (GPRS), global system for mobile communications (GSM), Internet protocol multimedia subsystem (IMS), universal mobile telecommunications system (UMTS), etc., as well as any other suitable wireless medium, e.g., microwave access (WiMAX), Long Term Evolution (LTE) networks, code division multiple access (CDMA), wireless fidelity (WiFi), satellite, and the like.
[0092] The encoded signals are then routed to an equalizer 1025 for compensation of any frequency-dependent impairments that occur during transmission through the air such as phase and amplitude distortion. After equalizing the bit stream, the modulator 1027 combines the signal with an RF signal generated in the RF interface 1029. The modulator 1027 generates a sine wave by way of frequency or phase modulation. In order to prepare the signal for transmission, an up-converter 1031 combines the sine wave output from the modulator 1027 with another sine wave generated by a synthesizer 1033 to achieve the desired frequency of transmission. The signal is then sent through a PA 1019 to increase the signal to an appropriate power level. In practical systems, the PA 1019 acts as a variable gain amplifier whose gain is controlled by the DSP 1005 from information received from a network base station. The signal is then filtered within the duplexer 1021 and optionally sent to an antenna coupler 1035 to match impedances to provide maximum power transfer. Finally, the signal is transmitted via antenna 1017 to a local base station. An automatic gain control (AGC) can be supplied to control the gain of the final stages of the receiver. The signals may be forwarded from there to a remote telephone which may be another cellular telephone, other mobile phone or a land-line connected to a Public Switched Telephone Network (PSTN), or other telephony networks.
[0093] Voice signals transmitted to the mobile station 1001 are received via antenna 1017 and immediately amplified by a low noise amplifier (LNA) 1037. A down-converter 1039 lowers the carrier frequency while the demodulator 1041 strips away the RF leaving only a digital bit stream. The signal then goes through the equalizer 1025 and is processed by the DSP 1005. A Digital to Analog Converter (DAC) 1043 converts the signal and the resulting output is transmitted to the user through the speaker 1045, all under control of a Main Control Unit (MCU) 1003-which can be implemented as a Central Processing Unit (CPU) (not shown).
[0094] The MCU 1003 receives various signals including input signals from the keyboard 1047. The keyboard 1047 and/or the MCU 1003 in combination with other user input components (e.g., the microphone 1011) comprise a user interface circuitry for managing user input. The MCU 1003 runs a user interface software to facilitate user control of at least some functions of the mobile station 1001 to provide 3D texture generation using UV map representations. The MCU 1003 also delivers a display command and a switch command to the display 1007 and to the speech output switching controller, respectively. Further, the MCU 1003 exchanges information with the DSP 1005 and can access an optionally incorporated SIM card 1049 and a memory 1051. In addition, the MCU 1003 executes various control functions required of the station. The DSP 1005 may, depending upon the implementation, perform any of a variety of conventional digital processing functions on the voice signals. Additionally, DSP 1005 determines the background noise level of the local environment from the signals detected by microphone 1011 and sets the gain of microphone 1011 to a level selected to compensate for the natural tendency of the user of the mobile station 1001.
[0095] The CODEC 1013 includes the ADC 1023 and DAC 1043. The memory 1051 stores various data including call incoming tone data and is capable of storing other data including music data received via, e.g., the global Internet. The software module could reside in RAM memory, flash memory, registers, or any other form of writable computer-readable storage medium known in the art including non-transitory computer-readable storage medium. For example, the memory device 1051 may be, but not limited to, a single memory, CD, DVD, ROM, RAM, EEPROM, optical storage, or any other non-volatile or non-transitory storage medium capable of storing digital data.
[0096] An optionally incorporated SIM card 1049 carries, for instance, important information, such as the cellular phone number, the carrier supplying service, subscription details, and security information. The SIM card 1049 serves primarily to identify the mobile station 1001 on a radio network. The card 1049 also contains a memory for storing a personal telephone number registry, text messages, and user specific mobile station settings.
[0097] While the invention has been described in connection with a number of embodiments and implementations, the invention is not so limited but covers various obvious modifications and equivalent arrangements, which fall within the purview of the appended claims. Although features of the invention are expressed in certain combinations among the claims, it is contemplated that these features can be arranged in any combination and order.

Claims

WHAT IS CLAIMED IS:
1. An apparatus comprising: at least one processor; and at least one memory including computer program code for one or more programs, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following, receive at least one input image depicting a subject, wherein the at least one image comprises a standard representation of the subject; determine at least one depth representation of the at least one input image; cause, at least in part, a creation of a UV map representation of the subject based on the at least one input image and the at least one depth representation, wherein the UV map representation includes, at least in part, a UV map-geometry representation of a three-dimensional shape of the subject and a UV map-visual representation of at least one visual characteristic of the at least one subject; and cause, at least in part, a training of a machine learning algorithm to infer the UV map representation, the UV map-geometry, the UV map-visual representation, or a combination thereof based on the standard representation.
2. The apparatus of claim 1, wherein the apparatus is further caused to: process, using the trained machine learning algorithm, at least one other input image depicting the subject or another subject to infer an estimated UV map-visual representation; and cause, at least in part, a texturing of a three-dimensional representation of the subject using the estimated UV map-visual representation.
3. The apparatus of claim 1, wherein the machine learning algorithm is further trained to infer a movement or a pose of the subject over a time period.
4. The apparatus of claim 3, wherein the apparatus is further caused to: process, using the trained machine learning algorithm, at least one other input image depicting the subject or another subject to infer an estimated UV map-visual representation of the movement or the pose over the time period; and cause, at least in part, a rendering of a video based on the movement or the pose over the time period.
5. The apparatus of claim 1, wherein the three-dimensional shape of the UV map-geometry is defined based, at least in part, on a first channel representing a U position on a U-axis, a second channel representing a V position on a V-axis, and a third channel representing a depth with respect to a reference point for one or more pixels of the at least one image associated with the three-dimensional shape.
6. The apparatus of claim 1, wherein the at least one visual characteristic includes, at least in part, a texture of the subject, and wherein the UV map-visual representation is a UV map-color representation.
7. The apparatus of claim 6, wherein the UV map-color representation is defined based, at least in part, on a first channel representing a red color intensity, a second channel representing a blue color intensity, and a third channel representing a green color intensity for one or more pixels of the at least one image associated with the texture of the subject.
8. The apparatus of claim 1, wherein the at least one input image depicts a partial visibility of the subject, and wherein the UV map representation depicts a full visibility of the subject mapped onto a two-dimensional plane.
9. The apparatus of claim 8, wherein the inferring of the UV map-geometry representation, the UV map-visual representation, or a combination thereof includes, at least in part, inferring the full visibility of the subject from the partial visibility of the subject.
10. The apparatus of claim 9, wherein the full visibility of the subject is inferred based, at least in part, on one or more generative adversarial networks (GANs).
11. The apparatus of claim 1, wherein the creation of the UV map representation is based, at least in part, on a back-projection of a subject visibility onto the at least one depth representation, the three-dimensional shape of the subject, or a combination thereof.
12. The apparatus of claim 1, wherein the training of the machine learning algorithm includes, at least in part, minimizing a weighted loss of the UV map representation, a maximizing of a similarity between a ground-truth UV map-representation and an estimated UV map-representation.
13. The apparatus of claim 1, wherein the training of the machine learning algorithm is based on a weighted multi-task loss, and wherein the weighted multi-task loss can apply different loss functions to the UV map-geometry representation, the UV map-visual representation, different parts of the subject, or a combination thereof.
14. The apparatus of claim 1, wherein the training of the machine learning algorithm is based on a face-identity loss.
15. A method comprising: receiving at least one input image depicting a subject; determining at least one depth representation of the at least one input image, wherein the at least one image comprises a standard representation of the subject; causing, at least in part, a creation of a UV map representation of the subject based on the at least one input image and the at least one depth representation, wherein the UV map representation includes, at least in part, a UV map-geometry representation of a three- dimensional shape of the subject and a UV map-visual representation of at least one visual characteristic of the at least one subject; and causing, at least in part, a training of a machine learning algorithm to infer the UV map representation, the UV map-geometry, the UV map-visual representation, or a combination thereof based on the standard representation.
16. The method of claim 15, further comprising: processing, using the trained machine learning algorithm, at least one other input image depicting the subject or another subject to infer an estimated UV map-visual representation; and causing, at least in part, a texturing of a three-dimensional representation of the subject using the estimated UV map-visual representation.
17. The method of claim 15, wherein the three-dimensional shape of the UV map-geometry is defined based, at least in part, on a first channel representing a U position on a U-axis, a second channel representing a V position on a V-axis, and a third channel representing a depth with respect to a reference point for one or more pixels of the at least one image associated with the three-dimensional shape.
18. A non-transitory computer-readable storage medium comprising program instructions for causing an apparatus to perform at least the following: receiving at least one input image depicting a subject, wherein the at least one image comprises a standard representation of the subject; determining at least one depth representation of the at least one input image; causing, at least in part, a creation of a UV map representation of the subject based on the at least one input image and the at least one depth representation, wherein the UV map representation includes, at least in part, a UV map-geometry representation of a three- dimensional shape of the subject and a UV map-visual representation of at least one visual characteristic of the at least one subject; and causing, at least in part, a training of a machine learning algorithm to infer the UV map representation, the UV map-geometry, the UV map-visual representation, or a combination thereof based on the standard representation.
19. The non-transitory computer-readable storage medium of claim 18, wherein the apparatus is caused to further perform: processing, using the trained machine learning algorithm, at least one other input image depicting the subject or another subject to infer an estimated UV map-visual representation; and causing, at least in part, a texturing of a three-dimensional representation of the subject using the estimated UV map-visual representation.
20. The non-transitory computer-readable storage medium of claim 18, wherein the three- dimensional shape of the UV map-geometry is defined based, at least in part, on a first channel representing a U position on a U-axis, a second channel representing a V position on a V-axis, and a third channel representing a depth with respect to a reference point for one or more pixels of the at least one image associated with the three-dimensional shape.
PCT/US2021/019036 2020-02-28 2021-02-22 Apparatus, method, and system for providing a three-dimensional texture using uv representation Ceased WO2021173489A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202062983171P 2020-02-28 2020-02-28
US62/983,171 2020-02-28

Publications (1)

Publication Number Publication Date
WO2021173489A1 true WO2021173489A1 (en) 2021-09-02

Family

ID=77491916

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2021/019036 Ceased WO2021173489A1 (en) 2020-02-28 2021-02-22 Apparatus, method, and system for providing a three-dimensional texture using uv representation

Country Status (1)

Country Link
WO (1) WO2021173489A1 (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160027200A1 (en) * 2014-07-28 2016-01-28 Adobe Systems Incorporated Automatically determining correspondences between three-dimensional models
US20190050981A1 (en) * 2017-08-09 2019-02-14 Shenzhen Keya Medical Technology Corporation System and method for automatically detecting a target object from a 3d image
US20190108396A1 (en) * 2017-10-11 2019-04-11 Aquifi, Inc. Systems and methods for object identification

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
BARSOUM EMAD, KENDER JOHN, LIU ZICHENG: "HP-GAN: Probabilistic 3D human motion prediction via GAN", PROCEEDINGS OF THE IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR) WORKSHOPS, 2018, pages 1531 - 1540, XP033475490, Retrieved from the Internet <URL:https://openaccess.thecvf.com/content_cvpr_2018_workshops/papers/w29/Barsoum_HP-GAN_Probabilistic_3D_CVPR_2018_paper.pdf> [retrieved on 20210408] *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US12045998B2 (en) 2022-05-18 2024-07-23 Toyota Research Institute, Inc. Systems and methods for neural implicit scene representation with dense, uncertainty-aware monocular depth constraints
CN115147526A (en) * 2022-06-30 2022-10-04 北京百度网讯科技有限公司 Training of clothing generation model, method and device for generating clothing images
CN115147526B (en) * 2022-06-30 2023-09-26 北京百度网讯科技有限公司 Training of clothing generation models, methods and devices for generating clothing images
CN120014140A (en) * 2025-04-16 2025-05-16 天翼云科技有限公司 UV texture image generation method, device and equipment based on UV mapping transformation


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21760621

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21760621

Country of ref document: EP

Kind code of ref document: A1