WO2017123163A1 - Improvements in or relating to the generation of three dimensional geometries of an object - Google Patents
Improvements in or relating to the generation of three dimensional geometries of an object
- Publication number
- WO2017123163A1 (PCT/SG2017/050023)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- dimensional geometry
- reduced
- dimensional
- generating
- geometry
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/04—Texture mapping
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2210/00—Indexing scheme for image generation or computer graphics
- G06T2210/32—Image data format
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2210/00—Indexing scheme for image generation or computer graphics
- G06T2210/36—Level of detail
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T9/00—Image coding
- G06T9/001—Model-based coding, e.g. wire frame
Definitions
- FIG. 2 shows a broad process 200 for generating three dimensional geometries of an object according to an embodiment of the present invention.
- the process 200 includes a step 202 of producing three dimensional scanned data or (two dimensional) pictures/images for any object.
- the three dimensional scanned data or pictures/images may include views of the object from different directions or angles.
- the object may be a product which may be for sale in an on-line shop.
- the output (e.g., three dimensional scanned data or pictures/images) from the step 202 is then processed to produce a mesh output 204.
- the mesh output may be a three dimensional mesh or a polygon mesh.
- the polygon mesh may be in the form of a three dimensional mesh.
- the mesh output may be an OBJ™, FBX™ or Collada™ format (or file type) generated by three dimensional software such as Maya™, Blender™ or Z-Brush™, by way of example.
- a mesh (e.g., three dimensional mesh or polygon mesh) comprises a plurality of cells which overlay the object. The cells may be triangles, quadrilaterals, tetrahedra, cuboids or any other appropriate polygon.
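To make the mesh representation concrete, the following is a minimal sketch (in JavaScript, which the custom code described later also uses) of how such a polygon mesh might be held in memory; the field names are illustrative and are not taken from any particular 3D package.

```javascript
// A tiny triangle-based polygon mesh: two cells covering a unit square.
// The structure (flat vertex list plus index-based faces) mirrors what
// OBJ-style exports carry, but the field names here are invented.
const mesh = {
  vertices: [            // one [x, y, z] entry per vertex
    [0, 0, 0],
    [1, 0, 0],
    [1, 1, 0],
    [0, 1, 0],
  ],
  faces: [               // each cell lists the indices of its vertices
    [0, 1, 2],
    [0, 2, 3],
  ],
};

console.log(`${mesh.faces.length} polygons, ${mesh.vertices.length} vertices`);
```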
- the mesh is then optimized 206 using open source tools, by way of example.
- Mesh optimization depends on a number of different factors, for example the shape of the object. It may be necessary or important to avoid sharp angles, flat angles and distorted features as these affect the accuracy of any numerical simulation. The size of the object is a further consideration: generally, small cells are needed near small features so that there are no abrupt changes. Abrupt changes in large features are generally acceptable as there are generally fewer cells for a larger feature. Another factor in mesh optimization is the number of features. More features lead to a slower solution time, which is less efficient. Features located on domain boundaries may require more processing. As a result, it is preferable to minimize the number of cells at boundaries.
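As an illustration of the "sharp and flat angles" criterion above, a simple per-cell quality check might look like the following sketch; the degree thresholds are arbitrary values chosen for the example, not figures from the patent.

```javascript
// Flag triangles whose smallest interior angle is too sharp or whose largest
// is too flat; such distorted cells reduce the accuracy of numerical work.
function triangleAngles(a, b, c) {
  const len = (p, q) => Math.hypot(q[0] - p[0], q[1] - p[1], q[2] - p[2]);
  const A = len(b, c), B = len(a, c), C = len(a, b); // side lengths
  // Law of cosines: angle opposite side `opp`, between sides s1 and s2.
  const angle = (opp, s1, s2) =>
    Math.acos((s1 * s1 + s2 * s2 - opp * opp) / (2 * s1 * s2));
  return [angle(A, B, C), angle(B, A, C), angle(C, A, B)];
}

function isDistortedCell(a, b, c, minDeg = 15, maxDeg = 165) {
  const deg = triangleAngles(a, b, c).map((r) => (r * 180) / Math.PI);
  return Math.min(...deg) < minDeg || Math.max(...deg) > maxDeg;
}

console.log(isDistortedCell([0, 0, 0], [1, 0, 0], [0.5, 0.01, 0])); // true (nearly flat)
```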
- the next step 208 converts the mesh into, for example, a WebGL™ data file (or any other equivalent type of file) which is capable of being easily viewed using a Web Browser.
- the conversion process will be described in greater detail below.
- the traditional techniques result in files which are far too big to show on a Web Browser.
- the conversion of the mesh into a file for viewing in accordance with an embodiment of the present invention results in a file which is at least 10 times smaller than traditional conversion techniques.
- the reduction in size makes it possible to view the three dimensional geometries of objects on a Web Browser.
- the present invention thus provides a considerable advantage over current techniques.
- a more detailed process 300 for generating three dimensional geometries of objects is described.
- a plurality of photographs of the object are taken from a number of different predetermined directions or angles.
- the predetermined directions are chosen to ensure that every facet of the object is captured in at least one photograph and preferably more, so that there is sufficient overlap between the plurality of photographs to give rise to an accurate three dimensional geometry.
- the number of photographs required may depend on a number of factors, including the amount of detail required; the resolution of the image, the size of the object etc.
- in step 304, a three dimensional geometry creation process is carried out, using for example Maya™ or Max™, which are known three dimensional applications.
- a UV layout of the geometry formed in step 304 is produced.
- texturing of the geometry occurs.
- UV relates to a texture space where U stands for the horizontal and V stands for the vertical direction of the texture space.
- the UV Mapping process at its simplest includes three main steps: unwrapping the mesh, creating the texture, and applying the texture.
- Figure 4 shows a sphere having a checkered texture, first without and then with UV mapping. Without UV mapping, the checkered pattern tiles XYZ space and the texture appears carved out of the sphere. With UV mapping, points on the sphere are converted to UV space according to their latitude and longitude, giving rise to a flattened representation.
- a geometry is created as a polygon mesh (e.g., which may be in the form of a three dimensional mesh) using a three dimensional modeler (as described above).
- UV coordinates may be generated for each vertex in the mesh.
- the three dimensional modeler unfolds a polygonal mesh at the seams, so as to automatically lay out a polygon mesh in a planar manner. If the mesh is a UV sphere 400, for example, the modeler might transform it into an equirectangular projection 402, as shown in figure 4.
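A minimal sketch of that latitude/longitude unwrap for a unit sphere is shown below; real modelers also handle seams, pole distortion and per-face UV assignment, which this example ignores.

```javascript
// Equirectangular unwrap: map a point on a unit sphere to UV coordinates
// using its longitude (U) and latitude (V), as in Figure 4.
function sphereToUV(x, y, z) {
  const u = 0.5 + Math.atan2(z, x) / (2 * Math.PI); // longitude -> U in [0, 1]
  const v = 0.5 - Math.asin(y) / Math.PI;           // latitude  -> V in [0, 1]
  return [u, v];
}

console.log(sphereToUV(1, 0, 0)); // a point on the equator -> [0.5, 0.5]
```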
- a UV map can either be generated automatically by a software application, made manually by an artist, or a combination of both. Often a UV map is generated, and then the artist may adjust and optimize it to minimize seams and overlaps. If the geometry is symmetric, the artist might overlap opposite polygons to enable painting to occur on both sides simultaneously.
- UV coordinates are applied per face, not per vertex. This means a shared vertex may have different UV coordinates in each of its polygons, so adjacent polygons can be cut apart and positioned on different areas of the texture map as discussed below.
- UV texturing permits polygons that make up a three dimensional object to be painted with color from an image.
- the image is called a UV texture map, and is essentially an ordinary image.
- the UV mapping process involves assigning pixels in the image to surface mappings on the polygon. This is usually done by copying a polygon shaped piece of the image map and pasting it onto a polygon on the object via a computer program. Such a polygon mesh process is described in more detail below.
- each polygon may map to the appropriate texture from a "decal sheet”.
- the rendering computation uses the UV texture coordinates to determine how to paint the three-dimensional surface.
- step 308 is a step for texturing the three dimensional geometry using the photographs originally captured in step 302.
- Texturing is similar to adding decorative paper to a white box.
- texture mapping is the process of adding graphics to a polygon object. These graphics may be anything from photographs to original designs. Texturing may add detail to the geometry based on a photograph. For example, if a particular part of the object has a certain detail such as a surface feature, a color or a shape, this may be added using texturing. Textures may also help age the object and give it more appeal and realism.
- this process is known as shading, which may be carried out by a shader computer program. This describes the entire material on an object in terms of how the light is reflected and absorbed and the effects of translucency and bump maps. Shaders and textures are commonly used together. The texture is connected to a shader to give the three dimensional object a particular look. Shaders calculate rendering effects such as the position of a feature, hue, saturation, brightness, and contrast of all pixels, vertices, or textures used to construct a final image. These may be altered in real time, using algorithms defined in the shader.
- specularity defines how a surface reflects light. It is basically the texture's reflection of the light source and creates a shiny look. Having the right specularity may be necessary or important in defining what the three dimensional object is made from. For example, a shiny metal material may have a high level of reflectivity, whereas a matte texture like cement may not.
- a bump map may also be applied. This gives the illusion of depth or relief on a texture without greatly increasing the rendering time.
- the computer determines where areas on the image need to be raised by reading the black, white and grey scale data on the graphic or photograph.
- a transparency map may further be applied at the texturing step.
- Transparency maps are grey scale textures that use black and white values to signify areas of transparency or opacity on an object's material. For example, when modeling a chain link fence, instead of modeling each individual chain link which would take a significant amount of time and processing, a black and white texture can be used to determine which areas should stay opaque and which should be transparent.
- the level of detail may be low resolution 310, mid resolution 312 or high resolution 314. Low resolution is needed when the object is far away from the camera and high resolution is used when the object is close to the camera.
- the level of detail technique is used to determine how much detail is required based on the distance of the object from the camera taking the photographs.
- LOD rendering enables a reduction in the number of polygons rendered for an object as its distance from camera increases. As long as the objects are not all close to the camera at the same time, LOD may reduce the load on the hardware, software or both and improve the rendering performance.
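A level of detail selection of this kind can be as simple as the sketch below; the distance thresholds and resolution labels are illustrative only.

```javascript
// Choose which geometry resolution (steps 310-314) to render based on the
// object's distance from the camera. Thresholds are example values.
function selectLevelOfDetail(distanceToCamera) {
  if (distanceToCamera < 2.0)  return 'high'; // object close to the camera
  if (distanceToCamera < 10.0) return 'mid';
  return 'low';                               // object far from the camera
}

console.log(selectLevelOfDetail(1.5));  // "high"
console.log(selectLevelOfDetail(25.0)); // "low"
```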
- in a next step 316, the three dimensional geometry and the data from the one or more texturing techniques are combined and converted into a code language.
- the geometry is converted into a format (or file type) such as OBJ or FBX.
- a custom code is then used to convert that geometry into code such as JavaScript or any equivalent type of code. This generates what is known as a Disha.js file. After this the OBJ or FBX data need not be called again.
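The structure of the Disha.js file is not published in this excerpt; purely as a guess at the kind of content a display-only geometry file might carry after conversion, it could resemble the sketch below (all field names are invented).

```javascript
// Hypothetical display-only geometry data after the OBJ/FBX conversion step:
// just the arrays needed to draw the object, with history, rigging and other
// scene information already stripped out.
const dishaGeometry = {
  name: 'example-product',
  positions: new Float32Array([0, 0, 0, 1, 0, 0, 1, 1, 0]), // x, y, z triplets
  normals:   new Float32Array([0, 0, 1, 0, 0, 1, 0, 0, 1]), // per-vertex normals
  uvs:       new Float32Array([0, 0, 1, 0, 1, 1]),          // u, v pairs
  indices:   new Uint16Array([0, 1, 2]),                    // one triangle
  texture:   'example-product-diffuse.jpg',                 // illustrative path
};

console.log(`${dishaGeometry.indices.length / 3} triangle(s) in ${dishaGeometry.name}`);
```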
- the next step 318 applies a lighting and look and feel technique to the geometry using the custom code and JavaScript.
- the lighting technique relates to the simulation of light in computer graphics. This simulation may be extremely accurate, tracking the energy flow of light interacting with materials using lighting computational techniques. Alternatively, the simulation may simply be inspired by light physics, as is the case with non-photorealistic rendering. In both cases, a shading geometry may be used to describe how the surfaces of the object respond to light. Between these two extremes, there are many different rendering approaches which may be employed to achieve the most desirable visual result.
- the three dimensional geometry is taken into a light set rig forming part of an embodiment of the present invention, but not shown.
- the light rig includes a plurality of lights, for example directional lights, spot lights, point lights etc.
- the three dimensional geometry is placed in the center of the rig and its position manipulated and rotated according to visual features of the object. By means of the manipulation and rotation the image may be improved to the extent that the human eye may see the difference.
- in a next step 320, data is called from the server storing the geometry and is uploaded to a browser for rendering online.
- the data uses the same HTML code supported by JavaScript.
- the custom code converts a geometry into HTML code which may be utilized by all browsers on the internet.
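As a rough sketch (the URL, element ID and data layout are placeholders, not the patent's actual custom code), the browser side of this HTML/JavaScript arrangement might fetch the converted geometry and prepare a WebGL context like so:

```javascript
// Fetch the converted geometry file and obtain a WebGL context for it.
// A real viewer would then build buffer objects and draw in a render loop
// (see the buffer object sketch further below).
async function loadProductGeometry(canvasId, geometryUrl) {
  const canvas = document.getElementById(canvasId);
  const gl = canvas.getContext('webgl');
  if (!gl) throw new Error('WebGL is not supported by this browser');

  const response = await fetch(geometryUrl);   // e.g. '/geometry/example-product.json'
  const geometry = await response.json();      // reduced, display-only data
  return { gl, geometry };
}

// Usage (IDs and paths are hypothetical):
// loadProductGeometry('viewer-canvas', '/geometry/example-product.json');
```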
- the rendering process generally occurs at a remote device separate from the processors carrying out the conversion of the geometry.
- the geometry may be rendered on any suitable device, including but not limited to, a computer, a mobile phone, a PDA or any other suitable means for viewing the geometry.
- the computer system 102 in figure 1 produces a geometry which is adapted to be rendered on any third party device 112, 114 and 116.
- in step 322, further optimization of the geometry may take place using any of the techniques mentioned above.
- the geometry may be adapted to make it compatible with custom tools such as zoom, pan, orbit and rotate, which may be applied to the geometry as it is manipulated by the user when the geometry is viewed.
- 3D-View displays a geometrically recreated and textured three dimensional geometry of an object.
- three dimensional geometry may undergo tumble, orbit and pan or zoom in a three dimensional space.
- the tool enables users to see an object from different angles and bring it closer to zoom in on it. This simulates a feeling equivalent to that obtained when handling the object physically. For example, it is easy to flip the TV to see the back panel so as to see how many HDMI sockets it has and where they are located.
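A sketch of how such tumble and zoom controls might be wired to mouse input is shown below; only the interaction state is tracked, and applying it as a model-view transform in the render loop is left out. The element ID is hypothetical.

```javascript
// Track tumble (yaw/pitch) and zoom state from mouse input on the viewer canvas.
const canvas = document.getElementById('viewer-canvas'); // hypothetical ID
const view = { yaw: 0, pitch: 0, zoom: 1 };
let dragging = false;

canvas.addEventListener('mousedown', () => { dragging = true; });
window.addEventListener('mouseup', () => { dragging = false; });

canvas.addEventListener('mousemove', (event) => {
  if (!dragging) return;
  view.yaw   += event.movementX * 0.01; // tumble left/right
  view.pitch += event.movementY * 0.01; // tumble up/down
});

canvas.addEventListener('wheel', (event) => {
  // Bring the product closer or push it away (zoom), within sensible limits.
  view.zoom = Math.max(0.2, Math.min(5, view.zoom - event.deltaY * 0.001));
});
```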
- the custom code removes or strips unwanted information from the geometry data by keeping its usage solely for online display using technology such as WebGL or any other Application Program Interface (API) which may be used when rendering three dimensional graphics.
- Unwanted information relates to, by way of example, a construction history, additional shader nodes, topology, additional UV sets, object operation history, if any, etc.
- a geometry contains a plurality of nodes which carry information to help it in respect of translation, deformations, skinning, attached lights, etc. as the end result may be used for complex animation production or object rendering. If the geometry is to be used to display a geometry of an object in a viewer with minimal properties or features, it is possible to remove all the nodes which are meant for other purposes and keep only display related information or nodes.
- the wanted information relates to features of the geometry which relate to producing and rendering a three dimensional geometry of the object.
- the custom code keeps and processes just the wanted information. Stripping out unnecessary features or functionalities results in a reduced three dimensional geometry.
- reduced three dimensional geometry is used to describe the three dimensional geometry where one or more unnecessary features (e.g., feature(s) which may not include display related information) have been removed from the "full three dimensional geometry". In further embodiments, all the unnecessary features may be removed to result in the "reduced three dimensional geometry".
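A minimal sketch of that stripping step is given below. The node type names are illustrative; real OBJ/FBX/Collada exports label their nodes differently, and the patent's custom code is not reproduced here.

```javascript
// Keep only nodes needed to display the object; drop construction history,
// skinning, lights and other non-display nodes to obtain a reduced geometry.
const DISPLAY_NODE_TYPES = new Set(['mesh', 'uvSet', 'texture', 'material']);

function reduceGeometry(exportedNodes) {
  return exportedNodes.filter((node) => DISPLAY_NODE_TYPES.has(node.type));
}

const reduced = reduceGeometry([
  { type: 'mesh', name: 'body' },
  { type: 'constructionHistory', name: 'history1' }, // unwanted
  { type: 'skinCluster', name: 'skin1' },            // unwanted
  { type: 'texture', name: 'diffuse' },
]);
console.log(reduced.map((n) => n.name)); // ["body", "diffuse"]
```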
- the custom code generates JavaScript and data files along with an HTML file.
- JavaScript is used to load three dimensional data.
- WebGL is a JavaScript API based on OpenGL and is used to render the three dimensional geometry on a screen or display.
- the custom code ignores any matrix functions resulting in any OpenGL Utility Library (GLU) prefixed functions being disregarded or constituting unwanted information.
- GLU (OpenGL Utility Library)
- the OpenGL Utility Library (GLU) is a computer graphics library for OpenGL. It includes a number of functions that use the base OpenGL library to provide higher-level drawing routines from the more primitive routines that OpenGL provides. It is usually distributed with a base OpenGL package. GLU is not implemented in the embedded version of the OpenGL package, OpenGL ES.
- Typical GLU features include: mapping between screen- and world-coordinates, generation of texture mipmaps, drawing of quadric surfaces, NURBS, tessellation of polygonal primitives, interpretation of OpenGL error codes, an extended range of transformation routines for setting up viewing volumes and simple positioning of the camera, generally in more human-friendly terms than the routines presented by OpenGL.
- GLU also provides additional primitives for use in OpenGL applications, including spheres, cylinders and disks. All GLU functions start with the GLU prefix. By ignoring these types of function the size of the ultimate geometry may be significantly reduced.
- a WebGL environment generally uses geometries which are exported using common cross platform sharing file formats (e.g., OBJ, FBX and Collada, etc.) from the 3D creation software. These export files carry a lot of unwanted information which is unnecessary if the end requirement is just to display the geometry in WebGL browsers.
- the custom code of various embodiments strips the geometry of this unwanted information and keeps (only) what is needed just for a smooth view and exports the output in binary format.
- the custom code avoids the use of standard transformation and lighting options. For use in geometries of objects this limitation is reasonable for viewing the final product in a limited web environment, but it does add functionality to the custom code.
- C++ language is generally used to create the graphics.
- the custom code makes the geometry usable for the purpose of display on a Web Browser. Complex animation, lighting, shading and rendering may be avoided. As a result, nodes carrying such information may be removed or stripped out and a lighter geometry may be achieved, typically with a file size at least 10 times smaller.
- the custom code may also avoid using additional information on vertices that are often used to display a certain geometry. Instead the custom geometry makes use of buffer objects. Again, this is acceptable from a JavaScript performance point of view.
- Buffer objects are OpenGL Objects that store an array of unformatted memory allocated by the OpenGL context. These may be used to store vertex data, pixel data retrieved from images or the framebuffer, and a variety of other things. Wireframe bounding boxes or just arrows could be rendered via object buffers and shaders.
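The calls below are standard WebGL buffer object calls and show how the reduced geometry's vertex positions might be uploaded for drawing; the surrounding shader setup is omitted.

```javascript
// Upload vertex positions into a WebGL buffer object and describe their layout.
// `gl` is a WebGLRenderingContext and `positionLocation` comes from the
// compiled shader program (setup not shown).
function createPositionBuffer(gl, positions, positionLocation) {
  const buffer = gl.createBuffer();
  gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
  gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(positions), gl.STATIC_DRAW);

  gl.enableVertexAttribArray(positionLocation);
  // 3 floats (x, y, z) per vertex, tightly packed, starting at offset 0.
  gl.vertexAttribPointer(positionLocation, 3, gl.FLOAT, false, 0, 0);
  return buffer;
}
```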
- the custom code has no matrix and point classes, relying instead on matrix libraries. Three dimensional calculations using JavaScript are harder than in C++, yet JavaScript's integration with the Web Browser makes it an ideal choice for the custom code.
- OpenGL is used to render complex three dimensional geometry on displays.
- WebGL is an extension of OpenGL which uses Web Browsers to display three dimensional objects as an inbuilt feature in all the latest Web Browsers.
- WebGL is not intended to be used here for browser based games; it is used instead to render a three dimensional geometry in a browser.
- the custom code may be written in JavaScript and may ignore anything but the features of the geometry that relate to displaying a three dimensional object on a Web Browser.
- the custom code may strip away any features which do not relate to displaying a three dimensional object in a Web Browser to give rise to a stripped or reduced three dimensional geometry.
- the features that may be ignored may be varied depending on the object and how it needs to be modeled and displayed.
- the optimizations achieved by the custom code result in a reduction in the file size and in the time required to render a three dimensional geometry via the Web Browser.
- the reduction in size may be a factor of at least 10.
- the time to render the geometry is thus a time which may be acceptable to the user or viewer compared to what it would be if the reduced three dimensional geometry were not used (e.g., as compared to the longer time required to render a "full three dimensional geometry"].
- the three dimensional geometry may be converted into a format or a file type capable of being effected (or processed or executed) to render the three dimensional geometry on a remote display, for example, the display of a device of a user.
- the display may be provided by a computer, a mobile device, a PDA, an application or any other appropriate device or medium.
- the file that the geometry is converted into may be capable of rendering the reduced three dimensional geometry on the remote display.
- Such a file may be capable of being effected (or processed or executed) to render the reduced three dimensional geometry on a remote display.
- Figure 6 is an overview of a system 600 for the use of three dimensional view in a commercial environment.
- the system broadly includes inputs 602, a process 604 and outputs 606.
- the inputs 602 comprise a Universal Product Library 608, which includes details of products for sale in an on-line shop. Products are objects for sale and the two terms are used interchangeably herein.
- the Universal Product Library 608 includes product specifications 610 and images 612 which are generated from product photographs 614. Three dimensional geometries 616 of the products are created and stored. The three dimensional geometries are created from the images 612, three dimensional modelling and surfacing 618, photogrammetry 620 and three dimensional scanning 622. Photogrammetry is the use of photography in surveying and mapping to ascertain measurements between elements in the photograph.
- the outputs 606 include a web portal php based shopping experience 624 or a handheld application based shopping experience 626.
- the outputs may further include Web or handheld game based shopping experience 628, a virtual reality based shopping experience 630 and a television shopping experience 632.
- the process 604 makes use of the inputs 602 and generates the outputs 606.
- the process has a number of functionalities represented by blocks, which can work together or alone.
- the first block is a 3D-view block 634 which carries out the 3D-view process as described above.
- This block 634 takes image views 612 to produce a web portal php based shopping experience 624 or a handheld application based shopping experience 626.
- the next block 636 relates to beauty images and takes image views 612 to produce a web portal php based shopping experience 624 or a handheld application based shopping experience 626.
- the next block 638 relates again to three dimensional view and takes one or more three dimensional geometries 616 to produce a web portal php based shopping experience 624 or a handheld application based shopping experience 626.
- Block 640 relates to a three dimensional store functionality. This takes inputs from the three dimensional geometry and produces Web or handheld game based shopping experience 628, a virtual reality based shopping experience 630 and a television shopping experience 632.
- in an embodiment of the present invention, a process of optimization is used which focuses on lossless optimization, where the file size is reduced to a minimum without disturbing the quality of the geometry. This helps the user to view the three dimensional geometries without having to wait for long periods of time.
- This form of modeling also uses the WebGL interface, but there are differences in how the geometries import into a WebGL canvas.
- an embodiment of the present invention can be used in an on-line shopping application. The user may use a head set (as used in some gaming applications) to start shopping and to view products that are available for purchase.
- Figure 7 shows a view of a shopping trolley which the user may navigate "around the shop” to purchase products.
- Figure 8 shows a product being viewed by the user.
- the user may "hold” the product and turn it around to view any or all surfaces thereof.
- the user may zoom in on certain features, for example the label, to see details of the product.
- the three dimensional geometry of the product may include all relevant detail relating to the product. If the user is happy with the product they may put it into the shopping trolley.
- An embodiment of the present invention is for use by a user on a computer making use of an internet connection.
- the internet is the primary (but not only] example of a data network that is used by users for accessing information on the World Wide Web.
- a computer or a communications device refers to a device that can provide for connectivity to the internet.
- a terminal, computer or communications device may be a PC, laptop, netbook, mobile phone, smart phone, IP phone, gaming device, set-top box, iPad, tablet PC and other such equivalent devices, as any of these may be used to access the web for accessing web-sites.
- a computer uses LAN, routers, WLAN, DSL, cable, 3G, 3G+, LTE, WiMAX, etc. for data communication required to access the internet.
- a terminal or computer typically accesses the web and its contents, services and applications via an Internet browser, instances of which include Internet Explorer, Firefox, and Google Chrome etc.
- Each of the functions carried out in a method sense as described above may be implemented on an appropriate computer module.
- the modules may be of any appropriate type either in hardware, software or a combination of both.
- an embodiment of the present invention may include a generating module which generates the three dimensional geometry; a rendering module for rendering the three dimensional geometry etc. There is no limit to the number of modules which may be used to carry out the relevant steps.
- the embodiments of the present invention are concerned with rendering a three dimensional geometry on a Web Browser.
- Other embodiments of the invention are concerned with rendering in executable gaming environments, and the like.
- the rendering application/executable may be located on a computer or may be an equivalent application found on a mobile device such as a smartphone or a tablet. Any appropriate device may be used to render the reduced three dimensional geometry.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Graphics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Processing Or Creating Images (AREA)
- Image Generation (AREA)
Abstract
A method of generating a three dimensional geometry of an object adapted to be rendered on a remote display, the method comprising: generating a three dimensional geometry of the object from a plurality of images of the object, wherein the plurality of images are taken from a plurality of directions, generating a reduced three dimensional geometry based on the three dimensional geometry generated, comprising processing wanted information from the three dimensional geometry, wherein the wanted information comprises features of the three dimensional geometry which relate to displaying the three dimensional geometry on the remote display, and converting the reduced three dimensional geometry into a file capable of being effected to render the reduced three dimensional geometry on the remote display.
Description
Improvements in or relating to the generation of three dimensional geometries of an object.
[0001] This application claims the benefit of priority of Singapore patent application No. 10201600356V, filed 17 January 2016, the content of it being hereby incorporated by reference in its entirety for all purposes.
[0002] The present invention relates to improvements in or relating to the generation of three dimensional geometries of an object.
[0003] Attempts have been made to provide three dimensional geometries of an object on a Web Browser or a mobile device, but the sheer size of the file makes the process slow if not impossible to achieve. There is simply not enough bandwidth to show a typical three dimensional geometry of an object on a Web Browser.
[0004] Three dimensional software is used to recreate a real product into a three dimensional geometry. The software products commonly used are Autodesk™, Maya™, 123D Catch™, 3D S Max™, Blender™, Rhino™, Mud Box™, Z-Brush™ etc. An alternative way to achieve this is to manually scan a product using three dimensional scanners and create a three dimensional geometry. The export output from these software products is generally in the form of an OBJ™, FBX™ or Collada™ file, each of which is typically heavy in terms of data storage and file size and is not efficient for their commercial utilization online where bandwidth is a major concern.
[0005] In order to render 3D computer graphics in a browser the existing technology used is often Web Graphics Library (WebGL). This is a JavaScript API which uses texture and geometry files in OBJ™, FBX™ or Json™ format exported out of either a Unity game engine or other 3D applications. Due to the heavy file sizes of the geometries developed using these applications, the 3D objects lead to a high latency or wait time in page loading.
[0006] File size is a major handicap which prevents the existing technologies being used for mass creation of three dimensional geometries and for use of such geometries in a library. For example: an iPhone 6 three dimensional geometry's OBJ file will have a file size of 4.42 Mb with 16589 polygons and 18588 vertices. If a user wishes to view 10 such objects, it is likely that the user may download about 45-50 Mb of data at a time! This is hugely inefficient and highly time consuming.
[0007] The ability to view a three dimensional geometry of an object in a Web Browser has many potential uses. One such use is on-line shopping, although it is clear that many other applications, such as gaming, augmented reality and virtual reality environment applications, may benefit from the ability to view a three dimensional geometry of an object in a Web Browser.
[0008] In a traditional shop, the purchaser moves up and down the aisles and selects objects to purchase. It is easy to see the products available and select those of interest. If there are a number of similar products by different manufacturers these will be in the same place. The purchaser can thus select the mark they prefer. Each object can be examined, for example the purchaser may read the label on the back to see what is in the product. The purchaser has full knowledge of the product and knows exactly what they are purchasing.
[0009] A need exists for the ability to view a three dimensional geometry of an object in a Web Browser. The three dimensional geometry of the object may enable a user to see all the details of that object. In an on-line shopping application, this can make the shopping experience more akin to shopping in a real shop, whereby a shopper can tumble an object in any direction (rotate), bring it closer to the screen (zoom function), and move it upside down (pan) just as he/she does in an offline shopping scenario at any conventional store.
[0010] Thus, an embodiment of the present invention generates three dimensional geometries of objects that can be easily displayed on a typical Web Browser at speeds that are normal in terms of time delays and the like. This may mean minimal perceivable time delay in loading and/or displaying the three dimensional geometries of objects on, for example, a Web Browser.
[0011] The invention is defined in the independent claims. Various optional features of the invention are defined in the dependent claims.
[0012] Reference will now be made by way of example to the accompanying drawings, in which:
[0013] Figure 1 is a block diagram of the overall system, according to an aspect of the present invention;
[0014] Figure 2 is a flow chart of a broad process for on-line shopping, according to an aspect of the present invention;
[0015] Figure 3 is a flow chart of the technical processes for generating three dimensional geometries of objects, according to an aspect of the present invention;
[0016] Figure 4 is an example of a UV mapping, according to an aspect of the present invention;
[0017] Figure 5 is an example of a texture mapping, according to an aspect of the present invention;
[0018] Figure 6 is a diagram showing an on-line shopping process, according to an aspect of the present invention;
[0019] Figure 7 is a view of a virtual trolley in a shop, according to an aspect of the present invention; and
[0020] Figure 8 is a view of a virtual product to be placed in the virtual trolley, according to an aspect of the present invention.
[0021] As used herein, the phrase of the form of "at least one of A or B" may include A or B or both A and B. Correspondingly, the phrase of the form of "at least one of A or B or C", or including further listed items, may include any and all combinations of one or more of the associated listed items.
[0022] Various embodiments may provide a method of generating a three dimensional geometry of an object adapted to be rendered on a remote display. The method includes generating a three dimensional geometry of the object from a plurality of images of the object, wherein the plurality of images are taken from a plurality of directions (or angles), generating a reduced three dimensional geometry based on (or from) the three dimensional geometry generated, including processing wanted information from the three dimensional geometry, wherein the wanted information includes features of the three dimensional geometry which relate to displaying the three dimensional geometry on the remote display, and converting the reduced three dimensional geometry into a file capable of being effected (or processed or executed) to render the reduced three dimensional geometry on the remote display. As an example, the file may be effected or processed or executed on a device having the remote display.
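The overall flow of the claimed method can be sketched as follows; every function is a stub standing in for a stage described in the text, and none of the names correspond to a real API.

```javascript
// End-to-end sketch of the claimed pipeline with stub implementations so the
// data flow is visible. A real system would use photogrammetry, a 3D modeler
// and the custom conversion code described in this document.
const generateGeometryFromImages = (images) => ({ source: images, nodes: ['mesh', 'history'] });
const keepWantedInformation      = (geometry) => ({ ...geometry, nodes: ['mesh'] }); // display-related only
const convertToRenderableFile    = (reduced) => JSON.stringify(reduced);             // e.g. a JavaScript data file

function buildReducedGeometryFile(imagesFromManyAngles) {
  const fullGeometry = generateGeometryFromImages(imagesFromManyAngles);
  const reducedGeometry = keepWantedInformation(fullGeometry);
  return convertToRenderableFile(reducedGeometry); // sent to the device with the remote display
}

console.log(buildReducedGeometryFile(['front.jpg', 'back.jpg', 'top.jpg']));
```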
[0023] In various embodiments, by generating a reduced three dimensional geometry, a smaller sized file may be converted from the reduced three dimensional geometry, as compared to known techniques. Providing a smaller sized file allows the reduced three dimensional geometry to be able to be rendered and viewed. Consequently, less computation power is required and faster computation speed is possible for rendering the reduced three dimensional geometry. In this way, there is minimal perceivable time delay in rendering a three dimensional geometry of an object on a (remote) display. Further, less bandwidth is required for rendering the reduced three dimensional geometry. Accordingly, various embodiments may be able to reduce the load on hardware, software or both and improve the rendering performance.
[0024] Therefore, various embodiments may provide enhanced viewing experience for a user/viewer. A "full" three dimensional geometry means more data getting loaded in the random-access memory (RAM). This also means that the processor (e.g., central processing unit (CPU)) will take a longer time to process, as it will have to process all the unnecessary information as well, which may not contribute to the rendering process in the web browser. The browser cache also gets heavy, hence the experience to the viewer will be a "stuttering" motion (also known as frame drop in the gaming industry). If the viewer does not get smoothness in navigation, which may include zoom in, zoom out, pan, tumble and animation, his overall experience will not be satisfactory. In contrast, by providing a reduced three dimensional geometry, a smoother navigation may be achieved, thus enhancing the overall viewing experience of the viewer.
[0025] In various embodiments, as non limiting examples, WebGL or an application (e.g., on a mobile phone) may be used to render the reduced three dimensional geometry on the remote display.
[0026] In various embodiments, the method may further include sending (or transmitting) the file converted from the reduced three dimensional geometry to be rendered on the remote display.
[0027] In various embodiments, the reduced three dimensional geometry may be converted into the file using JavaScript. This may mean that the file that is converted into may be a JavaScript file or a JavaScript compatible file.
[0028] In various embodiments, for generating the three dimensional geometry of the object, a mesh output corresponding to the three dimensional geometry may be generated.
[0029] In various embodiments, the mesh output may include a polygon mesh, and, for generating the three dimensional geometry of the object, the polygon mesh may be unwrapped so as to lay out the polygon mesh in a planar manner (or configuration).
[0030] In various embodiments, the method may further include applying a texture to the polygon mesh laid out in the planar manner. Applying the texture helps to create the three dimensional geometry of the object. Prior to applying the texture to the polygon mesh, the texture may first be created.
[0031] In various embodiments, for generating the three dimensional geometry of the object, the method may further include at least one of texturing, shading, specularity processing, bump map processing, or transparency processing.
[0032] In various embodiments, for generating the three dimensional geometry of the object, the method may further include applying a level of detail process to specify a resolution of the three dimensional geometry.
[0033] In various embodiments, for generating the reduced three dimensional geometry, the method may further include differentiating the wanted information from unwanted information that comprises features of the three dimensional geometry unrelated to displaying the three dimensional geometry on the remote display. Such differentiation may be done prior to processing the wanted information.
[0034] In various embodiments, for generating the reduced three dimensional geometry, the method may further include removing, from the three dimensional geometry, all unwanted information that includes features of the three dimensional geometry unrelated to displaying the three dimensional geometry on the remote display.
[0035] In the context of various embodiments, the unwanted information may include at least one of construction history, additional shader nodes, topology, additional UV sets, object operation history, nodes which carry information to help it in respect of translation, deformations, skinning, attach lights, or any GLU prefixed functions.
[0036] In the context of various embodiments, the file that is converted may be further capable of being effected to allow the reduced three dimensional geometry to be rendered on the remote display to undergo at least one of tumbling, orbiting, panning or zooming in a three dimensional space.
[0037] While the method described above is illustrated and described as a series of steps or events, it will be appreciated that any ordering of such steps or events is not to be interpreted in a limiting sense. For example, some steps may occur in different orders and/or concurrently with other steps or events apart from those illustrated and/or described herein. In addition, not all illustrated steps may be required to implement one or more aspects or embodiments described herein. Also, one or more of the steps depicted herein may be carried out in one or more separate acts and/or phases.
[0038] Various embodiments may also provide a computer readable medium having stored thereon computer-readable instructions for executing, under control of a processing device, the method as described herein. The computer readable medium may be a non-transitory computer readable medium.
[0039] Various embodiments may also provide a computer program having instructions for a computing device to execute the instructions under control of the computing device to perform the method as described herein.
[0040] Various embodiments may further provide an apparatus for generating a three dimensional geometry of an object adapted to be rendered on a remote display. The apparatus includes a generating module configured to generate a three dimensional geometry of the object from a plurality of images of the object, wherein the plurality of images are taken from a plurality of directions, a processing module configured to generate a reduced three dimensional geometry based on the three dimensional geometry generated, the processing module being adapted to process wanted information from the three dimensional geometry to generate the reduced three dimensional geometry, wherein the wanted information includes features of the three dimensional geometry which relate to displaying the three dimensional geometry on the remote display, and a conversion module configured to convert the reduced three dimensional geometry into a file capable of being effected to render the reduced three dimensional geometry on the remote display.
[0041] In various embodiments, the apparatus may further include a transmission module configured to send the file converted from the reduced three dimensional geometry to be rendered on the remote display.
[0042] In various embodiments, the conversion module may be configured to convert the reduced three dimensional geometry into the file using JavaScript.
[0043] In various embodiments, for generating the three dimensional geometry of the object, the generating module may be configured to generate a mesh output corresponding to the three dimensional geometry. The mesh output may include a polygon mesh, and the generating module may be further configured to unwrap the polygon mesh so as to lay out the polygon mesh in a planar manner.
[0044] In various embodiments, the apparatus may further include a texturing module configured to apply a texture to the polygon mesh laid out in the planar manner. The texturing module may be a separate module or may be comprised in the generating module. As a non-limiting example, the generating module may equivalently be the texturing module such that the generating module may be further configured to apply the texture to the polygon mesh.
[0045] In various embodiments, the generating module may be further configured to perform at least one of texturing, shading, specularity processing, bump map processing, or transparency processing.
[0046] In various embodiments, the generating module may be further configured to apply a level of detail process to specify a resolution of the three dimensional geometry.
[0047] In various embodiments, for generating the reduced three dimensional geometry, the processing module may be further configured to differentiate the wanted information from unwanted information that comprises features of the three dimensional geometry unrelated to displaying the three dimensional geometry on the remote display.
[0048] In various embodiments, for generating the reduced three dimensional geometry, the processing module may be further configured to remove, from the three dimensional geometry, all unwanted information that comprises features of the three dimensional geometry unrelated to displaying the three dimensional geometry on the remote display.
[0049] In the context of various embodiments, the unwanted information may include at least one of construction history, additional shader nodes, topology, additional UV sets, object operation history, nodes which carry information to help it in respect of translation, deformations, skinning, attached lights, or any GLU prefixed functions.
[0050] In the context of various embodiments, the file may further be capable of being effected to allow the reduced three dimensional geometry to be rendered on the remote display to undergo at least one of tumbling, orbiting, panning or zooming in a three dimensional space.
[0051] In the context of various embodiments, each of the different modules as described above in the context of the apparatus may be an individual or separate module. However, it should be appreciated that one or more modules may be comprised in or may be part of another module. For example, one or more modules may form part of another bigger module. As a further example, one module may be equivalent to two or more respective modules described above, meaning that one module may be
configured to perform the same two or more functions performed by the two or more respective modules.
[0052] It should be appreciated that descriptions in the context of the method of generating a three dimensional geometry of an object as described above may correspondingly be applicable in relation to the apparatus for generating a three dimensional geometry of an object.
[0053] Various embodiments may further provide a device having a rendering module, the device being adapted to receive and render (e.g., via the rendering module) the reduced three dimensional geometry generated as described herein. This may mean that the device may receive the file that is converted and render the reduced three dimensional geometry (based on the file) on a display of the device.
[0054] Various embodiments may further provide a mobile device having a rendering application, the mobile device being adapted to receive and render (e.g., via the rendering application) the reduced three dimensional geometry generated as described herein. This may mean that the mobile device may receive the file that is converted and render the reduced three dimensional geometry (based on the file) on a display of the mobile device.
[0055] Figure 1 shows an overall view of a system 100 in accordance with an embodiment of the present invention. The system includes a computer system 102 including one or more each of processors 104, memory 106 and modules 108. The computer system 102 is connected to a network, such as the internet 110. Third party user devices 112, 114 and 116 are in communication with the computer system 102 via the network 110. Each third party user device includes a rendering module 118.
[0056] Figure 2 shows a broad process 200 for generating three dimensional geometries of an object according to an embodiment of the present invention. The process 200 includes a step 202 of producing three dimensional scanned data or (two dimensional) pictures/images for any object. The three dimensional scanned data or pictures/images may include views of the object from different directions or angles. For example, the object may be a product which may be for sale in an on-line shop. The output (e.g., three dimensional scanned data or pictures/images) from the step 202 is then
processed to produce a mesh output 204. The mesh output may be a three dimensional mesh or a polygon mesh. The polygon mesh may be in the form of a three dimensional mesh. The mesh output may be in an OBJ™, FBX™ or Collada™ format (or file type) generated by three dimensional software such as Maya™, Blender™ or Z-Brush™, by way of example. A mesh (e.g., three dimensional mesh or polygon mesh) comprises a plurality of cells which overlay the object. The cells may be triangles, quadrilaterals, tetrahedra, cuboids or any other appropriate polygon. The mesh is then optimized 206 using open source tools, by way of example.
[0057] Mesh optimization depends on a number of different factors. One is the shape of the object. It may be necessary or important to avoid sharp angles, flat angles and distorted features as these affect the accuracy of any numerical simulation. The size of the object is a further consideration: generally small cells are needed near small features so that there are no abrupt changes. Abrupt changes in large features are generally acceptable as there are generally fewer cells for a larger feature. Another factor in mesh optimization is the number of features. More features lead to a slower solution time, which is less efficient. Features located on domain boundaries may require more processing. As a result, it is preferable to minimize the number of cells at boundaries.
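By way of a non-limiting illustration only (the open source tools referred to above are not reproduced here), the following sketch shows one simple optimization pass of the kind step 206 might perform: welding duplicate vertices and dropping degenerate cells from a triangle mesh. The {positions, indices} layout and the tolerance value are assumptions made for illustration.

```javascript
// Sketch: weld duplicate vertices and drop degenerate triangles.
// The mesh layout {positions: [x,y,z,...], indices: [i0,i1,i2,...]} is assumed.
function optimizeMesh(mesh, tolerance = 1e-6) {
  const key = (x, y, z) =>
    `${Math.round(x / tolerance)},${Math.round(y / tolerance)},${Math.round(z / tolerance)}`;
  const remap = new Map();   // position key -> new vertex index
  const newPositions = [];
  const oldToNew = [];

  // Weld vertices that share (almost) the same position.
  for (let v = 0; v < mesh.positions.length / 3; v++) {
    const [x, y, z] = mesh.positions.slice(v * 3, v * 3 + 3);
    const k = key(x, y, z);
    if (!remap.has(k)) {
      remap.set(k, newPositions.length / 3);
      newPositions.push(x, y, z);
    }
    oldToNew[v] = remap.get(k);
  }

  // Rebuild the index list, skipping triangles that collapsed to a line or point.
  const newIndices = [];
  for (let t = 0; t < mesh.indices.length; t += 3) {
    const a = oldToNew[mesh.indices[t]];
    const b = oldToNew[mesh.indices[t + 1]];
    const c = oldToNew[mesh.indices[t + 2]];
    if (a !== b && b !== c && a !== c) newIndices.push(a, b, c);
  }
  return { positions: newPositions, indices: newIndices };
}
```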
[0058] Returning to figure 2, the next step 208 converts the mesh into, for example, a WebGL™ data file (or any other equivalent type of file) which is capable of being easily viewed using a Web Browser. The conversion process will be described in greater detail below.
[0059] The traditional techniques result in files which are far too big to show on a Web Browser. The conversion of the mesh into a file for viewing in accordance with an embodiment of the present invention results in a file which is at least 10 times smaller than files produced by traditional conversion techniques. The reduction in size makes it possible to view the three dimensional geometries of objects on a Web Browser. The present invention thus provides a considerable advantage over current techniques.
[0060] Referring now to figure 3 a more detailed process 300 for generating three dimensional geometries of objects is described. In a first step 302 a plurality of photographs of the object are taken from a number of different predetermined directions or angles. The predetermined directions are chosen to ensure that every
facet of the object is captured in at least one photograph and preferably more, so that there is sufficient overlap between the plurality of photographs to give rise to an accurate three dimensional geometry. The number of photographs required may depend on a number of factors, including the amount of detail required, the resolution of the images, the size of the object, etc.
[0061] In step 304 a three dimensional geometry creation process is carried out using, for example, Maya™ or Max™, which are known three dimensional applications. At step 306, after an approval process, a UV layout of the geometry formed in step 304 is produced. At step 308 texturing of the geometry occurs.
[0062] Referring to figure 4, a three dimensional geometry (i.e. from step 304) and an equivalent UV map (i.e. from step 306) are shown. UV relates to a texture space where U stands for the horizontal and V stands for the vertical direction of the texture space.
[0063] The UV Mapping process at its simplest includes three main steps: unwrapping the mesh, creating the texture, and applying the texture.
[0064] Figure 4 shows a sphere having a checkered texture, first without and then with UV mapping. Without UV mapping, the checkers tile XYZ space and the texture is carved out of the sphere. With UV mapping, the checkers tile UV space, and points on the sphere are mapped to this UV space according to their latitude and longitude, giving rise to a flattened representation.
[0065] When a geometry is created as a polygon mesh (e.g., which may be in the form of a three dimensional mesh) using a three dimensional modeler (as described above), UV coordinates may be generated for each vertex in the mesh. The three dimensional modeler unfolds a polygonal mesh at the seams, so as to automatically lay out a polygon mesh in a planar manner. If the mesh is a UV sphere 400, for example, the modeler might transform it into an equirectangular projection 402, as shown in figure 4.
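As a hedged illustration of the latitude/longitude unwrap described above (the exact projection used by any particular modeler is not specified here), UV coordinates for a unit-sphere vertex may be derived from its direction as follows; the function name is illustrative only.

```javascript
// Sketch: equirectangular UVs for a unit-sphere vertex (x, y, z).
// U follows longitude, V follows latitude, both normalized to [0, 1].
function sphereVertexToUV(x, y, z) {
  const u = 0.5 + Math.atan2(z, x) / (2 * Math.PI); // longitude
  const v = 0.5 - Math.asin(y) / Math.PI;           // latitude
  return [u, v];
}

// Example: a point on the equator maps to v = 0.5, the "north pole" to v = 0.
console.log(sphereVertexToUV(1, 0, 0)); // [0.5, 0.5]
console.log(sphereVertexToUV(0, 1, 0)); // [0.5, 0]
```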
[0066] A UV map can either be generated automatically by a software application, made manually by an artist, or a combination of both. Often a UV map is generated, and then the artist may adjust and optimize it to minimize seams and overlaps. If the geometry is
symmetric, the artist might overlap opposite polygons to enable painting to occur on both sides simultaneously.
[0067] UV coordinates are applied per face, not per vertex. This means a shared vertex may have different UV coordinates in each of its polygons, so adjacent polygons can be cut apart and positioned on different areas of the texture map as discussed below.
[0068] In figure 5 the process projects a texture map 500 onto a three dimensional object 502. UV texturing permits polygons that make up a three dimensional object to be painted with color from an image. The image is called a UV texture map, and is essentially an ordinary image. The UV mapping process involves assigning pixels in the image to surface mappings on the polygon. This is usually done by copying a polygon shaped piece of the image map and pasting it onto a polygon on the object via a computer program. Such a polygon mesh process is described in more detail below.
[0069] Once the geometry is unwrapped, the artist may paint a texture on each polygon individually, using the unwrapped mesh as a template. When the scene is rendered, each polygon may map to the appropriate texture from a "decal sheet".
[0070] The rendering computation, described below, uses the UV texture coordinates to determine how to paint the three-dimensional surface.
[0071] Returning to figure 3, step 308 is a step for texturing the three dimensional geometry using the photographs originally captured in step 302. To create a surface that resembles real life, texture mapping is an ideal tool. Texturing is similar to adding decorative paper to a white box. In three dimensions, texture mapping is the process of adding graphics to a polygon object. These graphics may be anything from photographs to original designs. Texturing may add detail to the geometry based on a photograph. For example, if a particular part of the object has a certain detail such as a surface feature, a color or a shape, this may be added using texturing. Textures may also help age the object and give it more appeal and realism.
[0072] A further technique used at this stage, is known as shading which may be carried out by a shader computer program. This describes the entire material on an object in terms of how the light is reflected and absorbed and the effects of translucency and bump maps.
Shaders and textures are commonly used together. The texture is connected to a shader to give the three dimensional object a particular look. Shaders calculate rendering effects such as the position of a feature, hue, saturation, brightness, and contrast of all pixels, vertices, or textures used to construct a final image. These may be altered in real time, using algorithms defined in the shader.
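The following is a minimal, generic sketch of a shader pair of the kind referred to above, written as it might be embedded in the JavaScript custom code; it is not the shader of any particular embodiment, and the attribute, uniform and varying names are assumptions.

```javascript
// Sketch: a minimal WebGL shader pair that samples the UV texture and applies
// a simple diffuse lighting term. All names are illustrative.
const vertexShaderSource = `
  attribute vec3 aPosition;
  attribute vec3 aNormal;
  attribute vec2 aUv;
  uniform mat4 uModelViewProjection;
  varying vec2 vUv;
  varying vec3 vNormal;

  void main() {
    vUv = aUv;
    vNormal = aNormal;
    gl_Position = uModelViewProjection * vec4(aPosition, 1.0);
  }
`;

const fragmentShaderSource = `
  precision mediump float;
  uniform sampler2D uColorMap;      // the UV texture map
  uniform vec3 uLightDirection;     // normalized, pointing towards the light
  varying vec2 vUv;
  varying vec3 vNormal;

  void main() {
    float diffuse = max(dot(normalize(vNormal), uLightDirection), 0.0);
    vec3 color = texture2D(uColorMap, vUv).rgb * (0.2 + 0.8 * diffuse);
    gl_FragColor = vec4(color, 1.0);
  }
`;
```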
[0073] Another feature applied to the geometry at this time is specularity. Specularity defines how a surface reflects light. It is basically the texture's reflection of the light source and creates a shiny look. Having the right specularity may be necessary or important in defining what the three dimensional object is made from. For example, a shiny metal material may have a high level of reflectivity, whereas a matt texture like cement may not.
[0074] A bump map may also be applied. This gives the illusion of depth or relief on a texture without greatly increasing the rendering time. The computer determines where areas on the image need to be raised by reading the black, white and grey scale data on the graphic or photograph.
[0075] A transparency map may further be applied at the texturing step. Transparency maps are grey scale textures that use black and white values to signify areas of transparency or opacity on an object's material. For example, when modeling a chain link fence, instead of modeling each individual chain link which would take a significant amount of time and processing, a black and white texture can be used to determine which areas should stay opaque and which should be transparent.
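A hedged sketch of how such a grey scale transparency map might be applied at render time is given below; the 0.5 threshold and the sampler names are assumptions, not values taken from the disclosure.

```javascript
// Fragment shader sketch (WebGL 1 / GLSL ES 1.0): use a grey scale map to
// discard fragments that should be transparent, e.g. the gaps of a chain
// link fence. Uniform and varying names are illustrative only.
const transparencyFragmentShader = `
  precision mediump float;
  uniform sampler2D uColorMap;        // the UV texture map
  uniform sampler2D uTransparencyMap; // grey scale: black = transparent
  varying vec2 vUv;

  void main() {
    float opacity = texture2D(uTransparencyMap, vUv).r;
    if (opacity < 0.5) {
      discard;                        // treat dark texels as holes
    }
    gl_FragColor = texture2D(uColorMap, vUv);
  }
`;
```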
[0076] The above mentioned processes define techniques of applying texture and materials on the object. Which techniques are used may depend on the geometry, what needs to be generated, and which textures may be required and which may not be required.
[0077] Other texturing techniques may also be used as required. These include, by way of example: 2.5D and 3D computer graphics techniques, cube mapping, mipmapping, displacement mapping, environment mapping, image analogy, materials systems, normal mapping, parametrization, parallax mapping, relief mapping, sprites, texture synthesis, texture atlases, texture splatting (a technique for combining textures), UV mapping, UVW mapping and virtual globes.
[0078] After texturing has been completed the process progresses to one of three levels of detail techniques 310, 312 and 314. The level of detail (LOD) may be low resolution 310, mid resolution 312 or high resolution 314. Low resolution is needed when the object is far away from the camera and high resolution is used when the object is close to the camera. The level of detail technique is used to determine how much detail is required based on the distance of the object from the camera taking the photographs.
[0079] When an object in the scene is a long way from the camera, the amount of detail that may be seen on it may be greatly reduced. However, in current techniques the same number of polygons will be used to render the object, even though the detail will not be noticed. In accordance with an embodiment of the present invention, LOD rendering enables a reduction in the number of polygons rendered for an object as its distance from camera increases. As long as the objects are not all close to the camera at the same time, LOD may reduce the load on the hardware, software or both and improve the rendering performance.
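A non-limiting sketch of the distance-based selection between the three levels of detail 310, 312 and 314 might look as follows; the threshold distances and the object/camera layout are assumptions for illustration only.

```javascript
// Sketch: pick one of the three LOD meshes (steps 310, 312, 314) from the
// object's distance to the camera. Threshold values are illustrative.
function selectLevelOfDetail(object, camera) {
  const dx = object.x - camera.x;
  const dy = object.y - camera.y;
  const dz = object.z - camera.z;
  const distance = Math.sqrt(dx * dx + dy * dy + dz * dz);

  if (distance > 50) return object.lowResolutionMesh;  // far away: step 310
  if (distance > 10) return object.midResolutionMesh;  // mid range: step 312
  return object.highResolutionMesh;                    // close up: step 314
}
```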
[0080] In a next step 316 the three dimensional geometry and the data from the one or more texturing techniques are combined and converted into code language. The geometry is converted into a format (or file type) such as OBJ or FBX. A custom code is then used to convert that geometry into code such as JavaScript or any equivalent type of code. This generates what is known as a Disha.js file. After this the OBJ or FBX data need not be called again.
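The Disha.js format itself is not set out above, so the following is only a hedged sketch of the kind of conversion step 316 describes: the positions, UVs and triangulated faces are parsed out of an OBJ text export and written into a single JavaScript data file, after which the OBJ need not be called again. All names are hypothetical.

```javascript
// Hypothetical sketch of step 316: OBJ text in, compact JavaScript data file out.
function convertObjToJsDataFile(objText, geometryName) {
  const positions = [];
  const uvs = [];
  const indices = [];

  for (const line of objText.split('\n')) {
    const parts = line.trim().split(/\s+/);
    if (parts[0] === 'v') {
      positions.push(+parts[1], +parts[2], +parts[3]);
    } else if (parts[0] === 'vt') {
      uvs.push(+parts[1], +parts[2]);
    } else if (parts[0] === 'f') {
      // Assumes triangulated faces "f v/vt v/vt v/vt" whose position and UV
      // indices coincide (one UV per vertex); OBJ indices are 1-based.
      for (let i = 1; i <= 3; i++) {
        indices.push(parseInt(parts[i], 10) - 1);
      }
    }
    // Everything else (comments, groups, materials, history) is ignored,
    // i.e. treated here as unwanted information.
  }

  // Emit a self-contained JavaScript data file, loosely in the spirit of Disha.js.
  return `var ${geometryName} = ${JSON.stringify({ positions, uvs, indices })};`;
}
```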
[0081] The next step 318 applies a lighting and look and feel technique to the geometry using the custom code and JavaScript. The lighting technique relates to the simulation of light in computer graphics. This simulation may be extremely accurate to track the energy flow of light interacting with materials using lighting computational techniques. Alternatively, the simulation may simply be inspired by light physics, as is the case with non-photorealistic rendering. In both cases, a shading geometry may be used to describe how the surfaces of the object respond to light. Between these two extremes, there are many different rendering approaches which may be employed to achieve the most desirable visual result.
[0082] The three dimensional geometry is taken into a light set rig forming part of an embodiment of the present invention, but not shown. The light rig includes a plurality of lights, for example directional lights, spot lights, point lights etc. The three dimensional geometry is placed in the center of the rig and its position manipulated and rotated according to visual features of the object. By means of the manipulation and rotation the image may be improved to the extent that the human eye may see the difference.
[0083] In a next step 320, data is called from the server storing the geometry and is uploaded to a browser for rendering online. The data uses the same html code supported by JavaScript. The custom code converts a geometry into html code which may be utilized by all browsers on the internet.
[0084] The rendering process generally occurs at a remote device separate from the processors carrying out the conversion of the geometry. The geometry may be rendered on any suitable device, including but not limited to a computer, a mobile phone, a PDA or any other suitable means for viewing the geometry. The computer system 102 in figure 1 produces a geometry which is adapted to be rendered on any third party device 112, 114 and 116.
[0085] At step 322, further optimization of the geometry may take place using any of the techniques mentioned above. At step 324 the geometry may be adapted to make it compatible with custom tools such as zoom, pan, orbit and rotate, which may be applied to the geometry as it is manipulated by the user when the geometry is viewed.
[0086] The above mentioned custom code is referred to as 3D-View. "3D-View" displays a geometrically recreated and textured three dimensional geometry of an object. In 3D-View, the three dimensional geometry may undergo tumble, orbit, pan or zoom in a three dimensional space. The tool enables users to see an object from different angles and bring it closer to zoom in on it. This simulates a feeling equivalent to that obtained when handling the object physically. For example, it is easy to flip a TV to see the back panel so as to see how many HDMI sockets it has and where they are located.
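A hedged sketch of how tumble/orbit and zoom input might be captured in the browser is shown below; it is generic viewer plumbing rather than the actual 3D-View code, and the canvas selector and sensitivity values are assumptions.

```javascript
// Sketch: update orbit angles and zoom from mouse input; the renderer would
// read viewState when building its view transform each frame.
const canvas = document.querySelector('canvas'); // the WebGL canvas (assumed)
const viewState = { yaw: 0, pitch: 0, distance: 3 };
let dragging = false;

canvas.addEventListener('mousedown', () => { dragging = true; });
window.addEventListener('mouseup', () => { dragging = false; });

canvas.addEventListener('mousemove', (event) => {
  if (!dragging) return;
  viewState.yaw   += event.movementX * 0.01;  // orbit left/right
  viewState.pitch += event.movementY * 0.01;  // tumble up/down
  // Clamp pitch so the camera cannot flip over the poles.
  viewState.pitch = Math.max(-1.5, Math.min(1.5, viewState.pitch));
});

canvas.addEventListener('wheel', (event) => {
  event.preventDefault();
  viewState.distance *= event.deltaY > 0 ? 1.1 : 0.9; // zoom out / in
}, { passive: false });
```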
[0087] It is also possible to compare products to evaluate their size in comparison to common units of measurement, such as a One Dollar Bill; a 2 liter bottle of coke; the size
of a palm; or male or female human figures. Real measurements may be used in conjunction with this or alone. If the object is smaller than the viewing medium being used to view the object, the object may be displayed to scale.
[0088] The custom code will now be discussed in more detail. The custom code removes or strips unwanted information from the geometry data by keeping its usage solely for online display using technology such as WebGL technology or any other Application Program Interface (API) which may be used when rendering three dimensional graphics. Unwanted information relates to, by way of example, a construction history, additional shader nodes, topology, additional UV sets, object operation history, if any, etc. A geometry contains a plurality of nodes which carry information to help it in respect of translation, deformations, skinning, attached lights, etc., as the end result may be used for complex animation production or object rendering. If the geometry is to be used to display a geometry of an object in a viewer with minimal properties or features, it is possible to remove all the nodes which are meant for other purposes and keep only display related information or nodes.
[0089] As there is unwanted information, so there is wanted information. Anything that is not unwanted information is wanted information. The wanted information relates to features of the geometry which relate to producing and rendering a three dimensional geometry of the object. In accordance with an embodiment of the present invention the custom code keeps and processes just the wanted information. Stripping out unnecessary features or functionalities results in a reduced three dimensional geometry. The term "reduced three dimensional geometry" is used to describe the three dimensional geometry where one or more unnecessary features (e.g., feature(s) which may not include display related information) have been removed from the "full three dimensional geometry". In further embodiments, all the unnecessary features may be removed to result in the "reduced three dimensional geometry".
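As a non-limiting illustration of the wanted/unwanted split (the actual custom code is not reproduced here, and the node type names are assumptions), a reducer might simply keep a whitelist of display related node types and drop everything else:

```javascript
// Sketch: keep only display related nodes; everything else is treated as
// "unwanted information". Node type names are illustrative, not a real schema.
const WANTED_NODE_TYPES = new Set(['mesh', 'uvSet', 'texture', 'material']);

function reduceGeometry(fullGeometry) {
  return {
    ...fullGeometry,
    nodes: fullGeometry.nodes.filter((node) => WANTED_NODE_TYPES.has(node.type)),
    // Construction history, deformers, skinning, attached lights, animation
    // and similar nodes fall outside the whitelist and are simply dropped.
    constructionHistory: undefined,
  };
}
```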
[0090] The custom code generates JavaScript and data files along with an html file. JavaScript is used to load the three dimensional data. WebGL is a JavaScript API based on OpenGL and is used to render the three dimensional geometry on a screen or display. The custom code ignores any matrix functions, resulting in any OpenGL Utility Library (GLU) prefixed functions being disregarded or constituting unwanted information.
[0091] The OpenGL Utility Library (GLU) is a computer graphics library for OpenGL. It includes a number of functions that use the base OpenGL library to provide higher-level drawing routines from the more primitive routines that OpenGL provides. It is usually distributed with a base OpenGL package. GLU is not implemented in the embedded version of the OpenGL package, OpenGL ES. Typical GLU features include: mapping between screen- and world-coordinates, generation of texture mipmaps, drawing of quadric surfaces, NURBS, tessellation of polygonal primitives, interpretation of OpenGL error codes, and an extended range of transformation routines for setting up viewing volumes and simple positioning of the camera, generally in more human-friendly terms than the routines presented by OpenGL. GLU also provides additional primitives for use in OpenGL applications, including spheres, cylinders and disks. All GLU functions start with the GLU prefix. By ignoring these types of function the size of the ultimate geometry may be significantly reduced.
[0092] A WebGL environment generally uses geometries which are exported using common cross platform sharing file formats (e.g., OBJ, FBX and Collada, etc.) from the 3D creation software. These export files carry a lot of unwanted information which is unnecessary if the end requirement is just to display the geometry in WebGL browsers. The custom code of various embodiments strips the geometry of this unwanted information, keeps (only) what is needed just for a smooth view, and exports the output in binary format.
[0093] In simple terms, in a WebGL environment, if deformations, bones and animation etc. were included, it would make the geometry very heavy in terms of size and rendering time. By stripping a geometry of any unnecessary functionalities, such as standard transformations and lighting etc., the geometry may be very much lighter and easily viewable in, for example, a Web Browser based viewer. This is due to the fact that the techniques disclosed herein may be used to display a product, and it may not necessarily be required to have animation, deformation, FX dynamics etc., although in various embodiments, animation may be optionally included.
[0094] The custom code avoids the use of standard transformation and lighting options. For use with geometries of objects, this limitation is reasonable for viewing the final product in a limited web environment, but it does add functionality to the custom code.
[0095] The C++ language is generally used to create the graphics. The custom code makes the geometry usable for the purpose of display on a Web Browser. Complex animation, lighting, shading and rendering may be avoided. As a result, nodes carrying such information may be removed or stripped out and a lighter geometry may be achieved, which typically occupies at least 10 times less file size.
[0096] The custom code may also avoid using additional information on vertices that is often used to display a certain geometry. Instead the custom code makes use of buffer objects. Again, this is acceptable from a JavaScript performance point of view.
[0097] Buffer objects are OpenGL objects that store an array of unformatted memory allocated by the OpenGL context. These may be used to store vertex data, pixel data retrieved from images or the framebuffer, and a variety of other things. Wireframe bounding boxes or just arrows could be rendered via buffer objects and shaders.
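A hedged sketch of handing the reduced geometry's vertex data to WebGL through a buffer object is shown below; the calls are standard WebGL API calls, while the attribute name and surrounding set-up are assumptions.

```javascript
// Sketch: upload the reduced geometry's vertex positions into a buffer object
// and point a vertex attribute at it. `gl` is a WebGLRenderingContext and
// `program` a linked shader program, both assumed to exist already.
function uploadPositions(gl, program, positions) {
  const buffer = gl.createBuffer();
  gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
  gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(positions), gl.STATIC_DRAW);

  const location = gl.getAttribLocation(program, 'aPosition');
  gl.enableVertexAttribArray(location);
  // 3 floats per vertex, tightly packed, no offset.
  gl.vertexAttribPointer(location, 3, gl.FLOAT, false, 0, 0);
  return buffer;
}
```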
[0098] Although the custom code has no matrix and point classes and, even with matrix libraries, three dimensional calculations are harder in JavaScript than in C++, JavaScript's integration with the Web Browser makes it an ideal choice for the custom code.
[0099] In order to produce a WebGL output, a number of initial steps need to be carried out in a different manner. OpenGL is used to render complex three dimensional geometry on displays. WebGL is an extension of OpenGL which uses Web Browsers to display three dimensional objects and is an inbuilt feature in all the latest Web Browsers. In an embodiment of the present invention, WebGL is not intended to be used for browser based games; it is used instead to render a three dimensional geometry in a browser.
[0100] The custom code may be written in JavaScript and may ignore anything but the features of the geometry that relate to displaying a three dimensional object on a Web Browser. The custom code may strip away any features which do not relate to displaying a three dimensional object in a Web Browser to give rise to a stripped or reduced three dimensional geometry. The features that may be ignored may be varied depending on the object and how it needs to be modeled and displayed. The optimizations achieved by the custom code result in a reduction in file size and the time required to render a three dimensional geometry via the Web Browser. The reduction in size may be a factor of at least 10. The time to render the geometry is thus a time which may be acceptable to the user or viewer compared to what it would be if the reduced three dimensional geometry were not used (e.g., as compared to the longer time required to render a "full three dimensional geometry").
[0101] In an embodiment of the present invention the three dimensional geometry may be converted into a format or a file type capable of being effected (or processed or executed) to render the three dimensional geometry on a remote display, for example, the display of a device of a user. The display may be provided by a computer, a mobile device, a PDA, an application or any other appropriate device or medium. As an example, the file into which the geometry is converted may be capable of rendering the reduced three dimensional geometry on the remote display. Such a file may be capable of being effected (or processed or executed) to render the reduced three dimensional geometry on a remote display.
[0102] As previously stated the application of the custom code to generate and render objects for view in a Web Browser has many potential uses. One such use is in the field of on-line shopping. A commercial operation in accordance with an embodiment of the present invention will now be described with reference to figure 6.
[0103] On the basis of 3D-View the following functional structure for a commercial operation can be produced.
[0104] Figure 6 is an overview of a system 600 for the use of three dimensional view in a commercial environment. The system broadly includes inputs 602, a process 604 and outputs 606.
[0105] The inputs 602 comprise a Universal Product Library 608, which includes details of products for sale in an on-line shop. Products are objects for sale and the two terms are used interchangeably herein. The Universal Product Library 608 includes product specifications 610 and images 612 which are generated from product photographs 614. Three dimensional geometries 616 of the products are created and stored. The three dimensional geometries are created from the images 612, three dimensional modelling and surfacing 618, photogrammetry 620 and three dimensional scanning 622.
Photogrammetry is the use of photography in surveying and mapping to ascertain measurements between elements in the photograph.
[0106] The outputs 606 include a web portal php based shopping experience 624 or a handheld application based shopping experience 626. The outputs may further include a web or handheld game based shopping experience 628, a virtual reality based shopping experience 630 and a television shopping experience 632.
[0107] The process 604 makes use of the inputs 602 and generates the outputs 606. The process has a number of functionalities represented by blocks, which can work together or alone. The first block is a 3D-view block 634 which carries out the 3D-view process as described above. This block 634 takes image views 612 to produce a web portal php based shopping experience 624 or a handheld application based shopping experience 626.
[0108] The next block 636 relates to beauty images and takes image views 612 to produce a web portal php based shopping experience 624 or a handheld application based shopping experience 626.
[0109] The next block 638, relates again to three dimensional view and takes one or more three dimensional geometries 616 to produce a web portal php based shopping experience 624 or a handheld application based shopping experience 626.
[0110] Block 640 relates to a three dimensional store functionality. This takes inputs from the three dimensional geometries 616 and produces a web or handheld game based shopping experience 628, a virtual reality based shopping experience 630 and a television shopping experience 632.
[0111] In an embodiment of the present invention a process of optimization is used which focuses on lossless optimization, where the file size is reduced to a minimum without disturbing the quality of the geometry. This helps the user to view the three dimensional geometries without having to wait for long periods of time. This form of modeling also uses the WebGL interface, but there are differences in how the geometries import into a WebGL canvas.
[0112] As previously mentioned, an embodiment of the present invention can be used in an on-line shopping application. The user may use a head set (as used in some gaming applications) to start shopping and to view products that are available for purchase.
[0113] Figure 7 shows a view of a shopping trolley which the user may navigate "around the shop" to purchase products.
[0114] Figure 8 shows a product being viewed by the user. Using an embodiment of the present invention the user may "hold" the product and turn it around to view any or all surfaces thereof. In addition, the user may zoom in on certain features, for example the label, to see details of the product. The three dimensional geometry of the product may include all relevant detail relating to the product. If the user is happy with the product they may put it into the shopping trolley.
[0115] An embodiment of the present invention is for use by a user on a computer making use of an internet connection. The internet is the primary (but not only) example of a data network that is used by users for accessing information on the World Wide Web. As used herein, we define a computer or a communications device as a device that can provide for connectivity to the internet. Thus, a terminal, computer or communications device may be a PC, laptop, netbook, mobile phone, smart phone, IP phone, gaming device, set-top box, iPad, tablet PC or other such equivalent device, as any of these may be used to access the web for accessing web-sites. Typically, a computer uses LAN, routers, WLAN, DSL, cable, 3G, LTE, WiMAX, 3G+ etc. for data communication required to access the internet. A terminal or computer typically accesses the web and its contents, services and applications via an Internet browser, instances of which include Internet Explorer, Firefox, Google Chrome, etc.
[0116] Each of the functions carried out in a method sense as described above may be implemented on an appropriate computer module. The modules may be of any appropriate type either in hardware, software or a combination of both. For example, an embodiment of the present invention may include a generating module which generates the three dimensional geometry; a rendering module for rendering the three dimensional geometry etc. There is no limit to the number of modules which may be used to carry out the relevant steps.
[0117] It should be noted that the embodiments of the present invention are concerned with rendering a three dimensional geometry on a Web Browser. Other embodiments of the invention are concerned with rendering in executable gaming environments, and the like. It will be appreciated that the rendering application/executable may be located on a computer or may be an equivalent application found on a mobile device such as a smartphone or a tablet. Any appropriate device may be used to render the reduced three dimensional geometry.
[0118] It will be appreciated that the above described features of the present invention are not limited to those described above. There are many variations which could be used to replace various elements and still be captured within the scope of the invention.
Claims
1. A method of generating a three dimensional geometry of an object adapted to be rendered on a remote display, the method comprising:
generating a three dimensional geometry of the object from a plurality of images of the object, wherein the plurality of images are taken from a plurality of directions; generating a reduced three dimensional geometry based on the three dimensional geometry generated, comprising processing wanted information from the three dimensional geometry, wherein the wanted information comprises features of the three dimensional geometry which relate to displaying the three dimensional geometry on the remote display; and
converting the reduced three dimensional geometry into a file capable of being effected to render the reduced three dimensional geometry on the remote display.
2. The method of claim 1, further comprising sending the file converted from the reduced three dimensional geometry to be rendered on the remote display.
3. The method of claim 1 or claim 2, wherein converting the reduced three dimensional geometry into the file comprises using JavaScript.
4. The method of any preceding claim, wherein generating a three dimensional geometry of the object comprises generating a mesh output corresponding to the three dimensional geometry.
5. The method of claim 4, wherein the mesh output comprises a polygon mesh, and wherein generating a three dimensional geometry of the object further comprises unwrapping the polygon mesh so as to lay out the polygon mesh in a planar manner.
6. The method of claim 5, further comprising applying a texture to the polygon mesh laid out in the planar manner.
7. The method of any preceding claim, wherein generating a three dimensional geometry of the object further comprises at least one of texturing, shading, specularity processing, bump map processing, or transparency processing.
8. The method of any preceding claim, wherein generating a three dimensional geometry of the object further comprises applying a level of detail process to specify a resolution of the three dimensional geometry.
9. The method of any preceding claim, wherein generating a reduced three dimensional geometry further comprises differentiating the wanted information from unwanted information that comprises features of the three dimensional geometry unrelated to displaying the three dimensional geometry on the remote display.
10. The method of any preceding claim, wherein generating a reduced three dimensional geometry comprises removing, from the three dimensional geometry, all unwanted information that comprises features of the three dimensional geometry unrelated to displaying the three dimensional geometry on the remote display.
11. The method of claim 9 or 10, wherein the unwanted information comprises at least one of construction history, additional shader nodes, topology, additional UV sets, object operation history, nodes which carry information to help it in respect of translation, deformations, skinning, attached lights, or any GLU prefixed functions.
12. The method of any preceding claim, wherein the file that is converted is further capable of being effected to allow the reduced three dimensional geometry to be rendered on the remote display to undergo at least one of tumbling, orbiting, panning or zooming in a three dimensional space.
13. A computer readable medium having stored thereon computer-readable instructions for executing, under control of a processing device, the method of any preceding claim.
14. A computer program comprising instructions for a computing device to execute the instructions under control of the computing device to perform the method of any one of claims 1 to 12.
15. Apparatus for generating a three dimensional geometry of an object adapted to be rendered on a remote display, the apparatus comprising:
a generating module configured to generate a three dimensional geometry of the object from a plurality of images of the object, wherein the plurality of images are taken from a plurality of directions;
a processing module configured to generate a reduced three dimensional geometry based on the three dimensional geometry generated, the processing module being adapted to process wanted information from the three dimensional geometry to generate the reduced three dimensional geometry, wherein the wanted information comprises features of the three dimensional geometry which relate to displaying the three dimensional geometry on the remote display; and
a conversion module configured to convert the reduced three dimensional geometry into a file capable of being effected to render the reduced three dimensional geometry on the remote display.
16. The apparatus of claim 15, further comprising a transmission module configured to send the file converted from the reduced three dimensional geometry to be rendered on the remote display.
17. The apparatus of claim 15 or claim 16, wherein the conversion module is configured to convert the reduced three dimensional geometry into the file using JavaScript.
18. The apparatus of any one of claims 15 to 17, wherein, for generating the three dimensional geometry of the object, the generating module is configured to generate a mesh output corresponding to the three dimensional geometry.
19. The apparatus of claim 18, wherein the mesh output comprises a polygon mesh, and wherein the generating module is further configured to unwrap the polygon mesh so as to lay out the polygon mesh in a planar manner.
20. The apparatus of claim 19, further comprising a texturing module configured to apply a texture to the polygon mesh laid out in the planar manner.
21. The apparatus of any one of claims 15 to 20, wherein the generating module is further configured to perform at least one of texturing, shading, specularity processing, bump map processing, or transparency processing.
22. The apparatus of any one of claims 15 to 21, wherein the generating module is further configured to apply a level of detail process to specify a resolution of the three dimensional geometry.
23. The apparatus of any one of claims 15 to 22, wherein, for generating the reduced three dimensional geometry, the processing module is further configured to
differentiate the wanted information from unwanted information that comprises features of the three dimensional geometry unrelated to displaying the three dimensional geometry on the remote display.
24. The apparatus of any one of claims 15 to 23, wherein, for generating the reduced three dimensional geometry, the processing module is further configured to remove, from the three dimensional geometry, all unwanted information that comprises features of the three dimensional geometry unrelated to displaying the three dimensional geometry on the remote display.
25. The apparatus of claim 23 or 24, wherein the unwanted information comprises at least one of construction history, additional shader nodes, topology, additional UV sets, object operation history, nodes which carry information to help it in respect of translation, deformations, skinning, attached lights, or any GLU prefixed functions.
26. The apparatus of any one of claims 15 to 25, wherein the file is further capable of being effected to allow the reduced three dimensional geometry to be rendered on the remote display to undergo at least one of tumbling, orbiting, panning or zooming in a three dimensional space.
27. A device having a rendering module, the device adapted to receive and render the reduced three dimensional geometry generated in accordance with any one of claims 1 to 14.
28. A mobile device having a rendering application, the mobile device adapted to receive and render the reduced three dimensional geometry generated in accordance with any one of claims 1 to 14.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| SG10201600356V | 2016-01-17 | ||
| SG10201600356V | 2016-01-17 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2017123163A1 true WO2017123163A1 (en) | 2017-07-20 |
Family
ID=59311376
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/SG2017/050023 Ceased WO2017123163A1 (en) | 2016-01-17 | 2017-01-17 | Improvements in or relating to the generation of three dimensional geometries of an object |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2017123163A1 (en) |
Patent Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20080111816A1 (en) * | 2006-11-15 | 2008-05-15 | Iam Enterprises | Method for creating, manufacturing, and distributing three-dimensional models |
| CN102063475A (en) * | 2010-12-22 | 2011-05-18 | 张丛喆 | Webpage user terminal presenting method of three-dimensional model |
| US8817021B1 (en) * | 2011-11-11 | 2014-08-26 | Google Inc. | System for writing, interpreting, and translating three-dimensional (3D) scenes |
| US20140112573A1 (en) * | 2012-10-18 | 2014-04-24 | Google Inc. | Systems and Methods for Marking Images for Three-Dimensional Image Generation |
Cited By (12)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2019207448A (en) * | 2018-05-28 | 2019-12-05 | 大日本印刷株式会社 | Selling system |
| JP7172139B2 (en) | 2018-05-28 | 2022-11-16 | 大日本印刷株式会社 | Sales system |
| WO2020159995A3 (en) * | 2019-01-29 | 2020-10-15 | Aveva Software, Llc | Lazy loading for design views system and server |
| US12045651B2 (en) | 2019-01-29 | 2024-07-23 | Aveva Software, Llc | Lazy loading for design views system and server |
| CN111402390A (en) * | 2020-02-20 | 2020-07-10 | 平安科技(深圳)有限公司 | Model rendering method, device, equipment and storage medium |
| CN111402390B (en) * | 2020-02-20 | 2023-11-10 | 平安科技(深圳)有限公司 | Model rendering method, device, equipment and storage medium |
| CN111383329A (en) * | 2020-03-06 | 2020-07-07 | 深圳市工之易科技有限公司 | Three-dimensional image display method and device based on browser and electronic equipment |
| CN112380357A (en) * | 2020-12-09 | 2021-02-19 | 武汉烽火众智数字技术有限责任公司 | Method for realizing interactive navigation of knowledge graph visualization |
| CN112380357B (en) * | 2020-12-09 | 2022-11-01 | 武汉烽火众智数字技术有限责任公司 | Method for realizing interactive navigation of knowledge graph visualization |
| CN113421329A (en) * | 2021-06-15 | 2021-09-21 | 广联达科技股份有限公司 | Three-dimensional model generation method, system and device |
| CN117195442A (en) * | 2023-07-24 | 2023-12-08 | 瞳见科技有限公司 | Urban road network modeling method, system, terminal and storage medium |
| CN118444816A (en) * | 2024-07-05 | 2024-08-06 | 淘宝(中国)软件有限公司 | Page processing method, electronic device, storage medium and program product |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 17738738 Country of ref document: EP Kind code of ref document: A1 |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| 32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 31.10.2018) |
|
| 122 | Ep: pct application non-entry in european phase |
Ref document number: 17738738 Country of ref document: EP Kind code of ref document: A1 |