GB2641215A - A method and system for generating a model of a three-dimensional item and a method of providing a virtual wardrobe - Google Patents
- Publication number
- GB2641215A (Application GB2406835.5A / GB202406835A)
- Authority
- GB
- United Kingdom
- Prior art keywords
- user
- dimensional
- item
- model
- clothing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/06—Buying, selling or leasing transactions
- G06Q30/0601—Electronic shopping [e-shopping]
- G06Q30/0641—Electronic shopping [e-shopping] utilising user interfaces specially adapted for shopping
- G06Q30/0643—Electronic shopping [e-shopping] utilising user interfaces specially adapted for shopping graphically representing goods, e.g. 3D product representation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C11/00—Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/89—Lidar systems specially adapted for specific applications for mapping or imaging
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/174—Segmentation; Edge detection involving the use of two or more images
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2210/00—Indexing scheme for image generation or computer graphics
- G06T2210/16—Cloth
Abstract
A method and system for generation of a digital model of a three-dimensional physical item via multiple two-dimensional images (6) captured using the camera or scanning capabilities of a personal electronic device or smartphone (4). Two-dimensional image data is extracted and processed (10) using machine learning to generate a three-dimensional base image of the item. The images may be taken from at least two different angles, and the base image may be a mesh or silhouette, which may also be generated using machine learning. The extraction of data may comprise edge detection, and the edge data may be stored without a complete visual representation of the item. The three-dimensional base image may be generated entirely locally on the capture device. A generative model may be used to learn the item's appearance based only on the extracted data (14). The item may be a user.
Description
A Method and System for Generating a Model of a Three-Dimensional Item and a Method of Providing a Virtual Wardrobe

The present invention relates to a method and system for generating a model of a three-dimensional item, a virtual wardrobe and a method of providing a virtual wardrobe.
Online shopping is well-known technology that relies on a user visiting a vendor's website and identifying items of interest which can then be purchased. Although for many types of purchase this works well, particularly if a user is purchasing an item which they have purchased before, a problem is encountered in the sale of items which require some form of fitting. Clothing is commonly purchased online in this manner, but if it does not fit well, which the user will only discover once they are in receipt of the physical item, it can be inconvenient and frustrating for the user to have to send it back or return it to the vendor in some other way.
To address this problem some vendors have implemented systems in which a three-dimensional render can be created to simulate how an item of clothing of interest to a user might appear when worn by the user. This increases the chance that the purchase, when made, will not be regretted or considered an error once the physical item of clothing is actually received by the user.
However, existing systems for enabling a user to see what they might look like wearing an as yet unpurchased item of clothing are limited. The renders or representations that are produced of a user wearing the item are not accurate, and so the problem remains that a user ends up purchasing an item when there is a significant chance that they will not be satisfied with it when they actually receive it from the vendor.
Some attempts have been made to address this problem. For example, US 018005375 discloses an inventory capture system including a method and apparatus (i.e., the inventory capture system) for creating and updating an inventory of clothing for a user. The inventory capture system may use voice and image recognition to capture an inventory of clothing and provide users the ability to enhance the captured details about an inventory of clothing with annotations. Moreover, the inventory capture system may provide a way to facilitate retailers and users to leverage the user's existing inventory of clothing and augment the user's inventory of clothing with shared, purchased and/or rented clothing.
Other systems have used augmented reality. US2024037858 discloses a system for providing an Augmented Reality (AR) experience. The system accesses, by a messaging application, an image depicting a real-world fashion item of a user and generates a three-dimensional (3D) virtual fashion item based on the real-world fashion item depicted in the image. The system stores the 3D virtual fashion item in a database that includes a virtual wardrobe comprising a plurality of 3D virtual fashion items associated with the user. The system generates, by the messaging application, an AR experience that allows the user to interact with the virtual wardrobe. US2019272675A1 discloses a smart mirror and smart mirror system for mixed or augmented reality display. The system comprises a server and a smart mirror. The smart mirror comprises a display and a camera. The server is configured to receive information associated with a user of the system, identify, using the information, an object for the user, and transmit, to the smart mirror, a three-dimensional model of the object. EP 3460531 discloses a computer implemented method for predicting garment or accessory attributes using deep learning techniques.
The method comprises the steps of: (i) receiving and storing one or more digital image datasets including images of garments or accessories; (ii) training a deep model for garment or accessory attribute identification, using the stored one or more digital image datasets, by configuring a deep neural network model to predict (a) multiple-class discrete attributes, (b) binary discrete attributes, and (c) continuous attributes; (iii) receiving one or more digital images of a garment or an accessory; and (iv) extracting attributes of the garment or the accessory from the one or more received digital images using the trained deep model for garment or accessory attribute identification. US2015154691 discloses a system and method for virtually fitting an article of clothing on an accurate representation of a user's body obtained by 3D scanning of the user in minimal clothing and in standard garments of known properties. A graphical user interface allows the user to access a database of garments and accessories available for selection for the virtual fitting simulation, for which each garment's physical and material properties are known. A finite element analysis is applied to determine the shape of the combined user body and garment, and an accurate visual representation of the selected garment or accessory on the proportional model of the user's body based on the analysis is generated.
According to a first aspect of the present invention, there is provided a method of generating a model of a three-dimensional item, the method comprising: capturing multiple images of the item using a digital device comprising a mobile telephone or a personal digital device; extracting data at the device from the multiple images captured by the device; and using machine learning and the extracted data, generating a three-dimensional base image of the object.
A method is provided in which a user is able to make use of technology that is commonly available, e.g. a selfie image capture mode, in either still or video mode, to capture plural images of themselves. The technology then further operates by extracting key data points from the captured images and generating an accurate three-dimensional base image based on the extracted key data points. Using machine learning the method then enables the generation of an accurate three-dimensional base image which can be used as the basis for generating composite images including images of clothing.
Preferably, though not essentially, LIDAR is used as a means to generate the three-dimensional model. Individual images, which can be still images or frames taken from a video, are used in the generation of the three-dimensional model. As referenced in, for example, Wikipedia, Lidar ("light detection and ranging") is a method for determining ranges by targeting an object or a surface with a laser, and measuring the time for the reflected light to return to the receiver.
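The ranging principle referred to above can be sketched in a few lines. This is an illustration of the general time-of-flight calculation, not code from the patent: the range is half the round-trip distance travelled by the laser pulse at the speed of light.

```python
# Illustrative sketch of lidar time-of-flight ranging (not from the
# patent): range = c * t / 2, where t is the measured round-trip time.

C = 299_792_458.0  # speed of light in m/s

def lidar_range(round_trip_time_s: float) -> float:
    """Return the distance to the target in metres for a measured
    round-trip time of a reflected laser pulse."""
    return C * round_trip_time_s / 2.0

# A pulse returning after roughly 13.3 nanoseconds corresponds to a
# target about 2 metres away -- a plausible subject distance for a
# body scan with a smartphone sensor.
distance = lidar_range(13.34e-9)
```

At these distances the timing resolution required is in the picosecond-to-nanosecond range, which is why lidar-equipped consumer devices use dedicated sensor hardware rather than the general-purpose camera pipeline.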
In an embodiment, the multiple images comprise plural images from at least two different angles of the device relative to the item.
In an embodiment, the three-dimensional base image is a three-dimensional mesh or silhouette.
In an embodiment, the three-dimensional mesh or silhouette is generated using machine learning.
In an embodiment, extracting data comprises edge detection and extraction from the captured images.
In an embodiment, extracted edge data is stored without generating or storing of a complete visual representation of the object.
The method uses extraction of key data points such as edges and stores these data points, e.g. in a database or in a file, which can be used in the generation of the accurate three-dimensional model. Importantly, a complete visual representation of the object, i.e. the user, is not stored, which means that the level of data security provided by the system is high.
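A minimal sketch of this privacy-preserving extraction step might look as follows. The patent does not specify a particular edge detector; a Sobel-style gradient filter is used here purely for illustration, and the key point is that only the edge coordinates are retained, never the full image.

```python
# Hypothetical sketch of edge extraction that keeps only coordinates
# of strong edges (the patent names no specific detector; a Sobel
# gradient is assumed here for illustration).
import numpy as np

def extract_edge_points(image: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Return an (N, 2) array of (row, col) edge coordinates.
    The input image itself is not stored anywhere."""
    # Sobel kernels for horizontal and vertical intensity gradients.
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = image.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = image[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    magnitude = np.hypot(gx, gy)
    if magnitude.max() > 0:
        magnitude /= magnitude.max()
    return np.argwhere(magnitude > threshold)

# A synthetic image with a bright square: only the square's outline
# survives as a sparse set of points, not the image content itself.
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
points = extract_edge_points(img)
```

The output is a compact list of coordinates that suffices for downstream mesh fitting while being far less revealing than the photograph it was derived from.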
In an embodiment, having extracted data at the device from the multiple images captured by the device, the extracted data is used to generate locally at the device the three-dimensional base image of the object.
In an embodiment, a generative model is used to learn the appearance of the object based on the extracted data.
In an embodiment, the generative model is selected from the group consisting of a generative adversarial network (GAN) or a variational autoencoder (VAE).
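For readers unfamiliar with the VAE mentioned above, the step that distinguishes it from a plain autoencoder can be sketched briefly. This is a generic illustration of the "reparameterisation trick", assumed rather than taken from the patent; a full VAE would wrap it between an encoder and a decoder network.

```python
# Minimal sketch (assumed, not from the patent) of the VAE
# reparameterisation step: a latent sample is drawn as
# z = mu + sigma * eps, so gradients can flow through mu and sigma
# during training even though sampling is involved.
import numpy as np

rng = np.random.default_rng(0)

def reparameterise(mu: np.ndarray, log_var: np.ndarray) -> np.ndarray:
    """Sample z ~ N(mu, sigma^2) via a deterministic transform of
    standard Gaussian noise."""
    sigma = np.exp(0.5 * log_var)
    eps = rng.standard_normal(mu.shape)
    return mu + sigma * eps

# Sanity check: as log_var becomes very negative (sigma -> 0) the
# sample collapses onto the mean vector.
mu = np.array([0.2, -1.0, 3.0])
z = reparameterise(mu, np.full(3, -100.0))
```

In the context described here, such a latent space is what lets the model interpolate and refine a user's learned appearance rather than merely memorising the captured frames.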
A generative model is trained on the captured data and learns the appearance of the object, i.e. the user, based on that data.
Learning the appearance or physical parameters of a user means that the system is thus able to analyse user preferences, body shape, past purchases, and interaction data to make intelligent recommendations. This approach ensures that users are presented with options that match their style and fit preferences, streamlining a shopping process.
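One simple way such recommendations could be scored is by comparing a learned user-preference vector against item feature vectors. The feature axes, item names and values below are purely hypothetical, chosen only to make the ranking mechanism concrete.

```python
# Illustrative sketch of preference-based recommendation scoring
# (all names, feature axes and values are assumptions, not from the
# patent): items are ranked by cosine similarity to a user profile.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def recommend(user_profile: np.ndarray, items: dict) -> list:
    """Return item names ranked by similarity to the user profile."""
    scored = {name: cosine(user_profile, feats) for name, feats in items.items()}
    return sorted(scored, key=scored.get, reverse=True)

# Hypothetical feature axes: (formality, looseness of fit, colour
# brightness), each on a 0-1 scale.
profile = np.array([0.9, 0.2, 0.1])          # prefers formal, fitted, dark
catalogue = {
    "navy blazer":  np.array([0.95, 0.25, 0.1]),
    "beach shorts": np.array([0.05, 0.9, 0.9]),
    "dark jeans":   np.array([0.4, 0.3, 0.2]),
}
ranking = recommend(profile, catalogue)
```

A production system would of course learn both vectors from interaction data rather than hand-coding them, but the ranking step itself stays this simple.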
In an embodiment, the object is the user of the mobile telephone or personal digital device.
According to a second aspect of the present invention, there is provided a method of providing a virtual wardrobe, the method comprising: receiving a three-dimensional model of a user; receiving data relating to an item of clothing; combining the data relating to the clothing with the three-dimensional model of the user to produce an image of the user wearing the identified clothing item, in which the combining of the data relating to the clothing with the three-dimensional model of the user is executed using a physics engine to simulate interaction of the clothing with the shape of the user.
A method is provided by which a virtual wardrobe can be realised in which a physics engine is used to simulate interaction of the clothing with the shape of the user.
A physics engine is used that simulates how selected clothing items will look and fit on a user's avatar, i.e. the generated three-dimension image of the user. The physics engine is arranged and controlled to account for fabric behaviour, drape, and fit under various conditions, offering a true-to-life visualization.
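The core mechanism by which such an engine makes cloth drape can be illustrated with a heavily simplified position-based-dynamics sketch: a chain of particles joined by distance constraints, pinned at one end and relaxed under gravity. The patent does not disclose the engine's internals; real engines model full 2D meshes, bending stiffness and collisions with the body, none of which appears below.

```python
# Simplified sketch (assumed, not the patent's engine) of cloth-style
# drape: a pinned particle chain settling under gravity using Verlet
# integration and iterative distance-constraint relaxation.
import numpy as np

N, REST, GRAVITY, DT = 10, 1.0, np.array([0.0, -9.8]), 0.05

# Start with the chain sticking out horizontally from the pinned end.
pos = np.stack([np.arange(N) * REST, np.zeros(N)], axis=1)
prev = pos.copy()

for _ in range(200):                              # simulation steps
    vel = pos - prev                              # implicit Verlet velocity
    prev = pos.copy()
    pos = pos + vel * 0.95 + GRAVITY * DT * DT    # damped gravity step
    pos[0] = (0.0, 0.0)                           # pin the first particle
    for _ in range(10):                           # constraint relaxation
        for i in range(N - 1):
            d = pos[i + 1] - pos[i]
            dist = np.linalg.norm(d)
            corr = (dist - REST) / dist * d * 0.5
            if i == 0:
                pos[i + 1] -= 2 * corr            # pinned end stays put
            else:
                pos[i] += corr
                pos[i + 1] -= corr
        pos[0] = (0.0, 0.0)

# After settling, the free end hangs roughly straight down.
tip = pos[-1]
```

Extending this to a garment means running the same update over a grid of particles with shear and bend constraints, plus collision tests against the avatar mesh, which is where the material parameters mentioned above come into play.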
In an embodiment, the data relating to an item of clothing is derived from a third-party database of clothing items.
In an embodiment, the third-party database is from an online clothing or fashion retailer.
In an embodiment, the method is executed locally on a user's mobile digital device.
In an embodiment, the method comprises storing the produced image of the user wearing the identified clothing item.
In an embodiment, the three-dimensional model of the user is generated using machine learning based on images captured by the user's mobile digital device.
In an embodiment, the three-dimensional model of the user is generated using a method according to the first aspect of the present invention.
According to a third aspect of the present invention, there is provided a system for generating a model of a three-dimensional item, the system comprising: an image capture device for capturing multiple images of the item; and a processor arranged and configured to extract data at the device from the multiple images captured by the device, the processor further being arranged to use machine learning to generate a three-dimensional base image of the object.
In an embodiment, the multiple images comprise plural images from at least two different angles of the device relative to the item.
In an embodiment, the three-dimensional base image is a three-dimensional mesh or silhouette.
In an embodiment, the three-dimensional mesh or silhouette is generated using machine learning.
In an embodiment, the system is arranged to execute the method of the first aspect of the present invention.
According to a fourth aspect of the present invention, there is provided a method of generating a video or media, the method comprising: receiving a three-dimensional model of a user; generating a film including the three-dimensional model of the user, wherein the model is generated by capturing multiple images of the user with a digital device comprising a mobile telephone or a personal digital device; extracting data at the device from the multiple images captured by the device; and, using machine learning and the extracted data, generating a three-dimensional base image of the user.
Embodiments of the present invention will now be described in detail with reference to the accompanying drawings, in which: Figure 1 is a schematic representation of a flow chart showing the steps in a method of generating a model of a three-dimensional item and a method of providing a virtual wardrobe; Figure 2 shows further steps in a method of generating a model of a three-dimensional item and a method of providing a virtual wardrobe; and Figure 3 shows further steps in a method of generating a model of a three-dimensional item and a method of providing a virtual wardrobe.
The disclosed invention presents a comprehensive system designed to transform the online shopping experience by integrating advanced 3D body scanning technology, virtual try-on capabilities, and intelligent wardrobe management. Utilizing a combination of artificial intelligence (AI), computer vision, augmented reality (AR), and a sophisticated physics engine, this system enables accurate personal avatars and realistic garment simulation. It offers users an unprecedented level of immersion and personalization in online apparel shopping, significantly reducing return rates due to fit issues and enhancing overall customer satisfaction.
Figure 1 is a schematic representation of a flow chart 2 showing the steps in a method of generating a model of a three-dimensional item and a method of providing a virtual wardrobe. As will be explained, the method comprises, initially, generating a model of a three-dimensional item. This can be achieved by a user interacting with a mobile device such as a mobile telephone and capturing multiple images of the item using a digital device comprising a mobile telephone or a personal digital device. Data is then extracted at the device from the multiple images captured by the device, and, using machine learning, a three-dimensional base image of the object is generated.
Referring to Figure 1, at step 4 a user, engaging with their personal digital device, such as a smartphone, opens an App to initiate a scanning process. At step 6, an integrated camera, commonly included within smartphones, captures plural images of the user. The App preferably includes instructions to guide a user through the stages of image capture, indicating the desired or required angles of capture. Typically, the user is instructed to rotate slowly as images are captured by the mobile phone.
Next at step 8 data is preferably uploaded to the App which operates to determine and generate a base three-dimensional mesh or silhouette of the user, in dependence on the uploaded data. An important feature of the present method is that the entirety of the captured data is not uploaded to the App. This ensures a level of data security for the user. The complete images of the user as captured by the mobile device are not uploaded or even accessible to the App.
Specifically at step 8, the App is arranged first to extract from the captured images essential point or edge data which can be stored locally on the device for use in subsequent processing, as will be described below.
At the device 10, the stored or generated edge or essential point data is processed to generate the base three-dimensional mesh or silhouette of the user. The processing functionality of the device 10 comprises algorithms that utilise machine learning to generate the base three-dimensional mesh or silhouette. Thus, it can be understood that starting with the local processing on a user's mobile phone, LIDAR or photogrammetry are used to create the three-dimensional base model.
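The photogrammetry route mentioned above ultimately comes down to triangulating 3D points from observations in two or more images. The sketch below is a generic midpoint triangulation, not the patent's method, and it assumes camera positions and viewing directions are already known (a full pipeline would estimate them first).

```python
# Hypothetical sketch of two-view photogrammetric triangulation:
# recover a 3D point as the least-squares midpoint of two viewing
# rays. Camera poses are assumed given here.
import numpy as np

def triangulate(o1, d1, o2, d2):
    """Least-squares midpoint of two rays with origins o1, o2 and
    unit direction vectors d1, d2."""
    # Solve for ray parameters t1, t2 minimising |(o1+t1*d1)-(o2+t2*d2)|.
    A = np.stack([d1, -d2], axis=1)               # 3x2 linear system
    t, *_ = np.linalg.lstsq(A, o2 - o1, rcond=None)
    p1 = o1 + t[0] * d1
    p2 = o2 + t[1] * d2
    return (p1 + p2) / 2.0                        # midpoint of closest approach

# Two cameras a metre apart, both observing a point at (0.5, 0, 2).
target = np.array([0.5, 0.0, 2.0])
o1, o2 = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
d1 = (target - o1) / np.linalg.norm(target - o1)
d2 = (target - o2) / np.linalg.norm(target - o2)
point = triangulate(o1, d1, o2, d2)
```

Repeating this over many matched edge points across the captured views yields the sparse 3D structure from which a base mesh or silhouette can be fitted.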
The process of machine learning as executed on the device 10 involves measurements and data points being extracted from the stored or generated data. A generative model 14 executed on the user's device 12, and preferably as part of the provided App, is able to learn a user's appearance based on the extracted data.
The generative AI model used may be any suitable or appropriate model. Typical examples include a generative adversarial network (GAN) or a variational autoencoder (VAE).
Once a user's appearance has been learned and modelled this generated mesh or silhouette model can then be used to enable virtual fitting of clothes as will now be described with reference to Figure 2.
The process of virtual fitting uses a clothing simulation 18 of a retailer's catalogue. As is well known, the retailer's online catalogue will typically include graphical representations of clothing items that the retailer wishes to sell, and these can be perused and viewed by users in known ways. When a user makes a selection of an item of clothing from the catalogue, a physics and rendering engine 20 is operated to simulate how the different materials and clothes would fit or drape over a user based on their generated three-dimensional model ("avatar"). Once an item of clothing is selected, and by operation of the physics engine and rendering engine, a simulation is created. This is output 26 for the user to see on the screen or interface of the mobile phone. The user is then able to confirm their choice and make an online purchase, or decline if they do not want to make the purchase.
The physics and rendering engine(s) 20 are arranged to provide any or all of texturing, rigging, skinning, animation and lighting to generate a representation of what the user would look like wearing the item of clothing. The physics engine is preferably programmed with data relating to the different materials and their weights and other relevant parameters which will affect how they appear and interact with the user's body when worn. Thus, an accurate representation of the user's avatar "wearing" the item can be created.
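The per-material data the engine is programmed with might be organised as simple records like the following. The patent discloses no concrete schema; every field name, value and the toy drape formula below are assumptions made purely to illustrate the idea.

```python
# Purely illustrative (the patent names no concrete schema): the kind
# of per-material record a physics engine could be programmed with so
# that weight and drape can be simulated per garment.
from dataclasses import dataclass

@dataclass(frozen=True)
class FabricParams:
    name: str
    weight_gsm: float        # fabric weight in grams per square metre
    stretch: float           # 0 (rigid) .. 1 (very elastic)
    bend_stiffness: float    # resistance to folding, arbitrary units

    def drape_coefficient(self) -> float:
        """Toy drape score: heavier, less stiff fabrics drape more.
        (Assumed formula, for illustration only.)"""
        return self.weight_gsm / (1.0 + 100.0 * self.bend_stiffness)

denim = FabricParams("denim", weight_gsm=400.0, stretch=0.05, bend_stiffness=0.8)
silk = FabricParams("silk", weight_gsm=60.0, stretch=0.15, bend_stiffness=0.02)
```

Keeping these parameters in data rather than code is what allows a retailer's catalogue to supply per-garment material properties without modifying the engine itself.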
At 24 purchased items, or even liked items, can be stored in a virtual wardrobe.
This can be a data store on a user's mobile phone which stores records and menus of all clothing items. It is preferably further arranged to display the clothing items appropriately, e.g. as they would look when worn. It provides a user with flexibility and choice in terms of deciding what to wear at any point in time, and also in creating outfits based on combinations of clothing items.
Thus the virtual wardrobe uses the initial steps of three-dimensional model generation, and then a physics and rendering engine, to generate accurate representations of a user virtually wearing the clothes. This is a significant improvement on systems that, say, simply show images of the clothing items, or attempt to portray the clothes in three dimensions. The user is able to make decisions based on data that shows them what they would actually look like when wearing the clothes in question, in a way not previously possible.
The operation of the virtual wardrobe can be understood clearly with reference to Figure 3. As can be seen, steps 12 and 14 are the same as those described with reference to Figure 1. Step 14, which constitutes the execution of the generative AI model such as a GAN or VAE, operates gradually to improve its knowledge and understanding of the user. The details of the three-dimensional model are improved 28 as the generative AI system such as the GAN or VAE is exposed to more user data over time.
The data stored locally on a user's device can be shared 30 with a retailer when desired by a user. This enables the retailer to make suggestions or recommendations based on the data to provide a user with options of clothing items to purchase. This interaction can be controlled by a user and only operates when the user so desires.
An example of the virtual wardrobe interface 32 is shown schematically as being dependent on the three-dimensional model of the user and the selections or suggestions of clothing by a retailer. The three-dimensional virtual wardrobe functions by overlaying the locally-stored (i.e. on the user's mobile device) three-dimensional model of the user onto the wardrobe. This enables a user to interact with the virtual wardrobe interface and see renders of what they look like wearing particular clothing items. As explained above, the hang of the clothing on the avatar will be faithful and realistic based on the calculations performed by the physics and/or rendering engines.
In practice then a user is provided with a virtual wardrobe that can, in real time, generate realistic and technically faithful, i.e. correct, representations of what they will look like wearing a particular item of clothing. This is achieved without the sharing of personal data (regarding the appearance of the user) with a retailer. This can be particularly helpful as often it is not until a user actually tries on a piece of clothing that they get a true idea of whether they like it. This process of seeing what they actually look like in an article of clothing is achieved virtually using the present system and method.
As described above, a generative model 14 executed on the user's device 12 and preferably as part of the provided App is able to learn a user's appearance based on the extracted data and generate the 3-dimensional model of the user. As also described above one application to which the generated model can be applied is that of a virtual wardrobe. Another use is in the generation of personalized advertising content, as will now be described.
The generative model 14 is executed on a user's device which, it will be understood, means that the processing is done locally to the user. It could be a device under control of the user but not the original device that, say, captures the images. For example, a typical set up might be that a user has a mobile telephone that they use to capture the images and then they connect the mobile telephone to their local personal computer to perform, locally, the described processing. In other words, it could be a personal computer to which a user connects the image capture device. The data is not sent to a remote server, e.g., on a network, or to any device that is not under the user's direct control. Preferably the personal digital device is the user's actual mobile telephone.
In another example, it is envisaged that the captured images could be sent to a remote server and the extraction of data is performed at the remote server. However, it is preferred that it is done locally.
In this example, the high-fidelity digital avatars of users, generated through the above-described advanced 3D body scanning and rendering technology, are used to personalize advertising content. A user's generated model, i.e., avatar, is substituted into an existing advertisement. This is achieved by the application of an AI video generator which is able either to modify existing video advertisements by replacing characters in the original film with the user's avatar, or to generate entirely new video content including the user's avatar and whatever product or items are being advertised.
In another example, the generative model can be arranged to create a personalised vignette based on the high-fidelity digital avatars of a user.
Thus, a uniquely tailored marketing experience is created which resonates with an individual, enhancing engagement and conversion rates.
To achieve this, the detailed generated 3D model created by the above-described system is integrated into various advertising mediums such as video ads, interactive web banners, and virtual reality (VR) or augmented reality (AR) experiences.
Advanced image processing and AR/VR integration techniques are used to ensure seamless insertion of a generated avatar into marketing materials, preserving the original lighting, perspective, and context for a realistic appearance.
Through use, machine learning algorithms are arranged to analyse user preferences and behaviour to select dynamically and personalise advertisements in real-time, ensuring relevance and increasing the efficacy of marketing efforts.
Applications of this enable a user e.g., a customer, to "see" themselves in the clothes, using the gadgets, or experiencing the services being advertised, offering an immersive preview of the product's impact on their lives.
For brands, this presents an opportunity to form a stronger emotional connection with their audience, leading to enhanced brand loyalty and consumer satisfaction.
In a further application, the generated model is used in social media platforms, e-commerce sites, and digital content providers. By the machine learning based model generation described herein, a broad range of technical uses are enabled. In addition, application of personalised marketing is enabled from personalised product recommendations to customised advertising narratives that feature the consumer as the protagonist.
The system in preferred embodiments can thus be understood to include any or all of the following aspects:

Innovative 3D Body Scanning: Leveraging the latest in photogrammetry and depth sensing technology, including LiDAR sensors available on select smartphones and devices, the system captures precise body measurements. It constructs highly accurate 3D models of users, forming the foundation for personalized virtual try-on experiences.

Virtual Try-On with Realistic Simulation: Employing AR and VR technologies alongside a cutting-edge physics engine, the system simulates how selected clothing items will look and fit on the user's avatar. This simulation accounts for fabric behaviour, drape, and fit under various conditions, offering a true-to-life visualization.

Intelligent Wardrobe Management: In addition to virtual try-on, the system incorporates a wardrobe management component. It catalogues users' existing apparel into a virtual wardrobe, suggests outfits based on personal style, occasion, and weather conditions, and enables virtual try-on of owned items for complete outfit planning.

AI-Driven Personalization and Recommendation: The core AI algorithms analyse user preferences, body shape, past purchases, and interaction data to tailor clothing recommendations. This AI-driven approach ensures that users are presented with options that match their style and fit preferences, streamlining the shopping process.

Cross-Platform Compatibility: Designed to function seamlessly across various devices, including smartphones, AR/VR headsets, and smart mirrors equipped with cameras, the system ensures accessibility and convenience for a broad user base.

Advanced Technologies and Implementation: The system's architecture integrates several key technologies: CNNs for image processing, GANs for texture and pattern synthesis, geometric deep learning for 3D structure understanding from 2D images, and reinforcement learning for dynamic adjustment based on user feedback.

Conclusion: This invention addresses critical challenges in the online apparel industry by offering a sophisticated solution that marries accuracy in body modelling with the immersive experience of AR/VR. It paves the way for a future where online shopping is not only convenient and efficient but also highly personalized and engaging.
Embodiments of the present invention have been described with particular reference to the examples illustrated. However, it will be appreciated that variations and modifications may be made to the examples described within the scope of the present invention.
Claims (23)
- Claims 1. A method of generating a model of a three-dimensional item, the method comprising: capturing multiple images of the item using a digital device comprising a mobile telephone or a personal digital device; extracting data at the device from the multiple images captured by the device; using machine learning and the extracted data, generating a three-dimensional base image of the object.
- 2. A method according to claim 1, in which the multiple images comprise plural images from at least two different angles of the device relative to the item.
- 3. A method according to claim 1 or 2, in which the three-dimensional base image is a three-dimensional mesh or silhouette.
- 4. A method according to claim 3, in which the three-dimensional mesh or silhouette is generated using machine learning.
- 5. A method according to any of claims 1 to 4, in which extracting data comprises edge detection and extraction from the captured images.
- 6. A method according to any of claims 1 to 5, in which extracting data comprises edge detection and extraction from the captured images.
- 7. A method according to claim 6, in which extracted edge data is stored without generating or storing of a complete visual representation of the object.
- 8. A method according to any of claims 1 to 7, in which, having extracted data at the device from the multiple images captured by the device, the extracted data is used to generate locally at the device the three-dimensional base image of the object.
- 9. A method according to claim 8, in which a generative model is used to learn the appearance of the object based on the extracted data.
- 10. A method according to claim 9, in which the generative model is selected from the group consisting of a generative adversarial network (GAN) or a variational autoencoder (VAE).
- 11. A method according to any of claims 1 to 10, in which the object is the user of the mobile telephone or personal digital device.
- 12. A method of providing a virtual wardrobe, the method comprising: receiving a three-dimensional model of a user; receiving data relating to an item of clothing; combining the data relating to the clothing with the three-dimensional model of the user to produce an image of the user wearing the identified clothing item, in which the combining of the data relating to the clothing with the three-dimensional model of the user is executed using a physics engine to simulate interaction of the clothing with the shape of the user.
- 13. A method according to claim 12, in which the data relating to an item of clothing is derived from a third-party database of clothing items.
- 14. A method according to claim 13, in which the third-party database is from an online clothing or fashion retailer.
- 15. A method according to any of claims 12 to 14, in which the method is executed locally on a user's mobile digital device.
- 16. A method according to claim 15, comprising storing the produced image of the user wearing the identified clothing item.
- 16. A method according to any of claims 12 to 15, in which the three-dimensional model of the user is generated using machine learning based on images captured by the user's mobile digital device.
- 17. A method according to claim 16, in which the three-dimensional model of the user is generated using a method according to any of claims 1 to 11.
- 18. A system for generating a model of a three-dimensional item, the system comprising: an image capture device for capturing multiple images of the item; and a processor arranged and configured to extract data at the device from the multiple images captured by the device and to use machine learning to generate a three-dimensional base image of the object.
- 19. A system according to claim 18, in which the multiple images comprise plural images from at least two different angles of the device relative to the item.
- 20. A system according to claim 18 or 19, in which the three-dimensional base image is a three-dimensional mesh or silhouette.
- 21. A system according to claim 20, in which the three-dimensional mesh or silhouette is generated using machine learning.
- 22. A system according to any of claims 18 to 21, arranged to execute the method of any of claims 1 to 11.
- 23. A method of generating a video or media, the method comprising: receiving a three-dimensional model of a user; and generating a film including the three-dimensional model of the user, wherein the model is generated by capturing multiple images of the user with a digital device comprising a mobile telephone or a personal digital device; extracting data at the device from the multiple images captured by the device; and, using machine learning and the extracted data, generating a three-dimensional base image of the user.
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| GB2406835.5A GB2641215A (en) | 2024-05-14 | 2024-05-14 | A method and system for generating a model of a three-dimensional item and a method of providing a virtual wardrobe |
| PCT/EP2025/063084 WO2025238011A1 (en) | 2024-05-14 | 2025-05-13 | A method and system for generating a model of a three-dimensional item and a method of providing a virtual wardrobe |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| GB2406835.5A GB2641215A (en) | 2024-05-14 | 2024-05-14 | A method and system for generating a model of a three-dimensional item and a method of providing a virtual wardrobe |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| GB202406835D0 GB202406835D0 (en) | 2024-06-26 |
| GB2641215A true GB2641215A (en) | 2025-11-26 |
Family
ID=91581587
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| GB2406835.5A Pending GB2641215A (en) | 2024-05-14 | 2024-05-14 | A method and system for generating a model of a three-dimensional item and a method of providing a virtual wardrobe |
Country Status (2)
| Country | Link |
|---|---|
| GB (1) | GB2641215A (en) |
| WO (1) | WO2025238011A1 (en) |
Citations (11)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20040227752A1 (en) * | 2003-05-12 | 2004-11-18 | Mccartha Bland | Apparatus, system, and method for generating a three-dimensional model to represent a user for fitting garments |
| GB2488237A (en) * | 2011-02-17 | 2012-08-22 | Metail Ltd | Using a body model of a user to show fit of clothing |
| US20150154691A1 (en) * | 2013-12-02 | 2015-06-04 | Scott William Curry | System and Method For Online Virtual Fitting Room |
| WO2017203262A2 (en) * | 2016-05-25 | 2017-11-30 | Metail Limited | Method and system for predicting garment attributes using deep learning |
| CA3116540A1 (en) * | 2018-10-19 | 2020-04-23 | Perfitly, Llc. | Perfitly ar/vr platform |
| US10796480B2 (en) * | 2015-08-14 | 2020-10-06 | Metail Limited | Methods of generating personalized 3D head models or 3D body models |
| WO2022048534A1 (en) * | 2020-09-03 | 2022-03-10 | International Business Machines Corporation | Digital twin multi-dimensional model record using photogrammetry |
| US20220366651A1 (en) * | 2019-10-28 | 2022-11-17 | Telefonaktiebolaget Lm Ericsson (Publ) | Method for generating a three dimensional, 3d, model |
| US20230052169A1 (en) * | 2021-08-16 | 2023-02-16 | Perfectfit Systems Private Limited | System and method for generating virtual pseudo 3d outputs from images |
| KR102559717B1 (en) * | 2023-01-04 | 2023-07-26 | 주식회사 크리토 | Apparatus and Method for Generating 3D Human Model |
| WO2024032165A1 (en) * | 2022-08-12 | 2024-02-15 | 华为技术有限公司 | 3d model generating method and system, and electronic device |
Family Cites Families (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US3774008A (en) | 1971-03-03 | 1973-11-20 | T Maniscalco | Steam generating apparatus |
| US10650281B2 (en) | 2016-06-29 | 2020-05-12 | Intel Corporation | Inventory capture system |
| US10573077B2 (en) | 2018-03-02 | 2020-02-25 | The Matilda Hotel, LLC | Smart mirror for location-based augmented reality |
| KR102121334B1 (en) * | 2018-04-13 | 2020-06-11 | 최재영 | Apparatus and method of prediction for body shape changing |
| US12017142B2 (en) * | 2021-02-16 | 2024-06-25 | Pritesh KANANI | System and method for real-time calibration of virtual apparel using stateful neural network inferences and interactive body measurements |
| US12062146B2 (en) | 2022-07-28 | 2024-08-13 | Snap Inc. | Virtual wardrobe AR experience |
- 2024-05-14: GB application GB2406835.5A (patent GB2641215A, status Pending)
- 2025-05-13: PCT application PCT/EP2025/063084 (publication WO2025238011A1, status Pending)
Also Published As
| Publication number | Publication date |
|---|---|
| WO2025238011A1 (en) | 2025-11-20 |
| GB202406835D0 (en) | 2024-06-26 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US10235810B2 (en) | Augmented reality e-commerce for in-store retail | |
| KR102202843B1 (en) | System for providing online clothing fitting service using three dimentional avatar | |
| US10497053B2 (en) | Augmented reality E-commerce | |
| CN114299264A (en) | System and method for generating augmented reality content based on warped three-dimensional models | |
| US12417598B2 (en) | Using machine learning models to generate a mirror representing an image of virtual try-on and styling of an actual user | |
| US10783528B2 (en) | Targeted marketing system and method | |
| US20140279289A1 (en) | Mobile Application and Method for Virtual Dressing Room Visualization | |
| WO2013120851A1 (en) | Method for sharing emotions through the creation of three-dimensional avatars and their interaction through a cloud-based platform | |
| US20160267576A1 (en) | System and Method for Controlling and Sharing Online Images of Merchandise | |
| KR102335918B1 (en) | Digital signage to sell fashion goods using big data and artificial intelligence algorithm | |
| CN109416806B (en) | System and method for linking database entries of a network platform | |
| AU2019240635A1 (en) | Targeted marketing system and method | |
| EP4334880A1 (en) | Systems and methods for the display of virtual clothing | |
| Papazoglou Chalikias et al. | Novel paradigms of human-fashion interaction | |
| US20250336170A1 (en) | Three-dimensional models of users wearing clothing items | |
| KR20190105702A (en) | System for providing furniture shopping service through virtual experience | |
| US20250037192A1 (en) | Virtual try on for garments | |
| CN114339434A (en) | Method and device for displaying goods fitting effect | |
| GB2641215A (en) | A method and system for generating a model of a three-dimensional item and a method of providing a virtual wardrobe | |
| Bhagyalakshmi et al. | Virtual dressing room application using GANs | |
| Divyanjalee et al. | FITSTYLE: A GAN-Based Application Revolutionizing Online Shopping by Enhancing the Virtual Try-On Experience | |
| US20250191055A1 (en) | System and Methods for Creating E-Commerce Virtual Experiences And Integrating Graphical User Interfaces | |
| Castenetto | Redefining fashion ecommerce: a comprehensive study on the transformative impact of technologies on user experience | |
| Delamore et al. | Everything in 3D: developing the fashion digital studio | |
| Vittal et al. | Avatar Closet: An Augmented Reality Based Multi-Modal Virtual Try-On System for Fashion Retail |