GB2639864A - Method of generating datapoints representing a body part - Google Patents
Method of generating datapoints representing a body part
- Publication number
- GB2639864A (application number GB2404260.8)
- Authority
- GB
- United Kingdom
- Prior art keywords
- body part
- mesh
- segmented
- images
- garment
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/20—Finite element generation, e.g. wire-frame surface description, tesselation
- G06T17/205—Re-meshing
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61F—FILTERS IMPLANTABLE INTO BLOOD VESSELS; PROSTHESES; DEVICES PROVIDING PATENCY TO, OR PREVENTING COLLAPSING OF, TUBULAR STRUCTURES OF THE BODY, e.g. STENTS; ORTHOPAEDIC, NURSING OR CONTRACEPTIVE DEVICES; FOMENTATION; TREATMENT OR PROTECTION OF EYES OR EARS; BANDAGES, DRESSINGS OR ABSORBENT PADS; FIRST-AID KITS
- A61F13/00—Bandages or dressings; Absorbent pads
- A61F13/06—Bandages or dressings; Absorbent pads specially adapted for feet or legs; Corn-pads; Corn-rings
- A61F13/08—Elastic stockings; for contracting aneurisms
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2210/00—Indexing scheme for image generation or computer graphics
- G06T2210/16—Cloth
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2219/00—Indexing scheme for manipulating 3D models or images for computer graphics
- G06T2219/008—Cut plane or projection plane definition
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2219/00—Indexing scheme for manipulating 3D models or images for computer graphics
- G06T2219/20—Indexing scheme for editing of 3D models
- G06T2219/2016—Rotation, translation, scaling
Abstract
A computer-implemented method of generating a plurality of datapoints which represent a body part on which a bespoke garment is to be worn, and from which datapoints said bespoke garment can be manufactured or selected. The method comprises the steps of: receiving a plurality of 2D images of the body part and a surrounding environment from different angles of view; and reconstructing a virtual three-dimensional (3D) mesh based on the received 2D images. There are multiple aspects of the invention, some of which involve an additional step of segmenting the 2D images to isolate the body part from the surrounding environment before the reconstruction. This is followed by scaling the segmented 3D mesh of the body part, based on an indication of scale, to produce a scaled segmented 3D mesh of the body part; and generating the plurality of datapoints which represent the body part based on the scaled segmented 3D mesh of the body part. A bespoke garment manufactured by such a method is also provided. Notably, the present method uses neither depth data nor a scanning process such as LiDAR or time-of-flight scanning.
Description
METHOD OF GENERATING DATAPOINTS REPRESENTING A BODY PART
Field of the Invention
This invention relates to a computer-implemented method of generating a plurality of datapoints which represent a body part on which a bespoke garment (for example, a bespoke compression garment) is to be worn. The invention further relates to a garment manufactured by such a method, and to a computer program for carrying out the method.
Background to the Invention
There is often a need to manufacture garments that are "bespoke" in nature; that is to say, garments which are shaped and sized to fit a specific wearer, based on the measurements of the applicable body part(s) of the intended wearer. Alternatively, one may wish to select, from a stock of pre-existing garments, a garment that best fits the intended wearer, which may also be considered "bespoke" to that wearer.
One such example of "bespoke" garments is the bespoke compression garment, towards which the present work is primarily directed. However, it should be appreciated that the present work is also applicable to other types of bespoke garments, such as sports garments (e.g. clothing or shoes for elite sportspeople), tailored clothing (e.g. shirts, trousers, dresses, jackets, etc.), garments for medical, clinical or surgical purposes, and so on.
Turning to bespoke compression garments in particular, these are worn on a person's or wearer's body part and apply pressure to the body part, typically to improve blood circulation (in particular venous return) therein. This can help to improve or cure a number of different health conditions or may address other medical needs. Compression garments are commonly worn on limbs, for example on the leg and/or foot.
Such compression garments are typically designed to envelop at least part of a foot and at least part of the corresponding leg of the wearer and may be termed compression stockings or compression socks. It should be noted, though, that although the exemplary compression garments in the present disclosure are generally illustrated and described as completely covering the wearer's foot, this need not necessarily be the case. For instance, when worn in use, the compression garment need not necessarily extend to cover the wearer's toes, and instead may take the form of a sleeve that is open at both ends.
Compression garments are conventionally knitted. Knitting, rather than weaving, may be used for compression garments since knitting is suited to cylindrical garments and compression garments are typically cylindrical. As such, knitted compression garments are quicker and more cost effective to manufacture than woven garments and may provide greater comfort than woven garments.
A knitted compression garment may typically comprise a plurality of courses or bands of fabric (which may also be referred to herein as "material courses" or "fabric courses"). These have a circumference which, when at rest (i.e. unstretched), is smaller than a circumference of the associated body part. As such, the courses are required to be strained to be worn over the body part, which causes a pressure to be applied on the body part.
Since people's legs vary in shape and size from person to person, it is desirable for a compression garment to be customised for a specific wearer, to provide the wearer with a specified pressure configuration, in use, along the length of the garment, i.e. along the wearer's foot and part of their leg. Such a compression garment may be referred to as a bespoke compression garment. In such a case, the pressure configuration may for example be prescribed by a clinician or healthcare professional, to suit the wearer's particular health condition or other medical needs.
It is important that the compression garment provided to the wearer is configured to accurately apply the prescribed pressure configuration. If the pressure which is applied is too great, then this can cause pain or discomfort. If the pressure is too little, then the patient's blood circulation may not be improved.
To impart the desired pressure configuration along the wearer's foot and part of their leg requires the compression garment to be tight-fitting along its length.
There are several ways in which a bespoke compression garment (and indeed any bespoke garment) can be made. For example, a person may take manual measurements (e.g. using a measuring tape) that are then used to knit and cut the garment. This process is slow and often inaccurate despite the small number of measurements needed.
WO 2022/008932 A1 discloses a method of making a bespoke knitted compression garment, the method comprising a scanning step carried out by a clinician using a camera, such as that on a portable electronic device (e.g. a tablet), to take images or record video of the body part from multiple angles or perspectives. Scanning software may then transform such images or video into a representation, in other words a model or three-dimensional model, of the body part which comprises a plurality of datapoints or vertices. However, this type of software may produce errors in the representation (i.e. holes or gaps in the representation), whether due to insufficient recorded data or errors in data transmission. Depending on the position and the size of the error in the representation, re-scanning of the body part may be required, as it is not possible to create additional views from the existing representation.
Other methods of obtaining the representation may use specialised depth hardware such as a time-of-flight camera or Light Detection and Ranging (LiDAR) scanner.
Such depth hardware measures the distance between the camera and the subject for each point of the image, based on the time taken for reflected artificial light to return to the camera. The scanning step requires dedicated depth hardware which is expensive and bulky, and which requires specific training to operate so as not to lose tracking of the object in real time. Furthermore, the depth hardware is platform-dependent and not compatible with all operating systems.
One way of overcoming the platform limitation is to use any device with a camera to take images and, using photogrammetry, reconstruct a three-dimensional (3D) representation of the body part. This method, however, requires a high number of photos that must be precisely taken to achieve the desired accuracy, including in respect of being dimensionally to scale.
Another disadvantage of any of these methods is that the person carrying them out needs to have extensive training in taking the measurements and/or operating the necessary software/hardware. Thus, currently, there is a trade-off between manufacturing garments from very few measurements which are easy to take, and manufacturing garments from extremely detailed models which require specialist hardware and training and can be difficult to do at scale.
There is, therefore, a desire to address the above problems: to develop a method of obtaining detailed models or representations of body parts, without expensive hardware, by a person with very little training, using commonly-available devices with a standard digital camera, that can then be used to manufacture bespoke garments or select well-fitting garments from pre-existing stock.
Summary of the Invention
Aspects of the present invention are set out in the appended independent claims, while details of certain embodiments are set out in the appended dependent claims.

According to a first aspect of the present invention there is provided a computer-implemented method of generating a plurality of datapoints which represent a body part on which a bespoke garment is to be worn, and from which datapoints said bespoke garment can be manufactured or selected, the method comprising the steps of: receiving a plurality of two-dimensional (2D) images of the body part and a surrounding environment from different angles of view; reconstructing a virtual three-dimensional (3D) mesh based on the received 2D images, the virtual 3D mesh comprising a 3D reconstruction of the body part with the surrounding environment; segmenting the virtual 3D mesh to isolate the body part from the surrounding environment and produce a segmented 3D mesh of the body part; scaling the segmented 3D mesh of the body part, based on an indication of scale, to produce a scaled segmented 3D mesh of the body part; and generating the plurality of datapoints which represent the body part based on the scaled segmented 3D mesh of the body part.
Such a method enables a dimensionally-accurate representation of the body part to be obtained from images provided by a person with little training, using a commonly-available device having a standard digital camera, without the use of depth hardware and without compromising the accuracy of the resulting representation of the body part.
According to a second aspect of the present invention there is provided a computer-implemented method of generating a plurality of datapoints which represent a body part on which a bespoke garment is to be worn, and from which datapoints said bespoke garment can be manufactured or selected, the method comprising the steps of: receiving a plurality of two-dimensional (2D) images of the body part and a surrounding environment from different angles of view; segmenting each of the plurality of 2D images of the body part to isolate the body part from the surrounding environment and produce a plurality of segmented 2D images; reconstructing a segmented three-dimensional (3D) mesh of the body part based on the segmented 2D images, the segmented 3D mesh comprising a 3D reconstruction of the body part; scaling the segmented 3D mesh, based on an indication of scale, to produce a scaled segmented 3D mesh of the body part; and generating the plurality of datapoints which represent the body part based on the scaled segmented 3D mesh of the body part.
This method also enables a dimensionally-accurate representation of the body part to be obtained from images provided by a person with little training, using a commonly-available device having a standard digital camera, without the use of depth hardware and without compromising the accuracy of the resulting representation of the body part. Another benefit of this method is a computationally efficient reconstruction step, as reconstruction is performed on the body part only, without reconstructing any of the surrounding environment.
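By way of illustration only, the per-image segmentation of the second aspect might be sketched as follows in Python. The function name and the use of a boolean mask are assumptions for illustration; the disclosure does not prescribe a particular 2D segmentation model, so the mask is assumed to come from any off-the-shelf segmenter.

```python
import numpy as np

def apply_segmentation_mask(image, mask):
    """Zero out background pixels, keeping only the body part.

    `image` is an (H, W, 3) RGB array; `mask` is an (H, W) boolean array
    marking body-part pixels. The mask is assumed to be produced by any
    2D segmentation model (hypothetical here; none is specified).
    """
    segmented = image.copy()
    segmented[~mask] = 0  # background pixels set to black
    return segmented

# Toy example: a 2x2 image in which only the top-left pixel is "body part".
img = np.array([[[255, 0, 0], [0, 255, 0]],
                [[0, 0, 255], [10, 10, 10]]], dtype=np.uint8)
m = np.array([[True, False], [False, False]])
out = apply_segmentation_mask(img, m)
```

Feeding such masked images into the reconstruction step means only body-part pixels contribute geometry, which is the source of the computational saving described above.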
The bespoke garment may be a bespoke compression garment. The body part may comprise a foot and at least part of a leg, for example.
The method may further comprise receiving a measurement in respect of the body part, or a measurement of another object within the surrounding environment, said measurement being used as the indication of scale. In some examples, the measurement may be the size of the foot.
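As an illustrative sketch (not part of the claims), scaling from such a measurement can be a single uniform factor: the known real-world measurement divided by the corresponding extent of the unscaled mesh. The function name and the choice of axis are assumptions for illustration.

```python
import numpy as np

def scale_mesh(vertices, measured_extent, axis=0):
    """Uniformly scale mesh vertices so that their extent along `axis`
    matches a real-world measurement (e.g. the wearer's foot length).

    `vertices` is an (N, 3) array in arbitrary reconstruction units;
    `measured_extent` is the known measurement in real units (e.g. mm).
    """
    extent = vertices[:, axis].max() - vertices[:, axis].min()
    factor = measured_extent / extent
    return vertices * factor  # uniform scaling preserves shape

# A unit-length toy "foot" rescaled to a measured 260 mm along x.
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.2, 0.1]])
scaled = scale_mesh(verts, 260.0, axis=0)
```

Because the factor is applied uniformly to all coordinates, circumferences and lengths elsewhere on the mesh are brought to real-world units by the same single measurement.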
The step of reconstructing may be performed using an AI model. Advantageously, this may be used to generate additional views of the virtual 3D mesh, not depicted by the received 2D images, based on the received 2D images. This may be particularly beneficial in situations where the intended wearer of the bespoke garment has low mobility or is immobilised (e.g. bed-bound), as 2D images may be captured in the wearer's most comfortable position while avoiding re-scans due to the lack of 2D image information.
For example, the AI model may be a Neural Radiance Field (NeRF) model or a Gaussian Splatting model.
The reconstructed virtual 3D mesh according to the first aspect of the present invention may comprise a plurality of vertices or a plurality of faces, and a plurality of normals.
The segmenting step according to the first aspect of the present invention may comprise the steps of: calculating the angles between each normal and a reference axis; and selecting the vertices or the faces that have a consistent angle between the respective normal and the reference axis as indicating a planar surface.
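The angle test above can be sketched numerically as follows. This is an illustrative assumption of one possible implementation: the 5-degree tolerance, the function name, and the use of face normals (rather than vertex normals) are not taken from the disclosure.

```python
import numpy as np

def planar_face_indices(normals, reference_axis, tol_deg=5.0):
    """Return indices of faces whose normals make a near-zero, consistent
    angle with the reference axis, indicating a flat surface such as the
    floor. `normals` is an (N, 3) array (unit length not required);
    `reference_axis` is a 3-vector, e.g. the z axis. The 5-degree
    tolerance is an illustrative choice, not from the disclosure.
    """
    n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    ref = np.asarray(reference_axis, dtype=float)
    ref = ref / np.linalg.norm(ref)
    cos_angle = n @ ref                       # dot product per normal
    angles = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return np.where(angles < tol_deg)[0]

# Two floor-like faces (normals near +z) and one leg-surface face (+x).
norms = np.array([[0.0, 0.0, 1.0], [0.01, 0.0, 1.0], [1.0, 0.0, 0.0]])
flat = planar_face_indices(norms, [0, 0, 1])
```

Faces of the curved body part have normals pointing in many directions, so only the flat floor (and similar surfaces) passes the consistency test.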
The segmenting step according to the first aspect of the present invention may further comprise the steps of: identifying the planar surface that comprises the vertices or the faces that have a consistent angle; calculating a rotation matrix required to rotate the planar surface to align the normals to be parallel to the reference axis; and rotating (reorienting) the virtual 3D mesh using the rotation matrix.
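The rotation-matrix calculation can be illustrated with Rodrigues' rotation formula, which rotates the detected plane's mean normal onto the reference axis. This is one possible construction, offered as an assumption; the disclosure does not specify how the matrix is computed.

```python
import numpy as np

def alignment_rotation(mean_normal, reference_axis):
    """Rotation matrix (via Rodrigues' formula) that rotates the detected
    plane's mean normal onto the reference axis, reorienting the mesh so
    that the floor becomes horizontal. Assumes the normal is not exactly
    antiparallel to the axis (that edge case would need a 180-degree turn
    about any perpendicular axis).
    """
    a = np.asarray(mean_normal, dtype=float)
    a = a / np.linalg.norm(a)
    b = np.asarray(reference_axis, dtype=float)
    b = b / np.linalg.norm(b)
    v = np.cross(a, b)                 # rotation axis (unnormalised)
    c = float(a @ b)                   # cosine of rotation angle
    if np.allclose(v, 0):
        return np.eye(3)               # already aligned
    K = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])  # skew-symmetric cross-product matrix
    return np.eye(3) + K + K @ K * ((1.0 - c) / (v @ v))

# A tilted floor normal rotated onto the z axis.
R = alignment_rotation([0.0, 0.3, 0.95], [0, 0, 1])
rotated = R @ np.array([0.0, 0.3, 0.95]) / np.linalg.norm([0.0, 0.3, 0.95])
```

Applying `R` to every vertex of the virtual 3D mesh performs the reorientation described in the segmenting step.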
Such reorientation of the virtual 3D mesh facilitates the eventual knitting of the garment and the ability to achieve the required compression behaviour of the garment.
The segmenting step according to the first aspect of the present invention may further comprise removing the vertices or the faces that have a consistent angle between the respective normal and the reference axis. This enables any flat surfaces (i.e. floor and any furniture present) to be automatically removed, thereby isolating the body part from the environment.
Alternatively, the method according to the first aspect of the present invention may comprise, after the step of reconstructing, and prior to the step of segmenting, a step of receiving user input to manually select a plurality of points that define sections of the virtual 3D mesh that contain the body part such that, in the step of segmenting, the computer implemented method then isolates the body part from the surrounding environment and produces the segmented 3D mesh based on the user's inputs.
According to a third aspect of the present invention there is provided a computer-implemented method of generating a plurality of datapoints which represent a body part on which a bespoke garment is to be worn, and from which datapoints said bespoke garment can be manufactured or selected, the method comprising the steps of: receiving a plurality of 2D images of the body part and a surrounding environment from different angles of view; generating a virtual segmented 3D mesh of the body part, without the surrounding environment, based on the received 2D images; scaling the segmented 3D mesh of the body part, based on an indication of scale, to produce a scaled segmented 3D mesh of the body part; and generating the plurality of datapoints which represent the body part based on the scaled segmented 3D mesh of the body part.
The receiving step according to the first, second, and third aspects of the present invention may comprise receiving still photographs or frames of video.
In certain examples, the plurality of 2D images may be received from a digital camera of a smartphone or tablet device.
Notably, the receiving step according to the first, second, and third aspects of the present invention does not use depth data, nor does it use a scanning process such as LiDAR or time-of-flight scanning.
According to a fourth aspect of the invention there is provided a method comprising: performing the method according to the first, second or third aspect of the invention; and manufacturing said bespoke garment by knitting in accordance with a set of knitting instructions based on said plurality of datapoints.
According to a fifth aspect of the invention there is provided a method comprising: performing the method according to the first, second or third aspect of the invention; and selecting an off-the-shelf garment having dimensions which correspond or substantially correspond to said plurality of datapoints.
According to a sixth aspect of the invention there is provided a bespoke garment manufactured by the method according to the fourth aspect of the invention. The bespoke garment may be a bespoke compression garment, for example.
According to a seventh aspect of the invention there is provided a computer program comprising instructions which, when the program is executed by a computer processor, cause the computer processor to carry out the method according to the first, second, third, fourth or fifth aspects of the invention.
Brief Description of the Drawings
Embodiments of the invention will now be described, by way of example only, with reference to the attached figures, in which:
Figure 1A schematically illustrates an example of a wearer's body part on which the bespoke garment of Figure 1B is to be worn;
Figure 1B illustrates an example of a bespoke garment to be worn on the wearer's body part;
Figure 2 is a procedural flow diagram providing an overview of a first method of generating a plurality of datapoints which represent a body part on which a bespoke garment is to be worn;
Figure 3A shows a schematic example of a set of 2D images that could be used in a virtual 3D mesh reconstruction;
Figure 3B shows, in a line drawing style, the result of a virtual 3D mesh reconstruction based on a set of 2D images;
Figures 4A and 4B schematically illustrate face normals and vertex normals of two different faces;
Figure 5 is a procedural flow diagram providing an overview of automatic ways of segmenting the 3D mesh of the body part;
Figure 6 shows a simplified virtual 3D mesh reconstruction based on the received set of 2D images;
Figure 7A illustrates a virtual 3D mesh reconstruction that has been rotated so that it is representative of the body part when the wearer is in a typical standing condition;
Figure 7B illustrates an additional view of the segmented 3D mesh of the body part, not depicted in any of the received 2D images;
Figure 8A shows a first detailed view of the segmented 3D mesh of the body part in profile, aligned in the x direction;
Figure 8B shows a second detailed view of the segmented 3D mesh of the body part, front-faced and aligned in the y direction;
Figures 9A and 9B show an example of a scaled segmented body part (the wearer's foot), once an indication of scale (e.g. based on the wearer's foot size) has been applied;
Figure 10A shows a first detailed view of the scaled segmented 3D mesh of the body part in profile, aligned in the x direction, derived from the view of Figure 8A;
Figure 10B shows a second detailed view of the scaled segmented 3D mesh of the body part, front-faced and aligned in the y direction, derived from the view of Figure 8B;
Figure 11 is a procedural flow diagram providing an overview of a second method of generating a plurality of datapoints which represent a body part on which a bespoke garment is to be worn; and
Figure 12 is a procedural flow diagram providing an overview of a third method of generating a plurality of datapoints which represent a body part on which a bespoke garment is to be worn.
In the figures, like elements are indicated by like reference numerals throughout.
Detailed Description of Preferred Embodiments
The present embodiments represent the best ways known to the Applicant of putting the invention into practice. However, they are not the only ways in which this can be achieved.
Figures 1A and 1B illustrate an example of a body part 100 on which a bespoke compression garment 10 is to be worn. In this example, the body part comprises a lower portion of a leg 160 and a foot 120. The bespoke garment 10 is tubular or substantially tubular in form, and comprises a plurality of fabric courses. For flat knitted garments in weft knitting, each fabric course may be considered to be a band, ring or circle of yarn. Adjacent courses or bands of yarn have interconnected or interknitted loops or bights. Preferably, the fabric courses are flat knitted. In practice, the bespoke garment 10 may be manufactured by an automated knitting machine according to measurements of the intended wearer's body part, or may be selected from pre-existing stock of differently-sized garments so as to closely match the measurements of the intended wearer's body part.
The compression garment 10 comprises a toe region 11, a foot region 12, a heel region 13, a leg region 16 and an open top 17 through which the wearer inserts their foot and leg when putting on the garment. Whilst the toe region 11 and heel region 13 are shown in the illustration as distinct areas, this need not necessarily be the case, and they may instead be a smooth continuation of the foot region 12 and leg region 16.
The different regions of the bespoke garment are designed to accommodate different portions of the body part. In other words, the toe region 11 is designed to accommodate the toes 110, the foot region 12 is designed to accommodate the foot 120, the heel region 13 is designed to accommodate the heel 130, and a leg region 16 is designed to accommodate the lower portion of the leg 160 such that the bespoke garment fits the needs of a specific wearer.
In the example of Figures 1A and 1B, the region 150 above the ankle of the wearer's leg, at the bottom of the calf, typically has a small circumference compared with other parts of the leg of the same wearer. As such, the corresponding region 15 of the bespoke garment is also a region of relatively small circumference.
Similarly, the region 140 of the wearer's foot around the heel and ankle (over the bridge of the foot), has a relatively large circumference which is typically much greater than the small circumference around the leg. Therefore, the corresponding region 14 of the bespoke garment is also a region of relatively large circumference.
At this point, it becomes clear that having an accurate representation of the body part on which the bespoke garment is to be worn is paramount, in order for the bespoke garment to be accurately manufactured or selected from pre-existing stock.
Therefore, an objective of the present work is to provide a method of generating a plurality of datapoints which accurately represent a body part on which a bespoke garment is to be worn, from images easily obtained by a person with little training using a commonly-available device with a standard digital camera, without the use of depth hardware and without compromising the accuracy of the detailed representation of the body part. Such datapoints may then be used to manufacture a bespoke garment (e.g. using an automatic knitting machine), or to enable a well-fitting garment to be selected from pre-existing stock of differently-sized garments.
First example method
A first example of the present method will now be described, primarily with reference to Figure 2. This example provides a computer-implemented method 20 of generating a plurality of datapoints 31 which represent a body part on which a bespoke garment is to be worn, and from which datapoints said bespoke garment can be manufactured or selected. The datapoints may be, for example, measurements, sizes of circumferences, and/or other necessary information that enable a dimensionally accurate representation of the wearer's body part to be produced.
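To make the nature of such datapoints concrete, one illustrative (assumed, not claimed) way of deriving a circumference datapoint from a scaled mesh is to take a thin horizontal slice of vertices at a given height and measure the perimeter of its outline. The function name, the slice thickness, and the angular-sort perimeter estimate (adequate for roughly convex limb cross-sections) are all illustrative assumptions.

```python
import numpy as np

def cross_section_circumference(vertices, height, band=2.0):
    """Estimate the limb circumference at `height` by slicing an (N, 3)
    vertex array within +/- `band` of that height (z axis) and summing
    segment lengths around the slice's outline, ordered by angle about
    the centroid. Units follow the scaled mesh (e.g. millimetres).
    """
    z = vertices[:, 2]
    slice_pts = vertices[np.abs(z - height) < band][:, :2]
    centroid = slice_pts.mean(axis=0)
    rel = slice_pts - centroid
    order = np.argsort(np.arctan2(rel[:, 1], rel[:, 0]))  # angular sort
    ring = slice_pts[order]
    closed = np.vstack([ring, ring[:1]])                  # close the loop
    return float(np.linalg.norm(np.diff(closed, axis=0), axis=1).sum())

# Synthetic circular cross-section of radius 50 at height 100.
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
pts = np.column_stack([50 * np.cos(theta), 50 * np.sin(theta),
                       np.full_like(theta, 100.0)])
c = cross_section_circumference(pts, 100.0)  # close to 2*pi*50
```

Repeating this at a series of heights along the leg yields the kind of circumference measurements from which knitting instructions, or a best-fitting stock garment, could be derived.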
In overview, the present method 20 comprises the steps of: receiving (step 21) a plurality of two-dimensional (2D) images 49 of the body part and a surrounding environment from different angles of view; reconstructing (step 22) a virtual three-dimensional (3D) mesh based on the received 2D images, the virtual 3D mesh comprising a 3D reconstruction of the body part with the surrounding environment; segmenting (step 23) the virtual 3D mesh to isolate the body part from the surrounding environment and produce a segmented 3D mesh of the body part; scaling (step 24) the segmented 3D mesh of the body part, based on an indication of scale 48, to produce a scaled segmented 3D mesh of the body part; and generating (step 25) the plurality of datapoints 31 which represent the body part based on the scaled segmented 3D mesh of the body part.
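The overall pipeline of steps 21 to 25 can be sketched as follows. All function names are invented for illustration, and each body is a trivial stand-in for the corresponding stage described in the text; none of this code forms part of the claimed method.

```python
# Hedged sketch of the pipeline (steps 22-25 acting on received images).
def reconstruct_3d_mesh(images_2d):
    # Step 22: in practice a NeRF / Gaussian Splatting / photogrammetry
    # reconstruction; here just a placeholder vertex list.
    return [(0.0, 0.0, 0.0), (1.0, 1.0, 1.0)]

def segment_body_part(mesh):
    # Step 23: isolate body-part geometry from the environment (no-op here).
    return mesh

def scale_mesh(mesh, factor):
    # Step 24: apply the indication of scale uniformly.
    return [tuple(c * factor for c in v) for v in mesh]

def extract_measurements(mesh):
    # Step 25: derive datapoints (e.g. circumferences) from the mesh.
    return {"n_vertices": len(mesh)}

def generate_datapoints(images_2d, indication_of_scale):
    mesh = reconstruct_3d_mesh(images_2d)            # step 22
    body = segment_body_part(mesh)                   # step 23
    scaled = scale_mesh(body, indication_of_scale)   # step 24
    return extract_measurements(scaled)              # step 25

datapoints = generate_datapoints(images_2d=[], indication_of_scale=1.1)
```

The value of structuring the method this way is that each stage can be swapped independently, e.g. replacing the reconstruction backend without touching the scaling or measurement logic.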
The steps 21 to 25 of the method may in practice be implemented by a suitably-programmed microprocessor, e.g. within a mobile device such as a smartphone or tablet used to capture the images of step 21, or forming part of a computer or server to which the images of step 21 are sent for processing. Any of these devices, or combination of these devices, may constitute the "system" referred to herein. A computer program which, when executed by such a microprocessor, causes the system to carry out the steps of the method, is also provided by the present disclosure. The computer program may be supplied on a computer-readable medium (e.g. a non-transitory computer-readable recording medium such as a CD or DVD) having computer-readable instructions thereon. Alternatively, the computer program may be provided in a downloadable format, over a network such as the Internet, or may be hosted on a server.
The method may also include receiving the desired pressure configuration the resulting garment is to exert on the intended wearer, as prescribed by a healthcare professional, for example.
Even though the present method refers to the steps of reconstructing, segmenting and scaling a "virtual 3D mesh", those skilled in the art will appreciate that there are several possible techniques and methods of creating virtual models of objects, environments and body parts. For example, a point cloud model may be used to virtually represent a reconstructed, segmented and/or scaled object, environment or body part in a similar way to using the steps described herein. Furthermore, point cloud models can be converted to virtual 3D meshes with readily available software, as virtual 3D meshes are easier to visualise and interpret than point cloud models. Therefore, the term "mesh" is not intended to be limited to traditional 3D mesh models but to include other techniques and methods that recreate virtual 3D models based on the individual vertices in a virtual 3D space.
In more detail, the present method 20 comprises the following steps:

- Receiving two-dimensional (2D) images

The system receives a plurality of 2D images 49 of the intended wearer's body part on which the bespoke garment is to be worn, and of the surrounding environment, from different angles of view (step 21 of Figure 2). The surrounding environment may be, for example, the floor on which the intended wearer stands, or nearby walls, or furniture or other items in the vicinity of the intended wearer.
The received 2D images may be still photographs or frames of video, which may for example be captured by a digital camera of a smartphone or tablet device. For example, a user may capture the 2D images of the wearer's body part simply by taking a 360° video of the body part. The images may be captured by the user as an integral part of carrying out the present method (i.e. "in real time", concurrently with the rest of the method), or may have been captured prior to the carrying-out of the present method, potentially in a different location.
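When the input is a 360-degree video, frames may be sampled rather than used in full. As an illustrative sketch (the sampling rate and function name are assumptions, not from the disclosure), evenly spaced frame indices can be chosen so that a minute of footage yields a frame count in the range that reconstruction models work well with:

```python
def sample_frame_indices(total_frames, fps, frames_kept_per_second=2):
    """Pick evenly spaced frame indices from a video so that roughly
    `frames_kept_per_second` frames are kept for each second of footage.
    A ~60 s walk-around video at 30 fps then yields ~120 frames, within
    the 100-200 image range suggested later in the text for NeRF-style
    reconstruction. (The rate of 2 frames/s is an illustrative choice.)
    """
    step = max(1, int(round(fps / frames_kept_per_second)))
    return list(range(0, total_frames, step))

# 60 s of 30 fps video -> 1800 frames -> 120 kept.
idx = sample_frame_indices(1800, 30)
```

The selected indices would then be used to extract still frames from the video before passing them to the reconstruction step.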
It will be appreciated that the term "user", as used herein, may refer to the person that captures the images, and/or who causes the present method to be carried out, and is not necessarily the intended wearer of the bespoke garment (although could be).
The capture of the 2D images 49 may be carried out by an untrained person (i.e. a person who is not trained in image capture and/or processing techniques, but may nevertheless be trained in other fields) using a standard digital camera, such as the one on a portable electronic device like a tablet or a smartphone, to take images or record video of the wearer's body part from multiple angles or perspectives.
In the example of Figures 1A and 1B, given that the body part on which the bespoke garment is to be worn is a leg and foot, the 2D images 49, schematically represented in Figure 3A, should include at least part of both the leg and the foot, for example, including at least a majority of the foot and at least a majority of the lower portion of the leg. For convenience, the 2D images 49 may be captured with the wearer having their legs spaced apart and therefore at an angle to a vertical direction.
-Reconstructing a virtual three-dimensional (3D) mesh

Based on the received 2D images 49, the system reconstructs a virtual 3D mesh 220 comprising a 3D reconstruction of the body part with the surrounding environment (step 22 of Figure 2).
For ease of representation, Figure 3B illustrates the result of a virtual 3D mesh reconstruction 220 in a line drawing style. The reconstruction shows a pair of feet 220.1, a pair of legs spaced apart and flat on a surface 220.3, and noise 220.4 surrounding the body part 220.2. As expected, the received 2D images 49 focus on the body part where the bespoke garment is to be worn and not on the surrounding environment. This means that the body part 220.2 is reconstructed with much more information than the surrounding environment and, therefore, the surrounding environment is represented as being mostly noise 220.4.
Therefore, the virtual 3D mesh accurately represents the body part 220.2 (ankles, calves, knees and part of the thighs) and the surrounding environment, in this case the flat surface 220.3 (e.g. the floor). This virtual 3D mesh 220 is represented in three directions x, y and z and can be manipulated in any direction and in each and all directions to obtain views in different angles of the virtual 3D mesh 220.
There are several techniques that can be used to reconstruct the virtual 3D mesh. By way of example, Artificial Intelligence (AI) models such as NeRF and Gaussian Splatting are able to use 2D images to reconstruct a virtual 3D mesh, allowing easily captured 2D images from any standard digital camera to be used in the reconstruction process. These AI models also work fairly well from as few as ten 2D images (preferably, 100 to 200 2D images) and, thus, there is no need to use custom hardware/software to take specific images at specific distances apart.
NeRF (or Neural Radiance Field) receives the 2D images of the body part taken from different perspectives, ideally from the same digital camera. In a first step, a computational photography algorithm calculates the location and direction of the camera for each 2D image. A sample ray is then cast through each pixel in each 2D image to calculate the Cartesian coordinates x, y, z of sample points along that ray, which are then sent to the network. The network then calculates the RGB colour values and a density value associated with each sample. The difference between equivalent pixels in different 2D images and the expected results is used to tune the neural network weights and train the neural network. The process is repeated many times (e.g. 200,000 times or so) until the network converges on a virtual 3D mesh of the required accuracy.
Gaussian Splatting, on the other hand, uses a structure-from-motion method to estimate a 3D point cloud from a set of 2D images. Each point is converted to a Gaussian, which is then rasterised. The Gaussian parameters are then adjusted based on the difference between the rasterised image and the real 2D image. Further processing may also be performed to ensure that the Gaussians better fit fine-grained details.
NeRF and Gaussian Splatting are two AI models that may advantageously be used, as they both use sparsely taken 2D images and their output is a virtual 3D mesh that can be viewed from additional angles, rather than only from the original 2D image angles. However, any AI model that is configured to generate additional views of the virtual 3D mesh, not depicted by the received 2D images, based on the received 2D images could potentially be used.
Indeed, it will be appreciated that NeRF and Gaussian Splatting are just examples of AI models that may be used to implement the invention; they are by no means the only ones. Other AI models can be used, including deep learning and machine learning models or techniques.
-Segmenting and producing a segmented 3D mesh of the body part

After the virtual 3D mesh reconstruction is performed, the virtual 3D mesh 220 is segmented to isolate the body part from the surrounding environment and produce a segmented 3D mesh of the body part (step 23 of Figure 2).
For simplicity's sake, the segmentation process will be described in relation to a virtual 3D mesh 220 comprising a plurality of vertices or a plurality of faces, and a plurality of normals, wherein the plurality of normals may comprise a plurality of face normals or a plurality of vertex normals, or both. This is, as explained above, by no means necessary as the virtual 3D mesh may be, for example, a point cloud or any other means of representing a 3D mesh.
As those skilled in the art of 3D image processing will appreciate, a normal is a theoretical line perpendicular to a face that indicates the direction of said face. For example, in the illustration of Figure 4A, the first and second faces 501, 502 have first and second face normals 511 and 512, respectively. A vertex normal, on the other hand, is determined by averaging the face normals of the faces that contain the vertex. In the example of Figure 4B, the face normals 511 and 512 were brought to the edge of the face and then copied for each vertex. This allows the contributions 521 and 522 of each face normal to the vertex normals for all six vertices to be determined.
Vertices A and B have vertex normal contributions from both faces and, therefore, an average must be performed in order to calculate the total vertex normal for vertices A and B. This average may be a weighted average or an unweighted average. Since the normals of a planar region all point in the same direction, identifying the vertices or faces that have a consistent angle between their respective normal and a reference axis allows planar surfaces in the virtual 3D mesh to be identified.
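The vertex-normal averaging described above can be sketched as follows. This is a minimal numpy illustration, not part of the application: the function name is hypothetical, and it uses the unweighted-average option mentioned above.

```python
import numpy as np

def vertex_normals(vertices, faces):
    """Average the normals of all faces that share each vertex
    (unweighted average; every vertex is assumed to lie in a face)."""
    tri = vertices[faces]                                  # (F, 3, 3)
    # Face normal: cross product of two edge vectors, then normalised.
    fn = np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0])
    fn /= np.linalg.norm(fn, axis=1, keepdims=True)
    vn = np.zeros_like(vertices)
    counts = np.zeros(len(vertices))
    for f, n in zip(faces, fn):
        vn[f] += n                                         # accumulate contributions
        counts[f] += 1
    vn /= counts[:, None]                                  # unweighted average
    vn /= np.linalg.norm(vn, axis=1, keepdims=True)        # re-normalise
    return vn
```

For two coplanar triangles forming a flat square, every vertex normal comes out identical, which is exactly the "consistent angle" property the segmentation step exploits.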
Figure 5 shows an example of automatic segmentation steps that the system may perform. The segmentation starts by calculating the angles between each normal and a reference axis (step 23.1). The angles may be calculated between each vertex normal and the reference axis or between each face normal and the reference axis. Preferably, the reference axis is a vertical axis (in the examples described herein, the z-axis), but this is by no means necessary and other reference axes may be used. The vertices or faces that have a consistent angle between the respective normal and the reference axis may then be selected as indicating a planar surface (step 23.2) such as, for example, the floor or furniture. Having consistent angles between the respective normal and the reference axis means that the vertices or faces have substantially the same or a common angle between their respective normal and the reference axis. The term "consistent angle" or "consistent angles" may be interpreted to mean angles within a predefined margin of variation, such as ±1°, ±2°, ±3°, ±4°, ±5° or ±10°, for example.
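The consistent-angle selection of steps 23.1 and 23.2 can be sketched as below. This is an illustrative numpy sketch only; it assumes the planar surface contributes the most common normal angle (taken as the histogram mode), and the function name, bin width and tolerance are hypothetical choices.

```python
import numpy as np

def planar_candidates(normals, ref_axis=np.array([0.0, 0.0, 1.0]), tol_deg=5.0):
    """Step 23.1: angle of each (face or vertex) normal to the reference axis.
    Step 23.2: select the normals whose angle is consistent, i.e. within a
    predefined margin of the dominant angle."""
    n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    angles = np.degrees(np.arccos(np.clip(n @ ref_axis, -1.0, 1.0)))
    # Take the most common angle (5° histogram bins) as the plane's angle.
    hist, edges = np.histogram(angles, bins=36, range=(0, 180))
    mode = 0.5 * (edges[hist.argmax()] + edges[hist.argmax() + 1])
    return np.abs(angles - mode) <= tol_deg
```

The boolean mask returned here is what a later step would feed into plane fitting or removal.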
The selection of the plane may be made using a plane-finding algorithm, such as random sample consensus (RANSAC), on the points with a consistent angle to calculate a plane equation. This allows the plane, and the equation for that plane, to be correctly identified (step 23.3).
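A minimal RANSAC plane fit, as mentioned for step 23.3, might look like the following numpy sketch (iteration count, inlier threshold and function name are illustrative assumptions, not values from the application):

```python
import numpy as np

def ransac_plane(points, iters=200, thresh=0.01, seed=0):
    """Repeatedly fit a plane to 3 random points and keep the plane
    equation n·p + d = 0 with the most inliers."""
    rng = np.random.default_rng(seed)
    best_n, best_d, best_count = None, None, -1
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-12:                       # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n @ p0
        count = (np.abs(points @ n + d) < thresh).sum()
        if count > best_count:
            best_n, best_d, best_count = n, d, count
    return best_n, best_d
```

Run on a point set that is mostly floor plus some noise, the winning plane equation describes the floor, ready for the rotation and removal steps that follow.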
The segmentation may then proceed directly to the removal of the vertices or the faces that have a consistent angle between the respective normal and the reference axis (step 23.6) as they all belong to the identified plane. This not only removes the floor, but also noise on the opposite side of the plane relative to the body part (i.e. noise from "below" the floor), thereby segmenting the body part and producing a segmented 3D mesh of the body part.
In some examples, however, the planar surface 220.3 may not be parallel to the z-plane but at an angle and, therefore, it may be preferable to rotate the entirety of the virtual 3D mesh. For simplicity of representation, Figure 6 shows an example of a virtual 3D mesh 221 with an inclined planar surface 221.3 with reduced noise 221.4.
In this example, the next step in segmentation may be to calculate a rotation matrix required to rotate the planar surface to align the normals to be parallel to the reference axis (step 23.4), using the plane equation obtained in the previous step (i.e. step 23.3). This allows the entire virtual 3D mesh to be rotated using the rotation matrix (step 23.5).
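The rotation matrix of step 23.4 can be built from the fitted plane normal with Rodrigues' formula, as in this numpy sketch (the function name and the handling of the two degenerate cases are illustrative assumptions):

```python
import numpy as np

def rotation_to_z(plane_normal):
    """Rotation matrix mapping the fitted plane normal onto the z-axis
    (Rodrigues' formula), so the floor becomes parallel to the x-y plane."""
    n = plane_normal / np.linalg.norm(plane_normal)
    z = np.array([0.0, 0.0, 1.0])
    c = n @ z                                   # cosine of the angle to z
    if np.isclose(c, 1.0):                      # already aligned
        return np.eye(3)
    if np.isclose(c, -1.0):                     # opposite: rotate 180° about x
        return np.diag([1.0, -1.0, -1.0])
    v = np.cross(n, z)
    vx = np.array([[0, -v[2], v[1]],
                   [v[2], 0, -v[0]],
                   [-v[1], v[0], 0]])           # skew-symmetric cross-product matrix
    return np.eye(3) + vx + vx @ vx / (1.0 + c)
```

Applying the returned matrix to every vertex (step 23.5) rotates the entire virtual 3D mesh so that the floor normal points straight up the z-axis.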
The rotation of the entire virtual 3D mesh may be of special importance in the field of bespoke compression garments as it enables the body part to be aligned with the reference axis (e.g. the z-axis), as shown in Figure 7A. In particular, when the virtual 3D mesh is in an orientation or at an angle relative to the z-axis of the coordinate system, the virtual 3D mesh is not representative of the body part when the intended wearer is in a typical standing condition. As such, said misoriented representation should be transformed or rotated to a correctly oriented condition, having an orientation relative to said z-axis which is representative of the body part when the intended wearer is in a typical standing condition. Such reorientation facilitates the knitting of the garment and the ability to achieve the required compression behaviour of the garment. For example, when manufacturing a compression stocking or sock, each course of the fabric that is to surround the intended wearer's calf will now be correctly oriented in the x-y plane (or "z-plane"), as viewed down the z-axis. In other words, each course of the fabric will run circumferentially around the body part of the intended wearer, enabling the required compression behaviour to be provided.
More generally, the orientation and geometry of the representation of the intended wearer's body part are preferably set to accurately reflect the material courses of the corresponding garment. For example, the axial direction of each of the courses of a foot portion should be oriented so as to be horizontal or substantially horizontal. Similarly, the axial direction of each of the courses of a leg portion should be oriented so as to be vertical or substantially vertical.
The segmentation may then proceed to the removal of the vertices or the faces that have a consistent angle between the respective normal and the reference axis (step 23.6) as they all belong to the identified plane. This not only removes the floor, but also noise on the opposite side of the plane relative to the body part (i.e. any spurious datapoints that would be "below" the floor), thereby segmenting the body part and producing a segmented 3D mesh of the body part. This may be achieved by cutting through the z plane aligned with the rotated flat surface so as to isolate the body part from the plane surface.
Another benefit of rotating the entire virtual 3D mesh before segmenting is that it is easier to remove the floor as it is at a set z height rather than at an angle. However, this is by no means necessary as other algorithms and techniques can be used to identify and select the plane so as to be removed.
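After rotation, the removal of step 23.6 reduces to a single cut at the floor's z height, as in this numpy sketch (function name, margin value and the face re-indexing are illustrative assumptions):

```python
import numpy as np

def remove_floor(vertices, faces, floor_z, margin=0.005):
    """Keep only vertices above the rotated floor plane (plus a small margin),
    discarding the floor itself and any noise below it, and re-index the
    faces so they refer to the surviving vertices."""
    keep = vertices[:, 2] > floor_z + margin
    remap = -np.ones(len(vertices), dtype=int)
    remap[keep] = np.arange(keep.sum())
    # A face survives only if all three of its vertices survive.
    new_faces = np.array([remap[f] for f in faces if keep[f].all()])
    return vertices[keep], new_faces
```

Everything at or below `floor_z` (the floor and any spurious datapoints "below" it) is dropped, leaving the segmented 3D mesh of the body part.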
Optionally, instead of (or in addition to) the system calculating the angles between each normal and a reference axis, the selection of points that define sections of the virtual 3D mesh that contain the body part may be done manually by the user after the reconstruction step but prior to the segmentation step (i.e. between steps 22 and 23 of Figure 2). The user may check manually that all vertices or faces of the body part are located within the selected section by manipulating the view angles of the virtual 3D mesh. Once the user is satisfied with their selection, the system may receive the user's input to select the plurality of points that define the sections of the virtual 3D mesh that contain the body part such that, in the step of segmenting, the system may then isolate the body part from the surrounding environment and may produce a segmented 3D mesh based on the user's inputs.
The result of the rotation of the virtual 3D mesh reconstruction 230 is shown in Figure 7A. As with Figure 6, the rotation of the virtual 3D mesh reconstruction shows a pair of feet 230.1 and a pair of legs spaced apart and flat on a surface 230.3, which now is parallel to the x-y plane. The virtual 3D mesh accurately represents the body part 230.2 (ankles, calves, knees and part of the thighs) and the surrounding environment, in this case the flat surface 230.3 or the floor. As with Figure 6, this rotated virtual 3D mesh 230 is represented in three directions x, y and z and can be manipulated (on screen, by the user) in each and all directions to obtain views in different angles of the rotated virtual 3D mesh 230. Figures 7B, 8A and 8B show three of those angles taken from the rotated virtual 3D mesh 230.
Figure 7B shows a detailed view 233 of the feet 230.1 of body part 230.2, after the removal of the flat surface 230.3, from a z-plane cutting through the ankles of the wearer. Notably, Figure 7B is an additional view of the segmented 3D mesh of the body part 230.2, not depicted in any of the received 2D images, but which may be derived from the received 2D images. This view may be used to ensure that all the noise and plane surfaces have been correctly removed from the segmented 3D mesh of the body part without unduly undercutting a portion of the body part.
The view of Figure 7B can be obtained by using one of the AI models described above. By manipulating the view angles of the segmented 3D mesh of the body part, it is possible to generate different z-planes which would yield different circumferences depending on the region of the virtual 3D mesh the z-plane cuts through. For example, the portion of the 3D mesh that represents region 150 above the ankle of the wearer's leg, at the bottom of the calf, should generally have a smaller circumference than the portion of the 3D mesh that represents region 140 of the wearer's foot around the heel and ankle (over the bridge of the foot).
This ability to generate additional views is very convenient, as the 2D images may be captured with the intended wearer of the bespoke garment lying down, sitting down, or, in the case of arms or legs, with their extremities spaced apart by various distances. Therefore, the use of these AI models allows the 2D images to be taken in the wearer's most comfortable position while avoiding re-scans due to a lack of 2D image information. This method is particularly advantageous in situations where the intended wearer of the bespoke garment has low mobility or is immobilised (e.g. bed-bound).
As described above, the reconstructed virtual 3D mesh allows for the body part -in the case of the example, legs and feet -to be perceived from any angle. This includes additional angles that were not captured by the 2D images but were generated based on those 2D images.
Figure 8A shows a first detailed view of the segmented 3D mesh of the body part 230.2 in profile, aligned in the x direction with the toes facing the right side of the figure. Figure 8B shows a second detailed view of the segmented 3D mesh of the body part 230.2 front-faced and aligned in the y direction.
At the end of the segmentation step, the segmented 3D mesh of the body part is an accurate representation of the body part, but it is dimensionless as there has been no indication of dimensions in any part of the method so far. Therefore, the next step is to scale the segmented 3D mesh of the body part so that the representation of the body part is a dimensionally accurate representation.
-Scaling the segmented 3D mesh of the body part

Based on an indication of scale 48, the segmented 3D mesh of the body part 230 is scaled to produce a scaled segmented 3D mesh of the body part (step 24 of Figure 2), which is a dimensionally accurate representation of said body part.
The indication of scale 48 may be a measurement in respect of the body part, or a measurement of another object within the surrounding environment. For example, considering Figures 8A and 8B, the indication of scale 48 may be an input measurement of the wearer's foot size 340.1 (i.e. shoe size, or a measurement, in cm or mm, of the foot length from the heel to the toes). The input measurement may be supplied to the system by the intended wearer themselves, or by a third party (such as a clinician or the operator of the digital camera used to provide the 2D images in step 21 of Figure 2), e.g. through a mobile app, web interface or other program prompt. The input measurement may be, for instance, "UK size 5", "European size 38" or "US size 6". Figures 9A and 9B show an example of a scaled foot (the wearer's foot) once an indication of scale 48 has been applied.
Alternatively, the indication of scale 48 may be the measurement, in cm or mm, of the wearer's calf from the ankle to the knee, for example.
Optionally, the indication of scale 48 may be the height, width or length of a known object within the surrounding environment within one of the received 2D images. The known object may, for example, be a ruler or calibration bar of known length, placed alongside the body part.
Thus, the views of the segmented 3D mesh of the body part of Figures 8A and 8B are scaled to produce the views of the scaled segmented 3D mesh of the body part of Figures 10A and 10B respectively. The dimensions in the x, y and z axes are displayed in mm. Figure 10A shows a first detailed view 241 of the scaled segmented 3D mesh of the body part 240.2 in profile, aligned in the x direction with the toes facing the right side of the figure. Figure 10B shows a second detailed view 242 of the scaled segmented 3D mesh of the body part 240 front-faced and aligned in the y direction.
Although step 24 of Figure 2 shows that the indication of scale is received by the system at the scaling step, it should be noted that this need not be the case. For example, the indication of scale 48 may be received with the 2D images of step 21 and saved in memory until the scaling step.
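The scaling step itself is a single multiplication once the real-world measurement is known, as in this numpy sketch (function name and the choice of foot length along the x-axis are illustrative assumptions):

```python
import numpy as np

def scale_mesh(vertices, measured_length_mm, axis=0):
    """Scale the dimensionless segmented mesh so that its extent along one
    axis (e.g. heel-to-toe foot length along x) equals a known real-world
    measurement, such as the wearer's foot length in mm."""
    extent = vertices[:, axis].max() - vertices[:, axis].min()
    return vertices * (measured_length_mm / extent)
```

Every coordinate is multiplied by the same factor, so the scaled mesh is dimensionally accurate in all three axes, not just the measured one.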
-Generating a plurality of datapoints

Once the segmented 3D mesh of the body part is dimensionally accurate, as a result of the scaling, it can be used to generate the plurality of datapoints which represent the body part (step 25 of Figure 2). These datapoints may then be used to determine circumference and other measurements along the body part's length, so as to enable a dimensionally accurate bespoke garment to be manufactured, or selected from pre-existing stock.
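One way the circumference measurements mentioned above could be taken from the scaled datapoints is to slice the mesh at a given z height and sum the perimeter of the resulting cross-section. The numpy sketch below does this for a point-cloud representation; the angular-ordering approach assumes a roughly convex cross-section (a limb), and all names and tolerances are illustrative.

```python
import numpy as np

def circumference_at(points, z, band=0.002):
    """Approximate the body part's circumference at height z: take the
    points in a thin z-band, order them by angle about their centroid,
    and sum the resulting polygon's edge lengths."""
    ring = points[np.abs(points[:, 2] - z) < band][:, :2]
    c = ring.mean(axis=0)
    order = np.argsort(np.arctan2(ring[:, 1] - c[1], ring[:, 0] - c[0]))
    ring = ring[order]
    closed = np.vstack([ring, ring[:1]])           # close the polygon
    return np.linalg.norm(np.diff(closed, axis=0), axis=1).sum()
```

Repeating this at a series of z heights yields the circumference profile along the body part's length from which a garment can be manufactured or selected.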
Alternative methods

A second example of the present method will now be described, primarily with reference to Figure 11. As with the first example (discussed with reference to Figure 2), this example also provides a computer-implemented method 70 of generating a plurality of datapoints 31 which represent a body part on which a bespoke garment is to be worn, and from which datapoints said bespoke garment can be manufactured or selected.
In overview, the alternative method 70 comprises the steps of:

receiving (step 21) a plurality of two-dimensional (2D) images 49 of the body part and a surrounding environment from different angles of view;

segmenting (step 72) each of the plurality of 2D images 49 of the body part to isolate the body part from the surrounding environment and produce a plurality of segmented 2D images;

reconstructing (step 73) a segmented three-dimensional (3D) mesh of the body part based on the segmented 2D images, the segmented 3D mesh comprising a 3D reconstruction of the body part;

scaling (step 24) the segmented 3D mesh of the body part, based on an indication of scale 48, to produce a scaled segmented 3D mesh of the body part; and

generating (step 25) the plurality of datapoints which represent the body part based on the scaled segmented 3D mesh of the body part.
As apparent from Figure 11, the steps of receiving the 2D images, scaling, and generating a plurality of datapoints are similar to the corresponding steps of the first example discussed with reference to Figure 2 and therefore are identified with the same reference numbers, and as such they will not be described in detail with reference to this second example.
-Segmenting the 2D images and producing a plurality of segmented 2D images of the body part

After receiving the 2D images 49, the system segments each of the plurality of 2D images 49 of the body part to isolate the body part from the surrounding environment and produce a plurality of segmented 2D images (step 72 of Figure 11).
There are several techniques that can be used to segment a set of 2D images. By way of example, the received 2D images may be segmented by using computer vision techniques such as (but not exclusively) edge-based segmentation or by using another AI model such as (but not exclusively) "Segment Anything".
Edge-based segmentation algorithms identify the edges of the body part in the 2D images based on discontinuities or variations in contrast, texture, colour, and saturation in each 2D image. These may accurately identify the edges of the body part in each 2D image, allowing the body part to be selected and segmented (or cropped) from the surrounding environment. However, to obtain a seamless border of the body part in each 2D image, the edges may need to be combined so as to reduce the number of edges and, by extension, facilitate the process of region filling which allows the whole body part (not just the edges) to be isolated from the surrounding environment. Therefore, edge-based segmentation may be applied on its own to each 2D image, or in combination with region-based segmentation or any other type of segmentation.
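A toy version of the edge-detection stage described above is sketched below in numpy. This is only a stand-in for the contrast/texture discontinuity detection mentioned in the text; a real pipeline would use e.g. Canny edge detection plus region filling, and the function name and threshold are hypothetical.

```python
import numpy as np

def edge_map(gray, thresh=0.2):
    """Mark pixels where the intensity gradient magnitude exceeds a
    threshold, i.e. where contrast changes abruptly (candidate edges
    of the body part)."""
    gy, gx = np.gradient(gray.astype(float))   # gradients along rows, columns
    return np.hypot(gx, gy) > thresh
```

On a synthetic image split into a dark and a bright half, only the pixels straddling the boundary are flagged as edges.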
The AI model "Segment Anything", on the other hand, allows the body part to be isolated from the surrounding environment simply by selecting a single point contained in the body part. If used in combination with other AI models, "Segment Anything" may also isolate the body part from the surrounding environment by using an input text (for example, "crop the body part") rather than manually inputting the point contained in the body part.
It will be appreciated that edge-based segmentation and the AI model "Segment Anything" are just examples of segmenting the 2D images before the reconstruction of the virtual 3D mesh; they are by no means the only ones. Other 2D image segmentation techniques and/or AI models can be used.
-Reconstructing a segmented 3D mesh of the body part

Based on the segmented 2D images, the system reconstructs a 3D mesh of the body part which comprises a 3D reconstruction of the body part (step 73 of Figure 11).
This reconstruction may be made using the processes and methods described in relation to Figure 2, but using the segmented 2D images to reconstruct a 3D mesh of the body part directly (i.e. without reconstructing the surrounding environment).
This means that the resulting segmented 3D mesh of the body part is reconstructed with minimal to no noise, resulting in a clear reconstruction of the body part. This technique may also be computationally more efficient than the one described in relation to Figure 2, as the reconstruction step 73 is performed on the body part only, without reconstructing any of the surrounding environment.
As explained above, there are benefits in aligning the body part with the reference axis. However, in this example, there is no noise in the resulting segmented 3D mesh of the body part, and no information indicative of a planar surface (i.e. the floor).
Consequently, the rotation matrix cannot be found by identifying the plane equation of the planar surface (i.e. the floor), as only the body part has been reconstructed, and therefore a different technique is needed to determine the rotation matrix to be applied. For example, a virtual straight line may be applied through the middle of the body part and its equation determined. In this case, the next step would be to calculate a rotation matrix required to rotate the line so as to be perpendicular to the z-plane. This would allow the entire segmented 3D mesh of the body part to be rotated using the rotation matrix.
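The straight-line approach described above can be sketched by taking the principal axis of the vertices as the body part's long axis and rotating it onto the z-axis. This is an illustrative numpy sketch (the PCA estimate, sign convention and function name are assumptions, not from the application):

```python
import numpy as np

def rotation_from_body_axis(vertices):
    """Estimate the body part's long axis as the principal component of the
    vertices and build the rotation (Rodrigues' formula) aligning it with
    the z-axis."""
    centered = vertices - vertices.mean(axis=0)
    # Principal axis = eigenvector of the covariance with the largest eigenvalue.
    w, v = np.linalg.eigh(np.cov(centered.T))
    axis = v[:, np.argmax(w)]
    if axis[2] < 0:                 # fix the sign so the mesh is not flipped
        axis = -axis
    z = np.array([0.0, 0.0, 1.0])
    c = axis @ z
    if np.isclose(c, 1.0):          # already aligned
        return np.eye(3)
    u = np.cross(axis, z)
    ux = np.array([[0, -u[2], u[1]],
                   [u[2], 0, -u[0]],
                   [-u[1], u[0], 0]])
    return np.eye(3) + ux + ux @ ux / (1.0 + c)
```

Applied to an elongated point cloud, the returned matrix rotates the cloud so its long axis (and hence its variance) lies along z, mimicking a standing leg.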
In another example, a user may manually identify some key markers on the segmented 3D mesh of the body part (e.g. markers on the feet of the segmented 3D mesh of the body part) and calculate the rotation matrix so as to align the key markers with the z-plane, allowing the entire segmented 3D mesh of the body part to be rotated.
It will be appreciated that these methods of rotating the segmented 3D mesh of the body part are just examples that may be used to implement the above method, but they are by no means the only ones.
The remaining steps of scaling the segmented 3D mesh of the body part (step 24) and generating a plurality of datapoints (step 25) are similar to those of the first example and can be performed in similar ways as those described above.
More generally, with reference now to Figure 12, the methods presented in the first and second examples may incorporate steps 22 and 23 of Figure 2, or steps 72 and 73 of Figure 11, in one single step 82, such that the overall method 80 comprises the steps of:

receiving (step 21) the plurality of 2D images 89 of the body part and a surrounding environment from different angles of view;

generating (step 82) a virtual segmented 3D mesh of the body part, without the surrounding environment, based on the received 2D images;

scaling (step 24) the segmented 3D mesh of the body part, based on an indication of scale 48, to produce a scaled segmented 3D mesh of the body part; and

generating (step 25) the plurality of datapoints which represent the body part based on the scaled segmented 3D mesh of the body part.
In this example, the step of generating a virtual segmented 3D mesh of the body part may comprise steps of reconstructing and segmenting, whereby the reconstruction of the virtual 3D mesh may happen before or after the segmenting step depending on which set of 2D images is received for the reconstruction step. If the 'original' 2D images are received (i.e. the plurality of 2D images 89 received include the body part and surrounding environment), then the reconstruction of the virtual 3D mesh will comprise the reconstruction of the body part and noise (i.e. the surrounding environment).
On the other hand, if the 2D images 89 received are already segmented 2D images so that each segmented 2D image comprises the isolated body part and no information on the surrounding environment, then the reconstruction of the virtual 3D mesh will solely comprise the reconstruction of the body part.
The remaining steps of scaling the segmented 3D mesh of the body part (step 24) and generating a plurality of datapoints (step 25) are similar to those described above, and can be performed in similar ways as those described above.
Other considerations

If the bespoke garment is a bespoke compression garment, any suitable method of making bespoke compression garments may use the computer-implemented method described herein to obtain dimensionally accurate datapoints and use these to produce a dimensionally accurate bespoke compression garment.
In the field of bespoke compression garments, a method of making bespoke knitted compression garments, like the one described in WO 2022/008932 A1 or the one described in WO 2022/129924 A1, could use the computer-implemented method described herein to obtain dimensionally accurate datapoints and use these to design a pressure-accurate compression garment.
It is important to note that the computer-implemented method described herein, unlike WO 2022/008932 A1 or WO 2022/129924 A1, does not use depth data, nor does it use a scanning process such as LiDAR or time-of-flight scanning in any of the steps of the computer-implemented method, including the receiving step (step 21 of Figure 2).
Another important note is that it is not essential for the system to display any images (such as those of Figures 3A to 4C and 7A to 10B) to the user or another party during the execution of the present method. The images included herein are primarily to illustrate the processing carried out in each step of the method. Although displaying the images at the end of each step may be useful to reassure the user or other party that the method is being correctly executed, and may also enable corrections to be initiated if required, this is by no means necessary for implementing the invention. Accordingly, it is possible to have an implementation of the method wherein the 2D images are received and the datapoints which represent a body part are outputted (for use in generating the knitting instructions required to manufacture the bespoke garment or selecting an off-the-shelf garment) without any images being displayed to the user.
Even though the body part, in the examples described herein, comprises a foot and at least part of a leg, this is not the only body part on which a bespoke garment may be worn. For example, arms and torsos may also be body parts on which a bespoke garment, manufactured or obtained using the present method, may be worn.
Knitting the garment

The above methods, and the present disclosure more generally, are primarily directed to generating a plurality of datapoints which represent a body part on which a bespoke garment is to be worn, and therefore the methods may end once the plurality of datapoints have been obtained. The above methods may, however, continue to include the manufacture of the garment itself.
Thus, the above methods may further comprise a step of manufacturing the bespoke garment by knitting in accordance with a set of knitting instructions based on the plurality of datapoints, e.g. using an automated knitting machine. The knitting may also take into account a desired pressure configuration the resulting garment is to exert on the intended wearer, as prescribed by a healthcare professional, for example.
The system may also export quality information about the processing and quality checks for use post-manufacture. Examples could include pressure maps, strain maps, transition markers, datapoints or coordinates, or measurements. The system may export key quality checks as a text file or spreadsheet for the knitting facility, which houses the knitting machine, to check against. The system may also generate the production documents with all the order information and the quality checks.
The system may be required to save or store all files, data, maps or other information generated. This could work in a variety of ways. For example, the system could store the files to a cloud location that is accessible by the knitting facility. A more advanced system could include a Graphical User Interface that has a process flow that integrates with an enterprise resource planning system. For example, the system could record the approval of quality checks by the operator and directly instruct the knitting machine.
Although a bespoke knitted garment may be manufactured, it will be appreciated that this need not necessarily be the case. For example, an off-the-shelf garment may be selected if the geometry of the body part of the wearer corresponds closely to that for which an off-the-shelf garment has already been designed and manufactured. In other words, an off-the-shelf garment having dimensions which correspond or substantially correspond, as closely as possible, to said plurality of datapoints may be selected and provided to the wearer.
If the bespoke garment is a bespoke compression garment, then the off-the-shelf garment geometry and applied pressure profile should correspond or substantially correspond, as closely as possible, to the geometry of the body part of the wearer and the desired pressure profile recommended or prescribed by a clinician. As such, once the plurality of datapoints which represent the body part on which the bespoke garment is to be worn have been generated, the system may use a database of preexisting compression garments having different size data and for applying different pressure configurations to determine whether an off-the-shelf garment may be suitable. A pre-existing compression garment from the database may be selected based on matching or substantially matching the size data and desired pressure configuration to the size data and pressure configuration of the pre-existing compression garment. The pre-existing compression garment may then be obtained and provided to the patient. The healthcare professional may utilise this option if a wait-time for a bespoke manufactured garment is unacceptable.
Although the bespoke garment is described as knitted, it will be appreciated that similar modelling methods and processes may be applied for non-knitted garments, for example woven garments or other garment construction.
Although flat knit courses are described above, it will be appreciated that helically knitted courses may be designed and manufactured using a variant of the present principles. Sections of the helical course may be modelled in a similar or identical way to the courses described above.
The words 'comprises/comprising' and the words 'having/including' when used herein with reference to the present invention are used to specify the presence of stated features, integers, steps or components, but do not preclude the presence or addition of one or more other features, integers, steps, components or groups thereof. It is appreciated that certain features of the invention, which are, for clarity, described in the context of separate embodiments, may also be provided in combination in a single embodiment. Conversely, various features of the invention which are, for brevity, described in the context of a single embodiment, may also be provided separately or in any suitable sub-combination.
The embodiments described above are provided by way of examples only, and various other modifications will be apparent to persons skilled in the field without departing from the scope of the invention as defined herein.
Claims (26)
- CLAIMS
- 1. A computer-implemented method of generating a plurality of datapoints which represent a body part on which a bespoke garment is to be worn, and from which datapoints said bespoke garment can be manufactured or selected, the method comprising the steps of: receiving a plurality of two-dimensional (2D) images of the body part and a surrounding environment from different angles of view; reconstructing a virtual three-dimensional (3D) mesh based on the received 2D images, the virtual 3D mesh comprising a 3D reconstruction of the body part with the surrounding environment; segmenting the virtual 3D mesh to isolate the body part from the surrounding environment and produce a segmented 3D mesh of the body part; scaling the segmented 3D mesh of the body part, based on an indication of scale, to produce a scaled segmented 3D mesh of the body part; and generating the plurality of datapoints which represent the body part based on the scaled segmented 3D mesh of the body part.
- 2. A computer-implemented method of generating a plurality of datapoints which represent a body part on which a bespoke garment is to be worn, and from which datapoints said bespoke garment can be manufactured or selected, the method comprising the steps of: receiving a plurality of two-dimensional (2D) images of the body part and a surrounding environment from different angles of view; segmenting each of the plurality of 2D images of the body part to isolate the body part from the surrounding environment and produce a plurality of segmented 2D images; reconstructing a segmented three-dimensional (3D) mesh of the body part based on the segmented 2D images, the segmented 3D mesh comprising a 3D reconstruction of the body part; scaling the segmented 3D mesh, based on an indication of scale, to produce a scaled segmented 3D mesh of the body part; and generating the plurality of datapoints which represent the body part based on the scaled segmented 3D mesh of the body part.
- 3. The method according to claim 1 or claim 2, wherein the bespoke garment is a bespoke compression garment.
- 4. The method according to any preceding claim, wherein the body part comprises a foot and at least part of a leg.
- 5. The method according to any preceding claim, further comprising receiving a measurement in respect of the body part, or a measurement of another object within the surrounding environment, said measurement being used as the indication of scale.
- 6. The method according to claim 5 when dependent on claim 4, wherein the measurement is the size of the foot.
- 7. The method according to any preceding claim, wherein the step of reconstructing is performed using an AI model.
- 8. The method according to claim 7, wherein the AI model is configured to generate additional views of the virtual 3D mesh, not depicted by the received 2D images, based on the received 2D images.
- 9. The method according to claim 7 or claim 8, wherein the AI model is a Neural Radiance Field model.
- 10. The method according to claim 7 or claim 8, wherein the AI model is a Gaussian Splatting model.
- 11. The method according to claim 1, or any of claims 3 to 10 when dependent on claim 1, wherein the reconstructed virtual 3D mesh comprises a plurality of vertices or a plurality of faces, and a plurality of normals.
- 12. The method according to claim 11, wherein the step of segmenting further comprises the steps of: calculating the angles between each normal and a reference axis; and selecting the vertices or the faces that have a consistent angle between the respective normal and the reference axis as indicating a planar surface.
- 13. The method according to claim 12, wherein the step of segmenting further comprises the steps of: identifying the planar surface that comprises the vertices or the faces that have a consistent angle; calculating a rotation matrix required to rotate the planar surface to align the normals to be parallel to the reference axis; and rotating the virtual 3D mesh using the rotation matrix.
- 14. The method according to claim 12 or claim 13, wherein the step of segmenting further comprises: removing the vertices or the faces that have a consistent angle between the respective normal and the reference axis.
- 15. The method according to claim 1, or any of claims 3 to 10 when dependent on claim 1, further comprising, after the step of reconstructing, and prior to the step of segmenting, a step of: receiving user input to manually select a plurality of points that define sections of the virtual 3D mesh that contain the body part such that, in the step of segmenting, the computer implemented method then isolates the body part from the surrounding environment and produces the segmented 3D mesh based on the user's inputs.
- 16. A computer-implemented method of generating a plurality of datapoints which represent a body part on which a bespoke garment is to be worn, and from which datapoints said bespoke garment can be manufactured or selected, the method comprising the steps of: receiving a plurality of 2D images of the body part and a surrounding environment from different angles of view; generating a virtual segmented 3D mesh of the body part, without the surrounding environment, based on the received 2D images; scaling the segmented 3D mesh of the body part, based on an indication of scale, to produce a scaled segmented 3D mesh of the body part; and generating the plurality of datapoints which represent the body part based on the scaled segmented 3D mesh of the body part.
- 17. The method according to any preceding claim, wherein the receiving step comprises receiving still photographs.
- 18. The method according to any of claims 1 to 16, wherein the receiving step comprises receiving frames of video.
- 19. The method according to any preceding claim, wherein the plurality of 2D images are received from a digital camera of a smartphone or tablet device.
- 20. The method according to any preceding claim, wherein the receiving step does not use depth data.
- 21. The method according to any preceding claim, wherein the receiving step does not involve a scanning process such as LiDAR or time-of-flight scanning.
- 22. The method according to any preceding claim; and manufacturing said bespoke garment by knitting in accordance with a set of knitting instructions based on said plurality of datapoints.
- 23. The method according to any of claims 1 to 21, and selecting an off-the-shelf garment having dimensions which correspond or substantially correspond to said plurality of datapoints.
- 24. A bespoke garment manufactured by the method of claim 22.
- 25. The bespoke garment of claim 24, wherein the bespoke garment is a bespoke compression garment.
- 26. A computer program comprising instructions which, when the program is executed by a computer processor, cause the computer processor to carry out the method of any of claims 1 to 23.
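The plane-segmentation step recited in claims 12 to 14 can be sketched as follows. This is an illustrative outline, not part of the patent text: it assumes unit face normals as input, and the angle-binning heuristic and tolerance are hypothetical choices; the rotation step of claim 13 and mesh libraries are omitted for brevity.

```python
import math

def angle_to_axis(normal, axis=(0.0, 0.0, 1.0)):
    """Angle in radians between a unit face normal and the reference axis."""
    dot = sum(n * a for n, a in zip(normal, axis))
    return math.acos(max(-1.0, min(1.0, dot)))

def segment_planar_faces(normals, axis=(0.0, 0.0, 1.0), tol=0.05):
    """Find faces whose normals make a consistent angle with the reference
    axis, as in claims 12 and 14: such faces indicate a planar surface
    (e.g. the floor) that can be removed, leaving the body part.
    Returns (planar_indices, remaining_indices)."""
    angles = [angle_to_axis(n, axis) for n in normals]
    # Bin the angles coarsely; the most populated bin is taken as the plane.
    bins = {}
    for i, a in enumerate(angles):
        bins.setdefault(round(a / tol), []).append(i)
    planar = max(bins.values(), key=len)
    planar_set = set(planar)
    remaining = [i for i in range(len(normals)) if i not in planar_set]
    return planar, remaining
```

In a full implementation the rotation matrix of claim 13 would then be computed to align the detected plane's normals with the reference axis before the planar faces are removed.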
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| GB2404260.8A GB2639864A (en) | 2024-03-25 | 2024-03-25 | Method of generating datapoints representing a body part |
| PCT/GB2025/050624 WO2025202621A1 (en) | 2024-03-25 | 2025-03-24 | Method of generating datapoints representing a body part |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| GB2404260.8A GB2639864A (en) | 2024-03-25 | 2024-03-25 | Method of generating datapoints representing a body part |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| GB202404260D0 GB202404260D0 (en) | 2024-05-08 |
| GB2639864A true GB2639864A (en) | 2025-10-08 |
Family
ID=90923533
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| GB2404260.8A Pending GB2639864A (en) | 2024-03-25 | 2024-03-25 | Method of generating datapoints representing a body part |
Country Status (2)
| Country | Link |
|---|---|
| GB (1) | GB2639864A (en) |
| WO (1) | WO2025202621A1 (en) |
Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2019045717A1 (en) * | 2017-08-31 | 2019-03-07 | Sony Mobile Communications Inc. | Methods, devices and computer program products for generation of mesh in constructed 3d images containing incomplete information |
| WO2022008932A1 (en) | 2020-07-10 | 2022-01-13 | Advanced Therapeutic Materials Limited | Method of making bespoke knitted compression garment |
| US20220044070A1 (en) * | 2018-12-17 | 2022-02-10 | Bodygram, Inc. | Methods and systems for automatic generation of massive training data sets from 3d models for training deep learning networks |
| WO2022129924A1 (en) | 2020-12-17 | 2022-06-23 | Advanced Therapeutic Materials Limited | Method of designing a bespoke compression garment, a compression garment and a computer program for carrying out this method |
| US11423630B1 (en) * | 2019-06-27 | 2022-08-23 | Amazon Technologies, Inc. | Three-dimensional body composition from two-dimensional images |
| EP4082435A1 (en) * | 2021-04-29 | 2022-11-02 | Lymphatech, Inc. | Methods and systems for identifying body part or body area anatomical landmarks from digital imagery for the fitting of compression garments for a person in need thereof |
Patent Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2019045717A1 (en) * | 2017-08-31 | 2019-03-07 | Sony Mobile Communications Inc. | Methods, devices and computer program products for generation of mesh in constructed 3d images containing incomplete information |
| US20220044070A1 (en) * | 2018-12-17 | 2022-02-10 | Bodygram, Inc. | Methods and systems for automatic generation of massive training data sets from 3d models for training deep learning networks |
| US11423630B1 (en) * | 2019-06-27 | 2022-08-23 | Amazon Technologies, Inc. | Three-dimensional body composition from two-dimensional images |
| WO2022008932A1 (en) | 2020-07-10 | 2022-01-13 | Advanced Therapeutic Materials Limited | Method of making bespoke knitted compression garment |
| WO2022129924A1 (en) | 2020-12-17 | 2022-06-23 | Advanced Therapeutic Materials Limited | Method of designing a bespoke compression garment, a compression garment and a computer program for carrying out this method |
| EP4082435A1 (en) * | 2021-04-29 | 2022-11-02 | Lymphatech, Inc. | Methods and systems for identifying body part or body area anatomical landmarks from digital imagery for the fitting of compression garments for a person in need thereof |
Non-Patent Citations (2)
| Title |
|---|
| ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 2022, TIANHAN XU ET AL, "Surface-Aligned Neural Radiance Fields for Controllable 3D Human Synthesis" * |
| COMPUTER GRAPHICS FORUM, vol 27, 2008, ARIEL SHAMIR, "A survey on Mesh Segmentation Techniques", pages 1539-1556 * |
Also Published As
| Publication number | Publication date |
|---|---|
| GB202404260D0 (en) | 2024-05-08 |
| WO2025202621A1 (en) | 2025-10-02 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN108697375B (en) | Spinal alignment estimation device, spinal alignment estimation method, and spinal alignment estimation program | |
| US8571698B2 (en) | Simple techniques for three-dimensional modeling | |
| Grant et al. | Accuracy of 3D surface scanners for clinical torso and spinal deformity assessment | |
| US20180122089A1 (en) | Method, apparatus and program for selective registration three-dimensional tooth image data to optical scanning tooth model | |
| US20200364935A1 (en) | Method For Calculating The Comfort Level Of Footwear | |
| US11779242B2 (en) | Systems and methods to estimate human length | |
| CN103605832A (en) | Method for forecasting clothing pressure distribution of human shanks | |
| Zhao et al. | Computerized girth determination for custom footwear manufacture | |
| US11600054B2 (en) | Methods and systems for manufacture of a garment | |
| Kozar et al. | Designing an adaptive 3D body model suitable for people with limited body abilities | |
| Chiu et al. | Automated body volume acquisitions from 3D structured-light scanning | |
| Sobhiyeh et al. | Hole filling in 3D scans for digital anthropometric applications | |
| US20250366544A1 (en) | Methods and systems for manufacturing of a garment | |
| GB2639864A (en) | Method of generating datapoints representing a body part | |
| US12490901B2 (en) | System and method of high precision anatomical measurements of features of living organisms including visible contoured shape | |
| Vannier et al. | Visualization of prosthesis fit in lower-limb amputees | |
| JP2018512188A (en) | Segment objects in image data using channel detection | |
| McGhee et al. | Three-dimensional scanning of the torso and breasts to inform better bra design | |
| CN114387389A (en) | A method of reconstructing three-dimensional head portrait from head CT tomography | |
| CN116797634A (en) | Image registration method for three-dimensional broken bone registration and splicing oriented to anatomical reduction | |
| CN109118501A (en) | Image processing method and system | |
| JP2008032489A (en) | 3D shape data generation method and 3D shape data generation apparatus for human body | |
| US12256788B2 (en) | Systems and methods for designing and fabricating mass-customized products | |
| US11776116B1 (en) | System and method of high precision anatomical measurements of features of living organisms including visible contoured shapes | |
| Sun | Finite element model for predicting the pressure comfort and shaping effect of wired bras |