WO2002003304A2 - Predicting changes in characteristics of an object - Google Patents
Predicting changes in characteristics of an object
- Publication number
- WO2002003304A2 WO2002003304A2 PCT/GB2001/002828 GB0102828W WO0203304A2 WO 2002003304 A2 WO2002003304 A2 WO 2002003304A2 GB 0102828 W GB0102828 W GB 0102828W WO 0203304 A2 WO0203304 A2 WO 0203304A2
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- model
- condition
- shape
- data
- operative
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/50—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for simulation or modelling of medical disorders
Definitions
- This invention relates to predicting changes in characteristics of an object and has particular but not exclusive application to procedures to be performed on living objects, especially the human body, such as maxillo-facial and craniofacial surgery, for example bimaxillary osteotomy which involves breaking, moving and resetting of both the maxilla and mandible to improve facial function and aesthetics.
- the simulation is performed on a 2D lateral view of the patient rather than in 3D, and hence the surgeon or patient cannot visualise the post-operative appearance from a range of 3D view-points.
- the simplistic nature of the empirical models leads to inaccurate simulation results.
- the second main prior approach involves finite element models - slower modelling techniques which allow simulation of non-linear, anisotropic and visco-elastic tissue properties. Examples are given in Hemmy D., Harris G.F., Ganaparthy V., "Finite Element Analysis of Craniofacial Skeleton Using Three Dimensional Imaging as the Substrate", in Caronni E.F. (Ed) Craniofacial Surgery, Proc. of the 2nd International Congress of the Intern. Society of Cranio-Maxillo-Facial Surgery, Florence, Italy, 1991, and Koch.
- the present invention embodies a new approach based upon statistical rather than physical modelling techniques.
- the invention addresses the disadvantages of current modelling techniques, and when applied to maxillo-facial and craniofacial surgery, can produce post-operative predictions in near real-time, from conventional pre-operative lateral cephalograms and pre-operative 3D facial surface data acquired using for example the Tricorder DSP Series 3D imaging system manufactured by Tricorder plc, of 6 The Long Room, Coppermill Lock, Summerhouse Lane,
- the invention can also provide significant advantages when used in other situations as will become evident hereinafter.
- a generic 2D statistical shape modelling technique has been developed known as a 2D Point Distribution Model (or PDM), in which objects are represented by a set of labelled 2D points.
- PDM Point Distribution Model
- the model consists of the mean positions of these points and the main modes of variation, which describe the ways in which the points move about the mean.
- a PDM is built by performing a statistical analysis of a number of shape training examples.
- Each example represents an observed instance of the class of shape, and is described by a set of manually labelled 2D points, so-called landmark points, that capture the important features of the object.
- the 2D PDM is built from the training data as follows: 1. Align the training examples using Procrustes Analysis as described by Cootes et al supra, scaling, rotating and translating the examples so that they correspond as closely as possible to the first training example, and 2. Apply Principal Component Analysis to the aligned shape vectors to obtain the mean shape x̄ and the matrix P of eigenvectors of the shape covariance matrix.
- P is truncated to use only the t most significant eigenvectors such that some fraction (typically 95%) of the training set variance is expressed.
- New examples of the class of objects modelled can then be generated by varying the shape parameters b.
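- A minimal NumPy sketch of this model-building step and of generating new shapes by varying b (assuming the training shapes have already been Procrustes-aligned; the variable names and synthetic data are illustrative only):

```python
import numpy as np

def build_pdm(shapes, variance_kept=0.95):
    """Build a Point Distribution Model from aligned training shapes.

    shapes: (n_examples, 2 * n_points) array; each row is an aligned training
    example flattened as (x1, y1, x2, y2, ...).
    Returns the mean shape, the matrix P of retained eigenvectors and the
    corresponding eigenvalues.
    """
    mean_shape = shapes.mean(axis=0)
    centred = shapes - mean_shape
    cov = np.cov(centred, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)           # ascending order
    order = np.argsort(eigvals)[::-1]                # most significant first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    # Keep the t most significant modes expressing e.g. 95% of the variance.
    cumulative = np.cumsum(eigvals) / eigvals.sum()
    t = int(np.searchsorted(cumulative, variance_kept)) + 1
    return mean_shape, eigvecs[:, :t], eigvals[:t]

def generate_shape(mean_shape, P, b):
    """New shape instance x = x_bar + P b for a given weight vector b."""
    return mean_shape + P @ b

# Synthetic example: 50 training shapes of 20 points each.
rng = np.random.default_rng(0)
training = rng.normal(size=(50, 40))
x_bar, P, lam = build_pdm(training)
b = np.zeros(P.shape[1])
b[0] = 2.0 * np.sqrt(lam[0])        # vary the first mode by +2 standard deviations
new_shape = generate_shape(x_bar, P, b)
```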
- g_i is a vector of grey-level profile data, and ḡ_i is the mean grey-level profile vector averaged over the training data for the ith landmark point,
- P_g is a matrix of the most significant eigenvectors of the grey-level training data covariance matrix for the ith landmark point, and
- b_g is a set of weights, one for each eigenvector.
- Training local grey-level models gives a set of specific models of the expected grey-level evidence at each point in the 2D PDM.
- 2D PDMs plus grey-level models can then be used in image search applications, that is, given a PDM of a particular class of 2D shape, one can locate an instance of that class of shape in a new image.
- the grey-level models can be used to compare expected and observed grey-level image evidence, producing a measure of grey-level fitness at each model point that is used to drive the image search algorithm.
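- The precise fitness measure is not reproduced here; a simple sketch, assuming a reconstruction-error measure under the truncated grey-level eigen-model at one landmark point, might look as follows:

```python
import numpy as np

def grey_level_fitness(profile, g_mean, P_g):
    """Fitness of an observed grey-level profile against the local model.

    profile : grey-level samples observed along the normal at a model point
    g_mean  : mean profile for that point, averaged over the training data
    P_g     : most significant eigenvectors of the profile covariance matrix

    The profile is projected onto the model (b_g = P_g^T (g - g_mean)) and the
    squared residual not explained by the model is returned; lower values mean
    a better match between observed and expected image evidence.
    """
    d = profile - g_mean
    b_g = P_g.T @ d               # local model weights for this profile
    residual = d - P_g @ b_g      # component outside the model subspace
    return float(residual @ residual)
```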
- Image search is achieved using an algorithm known as an Active Shape Model (or ASM), described in detail in Cootes T.F., Taylor C.J., "Active Shape Models - Smart Snakes", Proc. BMVC, Leeds 1992, Springer Verlag, pp266-275.
- ASM Active Shape Model
- 1. An instance of a 2D PDM is initialised at some position in the image, typically using the mean shape parameters. 2. A region of the image around each model point along the perpendicular to the boundary at that point is examined, and the best match between the observed and expected image data in that region is found; this gives a suggested local displacement at each model point. 3. The pose and shape parameters of the model instance are updated to move each model point as closely as possible towards its suggested new position.
- Steps 2 and 3 are iterated until the algorithm converges.
- ASMs can also be implemented in a multi-resolution form that speeds up the algorithm and improves its robustness.
- 2D PDMs and ASMs have been applied to a range of shape-modelling and image analysis applications, including face modelling and location as described in Lanitis A., Taylor, C.J., et al "Automatic Identification of Human Faces Using Flexible Appearance Models", Proc. 5th BMVC, 1994, pp 65-74.
- Other applications include locating heart ventricles in echocardiograms, segmenting magnetic resonance (MR) images of the abdomen, and locating anatomical landmarks in lateral cephalograms.
- MR magnetic resonance
- each object is described as a labelled set of n points; the only difference is that z-coordinates are now included.
- 1. a large number of landmark points (~500-1000) must be marked by hand for each example.
- 2. the examples must be aligned before contour extraction so that the contours approximately correspond between different examples.
- 3. the points marked on the contours are very unlikely to be 'true' 3D landmark points, e.g. points of high curvature in 3D.
- 4. the method has problems dealing with objects of complex topology. Another method is proposed in Heap T., Hogg D., "Towards 3D Hand Tracking Using a Deformable Model", Proc. 2nd International Conf. on Automatic Face and Gesture Recognition, 1996, pp 140-145. This involves a semi-automatic method for building 3D hand models from MRI data in which a physically based Simplex Mesh model is constructed on the first example.
- a further extension of the statistical modelling techniques is to build a predictive model. This is done by building a combined statistical model which models the correlation between one class of measurements A and another class of measurements B. A particular measurement of A can then be used to predict the corresponding measurement of B.
- a model is built which links a 3D PDM of an object to a matrix of Scatter Correction Factors associated with the object, and subsequently uses an instance of 3D shape to infer the corresponding Scatter Correction Factors.
- each combined example contains an example of measurements A (a vector x_A of length a) and an example of measurements B (a vector x_B of length b).
- the ith training example so obtained is a vector x_Ci, which concatenates a normalised version of x_Ai and a normalised version of x_Bi.
- the normalisation factors for A and B are given by the total training set variance of the measurement vectors x_A and x_B respectively.
- the combined vector x_C is thus normalised such that the sub-measurements x_A and x_B give an equal contribution (in terms of variance) to the combined vector.
- the model is truncated to use a or fewer eigenvectors in order that it may be used to make predictions.
- W is a diagonal matrix of weights with diagonal elements set to 1 for the first a elements, and 0 for the final b elements.
- Equation (7) is solved for the unknown vector of combined model weights b_C, using standard linear algebra techniques.
- x_C can then be calculated using equation (6), and the estimate of x_B is given by the last b elements of vector x_C multiplied by the normalisation factor for B.
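- A minimal NumPy sketch of such a combined predictive model, assuming that measurements A and B are supplied as row-per-example arrays and that equations (6) and (7) take the linear forms x_C = x̄_C + P_C b_C and W(x_C − x̄_C) = W P_C b_C (the names are illustrative):

```python
import numpy as np

def build_combined_model(A, B, n_modes):
    """PCA model of concatenated, variance-normalised measurements A and B.

    A : (n_examples, a) array of measurements A
    B : (n_examples, b) array of measurements B
    n_modes should be at most a, so that B can later be predicted from A alone.
    """
    sA = A.var(axis=0).sum()                 # total training-set variance of A
    sB = B.var(axis=0).sum()                 # total training-set variance of B
    Xc = np.hstack([A / sA, B / sB])         # combined, normalised vectors x_C
    mean_c = Xc.mean(axis=0)
    # Eigenvectors of the covariance matrix via SVD of the centred data.
    _, _, Vt = np.linalg.svd(Xc - mean_c, full_matrices=False)
    Pc = Vt[:n_modes].T                      # most significant modes first
    return mean_c, Pc, sA, sB

def predict_B(xA, mean_c, Pc, sA, sB, a, b):
    """Predict a measurement of B from a new measurement xA of A."""
    w = np.concatenate([np.ones(a), np.zeros(b)])       # diagonal of W
    xc_known = np.concatenate([xA / sA, np.zeros(b)])   # unknown B block zeroed
    # Solve W (x_C - mean) = W P_C b_C for b_C in the least-squares sense.
    b_c, *_ = np.linalg.lstsq(w[:, None] * Pc, w * (xc_known - mean_c), rcond=None)
    xc_full = mean_c + Pc @ b_c                          # equation (6)
    return xc_full[a:] * sB                              # last b elements, un-normalised
```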
- the invention provides an improved predictive technique which involves planning changes for one set of variables for an object and predicting corresponding changes in another set of variables for the object.
- the invention provides a method of predicting changes for an object with first and second characteristics that are distinct from but statistically correlated with one another, comprising: providing a statistical model configuration of at least one mode of variation of a first set of variables relating to the first characteristic of the object, and at least one mode of variation of a second set of variables relating to the second characteristic of the object, planning a change to the first set of variables for the object, and using the model configuration to predict a corresponding change to the second set of variables for the object from data corresponding to the planned change to the first set.
- the statistical model configuration may include a first parametric model of the first characteristic of the object, a second parametric model of the second characteristic of the object and a predictive model that characterises a statistical correlation between the models, and the method involves fitting the first and second models to the corresponding characteristics of an object in the first condition to provide parameterised data for the first and second characteristics of the object in the first condition, planning a change to the condition for the object so as to provide parameterised data for the first characteristic of the object in a second different condition, and utilising the parameterised data and the predictive model to provide parameterised data corresponding to a prediction of the change of second characteristic of the object in the second condition.
- the statistical model configuration may include a first parametric model of the first characteristic of the object in the first object condition, a second parametric model of the second characteristic of the object in the first object condition, a third parametric model of the first characteristic of the object in the second object condition, a fourth parametric model of the second characteristic of the object in the second object condition, and the predictive model characterises a statistical correlation between the models, with the method involving: fitting the first and second models to the corresponding characteristics of an object in the first condition to provide parameterised data for the first and second characteristics of the object in the first condition, planning the second condition for the object using the third model to provide parameterised data for the first characteristic of the object in the second condition, and utilising the parameterised data and the predictive model to provide parameterised data for the fourth model to predict the second characteristics of the object in the second condition.
- the invention has particular application to predicting the outcome of medical procedures and may be carried out to predict the outcome of a medical operative procedure
- the object comprises a patient
- the first shape characteristic corresponds to the shape of underlying hard tissue structure of the patient
- the second shape characteristic corresponds to the shape of a soft tissue structure that covers the hard tissue structure.
- Data may be acquired from a pre-operative lateral cephalogram concerning the shape of underlying hard tissue structure of the patient and data from a pre-operative 3D scan of the patient may be acquired for the shape of the soft tissue structure.
- the invention also includes a computer program to be run on a computer to perform the aforesaid method and data processing apparatus configured to perform the method.
- the invention provides a medical analysis tool comprising a processor operable to provide a statistical model configuration of at least one mode of variation of a first set of variables relating to shape characteristics of a relatively hard tissue part of a living body, and at least one mode of variation of a second set of variables relating to shape characteristics of a relatively soft tissue part of the body that overlies the relatively hard tissue part, an input operable to plan a change to the first set of variables for a patient, such that the processor utilises the model configuration to predict corresponding changes to the second set of variables for the patient, whereby to predict changes in shape of the soft tissue part that correspond to changes planned for the hard tissue part.
- the tool may include a model fitting system operable to fit the first and second models to the corresponding pre-operative hard and soft tissue shape characteristics of a patient to provide parameterised shape data for the pre-operative hard and soft tissue shape characteristics, and a planning input system operable to define a post-operative hard tissue configuration for the patient using the third model to provide parameterised shape data for the post-operative hard tissue configuration.
- the processor may be operable to utilise the parameterised shape data and the predictive model to provide parameterised data for the fourth model to predict post-operative soft tissue shape characteristics for the patient corresponding to the planned post-operative hard tissue configuration.
- the statistical model configuration may include at least one point distribution model.
- a display device may be configured to provide a visual display of the predicted post-operative soft tissue configuration and at least one of the pre-operative soft and hard tissue configurations and the planned post-operative hard tissue configuration, so that the outcome of the planned procedure can be reviewed and shown to the patient if desired.
- Figure 1 is a schematic illustration of a hardware configuration for carrying out a predictive method according to the invention for predicting the outcome of a bimaxillary osteotomy
- Figure 2 illustrates the relationship between process components of a model used in predicting the outcome of the surgery
- Figure 3 is a lateral cephalogram of a patient's head with landmark points shown marked on it
- Figure 4 illustrates a camera arrangement for capturing 3D data
- Figure 5 is an example of a 2D rendering of a 3D image captured by the camera arrangement of Fig. 4 with landmark points thereon,
- Figure 6 is a flow chart of a process for training the models
- Figure 7 is a flow chart of a process for predicting the outcome of a bimaxillary osteotomy, using the trained models
- Figure 8a illustrates a display of a 2D lateral cephalogram of the bony tissue of a patient before surgery is carried out
- Figure 8b illustrates a display of a proposed surgical treatment plan for the patient
- Figure 9a illustrates a display of a 3D model instance for the soft tissue shape of the head of the patient before surgery is carried out
- Figure 9b illustrates a display of a 3D predicted model of the soft tissue shape of the head of the patient after surgery is carried out according to the proposed treatment plan shown in Figure 8b.
- 2D and 3D shape-modelling techniques are used to build a statistical model of the relationship between hard and soft-tissue during maxillo-facial surgery.
- This model can then be used to predict 3D soft-tissue changes that occur as a result of maxillo-facial surgery.
- a surgeon may propose to break and move a patient's jawbone to improve facial function and aesthetics and the model provides a prediction of the resulting 3D shape of the head produced by the proposed surgery.
- the method can be split into 2 general stages:
- Model-Building - this involves building a statistical model which expresses the relationship between hard tissue and soft tissue, for both pre- and post-operative maxillo-facial patient data.
- Soft-Tissue Prediction - given pre-operative data for an individual patient, the statistical model is used to predict that patient's post-operative soft-tissue appearance from the pre-operative data plus knowledge of the surgeon's treatment plan.
- a number of statistical models are constructed using a hardware configuration shown in Figure 1.
- a conventional personal computer 1 with a processor unit 2, display screen 3, keyboard 4 and mouse 5 is coupled to a scanner 6.
- the scanner 6 permits X-ray side-view images of the patient's head, known as lateral cephalograms, to be scanned, digitised and fed to the processor unit 2.
- the resulting cephalogram data thus provides data concerning the bony or hard tissue configuration in the patient's head. It will be understood that this data can alternatively be obtained directly from digital X-ray equipment and the invention is not restricted to any particular method of hard tissue data capture.
- the processor unit 2 is also configured to receive data concerning the external or soft tissue appearance of the patient's head. This data may be captured using a 3D scanner 7 shown schematically.
- an example of a suitable 3D scanner 7 is the Tricorder DSP Series 3D device supra.
- the processor unit 2 includes a central digital processor, RAM, ROM and data storage media such as a hard disc and floppy disc, connected on a common bus in a conventional manner.
- the central processor can execute programs stored on the data storage media, so as to build the statistical models and display results obtained from them on the screen 3, and allow manipulation of the displayed data using the keyboard 4 and mouse 5.
- the programs build statistical models for the aforesaid model building and also execute the soft tissue prediction as will become apparent hereinafter.
- a statistical model is built that allows a prediction of postoperative soft-tissue appearance to be made from the following data: pre-operative soft-tissue appearance, pre-operative hard-tissue appearance, and knowledge of the surgical treatment plan i.e. knowledge of a proposed post-operative hard tissue appearance.
- the model building utilises the following components shown in Figure 2:
- a 2D PDM 10 describing the variability in shape of the pre-operative hard (bony) tissue structure, modelled from 2D landmark points marked on pre-operative lateral cephalograms
- a 3D PDM 11 describing the variability in shape of pre-operative 3D facial soft-tissue appearance, modelled from 3D surfaces acquired using the 3D scanner 7
- a 2D PDM 12 describing the variability in shape of the post-operative hard (bony) tissue structure, modelled from 2D landmark points marked on post-operative lateral cephalograms
- a 3D PDM 13 describing the variability in shape of post-operative 3D facial soft-tissue, modelled from 3D surfaces acquired using the scanner 7.
- a predictive model 14 which links the data from the models 10-13 together, describing the relationship between the data from models 10-12 and the data from model 13.
- a training set of pre and post-operative lateral cephalograms is obtained for human patients who have already undergone maxillo-facial surgery.
- the cephalograms thus constitute historical data for maxillo-facial procedures previously carried out and can be used to train the pre- and post-operative 2D PDMs 10, 12.
- the cephalograms are individually scanned using the scanner 6 and individually displayed on the screen 3 of the computer 1.
- Each of the pre and post operative models includes a number of standard anatomical landmarks useful to maxillo-facial surgeons (Nasion, Sella, Porion, Orbitale, Gonion, Pogonion, Menton, Gnathion, Upper Incisor Root, Upper Incisor Tip, Lower Incisor Root, Lower Incisor Tip, ANS, PNS, A Point, B Point).
- Figure 3 shows the structures modelled.
- a shape instance in the pre-operative 2D cephalogram model 10 can be described by the equation: x_CephPre = x̄_CephPre + P_CephPre b_CephPre (8), where
- x_CephPre is a vector of pre-op cephalogram 2D landmark data,
- x̄_CephPre is the mean pre-op cephalogram 2D landmark data averaged over the training set,
- P_CephPre is a matrix of the most significant eigenvectors of the pre-op cephalogram training data covariance matrix, and
- b_CephPre is a set of weights, one for each eigenvector.
- similarly, a shape instance in the post-operative 2D cephalogram model 12 can be described by the equation: x_CephPost = x̄_CephPost + P_CephPost b_CephPost (9), where x_CephPost is a vector of post-op cephalogram 2D landmark data, x̄_CephPost is the mean post-op cephalogram 2D landmark data averaged over the training set, P_CephPost is a matrix of the most significant eigenvectors of the post-op cephalogram training data covariance matrix, and b_CephPost is a set of weights, one for each eigenvector.
- identical anatomical landmarks are used in the post-operative cephalogram model to those in the pre-operative cephalogram model.
- the 3D shapes of the pre- and post-operative facial soft tissue are each modelled using a 3D PDM. This involves capturing a training set of images of pre- and post-operative facial shape using the scanner 7 shown in Figure 1.
- the basic modelling technique used is standard, as described by Hill et al. supra, but an improved method for marking up 3D training data is used, which addresses two problems with the standard method of Hill et al as will now be explained.
- a texture-mapped, triangulated 3D facial surface is acquired for each training example using the Tricorder DSP Series 3D capture system. The acquisition is done with each person face-on to the capture system as shown in Figure 4.
- the system includes an array of digital cameras C1-C4 directed face-on to the patient's face which is illuminated with a spatially textured light from a source (not shown) and the outputs of the cameras are processed to produce data corresponding to a texture-mapped, triangulated 3D facial surface.
- Each texture-mapped, triangulated 3D facial surface is converted into a 2.5D depth-map, and an image of the corresponding texture. This is done by calculating a virtual pin-hole camera model which is the average of the 4 (pre-calibrated) Tricorder DSP Series camera models shown in Figure 4, and re-projecting the 3D facial surface using this camera model to give a 2.5D depth-map and texture image.
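- A simplified sketch of this re-projection step, assuming an averaged pinhole camera with intrinsic matrix K and pose (R, t) and splatting the surface vertices only (a full implementation would rasterise the triangles; the names are illustrative):

```python
import numpy as np

def reproject_to_depth_map(vertices, texture_values, K, R, t, size=(256, 256)):
    """Re-project 3D surface points through a pinhole camera model to give a
    2.5D depth-map and a matching texture image.

    vertices       : (n, 3) surface points in world coordinates
    texture_values : (n,) grey/colour value per vertex
    K              : (3, 3) averaged camera intrinsic matrix
    R, t           : camera rotation (3, 3) and translation (3,)
    """
    h, w = size
    depth = np.full((h, w), np.inf)
    texture = np.zeros((h, w))
    cam = (R @ vertices.T).T + t                 # world -> camera coordinates
    proj = (K @ cam.T).T
    u = np.round(proj[:, 0] / proj[:, 2]).astype(int)
    v = np.round(proj[:, 1] / proj[:, 2]).astype(int)
    for ui, vi, zi, ti in zip(u, v, cam[:, 2], texture_values):
        if zi > 0 and 0 <= vi < h and 0 <= ui < w and zi < depth[vi, ui]:
            depth[vi, ui] = zi                   # keep the nearest surface point
            texture[vi, ui] = ti
    return depth, texture
```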
- Each depth-map and texture image is then treated as a simple image and a relatively small (~80) set of reproducible 2D points is manually marked on each image.
- Figure 5 shows an example marked-up texture image.
- the marked points consist of two types: i) landmark points (shown as filled dots 15) - distinctive facial features or positions which can be reliably marked on each example image, and ii) pseudo-landmark points (shown as unfilled dots 16) - intermediate points which are equally spaced along the shape boundary between the distinctive landmark points.
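- The pseudo-landmark spacing can be computed along the marked boundary; a sketch of placing n equally spaced intermediate points along a boundary polyline between two landmark points (illustrative only, since the marking described above is done manually):

```python
import numpy as np

def pseudo_landmarks(boundary, n_points):
    """n_points pseudo-landmarks at equal arc-length spacing along a boundary
    polyline running between two distinctive landmark points.

    boundary : (m, 2) ordered points tracing the boundary segment
    """
    seg = np.diff(boundary, axis=0)
    arc = np.concatenate([[0.0], np.cumsum(np.linalg.norm(seg, axis=1))])
    targets = np.linspace(0.0, arc[-1], n_points + 2)[1:-1]   # exclude the end landmarks
    x = np.interp(targets, arc, boundary[:, 0])
    y = np.interp(targets, arc, boundary[:, 1])
    return np.column_stack([x, y])
```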
- the marked 2D points are used to warp each image and depth-map into a common 'shape-free' frame using 2D thin-plate spline (TPS) interpolation.
- TPS thin-plate spline
- any pixel (x,y) in a given training example depth-map is nominally in correspondence with the same pixel in every other example depth-map.
- a small number of 2D landmark points have been used to produce texture-map and depth-map correspondences over the whole face.
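- A sketch of this warp using SciPy's thin-plate-spline radial basis interpolator, assuming the shape-free frame is defined by the mean landmark positions and resampling each example with an inverse mapping (names are illustrative):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.ndimage import map_coordinates

def warp_to_shape_free(image, landmarks, mean_landmarks):
    """Warp a texture image or depth-map into the common 'shape-free' frame.

    image          : (h, w) texture image or depth-map of one training example
    landmarks      : (n, 2) marked (row, col) points on this example
    mean_landmarks : (n, 2) corresponding points defining the shape-free frame

    A thin-plate spline mapping shape-free coordinates back to this example's
    coordinates is fitted (an inverse warp) and the image is resampled there.
    """
    tps = RBFInterpolator(mean_landmarks, landmarks, kernel='thin_plate_spline')
    h, w = image.shape
    rows, cols = np.mgrid[0:h, 0:w]
    grid = np.column_stack([rows.ravel(), cols.ravel()]).astype(float)
    src = tps(grid)                        # where each shape-free pixel comes from
    warped = map_coordinates(image, [src[:, 0], src[:, 1]], order=1, mode='nearest')
    return warped.reshape(h, w)
```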
- as in equation (3), a shape instance in the pre-operative 3D soft-tissue model 11 can be described by the equation: x_3DPre = x̄_3DPre + P_3DPre b_3DPre (10), where
- x_3DPre is a vector of pre-op 3D soft-tissue data,
- x̄_3DPre is the mean pre-op 3D soft-tissue data averaged over the training set,
- P_3DPre is a matrix of the most significant eigenvectors of the pre-op 3D soft-tissue training data covariance matrix, and
- b_3DPre is a set of weights, one for each eigenvector.
- identical 3D landmarks are used in the post-operative 3D soft-tissue model 13 to those in the pre-operative 3D soft-tissue model 11, so that a shape instance in the post-operative model can be described by the corresponding equation: x_3DPost = x̄_3DPost + P_3DPost b_3DPost (11).
- in Figure 6, the building of the models 10-13 is shown schematically as steps S1-S4.
- Each training example for the predictive model 14 consists of a measurement vector x_Predict that is the concatenation of 4 blocks of data: 1) a vector b_CephPre of length nCephPre representing the pre-operative 2D bony structure of the face in parametric form.
- b_CephPre is calculated from the raw 2D landmark point data x_CephPre by inverting equation (8),
- 2) a vector b_3DPre of length n3DPre representing the pre-operative 3D soft-tissue structure of the face in parametric form.
- b_3DPre is calculated from the raw 3D landmark point data x_3DPre by inverting equation (10),
- 3) a vector b_CephPost of length nCephPost representing the post-operative 2D bony structure of the face in parametric form.
- b_CephPost is calculated from the raw 2D landmark point data x_CephPost by inverting equation (9), and 4) a vector b_3DPost of length n3DPost representing the post-operative 3D soft-tissue structure of the face in parametric form.
- b_3DPost is calculated from the raw 3D landmark point data x_3DPost by inverting equation (11).
- each block of data making up x_Predict is normalised by dividing by its total training set variance, so that each type of data gives a contribution of equal weight to the combined model.
- the combined predictive model is then (in step S6 of Fig. 6) built from the training data by Principal Component Analysis, using the method described previously in relation to prior predictive models.
- an instance of the predictive model can be described by the equation: x_Predict = x̄_Predict + P_Predict b_Predict (13)
- where x_Predict is the predictive model instance, x̄_Predict is the mean predictive model data averaged over the training set, P_Predict is a matrix of the most significant eigenvectors of the predictive model training data covariance matrix, and b_Predict is a set of weights, one for each eigenvector.
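- A compact sketch of assembling the x_Predict training vectors from the four parameter blocks, normalising each block by its total training-set variance and building the combined model by Principal Component Analysis (assuming each block is supplied as a row-per-example array):

```python
import numpy as np

def build_predictive_model(b_ceph_pre, b_3d_pre, b_ceph_post, b_3d_post, n_modes):
    """Combined predictive model from the four parameter blocks (one row per
    training example in each block)."""
    blocks = [b_ceph_pre, b_3d_pre, b_ceph_post, b_3d_post]
    norms = [blk.var(axis=0).sum() for blk in blocks]           # per-block normalisation factors
    X = np.hstack([blk / n for blk, n in zip(blocks, norms)])   # rows are the x_Predict vectors
    x_mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - x_mean, full_matrices=False)
    P_predict = Vt[:n_modes].T       # most significant eigenvectors of the covariance matrix
    return x_mean, P_predict, norms
```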
- a useful predictive model can be built from of the order of 100 (or more) training examples, each containing the data for a single bimaxillary osteotomy procedure. Adding further training data improves the accuracy of the predictive model.
- Soft-Tissue Prediction
- the trained predictive model can be used to predict the outcome of a surgical maxillo-facial procedure.
- a surgeon may propose a procedure which involves breaking a patient's jaw and moving the jaw- line by resetting the jaw.
- the resulting change in the 3D physical appearance of the face produced by the procedure depends on the rearrangement of bony material produced by the surgery and has been difficult to predict, explain and demonstrate to the patient.
- the actual and the perceived success of the procedure depends greatly on the skill, experience and communication skills of the surgeon.
- the method according to the invention allows the surgeon to input a proposed procedure making reference to a 2D cephalogram of the patient and predict the 3D soft tissue outcome, i.e. the facial appearance after carrying out the surgery.
- a standard pre-operative lateral cephalogram of the patient is acquired by conventional X-ray techniques, which is scanned by means of the scanner 6 and the resulting data is supplied to the processor 2 shown in Figure 1.
- the 2D captured data for the pre-operative lateral cephalogram is converted into a parametric form by fitting the 2D pre-operative lateral cephalogram model 10 to the cephalogram of the patient.
- a pre-operative 3D facial soft-tissue surface image of the patient is acquired using the 3D Tricorder DSP Series device.
- the corresponding data is sent from scanner 7 to the processor 2.
- the captured pre-operative 3D facial soft-tissue surface data is converted into a parametric form by fitting the pre-operative 3D facial soft-tissue model 11 to the 3D facial soft-tissue surface.
- the surgical treatment plan is set up by manipulating the 2D landmarks on the pre-operative lateral cephalogram. This process is used to define an instance of the post-operative 2D cephalogram model 12.
- the resulting data are supplied as inputs to the predictive model 14 which, at steps S12 and S13, uses the pre-op lateral cephalogram parameters, pre-op 3D soft-tissue parameters and surgical treatment plan to predict post-op 3D soft-tissue shape and appearance.
- the 2D pre-operative lateral cephalogram model is fitted to the pre-operative lateral cephalogram using the standard multi-resolution ASM of Cootes et al "Active Shape Models : Evaluation of a Multi-Resolution Method for Improving Image Search", supra.
- the fitting algorithm determines the pre-operative cephalogram model shape parameters b_CephPre which best fit the given cephalogram, and also the 2D location, orientation and scaling of the model instance in the cephalogram. This permits the cephalogram to be characterised in terms of a small set of shape parameters b_CephPre, from which the aforementioned corresponding anatomical landmark point positions x_CephPre can be calculated.
- the fitting algorithm is run on the processor unit 2 in Figure 1 and the resulting location of the landmark points relative to the cephalogram of the patient may be displayed on the screen 3 of the computer to provide the user with confirmation that the 2D pre-operative model has been satisfactorily fitted to the bony tissue image of the patient
- the 3D pre-operative facial soft-tissue model is fitted to the pre-operative 3D facial soft-tissue surface using an algorithm run on the processor unit 2 which is a variant of the Iterated Closest Point (ICP) algorithm described in "A method for registration of 3-D shapes", Besl, P. J. and McKay, N. D., IEEE PAMI, 14(2), pp 239-256, 1992 .
- the original search algorithm of Hill et al described in "Model- Based Interpretation of 3D Medical Images", supra was developed for deforming 3D models to fit to 3D volumetric image data whereas the modified version of the ICP algorithm deforms an initial 3D PDM in both pose and shape to produce the best local fit to 3D surface data.
- the algorithm proceeds by iteratively deforming the model instance in pose and shape until the best local fit to the 3D surface data is obtained.
- a display of the resulting parameterised data may be provided on the screen 3 of the computer.
- the process allows the pre-operative 3D facial soft-tissue surface to be characterised automatically in terms of a small set of shape parameters b_3DPre, and a pose for the 3D surface model given by a scaling s, translation t and rotation R.
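- The modified ICP algorithm itself is not reproduced here; a minimal sketch of an ICP-style shape fit in the same spirit (pose update omitted for brevity, shape weights clamped to plausible limits) might be:

```python
import numpy as np
from scipy.spatial import cKDTree

def fit_pdm_to_surface(x_mean, P, eigenvalues, surface_points, n_iter=20):
    """ICP-style fit of a 3D PDM to a cloud of 3D surface points.

    At each iteration the closest surface point to every model point is found,
    and the shape parameters are updated (and clamped to +/- 3 standard
    deviations per mode) to best match those correspondences.  A fuller
    implementation would also update scale, rotation and translation.
    """
    tree = cKDTree(surface_points)
    b = np.zeros(P.shape[1])
    limit = 3.0 * np.sqrt(eigenvalues)
    for _ in range(n_iter):
        model_pts = (x_mean + P @ b).reshape(-1, 3)
        _, idx = tree.query(model_pts)              # closest surface point per model point
        target = surface_points[idx].reshape(-1)
        b = P.T @ (target - x_mean)                 # least-squares shape update (P orthonormal)
        b = np.clip(b, -limit, limit)               # keep the fitted shape plausible
    return b
```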
- the surgical treatment plan is input using a similar User Interface to that of existing systems such as OTP and QuickCeph supra.
- the pre-operative lateral cephalogram acquired at step S7 is displayed on the screen 3 with the anatomical landmark point positions x_CephPre marked on it.
- the surgeon then indicates the proposed changes to make during surgery by manipulating the bony landmark points with the mouse 5 or by means of the keyboard 4 to give a new set of landmark point positions x_CephPost, indicating how the mandible and/or maxilla will move during surgery.
- Figure 8a is a schematic illustration of the pre-operative lateral cephalogram of the patient and Figure 8b illustrates the planned post-operative configuration to be achieved by surgery.
- the parameterised form of the pre-operative data (b_CephPre, b_3DPre and the 3D surface model pose s, t, R) and the parameterised form of the treatment plan (b_CephPost) are used by the predictive model 14 to calculate a prediction of post-operative soft-tissue shape and appearance.
- the output of this algorithm is a version of the 3D pre-operative facial surface which has been modified to simulate the required maxillo-facial surgery.
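- A sketch of turning the predicted parametric output back into a posed 3D point set, assuming equation (11) and the pose (s, t, R) recovered when fitting the pre-operative 3D model (names are illustrative):

```python
import numpy as np

def predicted_surface(b_3d_post, x_mean_3d_post, P_3d_post, scale, R, t):
    """Posed 3D point set for the predicted post-operative soft tissue.

    The shape is reconstructed with equation (11), x = x_mean + P b, and then
    placed using the pose (scale, rotation R, translation t) recovered when
    fitting the pre-operative 3D soft-tissue model, so that the prediction
    overlays the patient's pre-operative data.
    """
    shape = (x_mean_3d_post + P_3d_post @ b_3d_post).reshape(-1, 3)
    return scale * shape @ R.T + np.asarray(t)
```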
- Figure 9a shows the display of the instance of the pre-operative 3D model 11 for the patient
- Figure 9b illustrates the 3D post-operative shape predicted by the predictive model 14 for the surgeon's treatment plan.
- the surgery planned in 2D as shown in Figures 8a and 8b is predicted to produce changes in 3D as shown in Figures 9a and 9b.
- the surgeon can then if desired modify the planned surgery in the screen display of Figure 8b and observe the outcome in the display of Figure 9b. This enables the surgical procedure to be optimised to achieve the desired aesthetic outcome.
- the displays of Figures 8 and 9 may be shown to the patient to explain and seek approval for the proposed procedure.
- the training of the predictive model 14 may be carried out on an ongoing basis.
- the model training was carried out as an initial step, but in addition, the data for subsequent surgical procedures may be used to update the training of the models.
- the invention is not restricted to maxillo-facial and craniofacial surgery and can be used for other procedures where it is useful to predict changes in soft tissue shape resulting from a proposed operation to change a corresponding relatively hard tissue configuration, and is not restricted to human surgery.
- the invention may also be used for operations on non-animate objects for which a statistical correlation occurs between an inner structure and an outer structure covering the inner structure so as to predict changes in the shape of the outer structure produced by a proposed operation to change the inner structure. Conditions other than the shape of the object may be predicted by means of the invention.
Landscapes
- Medical Informatics (AREA)
- Engineering & Computer Science (AREA)
- Public Health (AREA)
- Health & Medical Sciences (AREA)
- Pathology (AREA)
- Databases & Information Systems (AREA)
- Data Mining & Analysis (AREA)
- Biomedical Technology (AREA)
- Epidemiology (AREA)
- General Health & Medical Sciences (AREA)
- Primary Health Care (AREA)
- Image Processing (AREA)
- Measuring And Recording Apparatus For Diagnosis (AREA)
Abstract
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| AU2001266169A AU2001266169A1 (en) | 2000-06-30 | 2001-06-26 | Predicting changes in characteristics of an object |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| GB0016151A GB2364494A (en) | 2000-06-30 | 2000-06-30 | Predicting changes in characteristics of an object |
| GB0016151.3 | 2000-06-30 |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| WO2002003304A2 true WO2002003304A2 (fr) | 2002-01-10 |
| WO2002003304A3 WO2002003304A3 (fr) | 2003-03-13 |
Family
ID=9894812
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/GB2001/002828 Ceased WO2002003304A2 (fr) | 2000-06-30 | 2001-06-26 | Prevision de changements dans les caracteristiques d'un objet |
Country Status (3)
| Country | Link |
|---|---|
| AU (1) | AU2001266169A1 (fr) |
| GB (1) | GB2364494A (fr) |
| WO (1) | WO2002003304A2 (fr) |
Families Citing this family (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US7359748B1 (en) | 2000-07-26 | 2008-04-15 | Rhett Drugge | Apparatus for total immersion photography |
| KR102475962B1 (ko) * | 2020-08-26 | 2022-12-09 | 주식회사 어셈블써클 | 임상 영상의 시뮬레이션 방법 및 장치 |
| KR102757169B1 (ko) * | 2022-09-20 | 2025-01-21 | 사회복지법인 삼성생명공익재단 | 안면 교정 수술 후의 연조직 변화를 예측하는 방법 및 영상처리장치 |
Family Cites Families (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| AT408623B (de) * | 1996-10-30 | 2002-01-25 | Voest Alpine Ind Anlagen | Verfahren zur überwachung und steuerung der qualität von walzprodukten aus warmwalzprozessen |
- 2000
- 2000-06-30 GB GB0016151A patent/GB2364494A/en not_active Withdrawn
- 2001
- 2001-06-26 AU AU2001266169A patent/AU2001266169A1/en not_active Abandoned
- 2001-06-26 WO PCT/GB2001/002828 patent/WO2002003304A2/fr not_active Ceased
Cited By (14)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP2471483A1 (fr) * | 2005-03-01 | 2012-07-04 | Kings College London | Planification chirurgicale |
| RU2429539C2 (ru) * | 2005-09-23 | 2011-09-20 | Конинклейке Филипс Электроникс Н.В. | Способ, система и компьютерная программа для сегментирования изображения |
| WO2007034346A3 (fr) * | 2005-09-23 | 2008-12-04 | Koninkl Philips Electronics Nv | Procede, systeme et programme informatique destine a une segmentation d'images |
| WO2008115368A3 (fr) * | 2007-03-16 | 2008-12-11 | Carestream Health Inc | Système numérique pour chirurgie plastique et esthétique |
| EP2569755A4 (fr) * | 2010-05-21 | 2017-06-28 | My Orthodontics Pty Ltd | Prédiction d'une apparence post-intervention |
| EP2680233A4 (fr) * | 2011-02-22 | 2017-07-19 | Morpheus Co., Ltd. | Procédé et système pour obtenir une image d'ajustement facial |
| US8711178B2 (en) | 2011-03-01 | 2014-04-29 | Dolphin Imaging Systems, Llc | System and method for generating profile morphing using cephalometric tracing data |
| WO2012117122A1 (fr) * | 2011-03-01 | 2012-09-07 | Dolphin Imaging Systems, Llc | Système et procédé de génération d'une mutation de profils au moyen de données de suivi |
| US8417004B2 (en) | 2011-04-07 | 2013-04-09 | Dolphin Imaging Systems, Llc | System and method for simulated linearization of curved surface |
| EP2693976A4 (fr) * | 2011-04-07 | 2015-01-07 | Dolphin Imaging Systems Llc | Système et procédé pour simulation et planification chirurgicales maxillo-faciales tridimensionnelles |
| US8650005B2 (en) | 2011-04-07 | 2014-02-11 | Dolphin Imaging Systems, Llc | System and method for three-dimensional maxillofacial surgical simulation and planning |
| WO2012138624A2 (fr) | 2011-04-07 | 2012-10-11 | Dolphin Imaging Systems, Llc | Système et procédé pour simulation et planification chirurgicales maxillo-faciales tridimensionnelles |
| CN114649064A (zh) * | 2022-03-25 | 2022-06-21 | 国科大杭州高等研究院 | 预测模型及构建方法、预测方法及装置、电子设备 |
| CN116778576A (zh) * | 2023-06-05 | 2023-09-19 | 吉林农业科技学院 | 基于骨架的时序动作分割的时空图变换网络 |
Also Published As
| Publication number | Publication date |
|---|---|
| GB2364494A (en) | 2002-01-23 |
| WO2002003304A3 (fr) | 2003-03-13 |
| GB0016151D0 (en) | 2000-08-23 |
| AU2001266169A1 (en) | 2002-01-14 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| AK | Designated states |
Kind code of ref document: A2 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW |
|
| AL | Designated countries for regional patents |
Kind code of ref document: A2 Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG |
|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
| DFPE | Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101) | ||
| REG | Reference to national code |
Ref country code: DE Ref legal event code: 8642 |
|
| 122 | Ep: pct application non-entry in european phase | ||
| NENP | Non-entry into the national phase |
Ref country code: JP |