WO2017180097A1 - Deformable registration of intra and preoperative inputs using generative mixture models and biomechanical deformation - Google Patents
Deformable registration of intra and preoperative inputs using generative mixture models and biomechanical deformation
- Publication number
- WO2017180097A1 (PCT/US2016/027018)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- model
- intraoperative
- dimensional model
- mesh
- preoperative
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
- G06T7/344—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving models
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10068—Endoscopic image
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
Definitions
- at step 230 of FIG. 2, a generative mixture model such as a Gaussian Mixture Model (GMM) is computed on the intraoperative point cloud.
- a GMM is a parametric probability density function represented as a weighted sum of Gaussian component densities.
- the GMM will be treated as stationary in the registration.
- Each cloud point is given a Gaussian function, with the point position as the mean.
- the Gaussian standard deviation for the point is selected to be proportional to its distance to the closest point on the preoperative mesh.
- The cloud point Gaussians are combined into a single GMM functional.
- for each mesh point, the GMM gradient at that position is used to define a 3D vector which points toward the intraoperative point cloud 220.
- These unitless vectors are multiplied at step 235 by a scaling factor to convert them into physical force vectors.
- the physical force vectors determined at step 235 are utilized in a biomechanical model to drive the preoperative mesh 215 to deform toward the intraoperative point cloud 220.
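For illustration, a minimal NumPy sketch of the force computation described in the preceding bullets, assuming isotropic per-point Gaussians; all function and parameter names are illustrative and not taken from the patent:

```python
import numpy as np

def gmm_forces(mesh_points, cloud_points, scale=1.0):
    """Sketch: per-mesh-point forces from the gradient of a GMM built on the
    intraoperative point cloud. Each cloud point contributes an isotropic
    Gaussian whose standard deviation is proportional to its distance to the
    closest mesh point, as described above. Shapes: mesh (N, 3), cloud (K, 3)."""
    d = np.linalg.norm(cloud_points[:, None, :] - mesh_points[None, :, :], axis=2)
    sigma = d.min(axis=1) + 1e-6                                  # (K,)

    # Gradient of sum_k N(x; c_k, sigma_k^2 I) evaluated at each mesh point
    diff = cloud_points[None, :, :] - mesh_points[:, None, :]     # (N, K, 3)
    w = np.exp(-0.5 * (diff**2).sum(axis=2) / sigma**2) \
        / ((2.0 * np.pi) ** 1.5 * sigma**3)                       # (N, K)
    grad = (w[..., None] * diff / (sigma**2)[None, :, None]).sum(axis=1)

    # Unitless gradients are scaled into physical force vectors (step 235)
    return scale * grad
```

The gradient of the mixture points toward regions of high point-cloud density, which is what lets the method avoid strict point-to-point correspondences.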
- Various biomechanical models may be applied at step 240. For example, one embodiment of the equations of motion applied at step 240 is
$$M\ddot{u} + Ku = R \qquad (1)$$
where, in Equation 1, $u$ is a vector of displacements at the 3D points, $M$ is a mass matrix, $K$ is a stiffness matrix which depends on the material type, and $R$ is a vector of external active forces.
- the GMM gradient forces determined at steps 230 and 235 would be added to the R term in Equation 1.
- Equation 1 is only one example of a biomechanical model that may be applied at step 240. In other embodiments, other biomechanical models may be applied, for example, to incorporate other characteristics of the materials involved in the deformation.
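As a concrete illustration of how Equation 1 can drive the deformation, the following sketch advances the system one explicit (semi-implicit Euler) time step, assuming a lumped, diagonal mass matrix; this is one common discretization choice, not necessarily the one used in the patent:

```python
import numpy as np

def step_dynamics(u, v, m, K, R, dt=1e-3):
    """One semi-implicit Euler step of M u'' + K u = R (Equation 1).
    u, v: flattened displacement and velocity vectors (3N,);
    m: lumped diagonal mass (3N,); K: stiffness matrix (3N, 3N);
    R: external forces, e.g. the GMM gradient forces from steps 230-235."""
    a = (R - K @ u) / m     # accelerations from Equation 1
    v = v + dt * a          # update velocities first (semi-implicit)
    u = u + dt * v          # then update displacements
    return u, v
```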
- the final output of the method 200 is a deformation field which describes the non-rigid alignment of the preoperative data with the intraoperative data.
- this deformation field is used to generate a fused image display which overlays the deformed preoperative mesh 215 on the intraoperative point cloud 220.
- This fused image display may then be presented on a monitor at the intraoperative site to guide the medical staff in performing surgical procedures.
- the GMM gradient forces or other biomechanical parameters associated with the anatomical area of interest are used to determine a time for performing a new intraoperative scan to update the intraoperative data. As this time approaches, a visual and/or audible indicator may be presented to alert the surgical team that a new intraoperative scan should be performed. Alternatively, the time may be used to automatically perform the scan. It should be noted that this time may be derived far in advance of when the scan is needed. Thus, any automatic or manual preparation of the device providing the intraoperative scan may be done while surgery is being performed, allowing minimal time to be lost transitioning between surgery and intraoperative scanning.
- damping forces may be applied to the moving preoperative mesh 215 to aid in convergence to the final alignment.
- without damping, the gradient forces may be overly strong and cause oscillating movement of the mesh through the intraoperative point cloud 220.
- to prevent this, another set of forces may be introduced which penalize fast motion through the intraoperative point cloud 220. This could take the form of a force which is the product of the negative velocity of the mesh point and a damping coefficient, written as $F_{\text{damping}} = D(-v)$, where $D$ is the damping coefficient and $v$ is the velocity of the mesh point.
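A minimal sketch of such a damping force, with the coefficient either a scalar or a per-point value (e.g., proportional to the GMM value at each point, as noted earlier); names are illustrative:

```python
def damping_forces(velocities, damping_coeff):
    """Velocity-proportional damping: negative point velocity times a damping
    coefficient, added to the external force term R to suppress oscillation
    of the mesh through the point cloud. velocities: (N, 3) array;
    damping_coeff: scalar or (N, 1) array."""
    return -damping_coeff * velocities
```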
- the probabilistic model on the intraoperative point cloud 220 can incorporate surface normals.
- This surface normal can be generated from methods generally known in the art.
- the eigenvector corresponding to the smallest eigenvalue of the covariance matrix, computed using the neighborhood of the point, can be used as a surrogate for the surface normal $\hat{n}_i$.
- $C = \frac{1}{|N|}\sum_{i \in N}(p_i - \bar{p})(p_i - \bar{p})^T, \qquad C\,v_j = \lambda_j v_j, \qquad j \in \{0, 1, 2\}$ (3)
- where $\bar{p}$ is the mean of all the points in the neighborhood $N$, $p_i$ is the $i$-th point, and $v_j$ is the $j$-th eigenvector.
- a second pass to resolve the ambiguity in the direction of the surface normal is required. The ambiguity is resolved using the assumption that the surface is generated from an endoscopic view and hence must point towards the viewport.
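The normal estimation and sign disambiguation just described can be sketched as a straightforward PCA implementation under the stated endoscopic-view assumption; `neighbors` and `view_point` are illustrative parameters:

```python
import numpy as np

def estimate_normals(points, neighbors, view_point=np.zeros(3)):
    """Sketch of Equation 3: the normal at each point is the eigenvector of
    the neighborhood covariance matrix with the smallest eigenvalue. The sign
    ambiguity is resolved in a second pass by orienting normals toward the
    endoscope viewpoint. neighbors[i] holds the indices of neighborhood N."""
    normals = np.empty_like(points, dtype=float)
    for i, nbr in enumerate(neighbors):
        p = points[nbr]
        p_bar = p.mean(axis=0)
        C = (p - p_bar).T @ (p - p_bar) / len(nbr)
        _, eigvecs = np.linalg.eigh(C)      # eigenvalues in ascending order
        n = eigvecs[:, 0]                   # smallest-eigenvalue eigenvector
        if np.dot(n, view_point - points[i]) < 0:
            n = -n                          # point the normal toward the viewport
        normals[i] = n
    return normals
```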
- the probabilistic GMM model applied at step 230 of FIG. 2 is then generated on the augmented 6D vector $\tilde{x}_i \in \mathbb{R}^6$ composed of $[x_i; \hat{n}_i]$ instead of just the points.
- for each mesh point, the VMF gradient at that position is used to define a 6D vector which points toward the intraoperative cloud.
- These vectors are unitless, and so are then multiplied by a scaling factor to convert them into physical wrench (force/torque) vectors.
- the wrenches are then utilized in the biomechanical model to drive the tissue to deform toward the intraoperative point cloud 220.
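One plausible reading of this augmented 6D model combines, per cloud point, a positional Gaussian with a Von Mises-Fisher term on the normal; the 6D gradient then splits into a translational (force) and rotational (moment-like) part. A hedged sketch, with all parameter values and names assumed rather than taken from the patent:

```python
import numpy as np

def vmf_gmm_wrenches(mesh_pts, mesh_normals, cloud_pts, cloud_normals,
                     sigma=5.0, kappa=10.0, scale=1.0):
    """Sketch: mixture over cloud points of Gaussian(position) * VMF(normal).
    Returns (N, 6) wrench vectors: gradient w.r.t. position (force) stacked
    with gradient w.r.t. normal (moment-like component)."""
    diff = cloud_pts[None, :, :] - mesh_pts[:, None, :]          # (N, K, 3)
    g = np.exp(-0.5 * (diff**2).sum(-1) / sigma**2)              # Gaussian part
    c_vmf = kappa / (4.0 * np.pi * np.sinh(kappa))               # VMF normalizer
    v = c_vmf * np.exp(kappa * (mesh_normals @ cloud_normals.T)) # VMF part
    w = g * v                                                    # (N, K)

    force = (w[..., None] * diff / sigma**2).sum(axis=1)         # dF/dx
    moment = kappa * (w @ cloud_normals)                         # dF/dn
    return scale * np.hstack([force, moment])
```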
- in this case, one embodiment of the damping term in the equations of motion may be written as
$$F_{\text{Damping}} = D_n\left(-[v_n;\ \omega_n]\right) \qquad (2)$$
where $D$ is the vector of damping coefficients, and $v_n$ and $\omega_n$ are the translational velocity of the point and the rotational velocity of its normal, respectively.
- FIG. 4 illustrates an exemplary computing environment 400 within which embodiments of the invention may be implemented.
- This environment 400 may be used, for example, to implement a portion of one or more components used at the preoperative site 105 or the intraoperative site 110 illustrated in FIG. 1.
- Computing environment 400 may include computer system 410, which is one example of a computing system upon which embodiments of the invention may be implemented.
- Computers and computing environments, such as computer system 410 and computing environment 400, are known to those of skill in the art and thus are described briefly here.
- the computer system 410 may include a communication mechanism such as a bus 421 or other communication mechanism for communicating information within the computer system 410.
- the system 410 further includes one or more processors 420 coupled with the bus 421 for processing the information.
- the processors 420 may include one or more central processing units (CPUs), graphical processing units (GPUs), or any other processor known in the art. More generally, a processor as used herein is a device for executing machine-readable instructions stored on a computer readable medium, for performing tasks, and may comprise any one or combination of hardware and firmware. A processor may also comprise memory storing machine-readable instructions executable for performing tasks. A processor acts upon information by manipulating, analyzing, modifying, converting or transmitting information for use by an executable procedure or an information device, and/or by routing the information to an output device.
- a processor may use or comprise the capabilities of a computer, controller or microprocessor, for example, and be conditioned using executable instructions to perform special purpose functions not performed by a general purpose computer.
- a processor may be coupled (electrically and/or as comprising executable components) with any other processor enabling interaction and/or communication there-between.
- a user interface processor or generator is a known element comprising electronic circuitry or software or a combination of both for generating display images or portions thereof.
- a user interface comprises one or more display images enabling user interaction with a processor or other device.
- the computer system 410 also includes a system memory 430 coupled to the bus 421 for storing information and instructions to be executed by processors 420.
- the system memory 430 may include computer readable storage media in the form of volatile and/or nonvolatile memory, such as read only memory (ROM) 431 and/or random access memory (RAM) 432.
- the system memory RAM 432 may include other dynamic storage device(s) (e.g., dynamic RAM, static RAM, and synchronous DRAM).
- the system memory ROM 431 may include other static storage device(s) (e.g., programmable ROM, erasable PROM, and electrically erasable PROM).
- the system memory 430 may be used for storing temporary variables or other intermediate information during the execution of instructions by the processors 420.
- a basic input/output system 433 (BIOS) containing the basic routines that help to transfer information between elements within computer system 410, such as during start-up, may be stored in ROM 431.
- RAM 432 may contain data and/or program modules that are immediately accessible to and/or presently being operated on by the processors 420.
- System memory 430 may additionally include, for example, operating system 434, application programs 435, other program modules 436 and program data 437.
- the computer system 410 also includes a disk controller 440 coupled to the bus 421 to control one or more storage devices, such as a magnetic hard disk 441 and a removable media drive 442 (e.g., floppy disk drive, compact disc drive, tape drive, and/or solid state drive).
- the storage devices may be added to the computer system 410 using an appropriate device interface (e.g., a small computer system interface (SCSI), integrated device electronics (IDE), Universal Serial Bus (USB), or FireWire).
- the computer system 410 may also include a display controller 465 coupled to the bus 421 to control a display or monitor 466, such as a cathode ray tube (CRT) or liquid crystal display (LCD), for displaying information to a computer user.
- the computer system includes an input interface 460 and one or more input devices, such as a keyboard 462 and a pointing device 461, for interacting with a computer user and providing information to the processor 420.
- the pointing device 461, for example, may be a mouse, a light pen, a trackball, or a pointing stick for communicating direction information and command selections to the processor 420 and for controlling cursor movement on the display 466.
- the display 466 may provide a touch screen interface which allows input to supplement or replace the communication of direction information and command selections by the pointing device 461.
- the computer system 410 may perform a portion or all of the processing steps of embodiments of the invention in response to the processors 420 executing one or more sequences of one or more instructions contained in a memory, such as the system memory 430. Such instructions may be read into the system memory 430 from another computer readable medium, such as a hard disk 441 or a removable media drive 442.
- the hard disk 441 may contain one or more datastores and data files used by embodiments of the present invention. Datastore contents and data files may be encrypted to improve security.
- the processors 420 may also be employed in a multi-processing arrangement to execute the one or more sequences of instructions contained in system memory 430.
- hard-wired circuitry may be used in place of or in combination with software instructions. Thus, embodiments are not limited to any specific combination of hardware circuitry and software.
- the computer system 410 may include at least one computer readable medium or memory for holding instructions programmed according to embodiments of the invention and for containing data structures, tables, records, or other data described herein.
- the term "computer readable medium” as used herein refers to any medium that participates in providing instructions to the processor 420 for execution.
- a computer readable medium may take many forms including, but not limited to, non-transitory, nonvolatile media, volatile media, and transmission media.
- Non-limiting examples of nonvolatile media include optical disks, solid state drives, magnetic disks, and magneto-optical disks, such as hard disk 441 or removable media drive 442.
- Non-limiting examples of volatile media include dynamic memory, such as system memory 430.
- Non-limiting examples of transmission media include coaxial cables, copper wire, and fiber optics, including the wires that make up the bus 421.
- Transmission media may also take the form of acoustic or light waves, such as those generated during radio wave and infrared data communications.
- the computing environment 400 may further include the computer system 410 operating in a networked environment using logical connections to one or more remote computers, such as remote computer 480.
- Remote computer 480 may be a personal computer (laptop or desktop), a mobile device, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to computer system 410.
- computer system 410 may include modem 472 for establishing communications over a network 471, such as the Internet. Modem 472 may be connected to system bus 421 via a user network interface.
- Network 471 may be any network or system generally known in the art, including the Internet, an intranet, a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a direct connection or series of connections, a cellular telephone network, or any other network or medium capable of facilitating communication between computer system 410 and other computers (e.g., remote computing system 480).
- the network 471 may be wired, wireless or a combination thereof. Wired connections may be implemented using Ethernet, Universal Serial Bus (USB), RJ-11, or any other wired connection generally known in the art.
- Wireless connections may be implemented using Wi-Fi, WiMAX, Bluetooth, infrared, cellular networks, satellite, or any other wireless connection methodology generally known in the art. Additionally, several networks may work alone or in communication with each other to facilitate communication in the network 471.
- computers in computing environment 400 may include a hardware or software receiver module (not shown in FIG. 4) configured to receive one or more data items used in performing the techniques described herein.
- An executable application comprises code or machine readable instructions for conditioning the processor to implement predetermined functions, such as those of an operating system, a context data acquisition system or other information processing system, for example, in response to user command or input.
- An executable procedure is a segment of code or machine readable instruction, sub-routine, or other distinct section of code or portion of an executable application for performing one or more particular processes. These processes may include receiving input data and/or parameters, performing operations on received input data and/or performing functions in response to received input parameters, and providing resulting output data and/or parameters.
- a graphical user interface comprises one or more display images, generated by a display processor and enabling user interaction with a processor or other device and associated data acquisition and processing functions.
- the GUI also includes an executable procedure or executable application.
- the executable procedure or executable application conditions the display processor to generate signals representing the GUI display images. These signals are supplied to a display device which displays the image for viewing by the user.
- the processor under control of an executable procedure or executable application, manipulates the UI display images in response to signals received from the input devices. In this way, the user may interact with the display image using the input devices, enabling user interaction with the processor or other device.
- the embodiments of the present invention can be included in an article of manufacture comprising, for example, a non-transitory computer readable medium.
- This computer readable medium may have embodied therein a method for facilitating one or more of the techniques utilized by some embodiments of the present invention.
- the article of manufacture may be included as part of a computer system or sold separately.
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
Abstract
A computer-implemented method of performing registration of preoperative and intraoperative image data includes receiving a first three-dimensional model of an anatomical area of interest derived from one or more image volumes acquired in a preoperative setting and acquiring images of the anatomical area of interest in an operative setting using an intraoperative image acquisition device. A second three-dimensional model of the anatomical area of interest is generated using the images. Next, the first three-dimensional model is aligned with the second three-dimensional model using a rigid registration process. Then, an iterative deformable registration process is performed to further align the two three-dimensional models. This iterative deformable registration process may include, for example, computing a generative mixture model representative of the second three-dimensional model, using the generative mixture model to derive physical force vectors, and biomechanically deforming the first three-dimensional model toward the second three-dimensional model using the physical force vectors.
Description
Deformable Registration of Intra and Preoperative Inputs Using Generative Mixture
Models and Biomechanical Deformation
TECHNICAL FIELD
[1] The following disclosure describes the present invention according to several embodiments directed at methods, systems, and apparatuses for registering intra and preoperative inputs using generative mixture models and biomechanical deformation techniques. The technology described herein is especially applicable to, but not limited to, minimally invasive surgical techniques.
BACKGROUND
[2] In minimally invasive surgery such as liver resection, a laparoscopic camera is used to provide the surgeon with a visualization of the anatomical area of interest. For example, when removing a tumor, the surgeon's goal is to safely remove the tumor without damaging critical structures such as vessels.
[3] During surgery, the laparoscopic camera can only visualize the surface of the tissue. This makes localizing sub-surface structures, such as vessels and tumors, challenging. Therefore, intraoperative 3D images are introduced to provide updated information. While the intraoperative images typically have limited image information due to the constraints imposed in operating rooms, the preoperative images can provide supplementary anatomical and functional details, and carry accurate segmentation of organs, vessels, and tumors. To bridge the gap between surgical plans and laparoscopic images, registration of pre- and intraoperative 3D images is needed. However, this registration is challenging because liquid injection or gas insufflation, breathing motion, and other surgical preparation result in large organ deformation and sliding between the viscera and the abdominal wall. Therefore, a standard non-rigid registration method cannot be directly applied, and enhanced registration techniques which account for deformation and sliding are needed.
SUMMARY
[4] Embodiments of the present invention address and overcome one or more of the above shortcomings and drawbacks, by providing methods, systems, articles of manufacture, and apparatuses for performing deformable registration of intra and preoperative inputs using generative mixture models and biomechanical deformation. More specifically, the techniques described perform deformable registration between endoscopic or laparoscopic video and
preoperative or intraoperative 3D models. Gaussian mixture models are used to create forces for a biomechanical model which is applied to deform the 3D image volume data such that it matches a point cloud derived from the intraoperative video data. The techniques described herein do not require strict point-to-point or surface-to-surface feature correspondences, but can also be used in the presence of known correspondences. The disclosed technology may be applied to, for example, endoscopic-to-tomographic registrations.
[5] According to some embodiments, a computer-implemented method of performing registration of preoperative and intraoperative image data includes receiving a first three-dimensional model (e.g., a mesh) of an anatomical area of interest derived from one or more image volumes acquired in a preoperative setting and acquiring images of the anatomical area of interest in an operative setting using an intraoperative image acquisition device. A second three-dimensional model (e.g., a point cloud) of the anatomical area of interest is generated using the images. Next, the first three-dimensional model is aligned with the second three-dimensional model using a rigid registration process. Then, an iterative deformable registration process is performed to further align the two three-dimensional models. This iterative deformable registration process may include, for example, computing a generative mixture model representative of the second three-dimensional model, using the generative mixture model to derive physical force vectors, and biomechanically deforming the first three-dimensional model toward the second three-dimensional model using the physical force vectors. In some embodiments, following the iterative deformable registration process, a fused image display is presented which overlays the first three-dimensional model on live images acquired from the intraoperative image acquisition device in the operative setting.
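To make the flow of this method concrete, the following sketch strings the steps together as one iteration loop; it reuses the illustrative helpers sketched in the Definitions section above (gmm_forces, damping_forces, step_dynamics) and assumes the rigid alignment has already been applied:

```python
import numpy as np

def deformable_registration(mesh_pts, K, mass, cloud_pts,
                            n_iters=200, dt=1e-3, scale=1.0, d0=0.1):
    """Sketch of the iterative deformable registration summarized above.
    mesh_pts: rigidly pre-aligned preoperative mesh vertices (N, 3);
    K, mass: stiffness matrix (3N, 3N) and lumped mass (3N,) of the
    biomechanical model; cloud_pts: intraoperative point cloud (M, 3)."""
    u = np.zeros(mesh_pts.size)                 # displacements (3N,)
    v = np.zeros_like(u)                        # velocities (3N,)
    for _ in range(n_iters):
        x = mesh_pts + u.reshape(-1, 3)
        R = gmm_forces(x, cloud_pts, scale)                  # weak correspondences
        R = R + damping_forces(v.reshape(-1, 3), d0)         # suppress oscillation
        u, v = step_dynamics(u, v, mass, K, R.ravel(), dt)   # Equation 1
    return mesh_pts + u.reshape(-1, 3)                       # deformed mesh
```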
[6] In some embodiments of the aforementioned method, dampening coefficients are applied during the iterative deformable registration process to velocities of points in the first three-dimensional model while deforming the first three-dimensional model toward the second three-dimensional model. Various techniques may be used for selecting the dampening coefficients. For example, in one embodiment, for each respective point in the first three-dimensional model, dampening coefficients are selected which are proportional to the generative mixture model applied at that respective point.
[7] Additional enhancements, refinements, or other modifications may be made to the aforementioned method in different embodiments. For example, in one embodiment, one
or more feature correspondences are identified between the first three-dimensional model and the second three-dimensional model and, during the iterative deformable registration process, the generative mixture model is weighted to favor the one or more feature correspondences. In other embodiments, a Von Mises-Fisher (VMF) model representative of the second three- dimensional model is computed and used to derive physical wrench vectors. These physical wrench vectors may then be applied to biomechanically deform the first three-dimensional model toward the second three-dimensional model. Additionally, in some embodiments, damping wrench coefficients are applied to velocities of points in the first three-dimensional model while deforming the first three-dimensional model toward the second three- dimensional model.
[8] According to other embodiments of the present invention, a second computer- implemented method of performing registration of preoperative and intraoperative image data includes receiving a mesh prior to a surgical procedure. This mesh includes mesh points which are representative of an anatomical area of interest. During the surgical procedure, intraoperative data representative of the anatomical area of interest is generated using a live image sequence acquired using an intraoperative imaging device. In one embodiment, this intraoperative data is a stitched three-dimensional point cloud. In other embodiments, the intraoperative data comprises individual depth data extracted from the live intraoperative image sequence(s). Next, a Gaussian mixture model is constructed based on the intraoperative data and, for each mesh point, a physical force vector is generated pointing to the intraoperative data using a corresponding gradient value in the Gaussian mixture model. In one embodiment, this Gaussian mixture model is weighted based on feature correspondences between the mesh and the intraoperative data. Once the Gaussian mixture model is constructed, the mesh is biomechanically deformed by modifying each mesh point based on its corresponding physical force vector. Once deformed, a fused image display which overlays the mesh on the live image sequence(s) may be presented.
[9] Additional enhancements, refinements, or other modifications may be made to the aforementioned second method in different embodiments. For example, in one embodiment, a damping force vector is generated for each mesh point based on its corresponding gradient value in the Gaussian mixture model. Then, each respective damping force vector is applied to its corresponding mesh point while biomechanically deforming the mesh. In another embodiment, a VMF model is constructed based on the intraoperative data.
Then, for each mesh point, a physical wrench vector pointing to the intraoperative data is generated using a corresponding VMF gradient value in the VMF model. The mesh may then be biomechanically deformed by modifying each mesh point based on its corresponding physical wrench vector. Additionally, the mesh may be biomechanically deformed using dampening wrench vectors which are computed using a corresponding VMF gradient value associated with the VMF model.
[10] According to other embodiments, a system for performing registration of preoperative and intraoperative image data includes a database, an intraoperative image acquisition device and an imaging computer. The database is configured to store a preoperative model representative of an anatomical area of interest. The intraoperative image acquisition device is configured to acquire a live image sequence(s) of the anatomical area of interest during a surgical procedure. The imaging computer is configured to generate an intraoperative model representative of the anatomical area of interest using the live image sequence(s) and construct a generative mixture model based on the intraoperative model. Next, for each point of the preoperative model, the imaging computer generates a physical force vector pointing to the intraoperative model using a corresponding gradient value associated with the generative mixture model. Then, the imaging computer biomechanically deforms the mesh by modifying each point of the preoperative model based on its corresponding physical force vector. In some embodiments, the system also includes a display which is configured to present a fused image display which overlays the preoperative model on the live intraoperative image sequence(s).
[11] Various types of image acquisition devices may be used with the aforementioned system. In one embodiment, the image acquisition device comprises an endoscope and the live intraoperative image sequences comprise stereo images. In another embodiment, the image acquisition device comprises an endoscope and a patterned light source and the live intraoperative image sequences comprise images from one or more view directions. In some embodiments, the image acquisition device is inserted inside a body cavity and acquires a three-dimensional representation of the cavity structure using reflection of acoustic waves.
[12] Additional features and advantages of the invention will be made apparent from the following detailed description of illustrative embodiments that proceeds with reference to the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
[13] The foregoing and other aspects of the present invention are best understood from the following detailed description when read in connection with the accompanying drawings. For the purpose of illustrating the invention, there is shown in the drawings embodiments that are presently preferred, it being understood, however, that the invention is not limited to the specific instrumentalities disclosed. Included in the drawings are the following Figures:
[14] FIG. 1 shows a computer-assisted surgical system, used in some embodiments of the present invention;
[15] FIG. 2 provides a high-level overview of a method for performing deformable registration, according to some embodiments of the present invention;
[16] FIG. 3 provides an illustration of point cloud to mesh closest point distance error for each iteration of a biomechanical model, with and without damping forces, as may be applied in some embodiments; and
[17] FIG. 4 illustrates an exemplary computing environment within which embodiments of the invention may be implemented.
DETAILED DESCRIPTION
[18] The following disclosure describes the present invention according to several embodiments directed at methods, systems, and apparatuses for deformable registration of intra and preoperative inputs using generative mixture models and biomechanical deformation. One application of the disclosed technology is to support the fusion of preoperative image data to intraoperatively acquired video images. The preoperative data typically comprises high resolution computed tomography (CT) or magnetic resonance (MR) images. These images are used to construct a digital patient-specific organ represented by points and possibly a topological structure such as a mesh. The intraoperative video data typically comprises optical images from an endoscope, laparoscope, surgical microscope, or similar device which contains depth information to give a geometric point cloud representation of the organ surface during the intervention. The techniques described herein are based on a surface- or point-based registration with a generative mixture model framework, such as a Gaussian mixture model (GMM), to provide weak correspondences between mesh and point cloud, and a biomechanical regularization to capture the non-rigid component of
the registration. The various methods, systems, and apparatuses described herein are especially applicable to, but not limited to, minimally invasive surgical techniques.
[19] FIG. 1 shows a computer-assisted surgical system 100, used in some embodiments of the present invention. The system 100 includes components which may be categorized generally as being associated with a preoperative site 105 or an intraoperative site 110. In the example of FIG. 1, the various components located at each site 105, 110 may be operably connected with a network 115. Thus, the components may be located at different areas of a facility, or even at different facilities. However, it should be noted that, in some embodiments the preoperative site 105 and the intraoperative site 110 are co-located. In these embodiments, the network 115 may be absent and the components may be directly connected. Alternatively, a small scale network (e.g., a local area network) may be used.
[20] At the preoperative site 105, an imaging system 105A performs a scan on a subject 110A and gathers image volumes of an anatomical area of interest using any of a variety of imaging modalities including, for example, tomographic modalities, such as computed tomography (CT), magnetic resonance imaging (MRI), single-photon emission computed tomography (SPECT), and positron emission tomography (PET). Once the preoperative image volumes have been acquired, a polygonal or polyhedral mesh is generated using one or more techniques generally known in the art. This mesh comprises a plurality of vertices which approximate the geometric domain of an object in the anatomical area of interest. Once the mesh is generated by the imaging system 105A, it is transferred (e.g. via network 115) to a database 110B at the intraoperative site 110. It should be noted that mesh generation may also be performed at the intraoperative site 110. Thus, in other embodiments, the image volumes are transferred to the database 110B and a computer at the intraoperative site 110 (e.g., imaging computer 110F) generates this mesh.
[21] At the intraoperative site 110, a laparoscope 110D is used during surgery to acquire live video sequences of structures within the abdomen and pelvis within the subject 110A for presentation on a display 110E. Typically a small incision is made in a patient's abdominal wall allowing the laparoscope to be inserted. There are various types of laparoscopes including, for example, telescopic rod lens systems (usually connected to a video camera) and digital systems where a miniature digital video camera is at the end of the laparoscope. To mimic three-dimensional vision in humans, laparoscopes may be configured to capture stereo images using either a two-lens optical system or a single optical channel.
Such laparoscopes are referred to herein as "stereo laparoscopes." A tracking system 110C provides tracking data to the imaging computer 110F for use in registration of the preoperative planning data (received from imaging system 105A) with data gathered by laparoscope 110D. In the example of FIG. 1, an optical tracking system 110C is depicted. However, other techniques may be used for tracking including, without limitation, electromagnetic (EM) tracking and/or robotic encoders. Although FIG. 1 only illustrates a single imaging computer 110F, in other embodiments, multiple imaging computers may be used. Collectively, the one or more imaging computers provide functionality for viewing, manipulating, communicating and storing medical images on computer readable media. Example implementations of computers that may be used as the imaging computer 110F are described below with reference to FIG. 4.
[22] Depending on the surgical setting, additional devices may be included in the system 100 depicted in FIG. 1. For example, in some embodiments, the system further includes a gas insufflation device (not shown in FIG. 1) which may be used to expand the anatomical area of interest (e.g., abdomen) to provide additional workroom or reduce obstruction during surgery. This insufflation device may be configured to provide pressure measurement values to the imaging computer 110F for display or for use in other applications such as the modeling techniques described herein. In other embodiments, devices such as liquid injection systems may be used to create and measure pressure during surgery as an alternative to the aforementioned gas insufflation device.
[23] During surgery, the imaging computer 110F retrieves the preoperative mesh from the database 110B (or generates the mesh based on a stored image volume). The imaging computer 110F then performs a deformable registration of the mesh to the intraoperative video sequences acquired with the laparoscope 110D. The process of performing this deformable registration is described in further detail below with reference to FIG. 2. Briefly, the mesh is biomechanically deformed to match an intraoperative point cloud representative of the intraoperative video sequences. This deformation is performed using forces computed from a probabilistic model constructed on the point cloud. Once the mesh has been deformed, it may be presented on the display 110E overlaying the intraoperative video sequences. Although a single display 110E is shown in the embodiment illustrated in FIG. 1, in other embodiments multiple displays may be used, for example, to display different perspectives of the anatomical area of interest (e.g., based on preoperative data and/or the intraoperative
data), indications of sensitive tissue areas, or messages indicating that a new intraoperative scan should be performed to update the intraoperative planning data.
[24] FIG. 2 provides a high-level overview of a method 200 for performing deformable registration, according to some embodiments of the present invention. Registration of endoscopic/laparoscopic 3D video data to 3D image volumes is a challenging task due to intraoperative organ movements which occur with phenomena like breathing or surgical manipulation, such that correspondence between features in the video and features in the image volumes can be difficult to achieve. The goal of fusing these images together can be cast as two steps: 1) an initial rigid alignment and 2) a non-rigid alignment. The method 200 shown in FIG. 2 and discussed in further detail below primarily addresses the latter, non-rigid alignment. A core concept disclosed herein is to establish fuzzy correspondences between geometry from the video data and geometry from the 3D image volumes and then force a tissue model created from the 3D images to match the intraoperative data.
[25] The input images comprise preoperative image volumes 205 and intraoperative video sequences 210. The preoperative image volumes 205 comprise 3D volumes captured by an image scanner (e.g., a CT, MR, or PET scanner) before or during surgery. These preoperative image volumes 205 provide dense anatomical or functional data. In the method 200 shown in FIG. 2, the organ of interest is segmented from this image data and used to construct a 3D point representation of the tissue. In the example of FIG. 2, this 3D point representation is a preoperative mesh 215; however, in other embodiments, other representations may be used, such as a 3D point cloud 220.
[26] The intraoperative video sequences 210 are captured by an optical image acquisition system such as a camera-projector system, a stereo camera system, or cameras combined with a time-of-flight sensor to provide 2D visual information and 2.5D depth information. The 2.5D stream is of particular utility, in that it can provide metric geometric information about the object surface. A 3D intraoperative point cloud 220 is created from this depth information using intrinsic camera parameters or by stitching individual 3D frames together to form a larger field of view of the organ surface. In some embodiments, intraoperative point cloud 220 is derived from dense stereo vision computations or a structured light-based vision system. In other embodiments, the intraoperative point cloud is derived from a contact or contact-less surface scanning device such as a range scanner.
The medical 3D organ data in the intraoperative data can comprise a triangulated organ surface or point cloud data generated by a segmentation process or from a binary mask.
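For illustration, a minimal sketch of the back-projection from a 2.5D depth frame to a 3D point cloud, assuming a calibrated pinhole camera model with intrinsics fx, fy, cx, cy; the function name and parameters are illustrative only:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a 2.5D depth image (in meters) into a 3D point cloud
    using a pinhole model with intrinsics (fx, fy, cx, cy)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx   # lateral offset from the optical axis
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels without valid depth
```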
[27] An initial rigid registration 225 is performed to register the preoperative mesh 215 to the intraoperative point cloud 220. Any rigid registration technique generally known in the art and suitable to the modalities being used may be applied to compute the initial registration. For example, in some embodiments, intensity-based registration techniques are used to compare intensity patterns in the images of interest via correlation metrics. In other embodiments, feature-based methods are used to determine correspondences between image features such as points, lines, and contours.
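For the feature-based case, one closed-form solution generally known in the art is the Kabsch/SVD method. The sketch below assumes that point correspondences between the two geometries have already been established, and is offered only as one possible realization of the rigid registration 225:

```python
import numpy as np

def rigid_align(P, Q):
    """Least-squares rigid transform (R, t) mapping points P onto their
    corresponding points Q (both N x 3 arrays), via the Kabsch/SVD method."""
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_mean).T @ (Q - q_mean)                   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = q_mean - R @ p_mean
    return R, t  # aligned points are then P @ R.T + t
```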
[28] The method 200 illustrated in FIG. 2 next performs a deformable registration at steps 235-245. During the deformable registration process, a biomechanical model is used to control the deformation of the preoperative data. In some embodiments, this is performed by solving equations of motion using the finite element method with a mesh representation of the preoperative organ. In other embodiments, meshless methods are used to solve the biomechanics equations. This deformable registration process requires designating proper boundary conditions for the motion equations in terms of forces or displacements.
[29] Continuing with reference to FIG. 2, at step 230, a generative mixture model such as a Gaussian Mixture Model (GMM) is computed on the intraoperative point cloud. As is understood in the art, a GMM is a parametric probability density function represented as a weighted sum of Gaussian component densities. In the present case, the GMM is treated as stationary in the registration. Each cloud point is given a Gaussian function, with the point position as the mean. The Gaussian standard deviation for the point is selected to be proportional to its distance to the closest point on the preoperative mesh. The cloud point Gaussians are combined into a single GMM functional. Then, for each point in the preoperative mesh 215, the GMM gradient at that position is used to define a 3D vector which points toward the intraoperative point cloud 220. These unitless vectors are multiplied at step 235 by a scaling factor to convert them into physical force vectors.
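A minimal sketch of this construction is shown below, assuming uniform component weights and a unit proportionality constant for the standard deviations; the dense pairwise evaluation is for clarity only, and a practical implementation would exploit the locality of the Gaussians:

```python
import numpy as np
from scipy.spatial import cKDTree

def gmm_gradient_forces(mesh_pts, cloud_pts, scale=1.0):
    """GMM gradient forces per paragraph [29]: each cloud point carries an
    isotropic Gaussian whose standard deviation is its distance to the
    closest mesh vertex; the force on each mesh vertex is the (scaled)
    gradient of the summed GMM functional at that vertex."""
    # Per-point sigma: distance from each cloud point to the nearest mesh vertex.
    sigma = cKDTree(mesh_pts).query(cloud_pts)[0] + 1e-9     # shape (M,)

    # Pairwise offsets from mesh vertices to cloud points: (N, M, 3).
    diff = cloud_pts[None, :, :] - mesh_pts[:, None, :]
    d2 = np.sum(diff**2, axis=-1)                            # shape (N, M)

    # Gaussian densities; each component's gradient points toward its mean.
    g = np.exp(-0.5 * d2 / sigma**2) / (np.sqrt(2 * np.pi) * sigma)**3
    grad = np.sum((g / sigma**2)[:, :, None] * diff, axis=1) / len(cloud_pts)
    return scale * grad  # unitless gradients scaled into physical forces
```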
[30] At step 240, the physical force vectors determined at step 235 are utilized in a biomechanical model to drive the preoperative mesh 215 to deform toward the intraoperative point cloud 220. Various biomechanical models may be applied at step 240. For example, one embodiment of the equations of motion applied at step 240 is:
$$M \ddot{u}_n + K(u_n)\, u_n = R_n \qquad (1)$$
In Equation (1), $u$ is a vector of displacements at the 3D points (with $\ddot{u}$ its second time derivative), $M$ is a mass matrix, $K$ is a stiffness matrix which depends on the material type, and $R$ is a vector of external active forces. The GMM gradient forces determined at steps 230 and 235 would be added to the $R$ term in Equation (1). It should be noted that Equation (1) is only one example of a biomechanical model that may be applied at step 240. In other embodiments, other biomechanical models may be applied, for example, to incorporate other characteristics of the materials involved in the deformation.
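As an illustrative sketch only (assembly of the finite element matrices is not shown), Equation (1) might be advanced in time with a semi-implicit Euler scheme and a lumped, diagonal mass matrix:

```python
import numpy as np

def integrate_step(u, v, m, K_fn, R, dt=1e-3):
    """One semi-implicit Euler step of Equation (1), M*u'' + K(u)*u = R.
    u, v, R are flattened 3N vectors of nodal displacements, velocities,
    and external forces (e.g., the GMM gradient forces added to R);
    m is the lumped mass vector and K_fn(u) returns the (possibly
    displacement-dependent) 3N x 3N stiffness matrix."""
    a = (R - K_fn(u) @ u) / m   # nodal accelerations from Equation (1)
    v = v + dt * a              # update velocities first (symplectic)
    u = u + dt * v              # then integrate the displacements
    return u, v
```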
[31] The final output of the method 200 is a deformation field which describes the non-rigid alignment of the preoperative data with the intraoperative data. At step 250, this deformation field is used to generate a fused image display which overlays the deformed preoperative mesh 215 on the intraoperative point cloud 220. This fused image display may then be presented on a monitor at the intraoperative site to guide the medical staff in performing surgical procedures. In some embodiments, the GMM gradient forces or other biomechanical parameters associated with the anatomical area of interest are used to determine a time for performing a new intraoperative scan to update the intraoperative data. As this time approaches, a visual and/or audible indicator may be presented to alert the surgical team that a new intraoperative scan should be performed. Alternatively, the time may be used to automatically perform the scan. It should be noted that this time may be derived far in advance of when the scan is needed. Thus, any automatic or manual preparation of the device providing the intraoperative scan may be done while surgery is being performed, allowing minimal time to be lost transitioning between surgery and intraoperative scanning.
[32] In some embodiments of the biomechanical model applied at step 240, damping forces may be applied to the moving preoperative mesh 215 to aid in convergence to the final alignment. For example, depending on the scaling factor which converts the GMM gradient forces to physical forces (at step 235), it is possible that the gradient forces will be made overly strong and cause oscillating movement of the mesh through the intraoperative point cloud 220. In order to reduce such behavior, another set of forces may be introduced which penalizes fast motion through the intraoperative point cloud 220. This could take the form of a force which is the product of the negative velocity of the mesh point and a damping coefficient, written as:
$$F_{\mathrm{Damping}} = D_n(-v_n) \qquad (2)$$

where $D$ is the vector of damping coefficients and $v$ are the point velocities. The damping coefficients are selected to be proportional to the GMM functional value at the position of each point of the preoperative mesh 215. Thus, since the functional values are maximal on the intraoperative point cloud 220 itself, the damping forces will act most strongly when the preoperative mesh 215 achieves alignment with the intraoperative point cloud 220. An example of the registration error during the biomechanical deformation is shown in FIG. 3, where plot 305 shows the average cloud-to-mesh closest point distances without damping forces and plot 310 shows the average cloud-to-mesh closest point distances with damping forces. As shown in FIG. 3, adding even small damping forces smooths out the oscillatory behavior.
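A minimal sketch of such damping forces, assuming a constant of proportionality `d0` relating the coefficients to the GMM functional values (the names are illustrative only):

```python
import numpy as np

def damping_forces(velocities, gmm_values, d0=0.1):
    """Damping forces per Equation (2): F = D * (-v), with per-vertex
    coefficients proportional to the GMM functional value evaluated at
    each mesh vertex, so damping is strongest once a vertex reaches the
    intraoperative point cloud."""
    D = d0 * gmm_values                # (N,) damping coefficients
    return -D[:, None] * velocities   # (N, 3) damping force vectors
```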
[33] In one embodiment, the probabilistic model on the intraoperative point cloud 220 can incorporate surface normals. These surface normals can be generated using methods generally known in the art. Alternatively, the eigenvector corresponding to the smallest eigenvalue of the covariance matrix computed over the neighborhood of the point can be used as a surrogate for the normal $n_i$:

$$C = \frac{1}{|\mathcal{N}|} \sum_{p_i \in \mathcal{N}} (p_i - \bar{p})(p_i - \bar{p})^{T}, \qquad C\, v_j = \lambda_j v_j, \quad j \in \{0, 1, 2\} \qquad (3)$$

where $\bar{p}$ is the mean of all the points in the neighborhood $\mathcal{N}$, $p_i$ is the $i$th point, and $v_j$ is the $j$th eigenvector. In this case, a second pass to resolve the ambiguity in the direction of the surface normal is required. The ambiguity is resolved using the assumption that the surface is generated from an endoscopic view and hence must point toward the viewport.
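A minimal sketch of this covariance-based normal estimation with the viewpoint-oriented second pass (the neighborhood size `k` and the camera-at-origin viewpoint are assumptions):

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, k=20, viewpoint=np.zeros(3)):
    """Normals per Equation (3): for each point, the eigenvector of the
    neighborhood covariance with the smallest eigenvalue, with the sign
    ambiguity resolved by orienting normals toward the camera viewpoint."""
    _, idx = cKDTree(points).query(points, k=k)
    normals = np.empty_like(points)
    for i, nbrs in enumerate(idx):
        C = np.cov(points[nbrs].T)                # 3x3 neighborhood covariance
        eigval, eigvec = np.linalg.eigh(C)        # eigenvalues in ascending order
        n = eigvec[:, 0]                          # smallest-eigenvalue eigenvector
        if np.dot(viewpoint - points[i], n) < 0:  # second pass: flip so the
            n = -n                                # normal faces the viewport
        normals[i] = n
    return normals
```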
[34] To incorporate surface normals in the above registration framework, the probabilistic GMM model applied at step 230 of FIG. 2 is generated on the augmented 6D vector $x_i \in \mathbb{R}^6$ composed of $[x_i; n_i]$ instead of just the points. As before, the mean of this vector is the location and estimated normal of the $i$th point, and the standard deviation $\sigma_i = [\sigma_{i,p}; \sigma_{i,n}]$ for the point is selected to be proportional to its distance to the closest point on the preoperative mesh 215 and to the difference in the normals. That is,

$$\sigma_{i,p} \propto d(p_i, q_i); \qquad \sigma_{i,n} \propto d(n_i, \eta_i) \qquad (4)$$

where $p_i$ and $q_i$ are points on the moving and stationary point clouds, and $n_i$ and $\eta_i$ are the normals at the respective points.
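A minimal sketch of Equation (4), pairing each moving point with its closest stationary point and using the Euclidean norm of the normal difference as the metric d(·,·); the proportionality constants `kp` and `kn` are assumptions:

```python
import numpy as np
from scipy.spatial import cKDTree

def augmented_sigmas(mov_pts, mov_nrm, fix_pts, fix_nrm, kp=1.0, kn=1.0):
    """Per-point standard deviations of Equation (4) for the 6D GMM:
    sigma_p proportional to the positional distance between closest-point
    pairs, sigma_n proportional to the difference of their normals."""
    d, j = cKDTree(fix_pts).query(mov_pts)  # closest stationary point per moving point
    sigma_p = kp * d
    sigma_n = kn * np.linalg.norm(mov_nrm - fix_nrm[j], axis=1)
    return sigma_p, sigma_n
```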
[35] It must be noted that this simple augmentation is valid only if the difference between the normals is small, as a GMM cannot inherently model data that lies on a unit sphere/hypersphere. Directional data which has been normalized to the unit sphere has instead been modeled using von Mises-Fisher (VMF) distributions. However, using a VMF distribution for the normals alone leaves us with a mixture of two different models, making subsequent registration problems as proposed in Equations (1) and (2) intractable. As an alternative, a higher-dimensional representation of the 6D vector $x_i$ may be used. One such higher-dimensional representation is a dual quaternion, $h_i \in \mathbb{R}^8$. We restrict ourselves to unit dual quaternions, which represent general motions of lines. The transformations from the 6D vector to a representation as a unit dual quaternion, and vice versa, are well established in the literature. The motivating factor for the use of the unit dual quaternion is that it enables one to use a VMF distribution to model the probabilistic behavior of the intraoperative point cloud 220, as these data lie on a unit hypersphere of dimension 8.
[36] Then, for each point in the preoperative mesh 215, the VMF gradient at that position is used to define a 6D vector which points toward the intraoperative point cloud 220. These vectors are unitless, and so they are then multiplied by a scaling factor to convert them into physical wrench (force/torque) vectors. The wrenches are then utilized in the biomechanical model to drive the tissue to deform toward the intraoperative point cloud 220. For example, one embodiment of the equations of motion may be written as
$$M \ddot{u}_n + K(u_n)\, u_n = W_n \qquad (5)$$

where $u$ is a vector of displacements at the 3D points, $M$ is a mass matrix, $K$ is a stiffness matrix which depends on the material type, and $W$ is a vector of external active wrenches. In some embodiments, the VMF gradient forces would be added to the $W$ term in Equation (5). Likewise, the damping wrench could take the form of the product of the negative velocity of the mesh point and a damping coefficient, written as:
$$F_{\mathrm{Damping}} = D_n(-[v_n; \omega_n]) \qquad (6)$$
where $D$ is the vector of damping coefficients, and $v$ and $\omega$ are the translational velocity of the point and the rotational velocity of the point normal, respectively.
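For illustration only, the (unnormalized) gradient of a VMF mixture on the unit hypersphere, whose components are centered at the unit dual quaternions $h_j$ with a shared concentration $\kappa$, might be sketched as follows; the normalizing constant is omitted since it is independent of the evaluation point and is absorbed by the downstream scaling factor:

```python
import numpy as np

def vmf_mixture_gradient(x, mus, kappa):
    """Ambient-space gradient at unit vector x (here 8-dimensional) of a
    von Mises-Fisher mixture with unit mean directions mus (n x 8) and
    concentration kappa: sum_j exp(kappa * mus_j . x) * kappa * mus_j,
    up to a constant factor that only rescales the result."""
    dots = mus @ x                           # cosine similarities, shape (n,)
    w = np.exp(kappa * (dots - dots.max()))  # numerically stabilized weights
    return kappa * (w[:, None] * mus).sum(axis=0)
```

The resulting gradient, expressed in the dual quaternion representation, would then be mapped back to a 6D wrench before the scaling into physical units described above.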
[37] FIG. 4 illustrates an exemplary computing environment 400 within which embodiments of the invention may be implemented. This environment 400 may be used, for example, to implement a portion of one or more components used at the preoperative site 105 or the intraoperative site 110 illustrated in FIG. 1. Computing environment 400 may include computer system 410, which is one example of a computing system upon which embodiments of the invention may be implemented. Computers and computing environments, such as computer system 410 and computing environment 400, are known to those of skill in the art and thus are described briefly here.
[38] As shown in FIG. 4, the computer system 410 may include a communication mechanism such as a bus 421 or other communication mechanism for communicating information within the computer system 410. The system 410 further includes one or more processors 420 coupled with the bus 421 for processing the information.
[39] The processors 420 may include one or more central processing units (CPUs), graphical processing units (GPUs), or any other processor known in the art. More generally, a processor as used herein is a device for executing machine-readable instructions stored on a computer readable medium, for performing tasks and may comprise any one or combination of, hardware and firmware. A processor may also comprise memory storing machine- readable instructions executable for performing tasks. A processor acts upon information by manipulating, analyzing, modifying, converting or transmitting information for use by an executable procedure or an information device, and/or by routing the information to an output device. A processor may use or comprise the capabilities of a computer, controller or microprocessor, for example, and be conditioned using executable instructions to perform special purpose functions not performed by a general purpose computer. A processor may be coupled (electrically and/or as comprising executable components) with any other processor enabling interaction and/or communication there-between. A user interface processor or generator is a known element comprising electronic circuitry or software or a combination of both for generating display images or portions thereof. A user interface comprises one or more display images enabling user interaction with a processor or other device.
[40] Continuing with reference to FIG. 4, the computer system 410 also includes a system memory 430 coupled to the bus 421 for storing information and instructions to be executed by processors 420. The system memory 430 may include computer readable storage media in the form of volatile and/or nonvolatile memory, such as read only memory (ROM) 431 and/or random access memory (RAM) 432. The system memory RAM 432 may include other dynamic storage device(s) (e.g., dynamic RAM, static RAM, and synchronous DRAM). The system memory ROM 431 may include other static storage device(s) (e.g., programmable ROM, erasable PROM, and electrically erasable PROM). In addition, the system memory 430 may be used for storing temporary variables or other intermediate information during the execution of instructions by the processors 420. A basic input/output system 433 (BIOS) containing the basic routines that help to transfer information between elements within computer system 410, such as during start-up, may be stored in ROM 431. RAM 432 may contain data and/or program modules that are immediately accessible to and/or presently being operated on by the processors 420. System memory 430 may additionally include, for example, operating system 434, application programs 435, other program modules 436 and program data 437.
[41] The computer system 410 also includes a disk controller 440 coupled to the bus
421 to control one or more storage devices for storing information and instructions, such as a magnetic hard disk 441 and a removable media drive 442 (e.g., floppy disk drive, compact disc drive, tape drive, and/or solid state drive). The storage devices may be added to the computer system 410 using an appropriate device interface (e.g., a small computer system interface (SCSI), integrated device electronics (IDE), Universal Serial Bus (USB), or FireWire).
[42] The computer system 410 may also include a display controller 465 coupled to the bus 421 to control a display or monitor 466, such as a cathode ray tube (CRT) or liquid crystal display (LCD), for displaying information to a computer user. The computer system includes an input interface 460 and one or more input devices, such as a keyboard 462 and a pointing device 461, for interacting with a computer user and providing information to the processor 420. The pointing device 461, for example, may be a mouse, a light pen, a trackball, or a pointing stick for communicating direction information and command selections to the processor 420 and for controlling cursor movement on the display 466. The display 466 may provide a touch screen interface which allows input to supplement or replace
the communication of direction information and command selections by the pointing device 461.
[43] The computer system 410 may perform a portion or all of the processing steps of embodiments of the invention in response to the processors 420 executing one or more sequences of one or more instructions contained in a memory, such as the system memory 430. Such instructions may be read into the system memory 430 from another computer readable medium, such as a hard disk 441 or a removable media drive 442. The hard disk 441 may contain one or more datastores and data files used by embodiments of the present invention. Datastore contents and data files may be encrypted to improve security. The processors 420 may also be employed in a multi-processing arrangement to execute the one or more sequences of instructions contained in system memory 430. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions. Thus, embodiments are not limited to any specific combination of hardware circuitry and software.
[44] As stated above, the computer system 410 may include at least one computer readable medium or memory for holding instructions programmed according to embodiments of the invention and for containing data structures, tables, records, or other data described herein. The term "computer readable medium" as used herein refers to any medium that participates in providing instructions to the processor 420 for execution. A computer readable medium may take many forms including, but not limited to, non-transitory, nonvolatile media, volatile media, and transmission media. Non-limiting examples of nonvolatile media include optical disks, solid state drives, magnetic disks, and magneto-optical disks, such as hard disk 441 or removable media drive 442. Non-limiting examples of volatile media include dynamic memory, such as system memory 430. Non-limiting examples of transmission media include coaxial cables, copper wire, and fiber optics, including the wires that make up the bus 421. Transmission media may also take the form of acoustic or light waves, such as those generated during radio wave and infrared data communications.
[45] The computing environment 400 may further include the computer system 410 operating in a networked environment using logical connections to one or more remote computers, such as remote computer 480. Remote computer 480 may be a personal computer (laptop or desktop), a mobile device, a server, a router, a network PC, a peer device or other
common network node, and typically includes many or all of the elements described above relative to computer system 410. When used in a networking environment, computer system 410 may include modem 472 for establishing communications over a network 471, such as the Internet. Modem 472 may be connected to system bus 421 via user network interface
470, or via another appropriate mechanism.
[46] Network 471 may be any network or system generally known in the art, including the Internet, an intranet, a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a direct connection or series of connections, a cellular telephone network, or any other network or medium capable of facilitating communication between computer system 410 and other computers (e.g., remote computer 480). The network 471 may be wired, wireless, or a combination thereof. Wired connections may be implemented using Ethernet, Universal Serial Bus (USB), RJ-11, or any other wired connection generally known in the art. Wireless connections may be implemented using Wi-Fi, WiMAX, Bluetooth, infrared, cellular networks, satellite, or any other wireless connection methodology generally known in the art. Additionally, several networks may work alone or in communication with each other to facilitate communication in the network
471. In some embodiments, computers in computing environment 400 may include a hardware or software receiver module (not shown in FIG. 4) configured to receive one or more data items used in performing the techniques described herein.
[47] An executable application, as used herein, comprises code or machine readable instructions for conditioning the processor to implement predetermined functions, such as those of an operating system, a context data acquisition system or other information processing system, for example, in response to user command or input. An executable procedure is a segment of code or machine readable instruction, sub-routine, or other distinct section of code or portion of an executable application for performing one or more particular processes. These processes may include receiving input data and/or parameters, performing operations on received input data and/or performing functions in response to received input parameters, and providing resulting output data and/or parameters.
[48] A graphical user interface (GUI), as used herein, comprises one or more display images, generated by a display processor and enabling user interaction with a processor or other device and associated data acquisition and processing functions. The GUI also includes an executable procedure or executable application. The executable procedure or executable
application conditions the display processor to generate signals representing the GUI display images. These signals are supplied to a display device which displays the image for viewing by the user. The processor, under control of an executable procedure or executable application, manipulates the UI display images in response to signals received from the input devices. In this way, the user may interact with the display image using the input devices, enabling user interaction with the processor or other device.
[49] The functions and process steps herein may be performed automatically or wholly or partially in response to user command. An activity (including a step) performed automatically is performed in response to one or more executable instructions or device operation without user direct initiation of the activity.
[50] The embodiments of the present invention can be included in an article of manufacture comprising, for example, a non-transitory computer readable medium. This computer readable medium may have embodied therein a method for facilitating one or more of the techniques utilized by some embodiments of the present invention. The article of manufacture may be included as part of a computer system or sold separately.
[51] The system and processes of the figures are not exclusive. Other systems, processes and menus may be derived in accordance with the principles of the invention to accomplish the same objectives. Although this invention has been described with reference to particular embodiments, it is to be understood that the embodiments and variations shown and described herein are for illustration purposes only. Modifications to the current design may be implemented by those skilled in the art, without departing from the scope of the invention. As described herein, the various systems, subsystems, agents, managers and processes can be implemented using hardware components, software components, and/or combinations thereof. No claim element herein is to be construed under the provisions of 35 U.S.C. 112, sixth paragraph, unless the element is expressly recited using the phrase "means for."
Claims
1. A computer-implemented method of performing registration of preoperative and intraoperative image data, the method comprising:
receiving a first three-dimensional model of an anatomical area of interest derived from one or more image volumes acquired in a preoperative setting;
acquiring images of the anatomical area of interest in an operative setting using an intraoperative image acquisition device;
generating a second three-dimensional model of the anatomical area of interest using the images;
aligning the first three-dimensional model with the second three-dimensional model using a rigid registration process;
performing an iterative deformable registration process to further align the first three-dimensional model with the second three-dimensional model, the iterative deformable registration process comprising:
computing a generative mixture model representative of the second three-dimensional model,
using the generative mixture model to derive a plurality of physical force vectors, and
biomechanically deforming the first three-dimensional model toward the second three-dimensional model using the plurality of physical force vectors.
2. The method of claim 1, further comprising:
following the iterative deformable registration process, presenting a fused image display which overlays the first three-dimensional model on live images acquired from the intraoperative image acquisition device in the operative setting.
3. The method of claim 1, wherein the first three-dimensional model is a mesh and the second three-dimensional model is a point cloud.
4. The method of claim 1, wherein the iterative deformable registration process further comprises:
applying a plurality of damping coefficients to velocities of points in the first three-dimensional model while deforming the first three-dimensional model toward the second three-dimensional model.
5. The method of claim 4, further comprising:
for each respective point in the first three-dimensional model, selecting damping coefficients proportional to the value of the generative mixture model at that respective point.
6. The method of claim 1, further comprising:
identifying one or more feature correspondences between the first three-dimensional model and the second three-dimensional model;
during the iterative deformable registration process, weighting the generative mixture model to favor the one or more feature correspondences.
7. The method of claim 1, further comprising:
computing a Von Mises-Fisher (VMF) model representative of the second three-dimensional model,
using the VMF model to derive a plurality of physical wrench vectors, and
applying the plurality of physical wrench vectors to biomechanically deform the first three-dimensional model toward the second three-dimensional model.
8. The method of claim 7, further comprising:
applying a plurality of damping wrench coefficients to velocities of points in the first three-dimensional model while deforming the first three-dimensional model toward the second three-dimensional model.
9. A computer-implemented method of performing registration of preoperative and intraoperative image data, the method comprising:
prior to a surgical procedure, receiving a mesh comprising a plurality of mesh points representative of an anatomical area of interest;
during the surgical procedure, generating intraoperative data representative of the anatomical area of interest using a live image sequence acquired using an intraoperative imaging device;
constructing a Gaussian mixture model based on the intraoperative data;
for each mesh point, generating a physical force vector pointing to the intraoperative data using a corresponding gradient value in the Gaussian mixture model; and
biomechanically deforming the mesh by modifying each mesh point based on its corresponding physical force vector.
10. The method of claim 9, further comprising:
after biomechanically deforming the mesh, presenting a fused image display which overlays the mesh on the live image sequence(s).
11. The method of claim 9, further comprising:
generating a damping force vector for each mesh point based on its corresponding gradient value in the Gaussian mixture model; and
applying each respective damping force vector to its corresponding mesh point while biomechanically deforming the mesh.
12. The method of claim 9, further comprising:
constructing a Von Mises-Fisher (VMF) model based on the intraoperative data;
for each mesh point, generating a physical wrench vector pointing to the intraoperative data using a corresponding VMF gradient value in the VMF model; and
biomechanically deforming the mesh by modifying each mesh point based on its corresponding physical wrench vector.
13. The method of claim 12, further comprising:
for each mesh point, generating a damping wrench vector using a corresponding VMF gradient value associated with the VMF model; and
applying each respective damping wrench vector to its corresponding mesh point while biomechanically deforming the mesh.
14. The method of claim 9, wherein the intraoperative data comprises a stitched three-dimensional point cloud.
15. The method of claim 9, wherein the intraoperative data comprises individual depth data extracted from the live image sequence(s).
16. The method of claim 9, further comprising:
identifying one or more feature correspondences between the mesh and the intraoperative data;
weighting the Gaussian mixture model to favor the one or more feature correspondences.
17. A system for performing registration of preoperative and intraoperative image data, the system comprising:
a database configured to store a preoperative model representative of an anatomical area of interest;
an intraoperative image acquisition device configured to acquire one or more live image sequences of the anatomical area of interest during a surgical procedure;
an imaging computer configured to:
generate an intraoperative model representative of the anatomical area of interest using the one or more live image sequences;
construct a generative mixture model based on the intraoperative model;
for each point of the preoperative model, generate a physical force vector pointing to the intraoperative model using a corresponding gradient value associated with the generative mixture model; and
biomechanically deform the preoperative model by modifying each point of the preoperative model based on its corresponding physical force vector.
18. The system of claim 17, further comprising a display configured to present a fused image display which overlays the preoperative model on the one or more live image sequences.
19. The system of claim 17, wherein the intraoperative image acquisition device comprises an endoscope and the one or more live image sequences comprises stereo images.
20. The system of claim 17, wherein the preoperative model comprises a mesh and the intraoperative model comprises a three-dimensional point cloud.
21. The system of claim 17, wherein the intraoperative image acquisition device comprises an endoscope and a patterned light source, and the one or more live image sequences comprise images from one or more view directions.
22. The system of claim 17, wherein the intraoperative image acquisition device is inserted inside a body cavity and acquires a three dimensional representation of the body cavity's structure using reflection of acoustic waves.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/US2016/027018 WO2017180097A1 (en) | 2016-04-12 | 2016-04-12 | Deformable registration of intra and preoperative inputs using generative mixture models and biomechanical deformation |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/US2016/027018 WO2017180097A1 (en) | 2016-04-12 | 2016-04-12 | Deformable registration of intra and preoperative inputs using generative mixture models and biomechanical deformation |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2017180097A1 true WO2017180097A1 (en) | 2017-10-19 |
Family
ID=55910352
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2016/027018 Ceased WO2017180097A1 (en) | 2016-04-12 | 2016-04-12 | Deformable registration of intra and preoperative inputs using generative mixture models and biomechanical deformation |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2017180097A1 (en) |
Patent Citations (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2014127321A2 (en) * | 2013-02-15 | 2014-08-21 | Siemens Aktiengesellschaft | Biomechanically driven registration of pre-operative image to intra-operative 3d images for laparoscopic surgery |
Non-Patent Citations (4)
| Title |
|---|
| BILLINGS SETH ET AL: "Generalized iterative most likely oriented-point (G-IMLOP) registration", INTERNATIONAL JOURNAL OF COMPUTER ASSISTED RADIOLOGY AND SURGERY, SPRINGER, DE, vol. 10, no. 8, 23 May 2015 (2015-05-23), pages 1213 - 1226, XP035524243, ISSN: 1861-6410, [retrieved on 20150523], DOI: 10.1007/S11548-015-1221-2 * |
| DUAY V ET AL: "Non-rigid registration algorithm with spatially varying stiffness properties", BIOMEDICAL IMAGING: MACRO TO NANO, 2004. IEEE INTERNATIONAL SYMPOSIUM ON ARLINGTON,VA, USA APRIL 15-18, 2004, PISCATAWAY, NJ, USA,IEEE, 15 April 2004 (2004-04-15), pages 408 - 411, XP010773884, ISBN: 978-0-7803-8389-0, DOI: 10.1109/ISBI.2004.1398561 * |
| MOHAMMADI AMROLLAH ET AL: "Estimation of intraoperative brain shift by combination of stereovision and doppler ultrasound: phantom and animal model study", INTERNATIONAL JOURNAL OF COMPUTER ASSISTED RADIOLOGY AND SURGERY, SPRINGER, DE, vol. 10, no. 11, 10 May 2015 (2015-05-10), pages 1753 - 1764, XP035574224, ISSN: 1861-6410, [retrieved on 20150510], DOI: 10.1007/S11548-015-1216-Z * |
| TAO WENBING ET AL: "Asymmetrical Gauss Mixture Models for Point Sets Matching", 2014 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, IEEE, 23 June 2014 (2014-06-23), pages 1598 - 1605, XP032649508, DOI: 10.1109/CVPR.2014.207 * |
Cited By (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US11857153B2 (en) | 2018-07-19 | 2024-01-02 | Activ Surgical, Inc. | Systems and methods for multi-modal sensing of depth in vision systems for automated surgical robots |
| WO2020140044A1 (en) * | 2018-12-28 | 2020-07-02 | Activ Surgical, Inc. | Generation of synthetic three-dimensional imaging from partial depth maps |
| CN113906479A (en) * | 2018-12-28 | 2022-01-07 | 艾科缇弗外科公司 | Generate synthetic 3D imaging from local depth maps |
| CN113143459A (en) * | 2020-01-23 | 2021-07-23 | 海信视像科技股份有限公司 | Navigation method and device for augmented reality operation of laparoscope and electronic equipment |
| WO2024065343A1 (en) * | 2022-09-29 | 2024-04-04 | 中国科学院深圳先进技术研究院 | System and method for registration of preoperative and intraoperative liver point cloud data, and terminal and storage medium |
| WO2024108409A1 (en) * | 2022-11-23 | 2024-05-30 | 北京肿瘤医院(北京大学肿瘤医院) | Non-contact four-dimensional imaging method and system based on four-dimensional surface respiratory signal |
| US20240242426A1 (en) * | 2023-01-12 | 2024-07-18 | Clearpoint Neuro, Inc. | Dense non-rigid volumetric mapping of image coordinates using sparse surface-based correspondences |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN104000655B (en) | Surface reconstruction and registration for the combination of laparoscopically surgical operation | |
| US9129422B2 (en) | Combined surface reconstruction and registration for laparoscopic surgery | |
| Plantefeve et al. | Patient-specific biomechanical modeling for guidance during minimally-invasive hepatic surgery | |
| Grasa et al. | Visual SLAM for handheld monocular endoscope | |
| US11900541B2 (en) | Method and system of depth determination with closed form solution in model fusion for laparoscopic surgical guidance | |
| US8712016B2 (en) | Three-dimensional shape data processing apparatus and three-dimensional shape data processing method | |
| CN102999938B (en) | The method and system of the fusion based on model of multi-modal volumetric image | |
| US20110282151A1 (en) | Image-based localization method and system | |
| US9155470B2 (en) | Method and system for model based fusion on pre-operative computed tomography and intra-operative fluoroscopy using transesophageal echocardiography | |
| EP2452649A1 (en) | Visualization of anatomical data by augmented reality | |
| CN113302660A (en) | Method for visualizing dynamic anatomical structures | |
| WO2017180097A1 (en) | Deformable registration of intra and preoperative inputs using generative mixture models and biomechanical deformation | |
| WO2016178690A1 (en) | System and method for guidance of laparoscopic surgical procedures through anatomical model augmentation | |
| CN110458872A (en) | System and method for performing biomechanically driven image registration using ultrasound elastography | |
| WO2014127321A2 (en) | Biomechanically driven registration of pre-operative image to intra-operative 3d images for laparoscopic surgery | |
| JP6608165B2 (en) | Image processing apparatus and method, and computer program | |
| Tella-Amo et al. | Probabilistic visual and electromagnetic data fusion for robust drift-free sequential mosaicking: application to fetoscopy | |
| Shu et al. | Seamless augmented reality integration in arthroscopy: a pipeline for articular reconstruction and guidance | |
| Zampokas et al. | Real‐time stereo reconstruction of intraoperative scene and registration to preoperative 3D models for augmenting surgeons' view during RAMIS | |
| Paulus et al. | Surgical augmented reality with topological changes | |
| JP5904976B2 (en) | 3D data processing apparatus, 3D data processing method and program | |
| Boussot et al. | Statistical model for the prediction of lung deformation during video-assisted thoracoscopic surgery | |
| Zhang | 3D Reconstruction of Colon Structures and Textures from Colonoscopic Videos | |
| KR20240143425A (en) | Marker Tracking Method and System in Augmented Reality | |
| Ho | Nonrigid Registration Techniques and Evaluation for Augmented Reality in Robotic Assisted Minimally Invasive Surgery |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 16720232 Country of ref document: EP Kind code of ref document: A1 |
|
| 122 | Ep: pct application non-entry in european phase |
Ref document number: 16720232 Country of ref document: EP Kind code of ref document: A1 |