
WO2023036848A1 - Augmented reality surgical navigation system - Google Patents

Augmented reality surgical navigation system

Info

Publication number
WO2023036848A1
WO2023036848A1 (PCT/EP2022/074921; EP2022074921W)
Authority
WO
WIPO (PCT)
Prior art keywords
orientation
virtual environment
displacement sensor
patient
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/EP2022/074921
Other languages
English (en)
Inventor
Ali Rezaei HADDAD
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Neuronav Ltd
Original Assignee
Neuronav Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Neuronav Ltd filed Critical Neuronav Ltd
Publication of WO2023036848A1 publication Critical patent/WO2023036848A1/fr
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current


Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00: Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/10: Computer-aided planning, simulation or modelling of surgical operations
    • A61B 34/20: Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B 90/00: Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B 90/36: Image-producing devices or illumination devices not otherwise provided for
    • A61B 90/37: Surgical systems with images on a monitor during operation
    • A61B 90/50: Supports for surgical instruments, e.g. articulated arms
    • A61B 2034/101: Computer-aided simulation of surgical operations
    • A61B 2034/102: Modelling of surgical devices, implants or prosthesis
    • A61B 2034/105: Modelling of the patient, e.g. for ligaments or bones
    • A61B 2034/107: Visualisation of planned trajectories or target regions
    • A61B 2034/2046: Tracking techniques
    • A61B 2034/2048: Tracking techniques using an accelerometer or inertia sensor
    • A61B 2034/2051: Electromagnetic tracking systems
    • A61B 2034/2055: Optical tracking systems
    • A61B 2034/2065: Tracking using image or pattern recognition
    • A61B 2034/2068: Tracking using pointers, e.g. pointers having reference marks for determining coordinates of body points
    • A61B 2090/364: Correlation of different images or relation of image positions in respect to the body
    • A61B 2090/365: Augmented reality, i.e. correlating a live optical image with another image
    • A61B 2090/372: Details of monitor hardware
    • A61B 2090/502: Headgear, e.g. helmet, spectacles
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/20: Scenes; Scene-specific elements in augmented reality scenes
    • G06V 2201/00: Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/03: Recognition of patterns in medical or anatomical images
    • G06V 2201/034: Recognition of patterns in medical or anatomical images of medical instruments

Definitions

  • An augmented reality surgical navigation system in which a computer-generated image of a portion of a patient derived from scan data is displayed to the surgeon in alignment with that portion of the patient, or an image of that portion of the patient, during a surgical procedure to provide assistance to the surgeon.
  • the invention is particularly concerned with monitoring movement of the patient, tissue displacement and tracking of surgical instruments during surgery and adjusting the display of the computer-generated image in order to maintain alignment between the computer-generated image and the portion of the patient during the surgical procedure.
  • Optical alignment with the head of a patient can be achieved by using facial recognition techniques to identify landmark points on the face of a patient. Information to be displayed to the surgeon can be rendered in positions relative to these points.
  • However, the body is usually draped during surgery, which renders tracking of these landmark points during surgery using facial recognition techniques impossible. As a result, if the body moves during surgery, alignment between the virtual model of the body part in the virtual environment and the actual body part in the physical environment will be lost.
  • an augmented reality surgery system having a camera for imaging a view of a physical environment and a display for displaying a virtual environment.
  • a displacement sensing system has one or more displacement sensors for fixing relative to a body part of a patient in the physical environment.
  • the displacement sensing system outputs measurement data for each displacement sensor corresponding to translational and rotational movement of that displacement sensor relative to an origin and co-ordinate system defined by that displacement sensor.
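  • By way of illustration only (this structure is not part of the disclosure), the measurement data described above can be thought of as one record per sensor containing a translation and a rotation relative to that sensor's origin and axes; a minimal Python sketch with hypothetical field names:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class SensorReading:
    """One 6-DOF measurement from a displacement sensor: translational and
    rotational movement expressed relative to the origin and co-ordinate
    system defined by that sensor (field names are illustrative)."""
    sensor_id: str
    translation: np.ndarray   # shape (3,): movement of the sensor origin, metres
    rotation: np.ndarray      # shape (3, 3): rotation of the co-ordinate axes
    timestamp: float          # acquisition time in seconds
```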
  • the augmented reality surgery system also includes a processing system which generates the virtual environment based on the physical environment of the camera so that each position in the physical environment has a corresponding position in the virtual environment.
  • the processing system receives image data from the camera, and scan data corresponding to a three-dimensional model of the body part of the patient.
  • the processing system determines a position and orientation for the three-dimensional model of the body part in the virtual environment matching the position and orientation of the body part of the patient in the physical environment, and introduces the three-dimensional model of the body part into the virtual environment with the determined position and orientation.
  • the processing system determines, for each displacement sensor, the position in the virtual environment corresponding to the position of the origin point in the physical environment and the orientation of the coordinate axes of the displacement sensor in the virtual environment using image data for one or more images from the camera.
  • the processing system then renders a first visual representation of the virtual environment including the model of the body part and outputs the first visual representation to the display, the first visual representation corresponding to the view of the physical environment imaged by the camera at a first time.
  • the processing system subsequently receives measurement data from the displacement sensor corresponding to movement of the body part of the patient in the physical environment and modifies at least one of the position, orientation and form of the model in the virtual environment based on the received measurement data, the determined position of the origin point of the displacement sensor and the determined orientation of the co-ordinate axes of the displacement sensor in the physical environment, and renders a second visual representation of the virtual environment including the modified model of the body part and outputs the second visual representation to the display, the second visual representation corresponding to the view of the physical environment imaged by the camera at a second time.
  • measurement data from the displacement sensing system can be used to track movement or reconfiguration of the body part at a second time during surgery.
  • an augmented reality surgery system having a camera for imaging a view of a physical environment, a display for displaying a view of a virtual environment, and a displacement sensing system having one or more displacement sensors for fixing relative to a surgical instrument in the physical environment, the displacement sensing system being operable to output measurement data corresponding to translational and rotational movement of each displacement sensor relative to a respective origin and co-ordinate system.
  • the augmented reality surgery system also includes a processing system which generates the virtual environment based on the physical environment of the camera so that each position in the physical environment has a corresponding position in the virtual environment, and for each displacement sensor determines the position of the origin and the orientation of the co-ordinate axes of that displacement sensor in the virtual environment using image data for one or more images from the camera.
  • the processing system then introduces a three-dimensional model of the surgical instrument into the virtual environment with a position based on the determined position of the origin and an orientation based on the determined orientation of the co-ordinate axes in the virtual environment.
  • the processing system renders a first visual representation of the virtual environment including the three-dimensional model of the surgical instrument and outputs the first visual representation to the display, the first visual representation corresponding to the view of the physical environment imaged by the camera at the first time.
  • the processing system modifies at least one of the position and orientation of the three-dimensional model of the surgical instrument in the virtual environment based on the received measurement data, the determined position of the origin point of the displacement sensor and the determined orientation of the co-ordinate axes of the displacement sensor in the virtual environment, and at a second time renders a second visual representation of the virtual environment including the modified model of the surgical instrument and outputs the second visual representation to the display, the second visual representation corresponding to the view of the physical environment imaged by the camera at the second time.
  • the displacement sensing system is an electromagnetic tracking system and the displacement sensors are probes whose translational and rotational movement is tracked in six dimensions.
  • optical fiducial markers are provided on each probe positioned in relation to the origin point in dependence on the co-ordinate system of the probe.
  • the augmented reality surgical navigation system could be used during neurosurgery to allow a virtual image of the head of a patient to track movement of the head of the patient during surgery.
  • the augmented reality surgical navigation system could be used during spinal surgery in which the virtual image is modified to take account of relative movement of spinal vertebrae pre- and post-fixation, and to track the trajectory of screw insertions into the vertebrae.
  • Figure 1A shows a perspective view of a surgeon performing cranial neurosurgery assisted by an augmented reality surgical navigation system in accordance with examples described herein.
  • Figure 1B shows a perspective view of a surgeon performing cranial neurosurgery whereby a probe or instrument is tracked via the augmented reality surgical navigation system as it enters the tissue, allowing the surgeon to follow and visualise its trajectory towards a desired target in accordance with examples described herein.
  • Figure 2 shows a schematic view of a displacement sensor with surface optical fiducial markers that forms part of the augmented reality surgical navigation system of Figure 1.
  • Figure 3A schematically shows the functional components of the augmented reality surgical navigation system for co-registering a 3D virtual model in a virtual environment with a corresponding body part of a patient in the physical environment.
  • Figure 3B is a flow chart showing steps performed to generate a point cloud from the 3D virtual model in which each point represents a landmark feature on the surface of the 3D virtual model.
  • Figure 3C is a flow chart showing operations performed to position the 3D virtual model in the virtual environment in a position and orientation corresponding to the position and orientation of the corresponding body part in the physical environment.
  • Figure 4A schematically shows the functional components of the augmented reality surgical navigation system for registering the location of an origin point and the orientation of co-ordinate axes associated with a displacement sensor in the virtual environment.
  • Figure 4B is a flow chart showing operations performed to determine the position in the virtual environment corresponding to the origin of the displacement sensor and the orientation of the co-ordinate axes of the displacement sensor in the virtual environment.
  • Figure 5 schematically shows the operation of the augmented reality surgical navigation system during surgery.
  • Figures 6A and 6B show perspective views of a surgeon performing spinal surgery assisted by an augmented reality surgical navigation system in accordance with examples described herein.

Detailed Description

  • Figure 1A shows a surgeon 1 carrying out a surgical procedure on a patient 3.
  • the surgical procedure involves cranial neurosurgery on the brain, on blood vessels or on nerves located in the skull or near the brain.
  • Such neurosurgical procedures require high precision to avoid significant complications, and the surgeon 1 makes use of an Augmented Reality (AR) surgical navigation system to assist with carrying out the neurosurgical procedure with such high precision.
  • the AR surgical navigation system includes a head-mounted AR display device 5. More particularly, in this example the AR display device 5 is a Microsoft HoloLens 2 device.
  • the AR display device 5 senses the physical environment of the head-mounted AR display device, e.g. a surgical theatre, and generates a virtual environment corresponding to the physical environment using a first co-ordinate system, which will hereafter be called the virtual environment co-ordinate system.
  • the head-mounted AR display device 5 presents an image 15 to the surgeon 1 of a three-dimensional model of the head of the patient that is derived from scan data and positioned and oriented within the virtual environment of the AR surgical navigation system so as to match the position and orientation of the head of the patient 3 in the physical environment.
  • the image is rendered from a viewpoint in the virtual environment corresponding to the position and orientation of the head-mounted AR device 5.
  • the displayed image is superimposed over the corresponding portion of the physical body of the patient 3 in the field of view of the surgeon 1.
  • the head-mounted AR display device 5 presents information detailing the trajectory and distance to a target location.
  • the AR surgical navigation system also includes a displacement sensing system, which in this example is an electromagnetic tracking system in which movement of a displacement sensor 7 in six degrees of freedom (three translational and three rotational) is monitored using a field generator 9, which generates an electromagnetic field that induces currents in the displacement sensor 7 that can be analysed to determine the sensed movements.
  • the displacement sensor 7 is a substantially planar device, as shown in Figure 2, having optical fiducial markers 21a, 21b and 21c (such as AprilTag, ARTag, ARToolKit, ArUco and the like) positioned thereon such that the cartesian co-ordinate system for the sensed movement has a first axis 23a aligned with the line joining the first optical fiducial marker 21a and the second optical fiducial marker 21b, a second axis 23b aligned with the line joining the first optical fiducial marker 21a and the third optical fiducial marker 21c, and a third axis 23c aligned with the line perpendicular to the plane of the displacement sensor 7 and passing through the first optical fiducial marker 21a.
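  • As an illustrative sketch only, a co-ordinate frame of this kind can be derived from the three measured marker positions by orthonormalisation; the function and variable names below are hypothetical and not taken from the disclosure:

```python
import numpy as np

def sensor_frame_from_markers(p_a, p_b, p_c):
    """Build an orthonormal co-ordinate frame from three fiducial marker
    positions, with the origin at the first marker (21a).
    Returns (origin, R) where the columns of R are the axes 23a, 23b, 23c."""
    p_a, p_b, p_c = (np.asarray(p, dtype=float) for p in (p_a, p_b, p_c))
    x = p_b - p_a                       # axis 23a: along the line 21a -> 21b
    x /= np.linalg.norm(x)
    y = p_c - p_a                       # roughly along the line 21a -> 21c
    y -= x * np.dot(x, y)               # axis 23b: remove any component along x
    y /= np.linalg.norm(y)
    z = np.cross(x, y)                  # axis 23c: normal to the sensor plane
    return p_a, np.column_stack((x, y, z))

# Example with marker positions measured in metres in the camera frame:
origin, axes = sensor_frame_from_markers([0.0, 0.0, 0.5],
                                          [0.02, 0.0, 0.5],
                                          [0.0, 0.03, 0.5])
```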
  • A displacement sensor 7 can be fixed to the forehead of the patient 3, although other positions on the patient 3 are possible, and displacement sensors can also be fixed to surgical devices.
  • the AR surgical navigation system also includes a fixed display 11 that displays the field of view of the surgeon 1, including both an image from a camera in the AR display device 5 that captures the view of the surgeon 1 of the physical environment and the displayed image corresponding with the same view of the virtual environment.
  • the fixed display 11 may show a first image of the head of the patient 3 captured in the physical environment with a second image of the brain of the patient 3 captured from the virtual environment, with the first and second images having matching positions and orientations.
  • the AR surgical navigation system includes a processing system having three different modes of operation, namely a patient registration mode, a displacement sensor registration mode and a surgery mode.
  • the patient registration mode involves, before surgery, identifying the position and orientation in the virtual environment corresponding to the head of the patient 3, and then introducing the three-dimensional model derived from the previously scanned data of the head of the patient 3 into the virtual environment at a position and orientation matching the identified position and orientation for the physical head of the patient 3.
  • the patient registration mode uses image processing techniques that require the head of the patient 3 to be in the field of view of the surgeon 1 wearing the head-mounted AR device 5.
  • the patient registration mode uses the displacement sensing system to perform the orientation and positioning of the three-dimensional model into the virtual environment.
  • the displacement sensor registration mode involves determining the position and orientation in the virtual world corresponding to the position and orientation of one or more six-dimensional displacement sensors (three axes of translation and three axes of rotation) which can be fixed relative to the head of the patient, for example on the forehead of the patient, or on a surgical device. In this way, detected movement of the displacement sensor can be converted into a corresponding translation and/or rotation in the virtual world of the corresponding object to which it is fixed.
  • the displacement sensor registration mode also uses image processing techniques that require the head of the patient 3 to be in the field of view of the surgeon 1 wearing the head-mounted AR device 5.
  • the surgery mode involves a rendered image of the three-dimensional model being displayed to the surgeon 1 from a viewpoint corresponding to the position and orientation of the AR display device 5.
  • movement of the displacement sensor 7 fixed to the head of the patient 3 is sensed and converted by the AR surgical navigation system to a corresponding translation and/or rotation of the three-dimensional model of the head of the patient 3 in the virtual environment so as to maintain the rendered image being superimposed over the corresponding part of the head of the patient 3 in the field of view of the surgeon 1 even if the head of the patient 3 moves.
  • movement of a displacement sensor fixed relative to a surgical device such as the stylet 13 is used to track movement of a virtual model of the surgical device in the virtual environment.
  • the surgery mode does not require the displacement sensor 7 to be in the field of view of the surgeon 1 wearing the AR display device 5. This is advantageous because during surgery the head of the patient 3 is typically draped for hygiene reasons and therefore the displacement sensor 7 is not in the field of view of the surgeon 1. In addition, this is advantageous because during surgery a surgical device may be inserted into body tissue and accordingly the tip of the surgical device is no longer visible to the surgeon 1.
  • FIG 3A schematically illustrates the functionality of the surgical navigation system in the patient registration mode, the patient registration mode in this example using an optical system which employs image processing techniques.
  • a head-mounted augmented reality (AR) device 201 (corresponding to the AR device 5 of Figure 1) has a display 203, a camera 205 having associated depth sensing capabilities and a device pose calculator 207.
  • the camera 205 takes images of the field of view of the surgeon 1 and the device pose calculator 207 processes those images to determine the position and orientation in the virtual environment corresponding to the position and orientation of the AR device 201 in the physical environment of the AR surgical navigation system.
  • scan data 209 corresponding to a pre-operative scan of the portion of the head of the patient is input to a processing system in which the scan data is processed by a 3D model generation module 211 to generate a virtual three-dimensional model of the portion of the head of the patient using a second co-ordinate system, which will hereafter be referred to as the scan model coordinate system.
  • the pre-operative scan data is received in Digital Imaging and Communications in Medicine (DICOM) format which stores slice-based images.
  • the pre-operative scan data may be from CT scans, MRI scans, ultrasound scans, or any other known medical imaging procedure.
  • the slice-based images of the DICOM files are processed to construct a plurality of three-dimensional (3D) scan models each concentrating on a different aspect of the scan data using conventional segmentation filtering techniques to identify and separate anatomical features within the 2D DICOM images.
  • One of these 3D scan models is of the exterior surface of the head of the patient.
  • Other 3D scan models may represent internal features of the head and brain such as brain tissue, blood vessels, the skull of the patient, etc. All these other models are constructed using the scan model co-ordinate system and the same scaling so as to allow the scan models to be selectively exchanged.
  • the patient registration process then generates a 3D point cloud of landmark points in the 3D scan model of the exterior surface of the head.
  • the patient registration mode renders, at S201, an image of the 3D model of the exterior surface of the head of the patient under virtual lighting conditions and from the viewpoint of a virtual camera positioned directly in front of the face of the patient 3 to create two-dimensional image data corresponding to an image of the face of the patient 3.
  • a pre-trained facial detection model analyses at S203, this image of the face of the patient 3 to obtain a set of 2D landmark points relating to features of the face of the patient 3.
  • These landmark points are then converted, at S205, into a 3D set of points using ray tracing or casting.
  • the position of the virtual camera is used to project a ray from a 2D landmark point in the image plane of the virtual camera into virtual model space, and the collision point of this ray with the 3D model of the exterior surface of the head of the patient corresponds to the corresponding 3D position of that landmark point.
  • In this way, the 3D point cloud of landmark points corresponding to positions of facial landmarks is generated.
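  • The following sketch illustrates one way such a ray-casting step could be implemented, assuming the trimesh library for ray-mesh intersection and a virtual pinhole camera at the origin of model space looking along +z; the function and parameter names are hypothetical, not taken from the disclosure:

```python
import numpy as np
import trimesh  # assumed here for ray-mesh intersection; any ray caster would do

def landmarks_2d_to_3d(mesh, landmarks_px, K, cam_origin=np.zeros(3)):
    """Cast a ray through each 2D facial landmark (pixel co-ordinates) from the
    virtual camera and return its first intersection with the 3D head model."""
    landmarks_px = np.asarray(landmarks_px, dtype=float)
    uv1 = np.column_stack([landmarks_px, np.ones(len(landmarks_px))])
    dirs = (np.linalg.inv(K) @ uv1.T).T                 # back-project pixels to rays
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    origins = np.tile(cam_origin, (len(dirs), 1))
    locations, index_ray, _ = mesh.ray.intersects_location(
        ray_origins=origins, ray_directions=dirs)
    points = np.full((len(dirs), 3), np.nan)            # one 3D point per landmark
    for loc, i in zip(locations, index_ray):
        # keep the hit closest to the camera so the far side of the head is ignored
        if (np.isnan(points[i]).any()
                or np.linalg.norm(loc - cam_origin) < np.linalg.norm(points[i] - cam_origin)):
            points[i] = loc
    return points
```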
  • the patient registration process compares the 3D point cloud with image data for an image of the face of the patient captured by the camera 205 to determine, at 213, transform data for transforming the 3D model into a co-ordinate system relative to the camera 205, which will hereafter be referred to as the camera co-ordinate system.
  • the patient registration process determines a translation vector and rotation matrix which matches the position and orientation of the 3D point cloud with the position and orientation of the head of the patient in the captured image.
  • the patient registration process processes, at S303, the corresponding image data to detect faces within the image, and optical fiducial markers in the image that indicate which of the detected faces belongs to the patient 3, as only the patient 3 has optical fiducial markers on them.
  • the optical fiducial markers are useful because there may be one or more persons other than the patient 3 within the image.
  • the patient registration process then identifies, at S305, a 2D set of landmark points of the face of the patient 3 within the captured image using the same pre-trained facial detection model as was used to process the rendered image of the 3D model of the exterior surface of the head.
  • the patient registration process then aligns, at S307, the 3D point cloud of landmark points for the 3D model and the 2D set of landmark points from the captured image using a conventional pose estimation process, such as a perspective-n-point (PnP) algorithm, which determines a translation vector and a rotation matrix that position and orientate the 3D model in the field of view of the AR camera 205, co-registered with the physical head of the patient 3.
  • the determined translation vector and rotation matrix are relative to the field of view of the camera when capturing the image.
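  • A minimal sketch of such a pose estimation step, using OpenCV's solvePnP and assuming the 3D point cloud and the 2D landmark set are ordered so that corresponding entries match; variable names are illustrative:

```python
import numpy as np
import cv2

def estimate_head_pose(points_3d, points_2d, K, dist_coeffs=None):
    """Estimate the translation vector and rotation matrix taking the 3D model
    landmark points into the co-ordinate system of the AR camera."""
    if dist_coeffs is None:
        dist_coeffs = np.zeros(5)                      # assume an undistorted image
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(points_3d, dtype=np.float64),
        np.asarray(points_2d, dtype=np.float64),
        K, dist_coeffs, flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        raise RuntimeError("PnP pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)                         # rotation vector -> 3x3 matrix
    return R, tvec.reshape(3)
```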
  • the head-mounted AR device moves within the physical environment, and therefore a further transformation is required to determine compensated transform data to locate the 3D model in the virtual environment at a position and orientation that matches the position and orientation of the head of the patient 3 in the physical environment.
  • the patient registration process then transforms, at 215, the 3D model into the virtual environment co-ordinate system.
  • the patient registration process determines a compensated translation vector and a compensated rotation matrix which matches the position and orientation of the 3D point cloud with the position and orientation in the virtual environment that corresponds to the position and orientation of the head in the physical environment.
  • the patient registration process receives, at S309, data from the device pose calculator 207 that identifies the position and the orientation within the virtual environment that corresponds to the position and orientation of the camera 205 in the physical environment when the image was captured.
  • the patient registration process then calculates, at S311, a compensating translation vector and rotation matrix using the data provided by the device pose calculator 207.
  • the compensating translation vector and rotation matrix form compensated transform data to transform the location and orientation of the 3D model of the exterior surface of the head into a location and orientation in the virtual environment, defined using the virtual environment co-ordinate system, so that the position and orientation of the 3D model in the virtual environment matches the position and orientation of the head of the patient 3 in the real world.
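  • Treating the camera-relative pose and the device pose as 4x4 homogeneous transforms, the compensation amounts to composing the two; a sketch under that assumption (names illustrative):

```python
import numpy as np

def to_homogeneous(R, t):
    """Pack a rotation matrix and translation vector into a 4x4 rigid transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def compensate(T_cam_model, T_world_cam):
    """Combine the model-to-camera transform from pose estimation with the
    camera-to-virtual-environment transform from the device pose calculator,
    giving the pose of the 3D model directly in the virtual environment."""
    return T_world_cam @ T_cam_model
```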
  • the patient registration process then estimates the accuracy of the co-registration of the physical head and the virtual model.
  • depth data from the depth sensors associated with the camera 205 is used to construct, at S313, a point cloud of landmark points of the physical face of the patient from the 2D set of landmark points determined from the captured image, and then the accuracy of the overlap of the point cloud of landmark points of the physical face of the patient determined from the captured image and the point cloud of landmark points of the virtual model is calculated, at S315, for example by calculating the sum of the cartesian distance between corresponding points of each point cloud.
  • the patient registration process repeats, at S317, the above processing for multiple captured images, while the head of the patient is immobilised but the camera 205 may be mobile, until sufficient accuracy is achieved.
  • This assessment of accuracy can involve averaging the previous best-fits, and determining, at S317, whether the difference between current and previous averaged fits has fallen below a defined accuracy threshold and hence converged on some suitable alignment.
  • Once this convergence criterion is met, co-registration is deemed to be sufficiently accurate and the averaged compensated translation vector and rotation matrix can be used to introduce the 3D models into the virtual environment.
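  • A sketch of the accuracy metric and one possible convergence test is shown below; the threshold and window values are purely illustrative and not taken from the disclosure:

```python
import numpy as np

def registration_error(model_points, face_points):
    """Sum of cartesian distances between corresponding landmark points of the
    virtual model and of the physical face (both in virtual-environment
    co-ordinates)."""
    diff = np.asarray(model_points) - np.asarray(face_points)
    return float(np.linalg.norm(diff, axis=1).sum())

def has_converged(errors, threshold=1.0e-3, window=5):
    """Accept the registration once the running average of the error changes by
    less than the threshold between successive windows of captured images."""
    if len(errors) < 2 * window:
        return False
    prev = float(np.mean(errors[-2 * window:-window]))
    curr = float(np.mean(errors[-window:]))
    return abs(prev - curr) < threshold
```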
  • FIG 4A schematically illustrates the functionality of the surgical navigation system in the displacement sensor registration mode.
  • each displacement sensor 7 is equipped with optical fiducial markers 21a-21c which are positioned to define the origin and axes of a co-ordinate system in which the displacement sensor 7 measures movement in six directions (three translational and three rotational), hereafter referred to as the displacement sensor co-ordinate system.
  • the aim of the displacement sensor registration process is to identify the position of the origin of the displacement sensor co-ordinate system, and the orientation of the axes of the displacement sensor co-ordinate system, in the virtual environment.
  • the displacement sensor registration process receives, at step S401, an image of the physical environment, which may be an image used for patient registration, together with associated depth data, and identifies, at step S403, the optical fiducial markers on the displacement sensor 7, which can be done using available software from libraries such as OpenCV combined with helper libraries that provide specific implementations for the optical fiducial marker used.
  • the displacement sensor registration process uses pose estimation to determine, at S405, a rotational transformation required to rotate the plane of the displacement sensor 7 containing the optical fiducial markers to align with the object plane of the camera 205. From this rotation transformation, the displacement sensor registration process determines, at S407, a rotation matrix transforming the displacement sensor co-ordinate system to the AR camera co-ordinate system. In addition, the depth data corresponding to the optical fiducial markers is used to calculate a 3D position of the origin of the displacement sensor 7 in the AR camera coordinate system.
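  • As an illustration only, marker detection and pose estimation of this kind could be written with OpenCV as follows. This assumes ArUco markers and the ArucoDetector class introduced in OpenCV 4.7 (the ArUco API differs between OpenCV versions), and the marker side length is a made-up value:

```python
import numpy as np
import cv2

def detect_sensor_pose(image, K, dist, marker_len=0.02):
    """Detect one ArUco marker on the displacement sensor and estimate its pose
    (origin position and rotation matrix) in the AR camera co-ordinate system."""
    aruco = cv2.aruco
    detector = aruco.ArucoDetector(aruco.getPredefinedDictionary(aruco.DICT_4X4_50),
                                   aruco.DetectorParameters())
    corners, ids, _ = detector.detectMarkers(image)
    if ids is None:
        return None
    # 3D corners of a square marker of side marker_len, centred on its origin,
    # in the order expected by SOLVEPNP_IPPE_SQUARE
    half = marker_len / 2.0
    obj = np.array([[-half,  half, 0], [ half,  half, 0],
                    [ half, -half, 0], [-half, -half, 0]], dtype=np.float64)
    img_pts = corners[0].reshape(4, 2).astype(np.float64)
    ok, rvec, tvec = cv2.solvePnP(obj, img_pts, K, dist,
                                  flags=cv2.SOLVEPNP_IPPE_SQUARE)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec.reshape(3)   # orientation and origin of the marker in the camera frame
```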
  • the determined origin position and rotation matrix are relative to the field of view of the camera 205 when capturing the image.
  • the displacement sensor registration process generates, at S409, a compensating translation vector and rotation matrix using a location and orientation of the camera 205, provided by the device pose calculator 207, for when the image was captured.
  • the compensating translation vector and rotation matrix converts the location of the origin of the displacement sensor 7 and the orientation of the displacement sensor 7 from the AR camera co-ordinate system to the virtual environment co-ordinate system, so that the position of the origin of the displacement sensor 7 and the orientation of the displacement sensor co-ordinate system in the virtual environment matches the position and orientation of the displacement sensor 7 in the physical environment.
  • the displacement sensor registration process can then be repeated, at S413, using multiple different images captured by the camera 205 to acquire an average position of the origin of the displacement sensor 7 and the orientation of the displacement sensor co-ordinate system in the virtual environment.
  • Figure 5 schematically illustrates the functionality of the surgical navigation system in the surgery mode.
  • 3D models 211 are transformed into the virtual environment co-ordinate system.
  • for 3D models of the body part of the patient, this involves using the transformation data generated during the patient registration process, so that the position and orientation of each model in the virtual environment matches the position and orientation of the head of the patient 3 in the physical environment.
  • for 3D models corresponding to surgical devices, this involves determining the position and orientation of the surgical device in the virtual environment based on the determined position and orientation of the displacement sensor 7 attached to the surgical device.
  • Each 3D model then undergoes a further transformation based on the readings from the displacement sensor 7, and then the resultant transformed model is output to a rendering engine 235.
  • the pose calculator 207 outputs data indicating the position and orientation in the virtual environment corresponding to the position and orientation of the AR camera 205 in the physical environment. This allows the rendering engine 235 to render two-dimensional image data corresponding to the view of a virtual camera, whose optical specification matches the optical specification of the AR camera 205, positioned and orientated in the virtual environment with the position and orientation indicated by the pose calculator 207.
  • the resultant two-dimensional image data is then output by the rendering engine 235 to the display 203 for viewing by the surgeon. More particularly, the display of the head-mounted AR device 5 is a semi-transparent display device enabling the displayed image to be superimposed in the view of the surgeon 1.
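  • Purely as an illustration of what matching the optical specification involves (the HoloLens rendering pipeline itself is not described in the disclosure), the virtual camera can take the device pose as its view transform and a projection built from the AR camera intrinsics; image y-axis conventions vary between renderers:

```python
import numpy as np

def view_matrix(T_world_cam):
    """World-to-camera (view) matrix for the virtual camera, given the pose of
    the AR camera in the virtual environment as a 4x4 rigid transform."""
    return np.linalg.inv(T_world_cam)

def projection_from_intrinsics(K, width, height, near=0.05, far=10.0):
    """OpenGL-style projection matrix built from camera intrinsics, so that the
    rendered view matches the field of view of the physical AR camera."""
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    P = np.zeros((4, 4))
    P[0, 0] = 2.0 * fx / width
    P[1, 1] = 2.0 * fy / height
    P[0, 2] = 1.0 - 2.0 * cx / width
    P[1, 2] = 2.0 * cy / height - 1.0
    P[2, 2] = -(far + near) / (far - near)
    P[2, 3] = -2.0 * far * near / (far - near)
    P[3, 2] = -1.0
    return P
```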
  • the displacement sensor 7 attached to the head will make a corresponding movement and the displacement sensing system will output displacement sensor readings 218a in six dimensions corresponding to translational movement of the origin point and rotational movement of the displacement sensor co-ordinate system.
  • the displacement sensor readings 218a are converted to a displacement transformation 218b to effect a corresponding translation and rotation of the 3D model in the virtual environment. In this way, co-registration between the head of the patient in the physical environment and the 3D model in the virtual environment is maintained.
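  • A sketch of how a six-degree-of-freedom reading could be applied to the model pose is shown below, assuming the reading is expressed as a rigid transform in the sensor's own registered frame; all names are illustrative:

```python
import numpy as np

def apply_sensor_displacement(T_world_model, T_world_sensor0, T_sensor0_sensor1):
    """Update the pose of the 3D model in the virtual environment from a
    displacement sensor reading.

    T_world_model:     current model pose in the virtual environment (4x4)
    T_world_sensor0:   registered pose of the sensor origin and axes (4x4)
    T_sensor0_sensor1: the 6-DOF reading as a transform in the sensor frame (4x4)
    """
    # Express the sensor's rigid motion in virtual-environment co-ordinates and
    # apply the same motion to the model, since sensor and head move together.
    delta_world = T_world_sensor0 @ T_sensor0_sensor1 @ np.linalg.inv(T_world_sensor0)
    return delta_world @ T_world_model
```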
  • the surgeon 1 may make a visual check that the virtual image of one or more of the 3D models is aligned with the head of the patient 3 before draping the head of the patient 3 for surgery. After draping, the surgeon 1 is reliant on the transformations based on the sensor readings from the displacement sensor 7 to maintain alignment.
  • the displacement sensor attached to the surgical device will make a corresponding movement and the displacement sensing system will output displacement sensor readings 218a in six dimensions corresponding to translational movement of the origin point and rotational movement of the displacement sensor co-ordinate system.
  • the displacement sensor readings 218a are converted to a displacement transformation 218b to effect a corresponding translation and rotation of the 3D model of the surgical device in the virtual environment.
  • a target location can be identified in the 3D models of the head of the patient 3, and data corresponding to a distance and trajectory between the tip of the stylet 13 and the target location can be calculated and displayed superimposed over the rendered image of the virtual environment.
  • This data may be displayed with or without a rendered virtual model of the stylet 13.
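  • The distance and trajectory information could be computed along the following lines (an illustrative sketch, not the disclosed method):

```python
import numpy as np

def trajectory_to_target(tip_pos, tip_dir, target_pos):
    """Return the distance from the stylet tip to the target and the angle (in
    degrees) between the current stylet direction and the straight line to the
    target, all in virtual-environment co-ordinates."""
    to_target = np.asarray(target_pos, dtype=float) - np.asarray(tip_pos, dtype=float)
    distance = float(np.linalg.norm(to_target))
    tip_dir = np.asarray(tip_dir, dtype=float)
    tip_dir /= np.linalg.norm(tip_dir)
    cos_angle = np.clip(np.dot(tip_dir, to_target / distance), -1.0, 1.0)
    return distance, float(np.degrees(np.arccos(cos_angle)))
```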
  • the augmented reality surgical navigation system can also be used to modify the form of a 3D model during surgery to take account of modifications to the corresponding body part during the surgical procedure, or to track the trajectory of instruments such as spinal screws.
  • Figures 6A and 6B illustrate a surgeon 1 carrying out spinal surgery in which vertebrae of the spine are physically moved relative to each other.
  • the 3D model is accordingly of the relevant vertebrae of the spine, with each vertebra having a respective sub-model.
  • a displacement sensor 7 is fixed relative to each vertebra, and the position of the origin and the orientation of the co-ordinate axes of each displacement sensor in the virtual environment is determined in the manner described above.
  • When the surgeon 1 physically inserts a screw using the electromagnetically tracked tool A, the insertion and fixation of the screw move the vertebrae relative to each other, and so the displacement sensor 7 on each vertebra will move with its respective vertebra.
  • the displacement readings for each displacement sensor can then be converted to transformation data for transforming the position and orientation of the corresponding sub-model in the virtual environment.
  • the form of the 3D model is modified during surgery to take account of the relative movement of the vertebrae as a result of surgical manipulation.
  • the trajectory and position of the spinal screws B will be tracked within the vertebrae.
  • patient registration involves the processing of two-dimensional images of a body part of the patient to generate a set of three-dimensional points to allow co-registration between three-dimensional scan data in the virtual world and the body part of the patient.
  • patient registration mode uses the displacement sensing system instead of, or in conjunction with, the optical system described earlier.
  • a displacement sensor 7 can output displacement sensor readings 218a in six dimensions corresponding to translational movement of the origin point and rotational movement of the displacement sensor coordinate system which, due to the registration process, corresponds to a position and orientation in the virtual environment.
  • Whereas the optical patient registration process identifies a 2D set of landmark points of the face of the patient 3 within a captured image and aligns a 3D point cloud of landmark points for the 3D model with the 2D set of landmark points from the captured image, in the patient registration process which uses the displacement sensing system it is not necessary to capture an image of the patient 3.
  • the patient registration process which uses the displacement sensing system obtains a 3D set of landmark points which can be aligned with the 3D model point cloud.
  • the 3D position of each landmark point is obtained by e.g. the surgeon bringing a displacement sensor into close proximity with the landmark point and triggering the displacement sensor system to provide a displacement sensor reading 218a that provides the three-dimensional position of that landmark point in the virtual world.
  • upon capturing the 3D point cloud of displacement sensor readings relating to the landmark points, the registration system performs an alignment process as described previously to align the 3D landmark point cloud with the 3D model point cloud.
  • the alignment process seeks to minimize the cartesian distance between the landmark points and the 3D model points until a pre-determined accuracy has been achieved.
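  • One standard way to perform such an alignment is a least-squares rigid (Kabsch) fit, sketched below under the assumption that the measured landmark points and the model points are in corresponding order; this is offered as an illustration rather than as the disclosed algorithm:

```python
import numpy as np

def rigid_align(landmarks, model_points):
    """Least-squares rigid alignment of the 3D model point cloud to the measured
    landmark points. Returns (R, t) such that R @ model_point + t approximates
    the corresponding landmark."""
    landmarks = np.asarray(landmarks, dtype=float)
    model_points = np.asarray(model_points, dtype=float)
    lm_c, mp_c = landmarks.mean(axis=0), model_points.mean(axis=0)
    H = (model_points - mp_c).T @ (landmarks - lm_c)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = lm_c - R @ mp_c
    return R, t

def alignment_residual(landmarks, model_points, R, t):
    """Sum of cartesian distances after alignment, used as the accuracy check."""
    aligned = (R @ np.asarray(model_points, dtype=float).T).T + t
    return float(np.linalg.norm(aligned - np.asarray(landmarks, dtype=float), axis=1).sum())
```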
  • the displacement sensor used in the patient registration process may be in the form of a stylet sensor, for example, which permits precise identification of each landmark point by bringing the displacement sensor into close proximity with the landmark point of the patient’s body.
  • the displacement sensor may be positioned on a pointing tool, and may be positioned away from e.g. a sterile pointing end of the pointing tool.
  • the sterile pointing end of the pointing tool is at a known position relative to the displacement sensor, such that when the sterile pointing end is in close proximity with a landmark position on the patient’s body, the displacement sensor reading can be converted to a 3D landmark position based on the positional relationship between the sterile pointing end of the tool and the position of the displacement sensor on the tool.
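  • A sketch of that conversion, assuming the sensor pose is available as a 4x4 transform in virtual-environment co-ordinates and using a made-up tip offset:

```python
import numpy as np

def pointer_tip_position(T_world_sensor, tip_offset_in_sensor):
    """Convert a displacement sensor pose into the 3D position of the sterile
    pointing end of the tool, given the fixed offset of the tip from the sensor
    expressed in the sensor's own co-ordinate system."""
    R, t = T_world_sensor[:3, :3], T_world_sensor[:3, 3]
    return R @ np.asarray(tip_offset_in_sensor, dtype=float) + t

# e.g. a pointing end 12 cm along the sensor's first axis (illustrative value):
# tip = pointer_tip_position(T_world_sensor, [0.12, 0.0, 0.0])
```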
  • the registration process may, in some examples, use the displacement sensing system in conjunction with the optical system. For example, it may be desirable to calculate the position of some landmark points optically to avoid the risk of physical contact with a particular body part of the patient. Combining the two methods of patient registration may help improve accuracy of the overall patient registration process by using one method to verify the other. Furthermore, the optical patient registration system may be performed automatically, and the surgeon may use the displacement sensing system where verification of particular landmark points is required, which may further improve the speed of the patient registration process whilst retaining accuracy.
  • the head-mounted AR device is a Microsoft HoloLens 2 device. It will be appreciated that other head-mounted AR devices that display a virtual environment in association with a physical environment could be used. Further, examples of the augmented reality surgical navigation system need not include a head-mounted AR device, as alternatively a fixed camera could be used, for example in conjunction with the fixed display 11, with the displacement sensing system being used to maintain registration between a 3D model in a virtual environment and a corresponding body part in a physical environment.
  • the displacement sensing system of the illustrated embodiment is an electromagnetic tracking system, but other displacement sensing systems that do not rely upon image analysis to track movement in six dimensions could be used.
  • for example, a six-axis gyroscopic (inertial) system could be used.
  • optical fiducial markers on the displacement sensors assist in the registration of the position of the origin of the displacement sensor and the orientation of the co-ordinate axes of the displacement sensor in the virtual environment.
  • the optical fiducial markers are not, however, essential as the displacement sensor may be designed to have sufficient landmark points to enable registration to be performed.

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Surgery (AREA)
  • Biomedical Technology (AREA)
  • Veterinary Medicine (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Public Health (AREA)
  • General Health & Medical Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • Molecular Biology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Robotics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Pathology (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The disclosure relates to an augmented reality surgical system. The augmented reality surgical system comprises a camera, a display unit (5), a displacement sensing system and a processing system. A 3D model is displayed in a virtual environment. The displacement sensing system comprises displacement sensors (7) which are used to update the position of the 3D model in the virtual environment.
PCT/EP2022/074921 2021-09-07 2022-09-07 Augmented reality surgical navigation system Ceased WO2023036848A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB2112741.0 2021-09-07
GB2112741.0A GB2614025B (en) 2021-09-07 2021-09-07 Augmented reality surgical navigation system

Publications (1)

Publication Number Publication Date
WO2023036848A1 true WO2023036848A1 (fr) 2023-03-16

Family

ID=78076860

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2022/074921 Ceased WO2023036848A1 (fr) Augmented reality surgical navigation system

Country Status (2)

Country Link
GB (1) GB2614025B (fr)
WO (1) WO2023036848A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116993794A (zh) * 2023-08-02 2023-11-03 德智鸿(上海)机器人有限责任公司 Virtual-real registration method and device for augmented reality surgical navigation assistance
WO2025229542A1 (fr) * 2024-05-01 2025-11-06 Auris Health, Inc. Target localisation for percutaneous access

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160225192A1 (en) * 2015-02-03 2016-08-04 Thales USA, Inc. Surgeon head-mounted display apparatuses
US20170258526A1 (en) * 2016-03-12 2017-09-14 Philipp K. Lang Devices and methods for surgery
WO2019215550A1 (fr) * 2018-05-10 2019-11-14 3M Innovative Properties Company Simulated orthodontic treatment via real-time augmented reality visualisation
WO2020163358A1 (fr) * 2019-02-05 2020-08-13 Smith & Nephew, Inc. Computer-assisted arthroplasty system for improving patellar function

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8325614B2 (en) * 2010-01-05 2012-12-04 Jasper Wireless, Inc. System and method for connecting, configuring and testing new wireless devices and applications

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160225192A1 (en) * 2015-02-03 2016-08-04 Thales USA, Inc. Surgeon head-mounted display apparatuses
US20170258526A1 (en) * 2016-03-12 2017-09-14 Philipp K. Lang Devices and methods for surgery
WO2019215550A1 (fr) * 2018-05-10 2019-11-14 3M Innovative Properties Company Simulated orthodontic treatment via real-time augmented reality visualisation
WO2020163358A1 (fr) * 2019-02-05 2020-08-13 Smith & Nephew, Inc. Computer-assisted arthroplasty system for improving patellar function

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
PEPE ANTONIO ET AL: "A Marker-Less Registration Approach for Mixed Reality-Aided Maxillofacial Surgery: a Pilot Evaluation", JOURNAL OF DIGITAL IMAGING, SPRINGER INTERNATIONAL PUBLISHING, CHAM, vol. 32, no. 6, 4 September 2019 (2019-09-04), pages 1008 - 1018, XP037047699, ISSN: 0897-1889, [retrieved on 20190904], DOI: 10.1007/S10278-019-00272-6 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116993794A (zh) * 2023-08-02 2023-11-03 德智鸿(上海)机器人有限责任公司 Virtual-real registration method and device for augmented reality surgical navigation assistance
CN116993794B (zh) * 2023-08-02 2024-05-24 德智鸿(上海)机器人有限责任公司 Virtual-real registration method and device for augmented reality surgical navigation assistance
WO2025229542A1 (fr) * 2024-05-01 2025-11-06 Auris Health, Inc. Target localisation for percutaneous access

Also Published As

Publication number Publication date
GB2614025B (en) 2023-12-27
GB202112741D0 (en) 2021-10-20
GB2614025A (en) 2023-06-28

Similar Documents

Publication Publication Date Title
US11717376B2 (en) System and method for dynamic validation, correction of registration misalignment for surgical navigation between the real and virtual images
US11712307B2 (en) System and method for mapping navigation space to patient space in a medical procedure
EP3773305B1 (fr) Systems for performing intraoperative guidance
JP7429120B2 (ja) System and method for non-vascular percutaneous procedures for holographic image guidance
CA2973479C (fr) System and method for mapping navigation space to patient space during a medical procedure
US10166079B2 (en) Depth-encoded fiducial marker for intraoperative surgical registration
US11191595B2 (en) Method for recovering patient registration
Grimson et al. Clinical experience with a high precision image-guided neurosurgery system
US20080119725A1 (en) Systems and Methods for Visual Verification of CT Registration and Feedback
JP2002186603A (ja) Coordinate transformation method for guiding an object
US20240285351A1 (en) Surgical assistance system with improved registration, and registration method
WO2023036848A1 (fr) Augmented reality surgical navigation system
EP4169470B1 (fr) Apparatus and method for positioning a patient's body and tracking the patient's position during surgery
Giraldez et al. Design and clinical evaluation of an image-guided surgical microscope with an integrated tracking system
EP4169468B1 (fr) Technique for providing guidance to a user on the placement of an object of interest in an operating room
Jing et al. Navigating system for endoscopic sinus surgery based on augmented reality
Li et al. C-arm based image-guided percutaneous puncture of minimally invasive spine surgery
Edwards et al. Guiding therapeutic procedures

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22782667

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 10/06/2024)

122 Ep: pct application non-entry in european phase

Ref document number: 22782667

Country of ref document: EP

Kind code of ref document: A1