WO2026028196A1 - Validation using registered fiducial - Google Patents
- Publication number: WO2026028196A1 (PCT/IL2025/050625)
- Authority: WIPO (PCT)
- Legal status: Pending
Abstract
Systems and methods for registration using one or more fiducials and X-ray technology are provided. A first imaging device is operable to obtain first image data in which at least one fiducial and a portion of an anatomical element of a patient are visible. The location of the at least one fiducial with respect to the anatomical element of the patient is determined based on the first image data. Errors in the obtained first image data and in the determined location of the at least one fiducial with respect to the anatomical element of the patient are eliminated by using the first imaging device, and the determined location of the at least one fiducial with respect to the anatomical element of the patient is registered. The first imaging device obtains the first image data at a wavelength of 10 nanometers or less.
Description
VALIDATION USING REGISTERED FIDUCIAL
CROSS REFERENCE TO RELATED APPLICATIONS
[0001] This application claims the benefit of U.S. Provisional Patent Application Serial No. 63/676,783, filed 29 July 2024, the entire content of which is incorporated herein by reference.
FIELD OF INVENTION
[0002] The present disclosure is generally directed to registration and relates more particularly to registration using one or more fiducials and X-ray technology.
BACKGROUND
[0003] Surgical robots may assist a surgeon or other medical provider in carrying out a surgical procedure or may complete one or more surgical procedures autonomously. Imaging may be used by a medical provider for diagnostic and/or therapeutic purposes. Patient anatomy can change over time, particularly following placement of a medical implant in the patient anatomy.
BRIEF SUMMARY
[0004] Example aspects of the present disclosure include:
A system according to at least one embodiment of the present disclosure comprises a first imaging device operable to obtain first image data; at least one fiducial that is visible in the first image data, wherein the at least one fiducial is mounted on an anatomical element of a patient; a processor; and a memory storing data for processing by the processor, the data, when processed, causes the processor to: receive the first image data depicting the at least one fiducial and at least a portion of the anatomical element of the patient; determine a location of the at least one fiducial with respect to the anatomical element of the patient based on the first image data; eliminate errors in the obtained first image data and the determined location of the at least one fiducial with respect to the anatomical element of the patient by using the first imaging device; and register the determined location of the at least one fiducial with respect to the anatomical element of the patient, wherein the first imaging device obtains the first image data at a wavelength of 10 nanometers or less.
[0005] Any of the aspects herein, wherein the memory stores additional data for processing by the processor, that when processed, cause the processor to receive second image data from a second imaging device; and determine movement of the location of the at least one fiducial with respect
to the anatomical element of the patient based on comparing the first image data with the second image data.
[0006] Any of the aspects herein, wherein the memory stores additional data for processing by the processor, that when processed, cause the processor to determine movement of the location of the at least one fiducial with respect to the anatomical element of the patient has exceeded a predetermined threshold; and provide notification that the registered determined location of the at least one fiducial with respect to the anatomical element of the patient is invalid.
[0007] Any of the aspects herein, wherein the memory stores additional data for processing by the processor, that when processed, cause the processor to: compare the first image data with the second image data; determine movement of the location of the at least one fiducial with respect to the anatomical element of the patient based on the comparison of the first image data with the second image data; and update the registered determined location of the at least one fiducial with respect to the anatomical element of the patient based on the comparison of the first image data with the second image data.
[0008] Any of the aspects herein, wherein the first imaging device is an X-ray device, and the X-ray device obtains X-ray image data as the first image data.
[0009] Any of the aspects herein, wherein the first imaging device is a computerized tomography (CT) machine, and the CT machine obtains CT image data as the first image data.
[0010] Any of the aspects herein, wherein the second imaging device obtains the second image data at a wavelength of 400 nanometers or greater.
[0011] Any of the aspects herein, wherein the second imaging device is a three-dimensional (3D) imaging camera, and the 3D imaging camera obtains 3D image data as the second image data.
[0012] Any of the aspects herein, wherein the at least one fiducial comprises at least three fiducials.
[0013] Any of the aspects herein, wherein the anatomical element comprises one or more vertebrae.
[0014] Any of the aspects herein, wherein the at least one fiducial comprises a metal.
[0015] Any of the aspects herein, wherein the at least one fiducial comprises glass.
[0016] Any of the aspects herein, wherein the at least one fiducial is radiopaque.
[0017] Any of the aspects herein, further comprising a reflective marker that is visible in the first image data.
[0018] A system according to at least one embodiment of the present disclosure comprises a first imaging device operable to obtain first image data; at least one fiducial that is visible in the first image data, wherein the at least one fiducial is mounted on an anatomical element of a patient; a processor; and a memory storing data for processing by the processor, the data, when processed, causes the processor to: receive the first image data depicting the at least one fiducial and at least a portion of the anatomical element of the patient; determine a location of the at least one fiducial with respect to the anatomical element of the patient based on the first image data; eliminate errors in the obtained first image data and the determined location of the at least one fiducial with respect to the anatomical element of the patient by using the first imaging device; register the determined location of the at least one fiducial with respect to the anatomical element of the patient, wherein the first imaging device obtains the first image data at a wavelength of 10 nanometers or less; receive second image data from a second imaging device; compare the first image data with the second image data; and determine movement of the location of the at least one fiducial with respect to the anatomical element of the patient based on the comparison of the first image data and the second image data.
[0019] Any of the aspects herein, wherein the memory stores additional data for processing by the processor, that when processed, cause the processor to: determine movement of the location of the at least one fiducial with respect to the anatomical element of the patient has exceeded a predetermined threshold; and provide notification that the registered determined location of the at least one fiducial with respect to the anatomical element of the patient is invalid.
[0020] Any of the aspects herein, wherein the memory stores additional data for processing by the processor, that when processed, cause the processor to: determine movement of the location of the at least one fiducial with respect to the anatomical element of the patient based on comparing the first image data with the second image data; and update the registered determined location of the at least one fiducial with respect to the anatomical element of the patient based on the comparison of the first image data with the second image data.
[0021] A method according to at least one embodiment of the present disclosure comprises receiving, from a first imaging device, first image data including at least one fiducial and at least a portion of an anatomical element of a patient that are visible in the first image data, wherein the at least one fiducial is mounted on the anatomical element of the patient; determining a location of the at least one fiducial with respect to the anatomical element of the patient based on the first image
data; eliminating errors in the received first image data and the determined location of the at least one fiducial with respect to the anatomical element of the patient by using the first imaging device; and registering the determined location of the at least one fiducial with respect to the anatomical element of the patient, wherein the first imaging device obtains the first image data at a wavelength of 10 nanometers or less.
[0022] Any of the aspects herein further comprising receiving, from a second imaging device, second image data; and determining movement of the location of the at least one fiducial with respect to the anatomical element of the patient based on comparing the first image data with the second image data.
[0023] Any of the aspects herein further comprising determining movement of the location of the at least one fiducial with respect to the anatomical element of the patient has exceeded a predetermined threshold; and providing notification that the registered determined location of the at least one fiducial with respect to the anatomical element of the patient is invalid.
[0024] Any aspect in combination with any one or more other aspects.
[0025] Any one or more of the features disclosed herein.
[0026] Any one or more of the features as substantially disclosed herein.
[0027] Any one or more of the features as substantially disclosed herein in combination with any one or more other features as substantially disclosed herein.
[0028] Any one of the aspects/features/embodiments in combination with any one or more other aspects/features/embodiments.
[0029] Use of any one or more of the aspects or features as disclosed herein.
[0030] It is to be appreciated that any feature described herein can be claimed in combination with any other feature(s) as described herein, regardless of whether the features come from the same described embodiment.
[0031] The details of one or more aspects of the disclosure are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the techniques described in this disclosure will be apparent from the description and drawings, and from the claims.
[0032] The phrases “at least one”, “one or more”, and “and/or” are open-ended expressions that are both conjunctive and disjunctive in operation. For example, each of the expressions “at least one of A, B and C”, “at least one of A, B, or C”, “one or more of A, B, and C”, “one or more of A,
B, or C” and “A, B, and/or C” means A alone, B alone, C alone, A and B together, A and C together, B and C together, or A, B and C together. When each one of A, B, and C in the above expressions refers to an element, such as X, Y, and Z, or class of elements, such as X1-Xn, Y1-Ym, and Z1-Zo, the phrase is intended to refer to a single element selected from X, Y, and Z, a combination of elements selected from the same class (e.g., X1 and X2) as well as a combination of elements selected from two or more classes (e.g., Y1 and Zo).
[0033] The term “a” or “an” entity refers to one or more of that entity. As such, the terms “a” (or “an”), “one or more” and “at least one” can be used interchangeably herein. It is also to be noted that the terms “comprising”, “including”, and “having” can be used interchangeably.
[0034] The preceding is a simplified summary of the disclosure to provide an understanding of some aspects of the disclosure. This summary is neither an extensive nor exhaustive overview of the disclosure and its various aspects, embodiments, and configurations. It is intended neither to identify key or critical elements of the disclosure nor to delineate the scope of the disclosure but to present selected concepts of the disclosure in a simplified form as an introduction to the more detailed description presented below. As will be appreciated, other aspects, embodiments, and configurations of the disclosure are possible utilizing, alone or in combination, one or more of the features set forth above or described in detail below.
[0035] Numerous additional features and advantages of the present disclosure will become apparent to those skilled in the art upon consideration of the embodiment descriptions provided hereinbelow.
BRIEF DESCRIPTION OF THE DRAWINGS
[0036] The accompanying drawings are incorporated into and form a part of the specification to illustrate several examples of the present disclosure. These drawings, together with the description, explain the principles of the disclosure. The drawings simply illustrate preferred and alternative examples of how the disclosure can be made and used and are not to be construed as limiting the disclosure to only the illustrated and described examples. Further features and advantages will become apparent from the following, more detailed, description of the various aspects, embodiments, and configurations of the disclosure, as illustrated by the drawings referenced below.
[0037] Fig. 1 is a block diagram of a system according to at least one embodiment of the present disclosure;
[0038] Fig. 2 is a schematic illustration of a set of images of fiducials according to at least one embodiment of the present disclosure;
[0039] Fig. 3 is an example of items under X-ray according to at least one embodiment of the present disclosure;
[0040] Fig. 4 is a schematic illustration of one or more surgical landmarks according to at least one embodiment of the present disclosure;
[0041] Fig. 5 is an example illustration of a patient anatomy including at least one fiducial according to at least one embodiment of the present disclosure;
[0042] Fig. 6 is an example process according to at least one embodiment of the present disclosure;
[0043] Fig. 7A is an example implementation of a navigation tool according to at least one embodiment of the present disclosure;
[0044] Fig. 7B is an example implementation of a navigation system according to at least one embodiment of the present disclosure;
[0045] Fig. 7C is an example implementation of a navigation system according to at least one embodiment of the present disclosure; and
[0046] Fig. 8 is a flowchart according to at least one embodiment of the present disclosure.
[0047] Skilled artisans will appreciate that elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help to improve understanding of examples, aspects, and features illustrated.
[0048] In some instances, the apparatus and method components have been represented where appropriate by conventional symbols in the drawings, showing only those specific details that are pertinent to understanding the various implementations, examples, aspects, and features so as not to obscure the disclosure with details that will be readily apparent to those of ordinary skill in the art having the benefit of the description herein.
DETAILED DESCRIPTION
[0049] It should be understood that various aspects disclosed herein may be combined in different combinations than the combinations specifically presented in the description and accompanying drawings. It should also be understood that, depending on the example or
embodiment, certain acts or events of any of the processes or methods described herein may be performed in a different sequence, and/or may be added, merged, or left out altogether (e.g., all described acts or events may not be necessary to carry out the disclosed techniques according to different embodiments of the present disclosure). In addition, while certain aspects of this disclosure are described as being performed by a single module or unit for purposes of clarity, it should be understood that the techniques of this disclosure may be performed by a combination of units or modules associated with, for example, a computing device and/or a medical device.
[0050] In one or more examples, the described methods, processes, and techniques may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored as one or more instructions or code on a computer-readable medium and executed by a hardware-based processing unit. Alternatively, or additionally, functions may be implemented using machine learning models, neural networks, artificial neural networks, or combinations thereof (alone or in combination with instructions). Computer-readable media may include non-transitory computer-readable media, which corresponds to a tangible medium such as data storage media (e.g., RAM, ROM, EEPROM, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer).
[0051] Instructions may be executed by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors (e.g., Intel Core i3, i5, i7, or i9 processors; Intel Celeron processors; Intel Xeon processors; Intel Pentium processors; AMD Ryzen processors; AMD Athlon processors; AMD Phenom processors; Apple A10 or A10X Fusion processors; Apple A11, A12, A12X, A12Z, or A13 Bionic processors; or any other general purpose microprocessors), graphics processing units (e.g., Nvidia GeForce RTX 2000-series processors, Nvidia GeForce RTX 3000-series processors, AMD Radeon RX 5000-series processors, AMD Radeon RX 6000-series processors, or any other graphics processing units), application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Accordingly, the term “processor” as used herein may refer to any of the foregoing structure or any other physical structure suitable for implementation of the described techniques. Also, the techniques could be fully implemented in one or more circuits or logic elements.
[0052] Before any embodiments of the disclosure are explained in detail, it is to be understood that the disclosure is not limited in its application to the details of construction and the arrangement of components set forth in the following description or illustrated in the drawings. The disclosure is capable of other embodiments and of being practiced or of being carried out in various ways. Also, it is to be understood that the phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting. The use of “including,” “comprising,” or “having” and variations thereof herein is meant to encompass the items listed thereafter and equivalents thereof as well as additional items. Further, the present disclosure may use examples to illustrate one or more aspects thereof. Unless explicitly stated otherwise, the use or listing of one or more examples (which may be denoted by “for example,” “by way of example,” “e.g.,” “such as,” or similar language) is not intended to and does not limit the scope of the present disclosure.
[0053] The terms proximal and distal are used in this disclosure with their conventional medical meanings, proximal being closer to the operator or user of the system, and further from the region of surgical interest in or on the patient, and distal being closer to the region of surgical interest in or on the patient, and further from the operator or user of the system.
[0054] Currently, while using navigation systems to navigate the anatomy of a patient (e.g., an anatomy having innate flexibility, such as a patient’s spine), the options available for testing the motion of the anatomy from a registered state are limited. Options currently available include a user performing tests to determine the location of the anatomy, without these tests being automated. These tests also rely on anatomical landmarks and do not provide numerical feedback. In other words, there is no live indication or monitoring of the movement of the anatomy. Therefore, if the anatomy has shifted, the user does not know the location of the anatomy, which compromises system performance.
[0055] In surgical procedures such as robotic surgical procedures or robotic assisted surgical procedures for, for example, a spinal operation, at least one vertebra is initially registered using one or more computerized tomography (CT) images registered to images from a three-dimensional (3D) imaging camera. During the surgical procedures, the patient may move. These movements require reregistering the vertebra to account for the movement. Conventionally, the vertebra may be reregistered by obtaining additional CT images and registering them to images from the 3D imaging camera.
[0056] The 3D imaging camera is used to detect markers located near the patient and/or markers located on the at least one vertebra of the patient during the initial registration. The 3D imaging camera, however, is quite inaccurate in determining the locations of the markers. For example, when taking images with the 3D imaging camera, each angle of the 3D imaging camera has a different reflection (e.g., the reflection is not uniform). This leads to distortion of the images of the markers. Moreover, when trying to determine the location of the markers using the 3D imaging camera, other errors are created when establishing the correspondence between the markers detected in the 3D camera images and the markers localized in the CT images, which involves matching the 3D coordinates of the markers obtained from both imaging modalities. The technique described above is prone to various errors.
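By way of illustration only, the correspondence step described above may be sketched as follows. Because the camera and CT marker coordinates live in different coordinate frames, raw coordinates cannot be compared directly; pairwise inter-marker distances, which are invariant under rigid motion, can be compared instead. The helper name `match_markers`, the use of Python/NumPy, and the brute-force permutation search are all assumptions for illustration and are not part of the disclosed embodiments:

```python
import itertools
import numpy as np

def match_markers(cam_pts, ct_pts):
    """Match markers detected by a 3D camera to markers localized in CT.

    Compares the pairwise inter-marker distance pattern (rigid-motion
    invariant) of the camera points against every candidate ordering of the
    CT points, returning the permutation with the smallest discrepancy.
    Brute force is acceptable here because fiducial counts are small.
    """
    cam_pts = np.asarray(cam_pts, dtype=float)
    ct_pts = np.asarray(ct_pts, dtype=float)
    n = len(cam_pts)
    # Distance matrix of the camera-detected markers.
    d_cam = np.linalg.norm(cam_pts[:, None] - cam_pts[None, :], axis=-1)
    best_perm, best_err = None, np.inf
    for perm in itertools.permutations(range(n)):
        reordered = ct_pts[list(perm)]
        d_ct = np.linalg.norm(reordered[:, None] - reordered[None, :], axis=-1)
        err = np.abs(d_cam - d_ct).sum()
        if err < best_err:
            best_perm, best_err = perm, err
    return best_perm, best_err  # cam_pts[i] corresponds to ct_pts[best_perm[i]]
```

With at least three markers whose pairwise distances are distinct, the recovered correspondence is unique regardless of how the camera frame is rotated or translated relative to the CT frame.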
[0057] Moreover, when using an X-ray for registration, the anatomical element is of concern, and the marker (i.e., the fiducial) may not show up on the X-ray, depending on the material used for the fiducial. On the other hand, when using a camera for registration, the marker is of concern and the anatomical element is usually not shown. According to embodiments of the present disclosure, by using for registration an X-ray of the anatomical element with a fiducial, attached to or in proximity to the anatomical element, that appears on the X-ray, the location of the anatomical element with respect to the fiducial can be accurately determined, since both (i.e., the anatomical element and the marker) can be seen on the X-ray. Other types of registration methods must rely on estimates to determine the location of the anatomical element with respect to the fiducial.
[0058] Thus, according to at least one embodiment of the present disclosure, the location of a marker attached to a patient anatomy is measured using an X-ray machine. The measurement is taken on the patient’s body during radiation of the patient. Measuring the markers on the patient’s body itself allows for more accurate measurements. Moreover, the reregistration process can be applied in various scenarios including, but not limited to, verification of non-movement of the anatomical element and registration of the anatomical element after movement of the anatomical element.
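By way of example only, the verification-of-non-movement scenario described above could compare a fiducial position newly measured from X-ray image data against its registered position and flag the registration when the displacement exceeds a threshold. The function name, return convention, and threshold value below are illustrative assumptions, not the disclosed implementation:

```python
import numpy as np

# Illustrative value only; no particular threshold is specified by the disclosure.
MOVEMENT_THRESHOLD_MM = 2.0

def validate_registration(registered_pos, measured_pos,
                          threshold=MOVEMENT_THRESHOLD_MM):
    """Compare a newly measured fiducial position against its registered position.

    Returns (is_valid, displacement_mm). The registration is flagged invalid
    when the fiducial has moved farther than the threshold, at which point a
    notification could be provided and reregistration performed.
    """
    displacement = float(np.linalg.norm(
        np.asarray(measured_pos, dtype=float)
        - np.asarray(registered_pos, dtype=float)))
    return displacement <= threshold, displacement
```

For example, a 0.5 mm displacement would be reported as valid under the assumed 2.0 mm threshold, while a 3 mm displacement would be reported as invalid.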
[0059] Embodiments of the present disclosure provide technical solutions to one or more of the problems of (1) accurately determining movement of an anatomical element, (2) accurately enabling registration or reregistration of an anatomical element, and (3) providing real-time positional information of one or more anatomical elements to a surgical team.
[0060] Turning first to Fig. 1, a block diagram of a system 100 according to at least one embodiment of the present disclosure is shown. The system 100 may be used to perform a registration using at least one fiducial 126 mounted on a patient and/or to carry out one or more other aspects of one or more of the methods disclosed herein. The system 100 comprises a computing device 102, one or more imaging devices 112, a robot 114, a navigation system 118, the one or more fiducials 126, a database 130, and/or a cloud or other network 134. Systems according to other embodiments of the present disclosure may comprise more or fewer components than the system 100. For example, the system 100 may not include the imaging device 112, the robot 114, the navigation system 118, one or more components of the computing device 102, the database 130, and/or the cloud 134.
[0061] The computing device 102 comprises a processor 104, a memory 106, a communication interface 108, and a user interface 110. Computing devices according to other embodiments of the present disclosure may comprise more or fewer components than the computing device 102.
[0062] The processor 104 of the computing device 102 may be any processor described herein or any similar processor. The processor 104 may be configured to execute instructions stored in the memory 106, which instructions may cause the processor 104 to carry out one or more computing steps utilizing or based on data received from the imaging device 112, the robot 114, the navigation system 118, the database 130, and/or the cloud 134.
[0063] The memory 106 may be or comprise RAM, DRAM, SDRAM, other solid-state memory, any memory described herein, or any other tangible, non-transitory memory for storing computer-readable data and/or instructions. The memory 106 may store information or data useful for completing, for example, any step of the method 800 described herein, or of any other methods. The memory 106 may store, for example, instructions and/or machine learning models that support one or more functions of the robot 114. For instance, the memory 106 may store content (e.g., instructions and/or machine learning models) that, when executed by the processor 104, enable image processing 120, segmentation 122, registration 124, and/or transformation 128.
[0064] The image processing 120 enables the processor 104 to process image data of an image (received from, for example, the imaging device 112, an imaging device of the navigation system 118, or any imaging device, etc.) for the purpose of, for example, identifying one or more anatomical elements and/or fiducials 126 depicted in the image data. The fiducial 126 may be mounted on the patient on a target anatomical element. The target anatomical element may be, for
example, one or more vertebrae, though it will be appreciated that the target anatomical element may be any soft tissue and/or hard tissue. The information may comprise, for example, identification of anatomical element(s) and/or the fiducial(s) 126, a boundary between anatomical elements, a boundary of hard tissue and/or soft tissue, etc. The image processing 120 may, for example, identify the anatomical element based on the fiducial(s) 126, and/or a boundary of the anatomical element by determining a difference in or contrast between colors or grayscales of image pixels. For example, a boundary of the anatomical element may be identified as a contrast between lighter pixels and darker pixels. The image processing 120 may also use the segmentation 122, as described below.
[0065] The segmentation 122 enables the processor 104 to process image data of an image (received from, for example, the imaging device 112, an imaging device of the navigation system 118, or any imaging device) for the purpose of, for example, identifying individual objects and/or anatomical elements in the image data. In some embodiments, the segmentation 122 may be used by the image processing 120. The segmentation 122 may enable the processor 104 to identify a boundary of a fiducial 126 or an anatomical element by using, for example, feature recognition. For example, the segmentation 122 may enable the processor 104 to identify a vertebra in the image data. In other instances, the segmentation 122 may enable the processor 104 to identify a boundary of the fiducial 126 or an anatomical element by determining a difference in or contrast between colors or grayscales of image pixels.
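The contrast-based boundary identification described above can be sketched, by way of example only, as a simple neighbor-difference test over grayscale pixels. This is an illustrative stand-in for the kind of operation the image processing 120 or segmentation 122 might perform; the function name and threshold are assumptions and real implementations would typically use more robust edge detection:

```python
import numpy as np

def boundary_mask(image, contrast_threshold=0.5):
    """Flag pixels whose grayscale contrast with an adjacent pixel exceeds
    a threshold, i.e., where lighter pixels meet darker pixels.

    Returns a boolean array of the same shape as the input image, True at
    candidate boundary pixels.
    """
    img = np.asarray(image, dtype=float)
    mask = np.zeros(img.shape, dtype=bool)
    # Compare each pixel with its right-hand neighbor...
    mask[:, :-1] |= np.abs(img[:, 1:] - img[:, :-1]) > contrast_threshold
    # ...and with the pixel below it.
    mask[:-1, :] |= np.abs(img[1:, :] - img[:-1, :]) > contrast_threshold
    return mask
```

Applied to an image with a dark region abutting a bright region, the mask is True only along the seam between the two regions.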
[0066] The fiducial(s) 126 and/or anatomical element(s) identified from the image processing 120 and/or the segmentation 122 may enable the registration 124 to identify a target anatomical element based on the fiducial(s) 126, as will be described in more detail below.
[0067] The registration 124 enables the processor 104 to process the identified fiducial(s) 126 and/or anatomical element(s) obtained from the image processing 120 and/or the segmentation 122 to register the anatomical element(s) depicted in the image data, based on the identified fiducial(s) 126, to, for example, a preliminary image of the patient. It will be appreciated that, though the image processing 120, the segmentation 122, the registration 124, and the transformation 128 are described separately, the image processing 120 and/or the segmentation 122 may be part of, or a step of, the registration 124. For example, registering the one or more anatomical elements may comprise using the image processing 120 and/or the segmentation 122 to identify one or more fiducial(s) 126 and/or one or more anatomical element(s) depicted in the image data.
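Once fiducial coordinates have been paired between the image data and a preliminary image, one conventional way to perform the point-based part of such a registration is a least-squares rigid fit (the Kabsch algorithm). The sketch below is illustrative only and is not asserted to be the disclosed registration 124:

```python
import numpy as np

def register_rigid(fixed_pts, moving_pts):
    """Estimate the rotation R and translation t that best map moving_pts
    onto fixed_pts in the least-squares sense (Kabsch algorithm).

    Both inputs are (n, 3) arrays of paired fiducial coordinates, n >= 3,
    not all collinear. Returns (R, t) such that R @ p + t ~= q for each pair.
    """
    P = np.asarray(moving_pts, dtype=float)
    Q = np.asarray(fixed_pts, dtype=float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```

With exact, noise-free correspondences from at least three non-collinear fiducials, this fit recovers the underlying rigid motion exactly; with noisy measurements it returns the least-squares optimum.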
[0068] The transformation 128 enables the processor 104 to transform one coordinate system into another coordinate system. In other words, the transformation 128 enables the processor 104 to transform the first coordinate system (e.g., the patient coordinate system) into the second coordinate system (e.g., the reference frame coordinate system) based on, for example, the registration of the first coordinate system and the third coordinate system and the registration of the second coordinate system and the third coordinate system.
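The chaining described in this paragraph can be expressed with 4x4 homogeneous matrices: given a registration of the first (e.g., patient) coordinate system to a shared third coordinate system and a registration of the second (e.g., reference frame) coordinate system to the same third system, the first-to-second transform is their composition. A minimal sketch, with all function and matrix names being illustrative assumptions:

```python
import numpy as np

def to_homogeneous(R, t):
    """Pack a 3x3 rotation R and a length-3 translation t into a 4x4 matrix."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def chain_registrations(T_first_to_third, T_second_to_third):
    """Derive the first->second transform from two registrations that share
    a third coordinate system:
        T_first_to_second = inv(T_second_to_third) @ T_first_to_third
    """
    return np.linalg.inv(T_second_to_third) @ T_first_to_third
```

A point expressed in the first coordinate system is then carried into the second coordinate system by a single matrix multiplication, without ever expressing it explicitly in the third system.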
[0069] Such content, if provided as an instruction, may, in some embodiments, be organized into one or more applications, modules, packages, layers, or engines. Alternatively or additionally, the memory 106 may store other types of content or data (e.g., machine learning models, artificial neural networks, deep neural networks, etc.) that can be processed by the processor 104 to carry out the various methods and features described herein. Thus, although various contents of memory 106 may be described as instructions, it should be appreciated that functionality described herein can be achieved through use of instructions, algorithms, and/or machine learning models. The data, algorithms, and/or instructions may cause the processor 104 to manipulate data stored in the memory 106 and/or received from or via the imaging device 112, the robot 114, the database 130, and/or the cloud 134.
[0070] The computing device 102 may also comprise a communication interface 108. The communication interface 108 may be used for receiving image data or other information from an external source (such as the imaging device 112, the robot 114, the navigation system 118, the database 130, the cloud 134, and/or any other system or component not part of the system 100), and/or for transmitting instructions, images, or other information to an external system or device (e.g., another computing device 102, the imaging device 112, the robot 114, the navigation system 118, the database 130, the cloud 134, and/or any other system or component not part of the system 100). The communication interface 108 may comprise one or more wired interfaces (e.g., a USB port, an Ethernet port, a Firewire port) and/or one or more wireless transceivers or interfaces (configured, for example, to transmit and/or receive information via one or more wireless communication protocols such as 802.11a/b/g/n, Bluetooth, NFC, ZigBee, and so forth). In some embodiments, the communication interface 108 may be useful for enabling the device 102 to communicate with one or more other processors 104 or computing devices 102, whether to reduce the time needed to accomplish a computing-intensive task or for any other reason.
[0071] The computing device 102 may also comprise one or more user interfaces 110. The user interface 110 may be or comprise a keyboard, mouse, trackball, monitor, television, screen, touchscreen, and/or any other device for receiving information from a user and/or for providing information to a user. The user interface 110 may be used, for example, to receive a user selection or other user input regarding any step of any method described herein. Notwithstanding the foregoing, any required input for any step of any method described herein may be generated automatically by the system 100 (e.g., by the processor 104 or another component of the system 100) or received by the system 100 from a source external to the system 100. In some embodiments, the user interface 110 may be useful to allow a surgeon or other user to modify instructions to be executed by the processor 104 according to one or more embodiments of the present disclosure, and/or to modify or adjust a setting of other information displayed on the user interface 110 or corresponding thereto.
[0072] Although the user interface 110 is shown as part of the computing device 102, in some embodiments, the computing device 102 may utilize a user interface 110 that is housed separately from one or more remaining components of the computing device 102. In some embodiments, the user interface 110 may be located proximate one or more other components of the computing device 102, while in other embodiments, the user interface 110 may be located remotely from one or more other components of the computing device 102.
[0073] The imaging device 112 may be operable to image the fiducial(s) 126, anatomical feature(s) (e.g., a bone, veins, tissue, etc.), and/or other aspects of patient anatomy to yield image data (e.g., image data depicting or corresponding to a bone, veins, tissue, etc.). “Image data” as used herein refers to the data generated or captured by an imaging device 112, including in a machine-readable form, a graphical/visual form, and in any other form. In various examples, the image data may comprise data corresponding to the fiducial(s) 126, an anatomical feature of a patient, or to a portion thereof. The image data may be or comprise a preoperative image, an intraoperative image, a postoperative image, or an image taken independently of any surgical procedure. In some embodiments, a first imaging device 112 may be used to obtain first image data (e.g., a first image) at a first time, and a second imaging device 112 may be used to obtain second image data (e.g., a second image) at a second time after the first time. In such embodiments, the first imaging device may use ionizing radiation (e.g., X-ray scans) and the second imaging device
may be free of ionizing radiation (e.g., ultrasound scans). In other embodiments, the imaging device 112 may obtain the first image data and the second image data.
[0074] The imaging device 112 may be capable of taking a 2D image or a 3D image to yield the image data. The imaging device 112 may be or comprise, for example, an ultrasound scanner (which may comprise, for example, a physically separate transducer and receiver, or a single ultrasound transceiver), an O-arm, a C-arm, a G-arm, or any other device utilizing X-ray-based imaging (e.g., a fluoroscope, a CT scanner, or other X-ray machine), a magnetic resonance imaging (MRI) scanner, an optical coherence tomography (OCT) scanner, an endoscope, a microscope, an optical camera, a thermographic camera (e.g., an infrared camera), a radar system (which may comprise, for example, a transmitter, a receiver, a processor, and one or more antennae), or any other imaging device 112 suitable for obtaining images of an anatomical feature of a patient. The imaging device 112 may be contained entirely within a single housing or may comprise a transmitter/emitter and a receiver/detector that are in separate housings or are otherwise physically separated.
[0075] In some embodiments, the imaging device 112 may comprise more than one imaging device 112. For example, a first imaging device may provide first image data and/or a first image, and a second imaging device may provide second image data and/or a second image. In still other embodiments, the same imaging device may be used to provide both the first image data and the second image data, and/or any other image data described herein. The imaging device 112 may be operable to generate a stream of image data. For example, the imaging device 112 may be configured to operate with an open shutter, or with a shutter that continuously alternates between open and shut so as to capture successive images. For purposes of the present disclosure, unless specified otherwise, image data may be considered to be continuous and/or provided as an image data stream if the image data represents two or more frames per second.
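The two-or-more-frames-per-second criterion for treating image data as a stream, stated in the paragraph above, can be expressed as a simple check. The sketch below is illustrative only; the function name and the timestamped-frame representation are assumptions for exposition, not part of the disclosure:

```python
def is_image_stream(frame_timestamps_s):
    """Return True if the frames average two or more per second,
    the threshold used above for continuous image data / a stream.

    frame_timestamps_s: capture times of successive frames, in seconds.
    """
    if len(frame_timestamps_s) < 2:
        return False
    span = frame_timestamps_s[-1] - frame_timestamps_s[0]
    if span <= 0:
        return False
    frames_per_second = (len(frame_timestamps_s) - 1) / span
    return frames_per_second >= 2.0
```

For example, four frames captured over one second (4 fps) qualify as a stream under this criterion, while one frame per second does not.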
[0076] The robot 114 may be any surgical robot or surgical robotic system. The robot 114 may be or comprise, for example, the Mazor X™ Stealth Edition robotic guidance system. The robot 114 may be configured to position the imaging device 112 at one or more precise position(s) and orientation(s), and/or to return the imaging device 112 to the same position(s) and orientation(s) at a later point in time. The robot 114 may additionally or alternatively be configured to manipulate a surgical tool (whether based on guidance from the navigation system 118 or not) to accomplish or to assist with a surgical task. The robot 114 may also be configured to position and/or insert one
or more fiducial(s) 126 into the patient and near a target anatomical element. In some embodiments, the robot 114 may be configured to hold and/or manipulate an anatomical element during or in connection with a surgical procedure. The robot 114 may comprise one or more robotic arms 116. In some embodiments, the robotic arm 116 may comprise a first robotic arm and a second robotic arm, though the robot 114 may comprise more than two robotic arms. In some embodiments, one or more of the robotic arms 116 may be used to hold and/or maneuver the imaging device 112. In embodiments where the imaging device 112 comprises two or more physically separate components (e.g., a transmitter and receiver), one robotic arm 116 may hold one such component, and another robotic arm 116 may hold another such component. Each robotic arm 116 may be positionable independently of the other robotic arm. The robotic arms 116 may be controlled in a single, shared coordinate space, or in separate coordinate spaces.
[0077] The robot 114, together with the robotic arm 116, may have, for example, one, two, three, four, five, six, seven, or more degrees of freedom. Further, the robotic arm 116 may be positioned or positionable in any pose, plane, and/or focal point. The pose includes a position and an orientation. As a result, an imaging device 112, surgical tool, or other object held by the robot 114 (or, more specifically, by the robotic arm 116) may be precisely positionable in one or more needed and specific positions and orientations.
[0078] The robotic arm(s) 116 may comprise one or more sensors that enable the processor 104 (or a processor of the robot 114) to determine a precise pose in space of the robotic arm (as well as any object or element held by or secured to the robotic arm).
[0079] In some embodiments, reference markers (e.g., navigation markers) may be placed on the robot 114 (including, e.g., on the robotic arm 116), the imaging device 112, or any other object in the surgical space. The reference markers may be tracked by the navigation system 118, and the results of the tracking may be used by the robot 114 and/or by an operator of the system 100 or any component thereof. In some embodiments, the navigation system 118 can be used to track other components of the system (e.g., imaging device 112) and the system can operate without the use of the robot 114 (e.g., with the surgeon manually manipulating the imaging device 112 and/or one or more surgical tools, based on information and/or instructions generated by the navigation system 118, for example).
[0080] In some aspects, the navigation system 118 may include one or more of an optical tracking system, an acoustic tracking system, an electromagnetic tracking system, a radar tracking system,
an inertial measurement unit (IMU) based tracking system, and a computer vision based tracking system. The navigation system 118 may include a corresponding transmission device 136 capable of transmitting signals associated with the tracking type. In some aspects, the navigation system 118 may be capable of computer vision based tracking of objects present in images captured by the imaging device(s) 112.
[0081] The navigation system 118 may provide navigation for a surgeon and/or a surgical robot during an operation. The navigation system 118 may be any now-known or future-developed navigation system, including, for example, the Medtronic StealthStation™ S8 surgical navigation system or any successor thereof. The navigation system 118 may include one or more cameras or other sensor(s) for tracking one or more reference markers, navigated trackers (e.g., tracking devices 140, etc.) or other objects within the operating room or other room in which some or all of the system 100 is located. The one or more cameras may be optical cameras, infrared cameras, or other cameras. In some implementations, the navigation system 118 may include one or more tracking devices 140 (e.g., electromagnetic sensors, acoustic sensors, etc.).
[0083] In various embodiments, the navigation system 118 may be used to track a position and orientation (e.g., a pose) of the imaging device 112, the robot 114 and/or robotic arm 116, and/or one or more surgical tools (or, more particularly, to track a pose of a navigated tracker attached, directly or indirectly, in fixed relation to the one or more of the foregoing). The navigation system 118 may include a display for displaying one or more images from an external source (e.g., the computing device 102, imaging device 112, or other source) or for displaying an image and/or video stream from the one or more cameras or other sensors of the navigation system 118. In some embodiments, the system 100 can operate without the use of the navigation system 118. The navigation system 118 may be configured to provide guidance to a surgeon or other user of the system 100 or a component thereof, to the robot 114, or to any other element of the system 100
regarding, for example, a pose of one or more anatomical elements, whether or not a tool is in the proper trajectory, and/or how to move a tool into the proper trajectory to carry out a surgical task according to a preoperative or other surgical plan.
[0084] The fiducial(s) 126 may be mounted on a patient on a target anatomical element to enable or aid in registration of the target anatomical element. The fiducial(s) 126 are visible in image data from any imaging device 112 such as, for example, ultrasound imaging devices, X-ray imaging devices, etc. The fiducial(s) 126 may be, for example, a liquid or a gel, though it will be appreciated that in other embodiments the fiducial(s) 126 may be a solid material such as metal or glass. According to one embodiment of the present disclosure, the fiducial(s) 126 may be radiopaque. The fiducial(s) 126 can be used in a registration process using, for example, the registration 124. [0085] The fiducial(s) 126 may be used to identify the target anatomical element in instances where identification of the target anatomical element may be difficult in the image data. For example, a vertebra may be difficult to identify in ultrasound or X-ray imaging, whereas the fiducial(s) 126 may be easily identified in ultrasound or X-ray imaging. Further, at least three fiducial(s) 126 may be implanted near the target anatomical element such that the at least three fiducial(s) 126 form a unique pattern and can aid in identification of the target anatomical element and identification of an orientation and/or position of the target anatomical element. Moreover, at least three fiducial(s) 126 may be required to properly align the anatomical elements (such as the spine) of the patient.
[0086] The fiducial(s) 126 may also be used to detect movement of the target anatomical element using X-ray imaging. For example, first image data obtained from, for example, an X-ray device, depicting the fiducial(s) and at least a portion of the target anatomical element may be compared to second image data depicting the fiducial(s) and at least a portion of the target anatomical element and taken after the first image data. A difference in the first image data and the second image data (and in particular, a difference in a pose of the fiducial(s) 126 in the first image data compared to the second image data) may suggest that the target anatomical element has moved. In such instances, the registration may be updated to account for the movement of the target anatomical element. The process may be repeated as needed throughout a surgical procedure. For example, third image data may be obtained and compared to the second image data to determine if movement of the anatomical element has occurred.
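The comparison described above can be illustrated with a short sketch, assuming the fiducial positions have already been extracted from the first and second image data into a common coordinate frame. The function name and the millimeter tolerance are illustrative assumptions, not part of the disclosure:

```python
import numpy as np

def detect_movement(fiducials_first, fiducials_second, tolerance_mm=1.0):
    """Compare fiducial positions between first and second image data.

    Each argument is an (N, 3) list/array of the same N fiducials, in the
    same order and coordinate frame. A displacement of any fiducial above
    the tolerance suggests the target anatomical element has moved and the
    registration should be updated.

    Returns (moved, per_fiducial_displacements).
    """
    a = np.asarray(fiducials_first, dtype=float)
    b = np.asarray(fiducials_second, dtype=float)
    displacements = np.linalg.norm(b - a, axis=1)
    return bool(np.any(displacements > tolerance_mm)), displacements
```

The same comparison can then be repeated with third image data against the second, and so on throughout the procedure.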
[0087] The database 130 may store information that correlates one coordinate system to another (e.g., one or more robotic coordinate systems to a patient coordinate system and/or to a navigation coordinate system). The database 130 may additionally or alternatively store, for example, one or more surgical plans (including, for example, pose information about a target and/or image information about a patient's anatomy at and/or proximate the surgical site, for use by the robot 114, the navigation system 118, and/or a user of the computing device 102 or of the system 100); one or more images useful in connection with a surgery to be completed by or with the assistance of one or more other components of the system 100; and/or any other useful information. The database 130 may be configured to provide any such information to the computing device 102 or to any other device of the system 100 or external to the system 100, whether directly or via the cloud 134. In some embodiments, the database 130 may be or comprise part of a hospital image storage system, such as a picture archiving and communication system (PACS), a health information system (HIS), and/or another system for collecting, storing, managing, and/or transmitting electronic medical records including image data.
[0088] The cloud 134 may be or represent the Internet or any other wide area network. The computing device 102 may be connected to the cloud 134 via the communication interface 108, using a wired connection, a wireless connection, or both. In some embodiments, the computing device 102 may communicate with the database 130 and/or an external device (e.g., a computing device) via the cloud 134.
[0089] The system 100 or similar systems may be used, for example, to carry out one or more aspects of the method 800 described herein. The system 100 or similar systems may also be used for other purposes.
[0090] Fig. 2 is a schematic illustration of a set of images of fiducials 200 according to at least one embodiment of the present disclosure. The set of images of the fiducials 200 is in the form of spheres: a first sphere 204, a second sphere 208 and a third sphere 212. As illustrated in Fig. 2, the image of the third sphere 212 is larger than the image of the first sphere 204 and the image of the second sphere 208. Moreover, the image of the second sphere 208 has more dark edges.
[0091] Because the spheres are manufactured by different suppliers, the spheres are not uniform. Therefore, when the spheres are imaged, there are differences in size, dark spots, and black edges, for example, affecting the images of the spheres, especially when the spheres are captured using a camera, for example. Imaging devices that obtain image data at a wavelength of 40 nanometers or greater experience this distortion in the images. For example, a black edge around the image of the sphere indicates regions where the relative angle of a camera is greater than 50 degrees when the image was taken. These differences can be attributed to the microscopic nature of the materials used to manufacture the spheres. Conversely, when the spheres are imaged using an X-ray machine, this problem does not exist since the X-ray machine obtains image data at a wavelength of 10 nanometers or less. Therefore, an image of the spheres taken by an X-ray machine does not experience this distortion. Thus, errors in obtaining image data using an X-ray machine can be eliminated.
[0092] Fig. 3 is an example of items under X-ray 300 according to at least one embodiment of the present disclosure. As illustrated in Fig. 3, the X-ray 300 is taken using a C-arm X-ray based imaging device as discussed in greater detail below. As illustrated in Fig. 3, a plastic radiolucent material 304 is embedded with radiopaque material and is used for registration. Radiopaque spheres 308 (e.g., tungsten carbide spheres) are embedded within the plastic material. Moreover, images 312 of fiducials 126 are clearly illustrated on the X-ray without any distortion. Although the fiducials 126 are not uniform, the inaccuracies are removed when the fiducials 126 are imaged using X-ray imaging as compared to camera imaging.
[0093] Fig. 4 is a schematic illustration of one or more surgical landmarks 400 according to at least one embodiment of the present disclosure. The surgical landmarks 400 may include fiducial 126 disposed on an anatomical element 404.
[0094] Turning to Fig. 4, a schematic illustration of a view of an example target anatomical element 404 with one or more fiducial(s) 126 disposed on an anatomical element 404 is shown. A reflective marker 408 (as discussed in greater detail below) may also be disposed on the anatomical element 404. Though the foregoing elements are illustrated together, it will be appreciated that the surgical landmark 400 may include the fiducial(s) 126, the reflective marker 408, and/or any other landmark 400 provided on the anatomical element 404, in any combination. The fiducial(s) 126 may be the same as or similar to the fiducial(s) 126 described above. In the illustrated embodiment, the anatomical element 404 includes, for example, a vertebra. It will be appreciated that in other embodiments of the present disclosure, the anatomical element 404 may include any anatomical element of a patient such as, for example, a bone, an organ, soft tissue, hard tissue, or the like.
[0095] As previously described, the fiducial(s) 126 may be implanted in a patient near the target anatomical element 404 or on the anatomical element 404 to enable or aid in registration of the
target anatomical element 404. The fiducial(s) 126 are visible in image data from any imaging device 112 such as, for example, ultrasound imaging devices, X-ray imaging devices, etc. The fiducial(s) 126 may be used to identify the target anatomical element 404 in instances where identification of the target anatomical element 404 may be difficult in the imaging.
[0096] As also previously described, the fiducial(s) 126 may also be used to detect movement of the target anatomical element 404. For example, first image data depicting the fiducial(s) 126 and at least a portion of the target anatomical element 404 may be compared to second image data depicting the fiducial(s) 126 and at least a portion of the target anatomical element 404 and taken after the first image data. A difference in the first image data and the second image data (and in particular, a difference in a pose of the fiducial(s) 126 in the first image data compared to the second image data) may suggest that the target anatomical element 404 has moved. In such instances, the registration may be updated to account for the movement of the target anatomical element 404.
[0097] According to an embodiment of the present disclosure and as discussed in greater detail below, three fiducials 126 may be mounted on the target anatomical element 404 of the patient. It will be appreciated that the fiducials 126 may include one fiducial, two fiducials, or more than two fiducials 126.
[0098] Fig. 5 is an example illustration of a patient anatomy 500 including at least one fiducial according to at least one embodiment of the present disclosure. As illustrated in Fig. 5, at least one fiducial 126 is provided on a target anatomical element 404 (e.g., a vertebra) of the spine 504 of a patient. The at least one fiducial 126 is supported by a bone clamp 508 which is mounted to the target anatomical element 404.
[0099] Fig. 6 is an example process 600 according to at least one embodiment of the present disclosure. The process 600 supports methods and systems for processing image data and registering one or more anatomical elements. First image data 606 may be used by a processor such as the processor 104 as input for the image processing 120. The image processing 120 may output an identified target anatomical element 608 and/or an identified one or more fiducial(s) 126. In some embodiments of the present disclosure, the first image data 606 may be received from a first imaging device such as the imaging device 112. In particular, the first imaging device is an X-ray machine. It will be appreciated that the first image data 606 may depict the one or more
fiducial(s) 126 and the image processing 120 may process the first image data 606 to output pose information of the fiducial(s) 126 (which may then be used, for example, to determine the pose information of the fiducial(s) 126 and/or the associated target anatomical element 404). The pose information may correspond to computer-encoded data that describes a pose of the fiducial(s) 126. For example, the pose information, in some embodiments of the present disclosure, may comprise coordinates and/or an orientation of the fiducial(s) 126. In other examples, the pose information may comprise, for example, a matrix that describes the pose of the fiducial(s) 126. It will be appreciated that the pose information may be encoded in any number of ways and may include, for example, a description of a location of the fiducial(s) 126 in a reference space, a vector (e.g., a three-element vector), or a matrix.
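As one illustrative encoding of such pose information (an assumption for exposition, not the disclosed implementation), a three-element position vector and a 3x3 rotation matrix can be packed into a single 4x4 homogeneous pose matrix, which combines the coordinates and orientation described above:

```python
import numpy as np

def pose_matrix(position_xyz, rotation_3x3):
    """Pack a position vector and a rotation matrix into a 4x4
    homogeneous pose matrix: applying it to a homogeneous point
    rotates the point and then translates it by the position."""
    pose = np.eye(4)
    pose[:3, :3] = np.asarray(rotation_3x3, dtype=float)
    pose[:3, 3] = np.asarray(position_xyz, dtype=float)
    return pose
```

Applying such a matrix to the origin of the fiducial's local frame, for instance, yields the fiducial's location in the reference space.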
[0100] As previously described, the image processing 120 may use the segmentation 122 to identify the fiducial(s) 126 and/or anatomical elements 404. The segmentation 122 may be configured to segment the fiducial(s) 126 and/or the anatomical elements 404 from the first image data 606 to yield one or more identified anatomical elements 404 and/or identified fiducial(s) 126. Segmenting the fiducial(s) 126 and/or anatomical elements 404 from the first image data 606 when the first image data 606 includes a three-dimensional representation of the patient anatomy may include identifying a boundary of one or more fiducial(s) 126 and/or anatomical elements 404 and forming a separate three-dimensional representation of the one or more fiducial(s) 126 and/or anatomical elements 404.
[0101] In some embodiments of the present disclosure, identifying the boundary may include identifying adjacent sets of pixels having a large enough contrast to represent a border of an anatomical element 404 depicted therein. In other embodiments of the present disclosure, feature recognition may be used to identify a border of an anatomical element 404 and/or fiducial(s) 126. For example, a contour of a vertebra may be identified using feature recognition.
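A minimal sketch of the contrast-based boundary criterion described above, assuming the image data is available as a 2-D intensity array; the function name and the neighbour-difference threshold convention are illustrative assumptions:

```python
import numpy as np

def boundary_mask(image, contrast_threshold):
    """Mark pixels whose intensity contrast with a horizontal or
    vertical neighbour exceeds the threshold -- a simple version of
    'adjacent sets of pixels having a large enough contrast' to
    represent a border of a depicted element."""
    img = np.asarray(image, dtype=float)
    mask = np.zeros(img.shape, dtype=bool)
    dy = np.abs(np.diff(img, axis=0)) > contrast_threshold  # vertical pairs
    dx = np.abs(np.diff(img, axis=1)) > contrast_threshold  # horizontal pairs
    mask[:-1, :] |= dy  # both pixels of a high-contrast pair
    mask[1:, :] |= dy   # are flagged as boundary pixels
    mask[:, :-1] |= dx
    mask[:, 1:] |= dx
    return mask
```

On a 2-D slice with a bright region against a dark background, the flagged pixels trace the contour between the two regions.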
[0102] The image processing 120 may be trained using historical image data. In other embodiments, the image processing 120 may be trained using the first image data 606. In such embodiments, the image processing 120 may be trained prior to inputting the first image data 606 into the image processing 120 or may be trained in parallel with inputting the first image data 606 into the image processing 120.
[0103] As previously described, the image processing 120 may output an identified anatomical element 404 and/or identified fiducial(s) 126. The identified anatomical element 404 and/or the
fiducial(s) 126 may be used by the processor 104 as input for a registration 124. The registration 124 may output one or more registered anatomical elements 612. The registration 124 may register the anatomical elements based on the fiducial(s) 126 identified in the first image data 606. More specifically, in some embodiments of the present disclosure, the registration 124 may use the identified fiducial(s) 126 and may use the pose information of the identified fiducial(s) 126 to register the anatomical element 608. The registration 124 may be configured to register the one or more identified anatomical elements 608 to, for example, a preoperative image or any image.
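One common way to realize such fiducial-based registration, offered here as an illustrative sketch rather than the disclosed implementation, is a least-squares rigid fit mapping the identified fiducial positions onto their counterparts in the reference image (e.g., a preoperative image), via the well-known Kabsch method:

```python
import numpy as np

def rigid_register(fiducials_image, fiducials_reference):
    """Least-squares rigid transform (rotation R, translation t) such
    that reference_point ~= R @ image_point + t, computed with the
    Kabsch (SVD) method. At least three non-collinear corresponding
    fiducials are needed for a unique 3-D fit."""
    p = np.asarray(fiducials_image, dtype=float)
    q = np.asarray(fiducials_reference, dtype=float)
    pc, qc = p.mean(axis=0), q.mean(axis=0)
    h = (p - pc).T @ (q - qc)               # 3x3 cross-covariance
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))  # guard against reflection
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = qc - r @ pc
    return r, t
```

The recovered transform can then be applied to the whole identified anatomical element, not just the fiducials, to express it in the reference image's coordinate system.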
[0104] The registration 124 may be trained using historical or simulated image data depicting one or more identified anatomical elements 608 and one or more fiducial(s) 126, historical identified anatomical elements, and/or historical identified fiducial(s). In other embodiments of the present disclosure, the registration 124 may be trained using the identified anatomical elements 608 and the fiducial(s) 126. In such embodiments of the present disclosure, the registration 124 may be trained prior to inputting the identified anatomical elements 608 and the fiducial(s) 126 into the registration 124 or may be trained in parallel with inputting the identified anatomical elements 608 and the fiducial(s) 126 into the registration 124.
[0105] Fig. 7A is an example implementation of a navigation tool 700 according to at least one embodiment of the present disclosure. The navigation tool 700 may also be referred to as a sphere verification tracker and generally includes a top portion 704 and a bottom portion 712. The top portion 704 includes markers 708. The bottom portion 712 includes a divot 714 having a concave portion at its end. The divot 714 is designed to receive a ball of a specific diameter (e.g., the fiducial(s) 126) and to point the marker 708 to its tip. In other words, the trackers represent the fiducials that can be seen with the navigation camera. The tip of the tracker can be placed in a divot or on the tip of a shear to measure its position in space. The navigation tool 700 is used to verify the position of the fiducial(s) 126 with respect to parts of the spine anatomy to perform active verification.
[0106] Fig. 7B is an example implementation of a navigation system 750 according to at least one embodiment of the present disclosure. The navigation system 750 generally includes the navigation tool 700, a spinal anatomy 504 including one or more anatomical elements 404 and one or more fiducial(s) 126 provided on each of the one or more anatomical elements 404. According to an embodiment of the present disclosure, at least three fiducial(s) 126 may be required for accurate verification. Once movement has been detected with the spinal anatomy 504, realignment
of the spinal anatomy 504 can be performed using the same at least three fiducial(s) 126. According to embodiments of the present disclosure, the navigation tool 700 is used along with the at least three fiducial(s) 126 to realign the spinal anatomy 504 in all six degrees of freedom. According to an embodiment of the present disclosure, since a location of the at least three fiducial(s) 126 is known at a beginning or start time, if the location of the at least three fiducial(s) 126 moves at a subsequent point in time, then the location of the spinal anatomy 504 would be known based on registration and navigation.
[0107] According to another embodiment of the present disclosure, three or more fiducial(s) 126 and at least one reflective marker 408 (not shown) may be required for accurate verification. In order to keep the surgical area clear, according to an embodiment of the present disclosure, an interface between the bone mount 508 (not shown) and a larger tracker are provided to assist with realigning the spinal anatomy 504.
[0108] The fiducial position is taken from the registration information (e.g., registered using the X-ray image data). The navigation tool 700 is used for verification that there has been no shift in the spinal anatomy 504. According to an embodiment of the present disclosure, instead of the navigation tool 700 discussed above, a camera or an electromagnetic tracking system may be used to verify alignment or assist in realignment.
[0109] Fig. 7C is an example implementation of a navigation system 780 according to at least one embodiment of the present disclosure. As illustrated, the navigation system 780 generally includes fiducial(s) 126 provided on anatomical elements 404 of a spinal anatomy 504, a C-arm X-ray based imaging device 784 and a patient tracker 788. The patient tracker 788 would be fixed to a patient. The patient tracker 788 as well as the C-arm X-ray based imaging device 784 include tracking markers 790. The tracking markers 790 may, for example, be reflective gray markers and may be used with a camera of the navigation system 118.
[0110] The location of the patient tracker 788 with respect to a patient is determined after registration. The location of the fiducial(s) 126 with respect to the patient is measured using X-rays, as is shown in the X-ray image illustrated in Fig. 3. The initial registration can be determined in a couple of different ways. A first option is a 3D registration of the anatomy/fiducial versus the patient tracker. A second option is to take two or more 2D registrations using an external option from one of the following: (1) controlling the location where the image is taken from; and (2) sensing
where the image is taken (e.g., using the tracking markers 790 on the C-arm X-ray imaging device 784 illustrated in Fig. 7C).
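The second option, combining two or more 2-D views taken from known poses, can be illustrated by triangulating a fiducial from two rays, each running from the known X-ray source position through the fiducial's detected image point. This is an illustrative sketch under assumed inputs (ray origins and directions recovered from the tracked C-arm poses), not the disclosed implementation:

```python
import numpy as np

def triangulate_rays(o1, d1, o2, d2):
    """Estimate a fiducial's 3-D position from two 2-D views, each
    reduced to a ray (origin o, direction d) from the X-ray source
    through the detected fiducial image. Returns the midpoint of the
    rays' closest approach; the rays must not be parallel."""
    d1 = np.asarray(d1, dtype=float) / np.linalg.norm(d1)
    d2 = np.asarray(d2, dtype=float) / np.linalg.norm(d2)
    o1, o2 = np.asarray(o1, dtype=float), np.asarray(o2, dtype=float)
    # Solve for ray parameters s, t minimising |(o1 + s*d1) - (o2 + t*d2)|:
    # the difference vector must be perpendicular to both directions.
    a = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    b = np.array([(o2 - o1) @ d1, (o2 - o1) @ d2])
    s, t = np.linalg.solve(a, b)
    return 0.5 * ((o1 + s * d1) + (o2 + t * d2))
```

With noisy detections the two rays do not intersect exactly, and the closest-approach midpoint is a natural estimate; more views can be folded in with a least-squares formulation.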
[0111] Fig. 8 depicts a method 800 that may be used, for example, for a registration process using one or more fiducials such as the one or more fiducial(s) 126 and imaging device(s) 112 such as the X-ray device to identify a corresponding target anatomical element such as the target anatomical element 404. In some examples, method 800 may implement aspects of a computing device 102, an imaging device 112, a robot 114, and a navigation system 118 described with reference to Figs. 1-7C.
[0112] In the following description of method 800, the operations may be performed in a different order than the order shown, or the operations may be performed in different orders or at different times. Certain operations may also be left out of method 800, or other operations may be added to the method 800.
[0113] It is to be understood that any of the operations of method 800 may be performed by any device (e.g., a computing device 102, an imaging device 112, a robot 114, navigation system 118, etc.) of the system 100 described herein. Generally, method 800 starts with a START operation at step 804 and ends with an END operation at step 836. Method 800 can be executed as a set of computer-executable instructions executed by a computer system (e.g., computing device 102, etc.) and encoded or stored on a computer readable medium. Hereinafter, method 800 shall be explained with reference to the systems, components, modules, applications, software, data structures, user interfaces, etc. described in conjunction with Figs. 1-7C.
[0114] The method 800 (and/or one or more steps thereof) may be carried out or otherwise performed, for example, by at least one processor. The at least one processor may be the same as or similar to the processor(s) 104 of the computing device 102 described above. The at least one processor may be part of a robot (such as a robot 114) or part of a navigation system (such as a navigation system 118). A processor other than any processor described herein may also be used to execute the method 800. The at least one processor may perform the method 800 by executing elements stored in a memory such as the memory 106. The elements stored in memory and executed by the processor may cause the processor to execute one or more steps of a function as shown in method 800. One or more portions of a method 800 may be performed by the processor executing any of the contents of memory, such as an image processing 120, a segmentation 122, and/or a registration 124.
[0115] Method 800 begins with the START operation at step 804 and proceeds to step 808, where the system 100 receives first image data from a first imaging device including at least one fiducial and at least a portion of an anatomical element of a patient. The first image data is received or obtained from a CT scanner or any other X-ray-based imaging device while the patient is being X-rayed. The first image data may be a two-dimensional image or a three-dimensional image (e.g., a three-dimensional representation) or a set of two-dimensional and/or three-dimensional images. The first image data may depict one or more fiducials such as the one or more fiducials 126 and at least a portion of a target anatomical element such as the target anatomical element 404. In some embodiments of the present disclosure, the first image data is captured preoperatively (e.g., before surgery) and may be stored in a system (e.g., a system 100) and/or one or more components thereof (e.g., a database 130). The stored image may then be received (e.g., by a processor 104), as described above, preoperatively (e.g., before the surgery) and/or intraoperatively (e.g., during surgery). In other embodiments, the first image data may be obtained during or prior to a surgical procedure. For example, the first image data may be used to establish an initial position of the fiducial(s) and the target anatomical element.
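Because radiopaque fiducials are far denser than tissue, they appear as high-intensity regions in CT-like first image data. A minimal sketch of one way to localize such a region (the threshold value and the single-fiducial assumption are illustrative, not from the disclosure):

```python
# Illustrative sketch: localize a radiopaque fiducial in a CT-like volume by
# intensity thresholding and computing the centroid of the bright voxels.
import numpy as np

def fiducial_centroid(volume, threshold=2000):
    """Centroid (z, y, x) of voxels above `threshold` (e.g., metal-level HU)."""
    coords = np.argwhere(volume > threshold)
    if coords.size == 0:
        raise ValueError("no voxels above threshold; fiducial not visible")
    return coords.mean(axis=0)
```

A production system would additionally separate multiple fiducials (e.g., by connected-component labeling) and reject spurious bright voxels, but the centroid step is the core of turning "visible in the first image data" into a coordinate.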
[0116] In some embodiments of the present disclosure, the fiducials may aid in registration of the anatomical elements depicted in the X-ray imaging, as the fiducials may be clearly visible in the X-ray imaging, whereas the anatomical elements may be difficult to identify or may appear distorted in camera images.
[0117] After receiving first image data from a first imaging device including at least one fiducial and at least a portion of an anatomical element of a patient at step 808, method 800 proceeds to step 812, where the system 100 determines a location of the at least one fiducial with respect to the anatomical element of the patient based on the first image. As previously described, at least one fiducial 126 is mounted on the patient’s targeted anatomical element 404 to enable or aid in registration of the target anatomical element. The at least one fiducial 126 may be used to identify the target anatomical element 404 in instances where identification of the target anatomical element 404 may be difficult in the imaging. The at least one fiducial 126 may also be used to detect movement of the target anatomical element 404. Pose information of the at least one fiducial 126 may be obtained from processing the first image data depicting the at least one fiducial 126 and at least a portion of the target anatomical element 404 by a processor such as the processor 104 (or a processor of the navigation system) using image processing such as the image processing 120. In
some embodiments of the present disclosure, pose information of the target anatomical element 404 may also be determined using, for example, the image processing.
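Determining the fiducial's location "with respect to" the anatomical element amounts to mapping its world-frame position into the element's local frame. A sketch under the assumption (the editor's, not the disclosure's) that the element's pose is available as a rotation matrix R and translation vector t:

```python
# Illustrative sketch: express a world-frame fiducial position in the
# anatomical element's local coordinate frame, given the element's pose.
import numpy as np

def fiducial_in_element_frame(p_fiducial, R_element, t_element):
    """p_local = R^T @ (p_world - t) for a rigid pose (R, t) of the element."""
    p = np.asarray(p_fiducial, dtype=float)
    t = np.asarray(t_element, dtype=float)
    return R_element.T @ (p - t)
```

Because the transform is rigid, this relative location is what stays fixed while the patient (and hence the element's world pose) may move.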
[0118] After determining a location of the at least one fiducial with respect to the anatomical element of the patient based on the first image at step 812, method 800 proceeds to step 816, where errors in the obtained first image data and the determined location of the at least one fiducial with respect to the anatomical element of the patient are eliminated by using the first imaging device. As stated previously, an image of the at least one fiducial 126 (e.g., the spheres illustrated in Fig. 3) taken with an X-ray device does not experience distortion in the way that images taken with a camera do. Thus, imaging errors in obtaining image data using X-rays can be eliminated. Moreover, sensing errors created by locating the position of the at least one fiducial 126 (e.g., the spheres illustrated in Fig. 3) using camera images are also eliminated. These compounded errors are avoided when using X-rays to determine the location of the at least one fiducial 126 with respect to the anatomical element 404 of the patient.
[0119] After eliminating errors in the obtained first image data and the determined location of the at least one fiducial with respect to the anatomical element of the patient by using the first imaging device at step 816, method 800 proceeds to step 820 where the system 100 registers the determined location of the at least one fiducial with respect to the anatomical element of the patient. The registration may be the same as or similar to the registration 124. As previously described, the first image data may depict the at least one fiducial 126 and the target anatomical element 404. The at least one fiducial 126 and/or the target anatomical element 404 as identified in the first image data enables the processor 104 to register the target anatomical element 404 depicted in the first image data based on the identified at least one fiducial 126 using the registration. More specifically, the registration may transform, map, or create a correlation between the first image data and/or components thereof and an initial or preliminary image data, which may then be used by a system (e.g., a system 100) and/or one or more components thereof (e.g., a navigation system 118) to translate one or more coordinates in the patient coordinate space to one or more coordinates in a coordinate space of a robot (e.g., a robot 114) and/or vice versa.
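The registration transform that maps one coordinate space onto another can be computed as a rigid best fit between corresponding fiducial positions observed in the two spaces. The Kabsch/SVD algorithm shown here is one standard technique for this, offered as an illustration rather than as the specific registration 124 of the disclosure:

```python
# Illustrative sketch: rigid registration between corresponding point sets
# (e.g., fiducial positions in image space vs. patient/robot space) via the
# Kabsch algorithm: find R, t such that dst ~= R @ src + t.
import numpy as np

def rigid_register(src, dst):
    """Best-fit rotation R (3x3) and translation t (3,) mapping src onto dst."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    src_c = src - src.mean(axis=0)          # center both point sets
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                     # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

At least three non-collinear fiducials are needed for the rotation to be uniquely determined, which is consistent with Example 9's "at least three fiducials."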
[0120] After registering the determined location of the at least one fiducial with respect to the anatomical element of the patient, wherein the first imaging device obtains the first image data at a wavelength of 10 nanometers or less at step 820, method 800 proceeds to step 824, where the system 100 receives second image data from a second imaging device. As discussed above, the
second imaging device may be a camera or an electromagnetic tracking system. The second image data may depict the fiducial(s) and at least a portion of the target anatomical element. In some embodiments of the present disclosure, the second image data is obtained at a time after the first image data. For example, the first image data may be obtained near the beginning of a surgical operation and the second image data may be obtained during the surgical operation.
[0121] After receiving second image data from a second imaging device at step 824, method 800 proceeds to step 828, where the system 100 determines that movement of the location of the at least one fiducial with respect to the anatomical element has exceeded a predetermined threshold. Determining whether movement of the location of the at least one fiducial with respect to the anatomical element has exceeded a predetermined threshold includes determining a distance difference between the position of the at least one fiducial and/or the target anatomical element 404 in the first image data and its position in the second image data. The distance difference can then be compared to the threshold distance. The distance difference can be determined automatically by, for example, the processor 104. In some embodiments of the present disclosure, the distance difference may be determined by a user such as, for example, a surgeon or other medical provider. Similarly, the threshold distance may be determined automatically by the processor 104 or may be received as user input via, for example, the user interface.
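The check at step 828 reduces to comparing a Euclidean distance against a limit. A trivial sketch (the 2.0 mm default is an illustrative assumption; the disclosure leaves the threshold to the processor or to user input):

```python
# Illustrative sketch: flag registration as suspect when the fiducial has
# moved farther than a predetermined threshold between two image acquisitions.
import math

def movement_exceeds_threshold(p_first, p_second, threshold_mm=2.0):
    """True if the fiducial moved farther than threshold_mm between images."""
    return math.dist(p_first, p_second) > threshold_mm
```

When this returns True, the system would proceed to step 832 and notify the user that the registration is invalid.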
[0122] After determining that movement of the location of the at least one fiducial with respect to the anatomical element has exceeded a predetermined threshold at step 828, method 800 proceeds to step 832, where the system 100 provides notification that the registered determined location of the at least one fiducial with respect to the anatomical element of the patient is invalid.
[0123] After providing notification that the registered determined location of the at least one fiducial with respect to the anatomical element of the patient is invalid at step 832, method 800 ends at the END operation at step 836.
[0124] The following examples provide various embodiments disclosed herein.
[0125] Example 1. A system comprising: a first imaging device operable to obtain first image data; at least one fiducial that is visible in the first image data, wherein the at least one fiducial is mounted on an anatomical element of a patient; a processor; and a memory storing data for processing by the processor, the data, when processed, causes the processor to: receive the first image data depicting the at least one fiducial and at least a portion of the anatomical element of the patient; determine a location of the at least one fiducial with respect to the anatomical element of
the patient based on the first image data; eliminate errors in the obtained first image data and the determined location of the at least one fiducial with respect to the anatomical element of the patient by using the first imaging device; and register the determined location of the at least one fiducial with respect to the anatomical element of the patient, wherein the first imaging device obtains the first image data at a wavelength of 10 nanometers or less.
[0126] Example 2. The system of Example 1, wherein the memory stores additional data for processing by the processor, that when processed, cause the processor to: receive second image data from a second imaging device; and determine movement of the location of the at least one fiducial with respect to the anatomical element of the patient based on comparing the first image data with the second image data.
[0127] Example 3. The system of Example 1 or 2, wherein the memory stores additional data for processing by the processor, that when processed, cause the processor to: determine movement of the location of the at least one fiducial with respect to the anatomical element of the patient has exceeded a predetermined threshold; and provide notification that the registered determined location of the at least one fiducial with respect to the anatomical element of the patient is invalid.

[0128] Example 4. The system of Example 2, wherein the memory stores additional data for processing by the processor, that when processed, cause the processor to: compare the first image data with the second image data; determine movement of the location of the at least one fiducial with respect to the anatomical element of the patient based on the comparison of the first image data and the second image data; and update the registered determined location of the at least one fiducial with respect to the anatomical element of the patient based on the comparison of the first image data with the second image data.
[0129] Example 5. The system of any one of Examples 1-4, wherein the first imaging device is an X-ray device and the X-ray device obtains X-ray image data as the first image data.
[0130] Example 6. The system of any one of Examples 1-5, wherein the first imaging device is a computerized tomography (CT) machine and the CT machine obtains CT image data as the first image data.
[0131] Example 7. The system of Example 2, wherein the second imaging device obtains the second image data at a wavelength of 400 nanometers or greater.
[0132] Example 8. The system of Example 7, wherein the second imaging device is a three-dimensional (3D) imaging camera and the 3D imaging camera obtains 3D image data as the second image data.
[0133] Example 9. The system of any one of Examples 1-8, wherein the at least one fiducial comprises at least three fiducials.
[0134] Example 10. The system of any one of Examples 1-9, wherein the anatomical element comprises one or more vertebrae.
[0135] Example 11. The system of any one of Examples 1-10, wherein the at least one fiducial comprises a metal.
[0136] Example 12. The system of any one of Examples 1-11, wherein the at least one fiducial comprises glass.
[0137] Example 13. The system of any one of Examples 1-12, wherein the at least one fiducial is radiopaque.
[0138] Example 14. The system of any one of Examples 1-13, further comprising a reflective marker that is visible in the first image data.
[0139] Example 15. A system comprising: a first imaging device operable to obtain first image data; at least one fiducial that is visible in the first image data, wherein the at least one fiducial is mounted on an anatomical element of a patient; a processor; and a memory storing data for processing by the processor, the data, when processed, causes the processor to: receive the first image data depicting the at least one fiducial and at least a portion of the anatomical element of the patient; determine a location of the at least one fiducial with respect to the anatomical element of the patient based on the first image data; eliminate errors in the obtained first image data and the determined location of the at least one fiducial with respect to the anatomical element of the patient by using the first imaging device; register the determined location of the at least one fiducial with respect to the anatomical element of the patient, wherein the first imaging device obtains the first image data at a wavelength of 10 nanometers or less; receive second imaging data from a second imaging device; compare the first image data with the second image data; and determine movement of the location of the at least one fiducial with respect to the anatomical element of the patient based on the comparison of the first image data and the second image data.
[0140] Example 16. The system of Example 15, wherein the memory stores additional data for processing by the processor, that when processed, cause the processor to: determine movement of
the location of the at least one fiducial with respect to the anatomical element of the patient has exceeded a predetermined threshold; and provide notification that the registered determined location of the at least one fiducial with respect to the anatomical element of the patient is invalid.

[0141] Example 17. The system of Example 15 or 16, wherein the memory stores additional data for processing by the processor, that when processed, cause the processor to: determine movement of the location of the at least one fiducial with respect to the anatomical element of the patient based on comparing the first image data with the second image data; and update the registered determined location of the at least one fiducial with respect to the anatomical element of the patient based on the comparison of the first image data with the second image data.
[0142] Example 18. A method comprising: receiving, from a first imaging device, first image data including at least one fiducial and at least a portion of an anatomical element of a patient that are visible in the first image data, wherein the at least one fiducial is mounted on the anatomical element of the patient; determining a location of the at least one fiducial with respect to the anatomical element of the patient based on the first image data; eliminating errors in the received first image data and the determined location of the at least one fiducial with respect to the anatomical element of the patient by using the first imaging device; and registering the determined location of the at least one fiducial with respect to the anatomical element of the patient, wherein the first imaging device obtains the first image data at a wavelength of 10 nanometers or less.
[0143] Example 19. The method of Example 18, further comprising: receiving, from a second imaging device, second image data; and determining movement of the location of the at least one fiducial with respect to the anatomical element of the patient based on comparing the first image data with the second image data.
[0144] Example 20. The method of Example 18 or 19, further comprising: determining movement of the location of the at least one fiducial with respect to the anatomical element of the patient has exceeded a predetermined threshold; and providing notification that the registered determined location of the at least one fiducial with respect to the anatomical element of the patient is invalid.
[0145] The foregoing is not intended to limit the disclosure to the form or forms disclosed herein. In the foregoing Detailed Description, for example, various features of the disclosure are grouped together in one or more aspects, embodiments, and/or configurations for the purpose of streamlining the disclosure. The features of the aspects, embodiments, and/or configurations of the
disclosure may be combined in alternate aspects, embodiments, and/or configurations other than those discussed above. This method of disclosure is not to be interpreted as reflecting an intention that the claims require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed aspect, embodiment, and/or configuration. Thus, the following claims are hereby incorporated into this Detailed Description, with each claim standing on its own as a separate preferred embodiment of the disclosure.
[0146] Moreover, though the foregoing has included description of one or more aspects, embodiments, and/or configurations and certain variations and modifications, other variations, combinations, and modifications are within the scope of the disclosure, e.g., as may be within the skill and knowledge of those in the art, after understanding the present disclosure. It is intended to obtain rights which include alternative aspects, embodiments, and/or configurations to the extent permitted, including alternate, interchangeable and/or equivalent structures, functions, ranges or steps to those claimed, whether or not such alternate, interchangeable and/or equivalent structures, functions, ranges or steps are disclosed herein, and without intending to publicly dedicate any patentable subject matter.
Claims
1. A system comprising: a first imaging device (112) operable to obtain first image data; at least one fiducial (126) that is visible in the first image data, wherein the at least one fiducial is mounted on an anatomical element (404) of a patient; a processor (104); and a memory (106) storing data for processing by the processor, the data, when processed, causes the processor to: receive the first image data depicting the at least one fiducial and at least a portion of the anatomical element of the patient; determine a location of the at least one fiducial with respect to the anatomical element of the patient based on the first image data; eliminate errors in the obtained first image data and the determined location of the at least one fiducial with respect to the anatomical element of the patient by using the first imaging device; and register the determined location of the at least one fiducial with respect to the anatomical element of the patient, wherein the first imaging device obtains the first image data at a wavelength of 10 nanometers or less.
2. The system of claim 1, wherein the memory stores additional data for processing by the processor, that when processed, cause the processor to: receive second image data from a second imaging device (112); and determine movement of the location of the at least one fiducial with respect to the anatomical element of the patient based on comparing the first image data with the second image data.
3. The system of claim 1 or 2, wherein the memory stores additional data for processing by the processor, that when processed, cause the processor to: determine movement of the location of the at least one fiducial with respect to the anatomical element of the patient has exceeded a predetermined threshold; and
provide notification that the registered determined location of the at least one fiducial with respect to the anatomical element of the patient is invalid.
4. The system of claim 2, wherein the memory stores additional data for processing by the processor, that when processed, cause the processor to: compare the first image data with the second image data; determine movement of the location of the at least one fiducial with respect to the anatomical element of the patient based on the comparison of the first image data and the second image data; and update the registered determined location of the at least one fiducial with respect to the anatomical element of the patient based on the comparison of the first image data with the second image data.
5. The system of any one of claims 1 to 4, wherein the first imaging device is an X-ray device and the X-ray device obtains X-ray image data as the first image data.
6. The system of any one of claims 1 to 5, wherein the first imaging device is a computerized tomography (CT) machine and the CT machine obtains CT image data as the first image data.
7. The system of claim 2 or 4, wherein the second imaging device is a three-dimensional (3D) imaging camera and the 3D imaging camera obtains 3D image data as the second image data.
8. The system of any one of claims 1-7, wherein the at least one fiducial comprises at least three fiducials.
9. The system of any one of claims 1-8, wherein the at least one fiducial is radiopaque.
10. A system comprising: a first imaging device (112) operable to obtain first image data; at least one fiducial (126) that is visible in the first image data, wherein the at least one fiducial is mounted on an anatomical element (404) of a patient; a processor (104); and a memory (106) storing data for processing by the processor, the data, when processed, causes the processor to: receive the first image data depicting the at least one fiducial and at least a portion of the anatomical element of the patient;
determine a location of the at least one fiducial with respect to the anatomical element of the patient based on the first image data; eliminate errors in the obtained first image data and the determined location of the at least one fiducial with respect to the anatomical element of the patient by using the first imaging device; register the determined location of the at least one fiducial with respect to the anatomical element of the patient, wherein the first imaging device obtains the first image data at a wavelength of 10 nanometers or less; receive second imaging data from a second imaging device; compare the first image data with the second image data; and determine movement of the location of the at least one fiducial with respect to the anatomical element of the patient based on the comparison of the first image data and the second image data.
11. The system of claim 10, wherein the memory stores additional data for processing by the processor, that when processed, cause the processor to: determine movement of the location of the at least one fiducial with respect to the anatomical element of the patient has exceeded a predetermined threshold; and provide notification that the registered determined location of the at least one fiducial with respect to the anatomical element of the patient is invalid.
12. The system of claim 10 or 11, wherein the memory stores additional data for processing by the processor, that when processed, cause the processor to: determine movement of the location of the at least one fiducial with respect to the anatomical element of the patient based on comparing the first image data with the second image data; and update the registered determined location of the at least one fiducial with respect to the anatomical element of the patient based on the comparison of the first image data with the second image data.
13. A method comprising:
receiving, from a first imaging device (112), first image data including at least one fiducial (126) and at least a portion of an anatomical element (404) of a patient that are visible in the first image data, wherein the at least one fiducial is mounted on the anatomical element of the patient; determining a location of the at least one fiducial with respect to the anatomical element of the patient based on the first image data; eliminating errors in the received first image data and the determined location of the at least one fiducial with respect to the anatomical element of the patient by using the first imaging device; and registering the determined location of the at least one fiducial with respect to the anatomical element of the patient, wherein the first imaging device obtains the first image data at a wavelength of 10 nanometers or less.
14. The method of claim 13, further comprising: receiving, from a second imaging device (112), second image data; and determining movement of the location of the at least one fiducial with respect to the anatomical element of the patient based on comparing the first image data with the second image data.
15. The method of claim 13 or 14, further comprising: determining movement of the location of the at least one fiducial with respect to the anatomical element of the patient has exceeded a predetermined threshold; and providing notification that the registered determined location of the at least one fiducial with respect to the anatomical element of the patient is invalid.
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US63/676,783 | 2024-07-29 | | |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2026028196A1 (en) | 2026-02-05 |