US20200250796A1 - Composite radiographic image that corrects effects of parallax distortion - Google Patents
- Publication number
- US20200250796A1
- Authority
- US
- United States
- Prior art keywords
- image
- image frames
- intra
- dimensional
- cropped image
- Prior art date
- Legal status
- Granted
Classifications
-
- G06T5/002—
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/80—Geometric correction
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/60—Editing figures and text; Combining figures or text
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/12—Edge-based segmentation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/149—Segmentation; Edge detection involving deformable models, e.g. active contour models
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/187—Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/38—Registration of image sequences
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/40—ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/24—Indexing scheme for image data processing or generation, in general involving graphical user interfaces [GUIs]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10116—X-ray image
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10116—X-ray image
- G06T2207/10121—Fluoroscopy
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20112—Image segmentation details
- G06T2207/20116—Active contour; Active surface; Snakes
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20112—Image segmentation details
- G06T2207/20152—Watershed segmentation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30008—Bone
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2210/00—Indexing scheme for image generation or computer graphics
- G06T2210/41—Medical
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H20/00—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
- G16H20/40—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to mechanical, radiation or invasive therapies, e.g. surgery, laser therapy, dialysis or acupuncture
Definitions
- the present disclosure relates to systems and methods to create a composite radiographic image that corrects effects of parallax distortion and helps obtain proper placement and positioning of a component, such as an acetabular cup or a femoral component, during a surgical procedure.
- radiographic images of particularly wide or long body parts, such as a pelvis or a femur, are used to take various measurements of the patient.
- an orthopedic surgeon may take an intra-operative radiographic image of a patient's pelvis to, for example, measure a discrepancy in a patient's leg lengths.
- a common apparatus used to take such intra-operative images is a C-arm that includes an X-ray source and X-ray detector.
- a pelvis is often too wide to fit within the field of view of an image from a C-arm.
- distortions in the image, particularly parallax distortions near the ends of the field of view, may reduce the accuracy of any measurements taken using such an image.
- the present disclosure is directed to methods and systems to improve the quality and accuracy of intra-operative radiographic images, particularly of wide-view objects, by generating a composite image that corrects effects of parallax distortion and then displaying a three-dimensional projection of a region of the patient relative to this composite image.
- the method of the present disclosure may comprise acquiring a sequence of digital radiographic image frames, identifying a region of interest on the image frames, cropping the identified region of interest from a plurality of the image frames, and stitching together a plurality of selected portions of cropped image frames to create a composite image.
- the plurality of selected portions of cropped image frames may be selected by (1) selecting a sequentially first selected portion from a first cropped image frame; (2) iteratively selecting a plurality of interior selected portions by (a) measuring a displacement between each pair of sequentially adjacent cropped image frames and (b) selecting a portion from the sequentially later cropped image frame corresponding to the measured displacement; and (3) selecting a sequentially last portion from a last cropped image frame.
- Each selected interior portion may have a dimension that corresponds to the measured displacement. In an embodiment, each selected interior portion may be selected from the interior 25% of the field of view of the respective cropped image frame.
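The three-step selection just described (a sequentially first portion, interior portions sized by the measured displacement, and a sequentially last portion) can be sketched in NumPy as follows. The function name, the even split of the first and last frames, and the centering of each interior strip are illustrative assumptions, not details taken from the disclosure.

```python
import numpy as np

def select_portions(cropped_frames, displacements):
    """Pick the strips to be stitched into the composite image.

    cropped_frames : list of 2-D arrays (H x W), in acquisition order.
    displacements  : displacements[i] is the measured shift in pixels
                     between cropped frame i and cropped frame i + 1
                     along the longitudinal axis.
    """
    w = cropped_frames[0].shape[1]
    # Sequentially first portion: up to about half of the first frame.
    portions = [cropped_frames[0][:, : w // 2]]
    # Interior portions: a centred strip whose width equals the measured
    # displacement, taken from the interior of each later frame.
    for frame, d in zip(cropped_frames[1:-1], displacements[:-1]):
        c = frame.shape[1] // 2
        portions.append(frame[:, c - d // 2 : c - d // 2 + d])
    # Sequentially last portion: up to about half of the last frame.
    portions.append(cropped_frames[-1][:, w - w // 2 :])
    return portions
```

Stitching the returned strips side by side (e.g., with `np.hstack`) yields a composite whose interior is built only from the least-distorted central region of each frame.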
- the sequence of radiographic images may be taken using a C-arm apparatus with an X-ray source and X-ray detector disposed thereon.
- the sequence of radiographic images is a result of several (for example, five to ten) discrete exposures.
- the sequence of radiographic images is a result of a fluoroscopic procedure generating a relatively larger number (for example, 25 to 75) of image frames.
- a surgeon may be able to use a composite image that corrects effects of parallax distortion according to an aspect of the present disclosure to take more accurate measurements, such as, for example, leg length discrepancy measurements, of a patient.
- the composite image is used with an initial three-dimensional image of a patient in a neutral position for generating a three-dimensional model corresponding to the spatial orientation of the composite image.
- a plurality of two-dimensional projections are scored by calculating the distance of the bony edge contour in the intra-operative imaging information to a corresponding contour in each two-dimensional projection and identifying a global minimum score. A transformation matrix is then calculated for the two-dimensional projection having the global minimum score, the three-dimensional image of the patient established thereby is displayed, and, optionally, a visual indication of an intra-operative adjustment to be made to a component based on the adjustment factor is displayed to achieve a target component orientation.
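A minimal sketch of the projection-scoring step, under the assumption that each contour is represented as an N x 2 array of points and that the score is the mean distance from each observed bony-edge point to the nearest point of a candidate projection's contour; the helper names are hypothetical.

```python
import numpy as np

def score_projection(observed_contour, projected_contour):
    # Pairwise distances: observed points (N,1,2) minus projected points
    # (1,M,2) broadcast to an (N,M) distance matrix.
    d = np.linalg.norm(
        observed_contour[:, None, :] - projected_contour[None, :, :], axis=2
    )
    # Mean nearest-point distance from observed edge to candidate contour.
    return d.min(axis=1).mean()

def best_projection(observed_contour, candidate_contours):
    # Score every candidate projection and keep the global minimum.
    scores = [score_projection(observed_contour, c) for c in candidate_contours]
    return int(np.argmin(scores)), min(scores)
```

The projection with the global minimum score is the one whose transformation matrix would then be used to display the registered three-dimensional image.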
- FIG. 1 presents a conventional radiographic image illustrating parallax distortion
- FIG. 2 depicts a flow chart presenting an exemplary method to generate a composite radiographic image that corrects effects of parallax distortion in accordance with an aspect of the present disclosure
- FIG. 3A presents an exemplary method for selecting constituent images to be formed into a composite image in accordance with an aspect of the present disclosure
- FIG. 3B presents an exemplary method for selecting constituent images to be formed into a composite image in accordance with an aspect of the present disclosure
- FIG. 4A depicts a first portion of an exemplary method for generating a composite radiographic image that corrects effects of parallax distortion in accordance with an aspect of the present disclosure
- FIG. 4B depicts a second portion of an exemplary method for generating a composite radiographic image that corrects effects of parallax distortion in accordance with an aspect of the present disclosure
- FIG. 4C illustrates the iterative nature of at least a portion of an exemplary method for generating a composite radiographic image in accordance with an aspect of the present disclosure
- FIG. 5 shows a composite radiographic image that corrects effects of parallax distortion in accordance with an aspect of the present disclosure
- FIG. 6 shows a composite radiographic image that corrects effects of parallax distortion in accordance with an aspect of the present disclosure
- FIG. 7 depicts a composite anteroposterior radiographic image of a pelvis that corrects effects of parallax distortion being used to measure a discrepancy in leg lengths of a patient in accordance with an aspect of the present disclosure
- FIG. 8 depicts a composite anteroposterior radiographic image of a pelvis that corrects effects of parallax distortion being used to measure a discrepancy in leg lengths of a patient in accordance with an aspect of the present disclosure
- FIG. 9 shows an embodiment of a computer architecture according to the present disclosure.
- FIG. 10 depicts a block diagram of an exemplary environment that may be used to implement systems and methods of the present disclosure
- FIG. 11 shows a flow chart depicting an exemplary process of implementing intensity-based image registration in accordance with an aspect of the present disclosure
- FIG. 12 shows an exemplary process that may be used in connection with measuring a displacement between two image frames generated from a plurality of discrete exposures according to an aspect of the present disclosure
- FIG. 13 is a block diagram view of an exemplary system and an associated patient and X-ray source, showing an embodiment of an exemplary system architecture in accordance with an embodiment of the present disclosure
- FIG. 14A shows a portion of a patient at a neutral position with no tilt
- FIG. 14B shows a portion of a patient at a non-neutral position with forward tilt
- FIG. 14C shows a portion of a patient at a non-neutral position with backward tilt
- FIG. 15 is an exemplary flow chart diagram illustrating steps that may be taken to render one or more two-dimensional projections from a three-dimensional model of a portion of a patient in accordance with an embodiment of the present disclosure
- FIG. 16 is a diagram providing a conceptual model of a two-dimensional projection from a three-dimensional model in accordance with an embodiment of the present disclosure
- FIG. 17 shows a projected circle rotated along three axes that may be used to model an acetabular cup component in accordance with an embodiment of the present disclosure.
- FIG. 18 shows a screen shot of a display including an intra-operative radiographic image including a superimposed ellipse representing a target placement of an acetabular cup component in accordance with an embodiment of the present disclosure.
- Digital radiographic two-dimensional images can be acquired by exposing X-ray sensors at a detector to X-rays from an X-ray source.
- digital radiographic images may be acquired and transmitted to a computer (via wired or wireless connection) for review, archiving, and/or enhancement.
- Acquired images can be saved in a variety of formats, such as TIFF, JPEG, PNG, BMP, DICOM, and RAW, for both reading and writing.
- Digital radiography offers numerous advantages over traditional X-ray methods. For example, digital radiography generates lower radiation levels than the levels required by traditional X-ray techniques using radiographic or phosphor film. Further, two-dimensional radiographic images may be viewed much quicker than traditional X-ray film due to reduced image acquisition time. The ability to digitally enhance digital X-ray images may further reduce radiation exposure levels.
- the present disclosure is directed toward systems and methods for generating composite radiographic imaging information that corrects effects of parallax distortion.
- a sequence of radiographic images, including a series of discrete exposures or image frames from a fluoroscopic procedure, may be acquired using a C-arm apparatus.
- Conventional C-arm images have a limited view, precluding a member of a surgical team from viewing the entirety of a long structure, such as the width of a pelvis or length of a spine, in one image.
- the images that are captured often incorporate significant parallax distortions.
- Composite two-dimensional imaging information generated according to one of the exemplary methods described in fuller detail below, however, may allow a member of a surgical team to obtain a full view of a long structure in one image, and the composite image may correct effects of parallax distortion.
- FIG. 1 presents a conventional radiographic image, and the effects of parallax distortion are clear from the image.
- the image 100 is an anteroposterior view of a portion of a pelvis in which a straight metal bar 101 was laid on the anterior side of a patient.
- the straight metal bar 101 appears noticeably curved.
- the curves on straight metal bar 101 are the result of parallax distortion.
- Parallax distortion likewise affects the display of the patient structures being imaged and can cause inaccuracies in certain measurements that may be desired before or during an orthopedic surgery. More accurate measurements may allow members of the surgical team to complete certain aspects of orthopedic surgeries more efficiently, accurately, and confidently.
- the term “sequential” refers to the order in which a series of radiographic image frames are taken or selected, depending on the context. For clarity, a sequence of radiographic image frames, or sequential image frames, does not require that each and every image frame in a given set of image frames be selected or used but does indicate that the selected or used image frames maintain the relative order in which the image frames were taken or generated.
- a second image frame may have a displacement with regard to a first image frame, for example, due to movement of a C-arm apparatus with respect to an anatomical part being imaged.
- FIG. 2 depicts a flow chart presenting an exemplary method 200 to generate the intra-operative composite radiographic imaging information to correct effects of parallax distortion in accordance with an aspect of the present disclosure.
- a system, such as a computer system communicatively coupled to a C-arm apparatus, receives a plurality of radiographic image frames pertaining to a patient in step 210 .
- the radiographic image frames may comprise a sequence of discrete exposures, or the radiographic image frames may comprise frames from a fluoroscopy procedure.
- the system or a user thereof may identify a region of interest on the image frames in step 220 .
- the region of interest may be a structure too large for a C-arm to capture in a single image, such as a pelvis or a spine.
- the identified region of interest may be cropped from a plurality of the image frames in step 230 .
- a plurality of sequential portions of cropped image frames may be selected in step 240 .
- the selected plurality of sequential portions of cropped image frames may be stitched together in step 250 to create a composite radiographic image that displays the entire large structure, and the composite image may correct effects of parallax distortion.
- each selected interior portion may be taken from an interior 25% of the respective image frame.
- FIG. 3A presents a flow chart depicting an exemplary method 240 for selecting a plurality of sequential portions of cropped image frames generated in step 230 .
- the portions of cropped image frames, once selected, become constituent images to be stitched together to form an intra-operative composite image, or composite two-dimensional imaging information.
- method 240 may comprise selecting a sequentially first portion from a first cropped image frame 241 .
- the selected sequentially first portion of step 241 may correspond to an edge part of a composite image.
- the selected sequentially first portion of step 241 may comprise up to about half of the first cropped image frame.
- the selected sequentially first portion of step 241 may comprise half of the first cropped image frame.
- the selected sequentially first portion of step 241 may comprise a non-overlapped part of the first cropped image frame plus half of the overlapping region.
- method 240 may further comprise measuring a displacement between each pair of sequentially adjacent cropped image frames 242 with respect to a longitudinal axis of the anatomical part being imaged.
- the measuring step 242 could include measuring a displacement with respect to a longitudinal axis of the anatomical part being imaged between every pair of adjacent cropped image frames generated by a C-arm and received in step 210 .
- measuring step 242 could occur after a step of selecting all of the constituent cropped image frames to be used to create a composite image, in which case only a displacement with respect to a longitudinal axis of the anatomical part being imaged between adjacent selected sequential cropped image frames that will be used to create the intra-operative composite imaging information will be measured.
- measuring step 242 may include applying intensity-based image registration using a gradient descent algorithm.
- measuring step 242 may measure a displacement between two cropped image frames with respect to a longitudinal axis of the anatomical part being imaged, an axis orthogonal to the longitudinal axis of the anatomical part being imaged, and/or a rotational axis about the anatomical part being imaged.
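The displacement measurement of step 242 can be illustrated with a brute-force translation search that minimises the mean squared intensity difference over the overlapping region. This is a simplified stand-in for the gradient-descent, intensity-based registration described above; the function name and search range are assumptions.

```python
import numpy as np

def measure_displacement(ref, moving, max_shift=30):
    """Estimate the longitudinal displacement (in pixels) of `moving`
    relative to `ref` by exhaustively testing candidate shifts and
    keeping the one with the smallest mean squared intensity error."""
    h, w = ref.shape
    best_shift, best_err = 0, np.inf
    for s in range(0, max_shift + 1):
        # Compare the part of the reference past the shift with the
        # corresponding leading part of the moving frame.
        overlap_ref = ref[:, s:].astype(float)
        overlap_mov = moving[:, : w - s].astype(float)
        err = np.mean((overlap_ref - overlap_mov) ** 2)
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift
```

A gradient-descent optimiser, as the disclosure describes, would walk toward the same minimum without testing every candidate shift, which matters when rotation and orthogonal translation are estimated jointly.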
- Method 240 may further comprise selecting an interior portion from the sequentially later cropped image frame 243 .
- the sequentially later cropped image frame refers to the cropped image frame from the pair of sequentially adjacent cropped image frames in step 242 that occurs relatively later in the pair.
- Each selected interior portion from step 243 may be selected from an interior part of the sequentially later cropped image frame, as the sequentially later cropped image frame may be less distorted in an interior part of the image frame than near an edge of the image frame.
- Each selected interior portion from step 243, in a composite image, may be disposed in the interior part of the composite image, i.e., between the sequentially first portion from step 241 and a sequentially last portion.
- Each selected interior portion from step 243 may have a dimension corresponding to—and in an embodiment, equal to—a respective measured displacement from step 242 .
- a selected interior portion may have a width equal to the measured displacement between the two cropped image frames.
- the selected interior portion may have a height corresponding to the height of the sequentially later cropped image frame.
- a composite image may have a plurality of selected interior portions, with each selected interior portion being selected from a respective cropped image frame from a sequence of cropped image frames.
- Method 240 may further comprise selecting a sequentially last portion from a last cropped image frame 245 .
- the selected sequentially last portion of step 245 may correspond to an edge part of a composite image.
- the selected sequentially last portion of step 245 may comprise up to about half of the last cropped image frame.
- each selected interior portion may correspond to a measured displacement with respect to a longitudinal axis of an anatomical part being imaged from step 242 .
- each measured displacement may be only a few pixels of a digital image, and each selected interior portion may likewise be a few pixels in width.
- the magnitude of the measured displacement may depend on the speed of movement of a C-arm apparatus and the image-capturing frame rate. For example, if a C-arm apparatus is moved such that it takes about 10 seconds to move from one side of the region of interest to the other and the frame rate is set to 5 frames per second, the measured displacement may be between about 10-20 pixels. Accordingly, in such an embodiment, each selected interior portion may also be about 10-20 pixels wide. If, for example, each image frame is about 1,024 pixels wide, the width of each selected interior portion may correspond to about 1-2% of the width of the image frame from which it is taken.
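The arithmetic above can be made concrete. The total C-arm travel in detector pixels is an illustrative assumption, chosen so the result lands in the 10-20 pixel, 1-2% range described.

```python
# Worked version of the displacement estimate in the text.
sweep_seconds = 10       # time to sweep across the region of interest
frames_per_second = 5    # image-capturing frame rate
travel_px = 768          # assumed total travel across the sweep, in pixels
frame_width_px = 1024    # width of each image frame

n_frames = sweep_seconds * frames_per_second       # frames in the sweep
px_per_frame = travel_px / n_frames                # displacement per frame
pct_of_width = 100 * px_per_frame / frame_width_px # fraction of frame width
```

With these numbers the sweep yields 50 frames, each displaced by roughly 15 pixels, i.e., about 1.5% of the frame width, so each interior strip is only a narrow sliver of its source frame.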
- FIG. 3B presents a flow chart depicting an alternate exemplary method 240 for selecting a plurality of sequential portions of cropped image frames generated in step 230 .
- the portions of cropped image frames, once selected, become constituent images to be stitched together to form a composite image.
- method 240 may comprise selecting a sequentially first portion from a first cropped image frame 241 .
- the selected sequentially first portion of step 241 may correspond to an edge part of an intra-operative composite image.
- the selected sequentially first portion of step 241 may comprise up to about half of the first cropped image frame.
- the selected sequentially first portion of step 241 may comprise half of the first cropped image frame.
- the selected sequentially first portion of step 241 may comprise a non-overlapped part of the first cropped image frame plus half of the overlapping region.
- method 240 may further comprise measuring a displacement between each pair of sequentially adjacent cropped image frames 242 with respect to a longitudinal axis of the anatomical part being imaged.
- the measuring step 242 could include measuring a displacement with respect to a longitudinal axis of the anatomical part being imaged between every pair of adjacent cropped image frames generated by a C-arm and received in step 210 .
- measuring step 242 could occur after a step of selecting all of the constituent cropped image frames to be used to create a composite image, in which case only a displacement with respect to a longitudinal axis of the anatomical part being imaged between adjacent selected sequential cropped image frames that will be used to create the composite image will be measured.
- measuring step 242 may include applying intensity-based image registration using a gradient descent algorithm. In an embodiment, measuring step 242 may measure a displacement between two cropped image frames with respect to a longitudinal axis of the anatomical part being imaged, an axis orthogonal to the longitudinal axis of the anatomical part being imaged, and/or a rotational axis about the anatomical part being imaged.
- Method 240 may further comprise selecting a plurality of interior portions from a plurality of cropped image frames occurring sequentially later than the first cropped image frame such that each interior portion may have a dimension corresponding to a measured displacement of step 242 between (a) the cropped image frame from which the immediately preceding selected portion was taken and (b) the cropped image frame providing the selected interior portion 244 .
- the first selected interior portion may have a dimension corresponding to the measured displacement between the first cropped image frame and a sequentially second cropped image frame
- a second selected interior portion may have a dimension corresponding to the measured displacement between the sequentially second cropped image frame and a sequentially third cropped image frame.
- Each selected interior portion from step 244 may be selected from an interior part of the cropped image frame providing the selected interior portion, as there may be less distortion in the interior part of the image frame.
- Each selected interior portion from step 244, in a composite image, may be disposed in the interior part of the composite image, i.e., between the sequentially first portion from step 241 and a sequentially last portion.
- Each selected interior portion from step 244 may have a dimension corresponding to—and in an embodiment, equal to—a respective measured displacement from step 242 . For example, if measuring step 242 measures a displacement between two cropped image frames with respect to a longitudinal axis of an anatomical part being imaged, a selected interior portion may have a width equal to the measured displacement between the two cropped image frames.
- the selected interior portion may have a height corresponding to the height of the sequentially later cropped image frame.
- a composite image may have a plurality of selected interior portions, with each selected interior portion being selected from a respective cropped image frame from a sequence of cropped image frames.
- Method 240 may further comprise selecting a sequentially last portion from a last cropped image frame 245 .
- the selected sequentially last portion of step 245 may correspond to an edge part of a composite image.
- the selected sequentially last portion of step 245 may comprise up to about half of the last cropped image frame.
- FIG. 4A depicts a first part 400 A of an exemplary method for generating a composite radiographic image that corrects effects of parallax distortion.
- Method 400 A may comprise receiving a plurality of radiographic image frames pertaining to a patient 410 .
- block 410 depicts two anteroposterior radiographic image frames of a patient's pelvis taken from a C-arm apparatus as the X-ray source and X-ray detector were rotated relative to the patient.
- Method 400 A may further comprise identifying a region of interest 421 on the image frames 420 .
- the region of interest 421 on each frame in block 420 is signified with shading.
- Method 400 A may further comprise cropping the identified region of interest 421 from a plurality of image frames 430 to generate cropped image frames 431 , 432 .
- cropped image frames 431 , 432 are in a sequential order, with cropped image frame 431 being sequentially first and cropped image frame 432 being sequentially later relative to cropped image frame 431 .
- Method 400 A may further comprise measuring a displacement between each pair of sequentially adjacent cropped image frames 440 with respect to a longitudinal axis of the anatomical part being imaged. The displacement 441 between the cropped image frames 431 , 432 in block 440 , which have been aligned with respect to one another, can be seen.
- measuring step 440 may include applying intensity-based image registration using a gradient descent algorithm. In an embodiment, measuring step 440 may measure a displacement between two cropped image frames with respect to a longitudinal axis of the anatomical part being imaged, an axis orthogonal to the longitudinal axis of the anatomical part being imaged, and/or a rotational axis about the anatomical part being imaged.
- FIG. 4B depicts a second part 400 B of the method from FIG. 4A .
- Method 400 B may further include selecting an interior portion 450 from the sequentially later cropped image frame 432 , the selected interior portion 452 having a dimension equal to the measured displacement 441 between cropped image frames 431 , 432 .
- the selected interior portion 452 can be seen as a shaded region 452 in the sequentially later cropped image frame 432 .
- the selected interior portion 452 may be taken from the interior 25% of the sequentially later cropped image frame 432 with respect to an axis orthogonal to the longitudinal axis of the anatomical part being imaged.
- the longitudinal axis of the anatomical part being imaged extends from left to right in the drawings.
- Selected interior portion 452 can be seen as being taken from the axis orthogonal to the longitudinal axis of the anatomical part being imaged.
- a sequentially first portion 451 from a sequentially first cropped image frame 431 can also be seen as a shaded region 451 .
- Method 400 B may further comprise stitching 460 selected interior portion 452 to the first portion 451 to generate a partial composite image 461 .
- FIG. 4C describes an exemplary next iteration.
- the method may repeat the step of selecting a region of interest 421 from the next sequential pair of image frames; in this case, the pair of image frames would be the image frame from which cropped image frame 432 was generated and the sequentially next image frame occurring after image frame 432 .
- the pair of image frames may be cropped as was done in block 430 .
- a displacement 441 between cropped image frame 432 and the sequentially next cropped image frame 433 may be measured with respect to a longitudinal axis of the anatomical part being imaged as was done in block 440 .
- a sequentially next interior portion 453 may be selected from cropped image frame 433 .
- the selected interior portion 453 may have a dimension equal to the measured displacement 441 between cropped image frames 432 , 433 .
- the selected interior portion 453 may be taken from the interior 25% of cropped image frame 433 .
- the selected interior portion 453 may be aligned and stitched to partial composite image 461 .
- Method 400 B may include an iterative process 470 to select and add additional portions to the partial composite image, as described above, until the entire region of interest is entirely visible in the composite image. For example, if steps 420 through 460 were executed on a pair of image frames 1 and 2 from the set of image frames { 1 , 2 , 3 , 4 , 5 }, steps 420 through 460 would then be executed on image frames 2 and 3 , again on image frames 3 and 4 , and so on until the composite image is entirely generated. As a result of method 400 A, 400 B, a composite image 481 is obtained at 480 . Composite image 481 may correct effects of parallax distortion.
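The iterative crop/measure/select/stitch loop above can be sketched as follows. This is a simplified illustration under stated assumptions: `measure_displacement` is assumed to be provided (for example, an intensity-based registration routine), the interior strip is taken from the middle of each later frame, and all names are hypothetical.

```python
import numpy as np

def stitch_frames(frames, measure_displacement):
    """Build a composite image from an ordered list of overlapping frames.

    For each sequentially adjacent pair, the measured displacement d (in
    pixels along the longitudinal axis) determines how wide a strip to take
    from the interior of the later frame; strips are appended to a growing
    partial composite, mirroring steps 420 through 470 of the method.
    """
    # Start the composite with the first (sequentially earliest) frame.
    composite = frames[0]
    for earlier, later in zip(frames, frames[1:]):
        d = measure_displacement(earlier, later)
        h, w = later.shape
        # Take a strip of width d from the interior of the later frame
        # (approximating the interior 25% region described in the method).
        mid = w // 2
        strip = later[:, mid:mid + d]
        composite = np.concatenate([composite, strip], axis=1)
    return composite
```

Each iteration widens the composite by exactly the measured displacement, so the final width is the first frame's width plus the sum of all measured displacements.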
- measuring step 440 may include applying intensity-based image registration.
- Intensity-based image registration may be an iterative process that may include spatially registering a moving, or sequentially later, image frame with a reference, or sequentially earlier, image frame.
- Intensity-based image registration may involve comparing intensity patterns in images via correlation metrics, which may also be referred to as optimizers.
- the correlation metric or optimizer may define an image similarity metric for evaluating accuracy of a registration.
- the image similarity metric may take two images and return a scalar value describing the level of similarity between the images, and the optimizer may define the methodology for minimizing or maximizing the similarity metric.
- FIG. 11 shows an exemplary flow chart describing an intensity-based image registration step.
- the process may begin at 1100 with an image transformation beginning with a pre-determined transformation matrix 1101 and a type of transformation 1102 .
- a transformation may be applied to a moving image 1111 with bilinear interpolation at step 1110 .
- a metric may compare the transformed moving image 1112 to a fixed image 1121 to compute a metric value 1122 .
- an optimizer 1131 may check the metric value 1122 to determine if a terminating condition 1132 has been satisfied. If the terminating condition 1132 is not satisfied, the process will be repeated with a new transformation matrix generated with a gradient descent method until a defined number of iterations and/or a terminating condition 1132 is reached.
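The registration loop of FIG. 11 can be sketched for the simplest case, a pure translation along one axis, with bilinear (here linear, per-row) interpolation and a mean-squared-error similarity metric minimized by gradient descent. This is a minimal sketch, not the disclosed implementation: the learning rate, iteration count, and function names are assumptions, and a full implementation would optimize a transformation matrix and test a terminating condition.

```python
import numpy as np

def shift_bilinear(img, tx):
    """Translate img along the x axis by a (possibly fractional) tx
    using linear interpolation, filling the exposed border with edge values."""
    h, w = img.shape
    x = np.arange(w) - tx                      # sample coordinates in the source
    x0 = np.clip(np.floor(x).astype(int), 0, w - 2)
    f = np.clip(x - x0, 0.0, 1.0)              # fractional part for interpolation
    return img[:, x0] * (1 - f) + img[:, x0 + 1] * f

def register_translation(fixed, moving, lr=100.0, iters=400):
    """Estimate the x displacement of `moving` relative to `fixed` by
    gradient descent on a mean-squared-error similarity metric."""
    tx = 0.0
    for _ in range(iters):
        moved = shift_bilinear(moving, tx)
        residual = moved - fixed
        # d(moved)/d(tx) = -d(moved)/dx by the chain rule, so the metric
        # gradient is mean(2 * residual * (-image gradient)).
        gx = np.gradient(moved, axis=1)
        grad = np.mean(2.0 * residual * (-gx))
        tx -= lr * grad                        # optimizer update (gradient descent)
    return tx
```

The optimizer here simply runs a fixed number of iterations; the flow chart's terminating condition 1132 would replace that with a convergence test on the metric value.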
- Edge detection software, such as the Pro Imaging with Surgeon's Checklist software offered by Radlink, may locate critical landmarks between the constituent portions and use the landmarks to properly align the constituent portions.
- the landmarks may be physically placed externally to the anatomical part being imaged within the image field of view, or the landmarks may comprise anatomical landmarks within the body. In this manner, selected portions may be aligned into true mathematical orientation with respect to one another. Once properly aligned, the constituent portions may be stitched together.
- FIG. 12 illustrates an exemplary process for implementing at least a part of measuring step 440 in an embodiment where the radiographic images are generated from several discrete exposures.
- Reference image frame 1201 and moving image frame 1202 may be provided, and one of several pre-defined regions of interest may be cropped from image frames 1201 , 1202 at step 1210 .
- the cropped image frames from step 1210 may undergo a Canny edge detector process at 1220 to detect reference frame region of interest edges 1221 and moving frame region of interest edges 1222 .
- Edges 1221 , 1222 may undergo an intensity-based image registration process, for example, as described in FIG. 11 , at step 1230 .
- reference image frame 1201 and moving image frame 1202 may undergo the same repeated process with a different, pre-defined region of interest cropped from image frames 1201 , 1202 .
- transformations may be calculated between reference image frame 1201 and moving image frame 1202 based on multiple region of interest combinations.
- the transformation with the highest metric value may be selected to produce a transformed image 1270 .
- a Canny edge detector process may include the following steps: (1) apply a Gaussian filter to smooth the image in order to remove noise; (2) find the intensity gradients of the image; (3) apply non-maximum suppression to get rid of spurious responses to edge detection; (4) apply double threshold to determine potential edges; and (5) track by hysteresis to finalize the detection of edges by suppressing all the other edges that are weak and not connected to strong edges.
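The five Canny steps above can be sketched directly in code. This is a simplified, numpy-only illustration (small fixed Gaussian kernel, four quantized gradient directions, thresholds relative to the maximum response); a production detector such as the one in an imaging library would be more careful at borders and with parameter choices.

```python
import numpy as np

def canny(img, low=0.1, high=0.3):
    """Simplified Canny edge detector following the five steps above."""
    # (1) Gaussian smoothing to suppress noise (5x5 kernel, sigma = 1).
    ax = np.arange(-2, 3)
    g = np.exp(-ax ** 2 / 2.0)
    k = np.outer(g, g); k /= k.sum()
    pad = np.pad(img, 2, mode='edge')
    sm = sum(k[i, j] * pad[i:i + img.shape[0], j:j + img.shape[1]]
             for i in range(5) for j in range(5))
    # (2) Intensity gradients (central differences).
    gy, gx = np.gradient(sm)
    mag = np.hypot(gx, gy)
    ang = (np.rad2deg(np.arctan2(gy, gx)) + 180.0) % 180.0
    # (3) Non-maximum suppression along the quantized gradient direction.
    nms = np.zeros_like(mag)
    offs = {0: (0, 1), 45: (1, 1), 90: (1, 0), 135: (1, -1)}
    sector = (np.round(ang / 45.0) % 4 * 45).astype(int)
    for r in range(1, mag.shape[0] - 1):
        for c in range(1, mag.shape[1] - 1):
            dr, dc = offs[int(sector[r, c])]
            if mag[r, c] >= mag[r + dr, c + dc] and mag[r, c] >= mag[r - dr, c - dc]:
                nms[r, c] = mag[r, c]
    # (4) Double threshold: strong vs. weak candidate edges.
    strong = nms >= high * nms.max()
    weak = (nms >= low * nms.max()) & ~strong
    # (5) Hysteresis: keep weak edges connected to strong edges.
    edges = strong.copy()
    stack = list(zip(*np.nonzero(strong)))
    while stack:
        r, c = stack.pop()
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if (0 <= rr < img.shape[0] and 0 <= cc < img.shape[1]
                        and weak[rr, cc] and not edges[rr, cc]):
                    edges[rr, cc] = True
                    stack.append((rr, cc))
    return edges
```

Applied to a synthetic image with a vertical intensity step, the detector marks a thin edge at the step location.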
- two-dimensional composite radiographic imaging information that corrects effects of parallax distortion may be generated.
- An image displaying a wide field of view and that corrects effects of parallax distortion is a useful improvement in intra-operative imaging techniques and may allow members of a surgical team to make more accurate measurements during a procedure, resulting in improved outcomes for patients and more efficient surgical procedures.
- FIG. 5 shows a composite radiographic image that corrects effects of parallax distortion that was generated according to a method of the present disclosure.
- Composite image 500 depicts an anteroposterior image of a patient's pelvic region.
- Straight metal bar 501 is accurately depicted in composite image 500 —straight, and without any visible curves or defects caused by parallax distortion.
- Composite image 500 was generated from a plurality of discrete radiographic exposures taken on a C-arm apparatus.
- FIG. 6 shows a composite radiographic image that corrects effects of parallax distortion that was generated according to a method of the present disclosure.
- Composite image 600 depicts an anteroposterior image of a patient's pelvis.
- Straight metal bar 601 is accurately depicted in composite image 600 —straight, and without any visible curves or defects caused by parallax distortion.
- Composite image 600 was generated from a plurality of radiographic image frames from a fluoroscopic procedure using a C-arm apparatus.
- FIG. 7 depicts a composite anteroposterior radiographic image of a pelvis 700 that corrects effects of parallax distortion. Because composite image 700 corrects effects of parallax distortion, it can be used intra-operatively to accurately make measurements, such as a discrepancy in leg lengths 702 , 703 . Straight metal bar 701 can be seen accurately depicted in composite image 700 .
- Composite image 700 was generated from a plurality of radiographic image frames from a fluoroscopic procedure using a C-arm apparatus.
- FIG. 8 depicts a composite anteroposterior radiographic image of a pelvis 800 that corrects effects of parallax distortion. Because composite image 800 corrects effects of parallax distortion, it can be used intra-operatively to accurately make measurements, such as a discrepancy in leg lengths 802 , 803 . Straight metal bar 801 can be seen accurately depicted in composite image 800 .
- Composite image 800 was generated from a plurality of discrete radiographic exposures taken on a C-arm apparatus.
- a C-arm apparatus 1001 may capture video or image signals.
- the C-arm apparatus 1001 may have a display 1002 directly connected to the apparatus to instantly view the images or video.
- a wireless kit 1010 may, alternatively or additionally, be attached to the C-arm apparatus 1001 via video port 1003 to receive the video or image signal from the C-arm apparatus 1001 , the intra-operative signal representing digital data of a radiographic image frame or plurality of frames.
- Video port 1003 may utilize a BNC connection, a VGA connection, a DVI-D connection, or an alternative connection known to those of skill in the art.
- the wireless kit 1010 may be the Radlink Wireless C-Arm Kit.
- Wireless kit 1010 may include a resolution converter 1011 to convert the image signal to proper resolution for further transmission, frame grabber 1012 to produce a pixel-by-pixel digital copy of each image frame, central processing unit 1013 , memory 1014 , and dual-band wireless-N adapter 1015 .
- the wireless kit 1010 may convert the received signal to one or more image files and can send the converted file(s), for example, by wireless connection 1018 to one or more computer(s) and operatively connected display(s) 1021 , 1022 .
- the computer and operatively connected display may, for example, be a Radlink Galileo Positioning System (“GPS”) 1021 or GPS Tablet 1022 .
- the Wireless C-Arm Kit 1010 may receive, convert, and transmit the file(s) in real time.
- the methods described in the present disclosure may be implemented, for example, as software running on the GPS 1021 or GPS Tablet 1022 units.
- the GPS 1021 and GPS Tablet 1022 may also incorporate additional functions, such as those provided in the Radlink Pro Imaging with Surgeon's Checklist software. Using such equipment, a composite image that corrects effects of parallax distortion may thus be generated and viewed intra-operatively in real time.
- FIG. 9 depicts exemplary hardware for a system to generate intra-operative composite radiographic imaging information that corrects effects of parallax distortion, and to generate and utilize a three-dimensional patient model to determine a proper placement of a component during a surgery.
- the system may take the form of a computer 900 that includes a processing unit 904 , a system memory 906 , and a system bus 920 that operatively couples various system components, including the system memory 906 to the processing unit 904 .
- the computer 900 may be a conventional computer, a distributed computer, a web server, a file server, a tablet or iPad, a smart phone, or any other type of computing device.
- the system bus 920 may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, a switched fabric, point-to-point connections, and a local bus using any of a variety of bus architectures.
- the system memory 906 may also be referred to as simply the memory, and includes read only memory (ROM) 908 and random access memory (RAM) 907 .
- a basic input/output system (BIOS) 910 containing the basic routines that help to transfer information between elements within the computer 900 , such as during start-up, is stored in ROM 908 .
- the computer 900 may further include a hard disk drive 932 for reading from and writing to a hard disk, not shown, a magnetic disk drive 934 for reading from or writing to a removable magnetic disk 938 , and/or an optical disk drive 936 for reading from or writing to a removable optical disk 940 such as a CD-ROM or other optical media.
- the hard disk drive 932 , magnetic disk drive 934 , and optical disk drive 936 may be connected to the system bus 920 by a hard disk drive interface 922 , a magnetic disk drive interface 924 , and an optical disk drive interface 926 , respectively.
- the drives and their associated computer-readable media provide nonvolatile storage of computer-readable instructions; data structures, e.g., a catalog and a context-based index; program modules, e.g., a web service and an indexing robot; and other data for the computer 900 .
- a number of program modules may be stored on the hard disk 932 , magnetic disk 934 , optical disk 936 , ROM 908 , or RAM 907 , including an operating system 912 , browser 914 , stand-alone program 916 , etc.
- a user may enter commands and information into the personal computer 900 through input devices such as a keyboard 942 and a pointing device 944 , for example, a mouse.
- Other input devices may include, for example, a microphone, a joystick, a game pad, a tablet, a touch screen device, a satellite dish, a scanner, a facsimile machine, and a video camera.
- These and other input devices are often connected to the processing unit 904 through a serial port interface 928 that is coupled to the system bus 920 , but may be connected by other interfaces, such as a parallel port, game port or a universal serial bus (USB).
- a monitor 946 or other type of display device is also connected to the system bus 920 via an interface, such as a video adapter 948 .
- computers typically include other peripheral output devices, such as printers and speakers 960 connected to the system bus 920 via an audio adapter 96 .
- These and other output devices are often connected to the processing unit 904 through the serial port interface 928 that is coupled to the system bus 920 , but may be connected by other interfaces, such as a parallel port, game port, or a universal serial bus (USB).
- the computer 900 may operate in a networked environment using logical connections to one or more remote computers. These logical connections may be achieved by a communication device coupled to or integral with the computer 900 ; the application is not limited to a particular type of communications device.
- the remote computer may be another computer, a server, a router, a network personal computer, a client, a peer device, or other common network node, and typically includes many or all of the elements described above relative to the computer 900 , although only a memory storage device has been illustrated in FIG. 9 .
- the computer 900 can be logically connected to the Internet 972 .
- the logical connection can include a local area network (LAN), wide area network (WAN), personal area network (PAN), campus area network (CAN), metropolitan area network (MAN), or global area network (GAN).
- When used in a LAN environment, the computer 900 may be connected to the local network through a network interface or adapter 930 , which is one type of communications device.
- When used in a WAN environment, the computer 900 typically includes a modem 950 , a network adapter 952 , or any other type of communications device for establishing communications over the wide area network.
- the modem 950 which may be internal or external, is connected to the system bus 920 via the serial port interface 928 .
- program modules depicted relative to the personal computer 900 , or portions thereof, may be stored in a remote memory storage device. It is appreciated that the network connections shown are exemplary and other means of, and communications devices for, establishing a communications link between the computers may be used.
- the system can take the form of a computer program product 916 accessible from a computer-usable or computer-readable medium providing program code for use by or in connection with a computer or any instruction execution system.
- a computer-usable or computer readable medium can be any apparatus that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
- the medium can be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a propagation medium.
- Examples of a computer-readable medium comprise a semiconductor or solid-state memory, magnetic tape, a removable computer diskette, a random access memory, a read-only memory, a rigid magnetic disk and an optical disk.
- Current examples of optical disks comprise compact disk-read only memory (CD-ROM), compact disk-read/write (CD-R/W) and DVD formats.
- a data processing system suitable for storing and/or executing program code comprises at least one processor coupled directly or indirectly to memory elements through a system bus.
- the memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memory that provide temporary storage of at least some program code in order to reduce the number of times code is retrieved from bulk storage during execution.
- I/O devices (including, but not limited to, keyboards, displays, pointing devices, etc.) can be coupled to the system either directly or through intervening I/O controllers.
- Network adapters may also be coupled to the system to enable the data processing system to become coupled to other data processing systems or remote printers or storage devices through intervening private or public networks.
- Modems, cable modems and Ethernet cards are just a few of the currently available types of network adapters.
- computers and other related electronic devices can be remotely connected to either the LANs or the WAN via a digital communications device, modem and temporary telephone, or a wireless link.
- the Internet comprises a vast number of such interconnected networks, computers, and routers.
- a computerized surgery assist computer 2102 may receive anatomic image information of a patient 10 or a portion of a patient 10 (e.g., a pelvis) taken by an anatomical scanning device, such as an x-ray scanner 16 (e.g., when receiving discrete images or fluorographic images) at a position of the patient 10 (lying on a patient table 14 ).
- the computerized surgery assist computer 2102 may receive anatomic image information of a patient 10 or a portion of a patient 10 obtained from a CT or MR scan.
- the anatomic image information may be a data set of three-dimensional imaging information.
- the computerized surgery assist computer 2102 may receive a data set of three-dimensional imaging information obtained while the patient 10 was in a neutral position.
- the anatomic image information may be received from an image processing computer server 18 positioned via wired or wireless data links 20 , 22 between the x-ray scanner 16 (or, e.g., the CT or MR scanner) and the surgery assist computer 2102 .
- the patient may have a three-dimensional positional sensor 2100 affixed to the patient's body, and the surgery assist computer 2102 may receive positional information via wired or wireless data link 2110 from sensor 2100 .
- the surgery assist computer 2102 may be programmed to display a visual representation of the anatomic image information on a computerized display 2108 ; determine a target positioning value of a component from the anatomic image information, either automatically or with input from a surgeon; and may make additional measurements as desired or programmed (e.g., measurements of one or more anatomical landmarks and/or ratios of anatomical landmarks), either automatically or with input from a surgeon.
- the surgery assist computer 2102 may further receive subsequent anatomic image information of the patient 10 ; display a visual representation of the subsequent anatomic image information on the display 2108 ; and may make additional measurements or display additional markers, either automatically or with input from a surgeon.
- the surgery assist computer 2102 may have a receiver to receive information and data, including image data from the x-ray scanner 16 and/or CT or MR scanner; a processor or microcontroller, such as a CPU, to process the received information and data and to execute other software instructions; system memory to store the received information and data, software instructions, and the like; and a display 2108 to display visual representations of received information and data as well as visual representations resulting from other executed system processes.
- Such a system may allow a surgeon and/or other medical personnel to more accurately and consistently determine a proper placement of and position a component by helping a surgeon identify a target position for a component and making adjustments to the positioning value based on differences in initial anatomic image information and subsequent anatomic image information. Such differences may result, for example, when a patient and an imaging scanner are aligned differently with respect to each other when multiple sets of anatomic image information are acquired (e.g., pre-operatively at a neutral position of a patient and intra-operatively at a non-neutral position of the patient).
- FIGS. 14A-14C provide examples of how a portion of a patient 2200 (in this case, the patient's pelvis) may appear when the patient is positioned in different orientations.
- FIG. 14A shows a portion of the patient 2200 in a neutral position with no tilt
- FIG. 14B shows a portion of the patient 2210 with a forward tilt of about 20°
- FIG. 14C shows a portion of the patient 2220 having a backward tilt of about ⁇ 20°.
- moving a patient may also cause the portion of the patient to have different inclinations and anteversions, as a patient is generally manipulated in three-dimensional space.
- small differences in a patient's orientation relative to a neutral position may provide different measurements of anatomical or component orientations, which could affect the outcome of a surgical procedure.
- an acetabular cup 2201 is positioned with an inclination of 40.0° and an anteversion of 20.0°.
- If pelvis 2200 is tilted 20.0°, as pelvis 2210 is in FIG. 14B , the acetabular cup 2211 is measured to have an inclination of 37.3° and an anteversion of 4.3°. If pelvis 2200 is tilted to −20.0°, as pelvis 2220 is in FIG. 14C , the acetabular cup 2221 is measured to have an inclination of 47.2° and an anteversion of 34.6°. Accordingly, when positioning a component in a patient during surgery, such as an acetabular cup during THA, a surgeon may need to account for the effects of the patient's orientation on positioning values such as tilt, inclination, and/or anteversion.
- Adjustments to positional values of the acetabular cup, such as inclination, may be based on the study of a projected circle in three-dimensional space.
- the rotation of the circle in three-dimensional space may mimic the rotation of an acetabular cup.
- The opening of an acetabular cup may project as an ellipse whose shape depends on the angle of projection. Three rotational factors may affect the shape of the projected ellipse: inclination (I), rotation about the Z axis; anteversion (A), rotation about the Y axis; and tilt (T), rotation about the X axis.
- FIG. 17 illustrates an exemplary projection of a circle that may be used to model an opening 2792 of an acetabular cup 2780 with the X, Y, and Z axes labeled.
- R x ⁇ ( T ) [ 1 0 0 0 cos ⁇ ( T ) - sin ⁇ ( T ) 0 sin ⁇ ( T ) cos ⁇ ( T ) ]
- R y ⁇ ( A ) [ cos ⁇ ( A ) 0 sin ⁇ ( A ) 0 1 0 - sin ⁇ ( A ) 0 cos ⁇ ( A ) ]
- R Z ⁇ ( I ) [ cos ⁇ ( I ) - sin ⁇ ( I ) 0 sin ⁇ ( I ) cos ⁇ ( I ) 0 0 0 1 ]
- the following matrix may capture the initial circle lying on the X-Z plane: P(\theta) = \begin{bmatrix} R\cos(\theta) \\ 0 \\ R\sin(\theta) \end{bmatrix}
- the normal of the circle may be in the direction of the Y-axis and may be described as follows: \hat{n} = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}
- X and Y represent the coordinates of the projected ellipse on the X-Y plane
- R represents the size of the acetabular cup
- ⁇ represents the parameter
- the minor diameter of the projected ellipse may be derived and described as follows:
- the major diameter may be described as follows:
- the inclination value of the projected ellipse may be described as follows:
- If an acetabular cup is placed or has target positioning values with a known inclination and anteversion, the inclination resulting after the acetabular cup is tilted (e.g., when the pelvis is tilted) may be calculated.
- Other positioning values may similarly be calculated, as will be apparent to one of ordinary skill in the art.
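This projection model can be sketched numerically: build the rotation matrices R_x(T), R_y(A), R_z(I) given above, rotate the parametric circle that models the cup opening, project it orthographically onto the X-Y plane, and recover the semi-axes of the resulting ellipse from the covariance of the sampled points. The composition order R_z·R_y·R_x and all function names are assumptions for illustration; the disclosure does not fix a specific order.

```python
import numpy as np

def rot_x(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(i):
    c, s = np.cos(i), np.sin(i)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def projected_ellipse(r_cup, incl, antev, tilt, n_samples=720):
    """Rotate a circle of radius r_cup (modelling the cup opening, initially
    in the X-Z plane with normal along Y) and project it onto the X-Y plane;
    return the semi-major and semi-minor axes of the projected ellipse."""
    rm = rot_z(incl) @ rot_y(antev) @ rot_x(tilt)   # assumed composition order
    theta = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    circle = np.stack([r_cup * np.cos(theta),
                       np.zeros_like(theta),
                       r_cup * np.sin(theta)])
    pts = (rm @ circle)[:2]              # orthographic projection onto X-Y
    cov = pts @ pts.T / pts.shape[1]     # parametric sampling: cov = M M^T / 2
    eig = np.linalg.eigvalsh(cov)        # ascending eigenvalues
    semi_minor, semi_major = np.sqrt(2.0 * eig)
    return semi_major, semi_minor
```

Two checks fall out of the geometry: an orthographic projection of a circle always preserves one full diameter, so the semi-major axis equals the cup radius, and the semi-minor axis equals the radius times the magnitude of the Z component of the rotated normal.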
- the surgery assist computer 2102 may be configured to implement one or more methods of the present disclosure.
- one method 2300 may be used to position a component intra-operatively.
- the example method 2300 may include a step 2310 of receiving a data set of imaging information representing at least a first portion of a patient (e.g., a data set of three-dimensional imaging information from a CT or MR scan) in a neutral position.
- Method 2300 may further include a step 2320 of generating a three-dimensional model of the first portion of the patient based on the data set of imaging information.
- the three-dimensional model may be reconstructed using a region-growing algorithm, a watershed algorithm, an active contour algorithm, a combination of algorithms, or any other algorithm known to those of ordinary skill in the art for generating a three-dimensional model from a data set of imaging information.
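Of the algorithms named, region growing is the simplest to sketch: starting from a seed voxel, accumulate all connected voxels whose intensity is close to the seed's. The following is a minimal 6-connected illustration on a numpy volume; the tolerance criterion and names are assumptions, and real reconstructions would use calibrated intensity thresholds (e.g., Hounsfield units for CT).

```python
from collections import deque
import numpy as np

def region_grow(volume, seed, tol):
    """Return a boolean mask of the 6-connected set of voxels whose
    intensity is within `tol` of the seed voxel's intensity."""
    seed_val = volume[seed]
    mask = np.zeros(volume.shape, dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        z, y, x = queue.popleft()
        # Visit the six face-adjacent neighbours of the current voxel.
        for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2] and not mask[nz, ny, nx]
                    and abs(volume[nz, ny, nx] - seed_val) <= tol):
                mask[nz, ny, nx] = True
                queue.append((nz, ny, nx))
    return mask
```

The resulting mask can then be surfaced (e.g., by marching cubes) to yield the three-dimensional model used in later steps.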
- Method 2300 may additionally include a step 2330 of iteratively rendering a plurality of two-dimensional projections from the three-dimensional model, each two-dimensional projection having a corresponding spatial orientation.
- the two-dimensional projections may be made at a specified orientation and distance (e.g., an x-ray-source-to-detector distance or an object-to-detector distance).
- method 2300 may include a step 2330 of rendering a first two-dimensional projection from the three-dimensional model, the first two-dimensional projection having a corresponding spatial orientation, proceeding through step 2360 (described in example form below), then repeating step 2330 with a next sequential projection (or even an out of order projection).
- Method 2300 may include a step 2340 of receiving intra-operative imaging information (e.g., an intra-operative x-ray image) representing the first portion of the patient.
- Method 2300 may further include a step 2350 of identifying a bony edge contour in the intra-operative imaging information.
- the bony edge contour in the intra-operative imaging information may be detected using a Canny edge detector algorithm, another edge-detection algorithm that may be known to those of ordinary skill in the art, a combination of algorithms, shape-based segmentation, or manual selection.
- a Canny edge detector process may include the following steps: (1) apply a Gaussian filter to smooth the image in order to remove noise; (2) find the intensity gradients of the image; (3) apply non-maximum suppression to get rid of spurious responses to edge detection; (4) apply double threshold to determine potential edges; and (5) track by hysteresis to finalize the detection of edges by suppressing all the other edges that are weak and not connected to strong edges.
- Method 2300 may further include the step 2360 of scoring each two-dimensional projection by calculating the distance of the bony edge contour in the intra-operative imaging information to a corresponding contour in each two-dimensional projection and identifying a global minimum score. Scoring step 2360 may be performed using best-fit techniques. In an alternate embodiment, such as when the system initially renders only a first two-dimensional projection before proceeding through the method, step 2360 may include scoring only the first two-dimensional projection, storing the score in memory, and repeating scoring step 2360 for subsequent two-dimensional projections as they are rendered, then selecting a global minimum score from the plurality of scores. A repetitive process such as this may be illustrated by steps 2330 , 2360 , and 2370 in FIG. 15 . The process of repeating 2330 , 2360 , and 2370 may be referred to as an enumeration process 2380 based on the fitting of the two-dimensional projection and the detected bony edge contour from the intra-operative imaging information.
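The scoring and enumeration of steps 2330 / 2360 / 2370 can be sketched as a chamfer-style comparison: for each candidate projection, compute the mean distance from every detected bony-contour point to its nearest point on the projection's contour, and keep the global minimum. This is an illustrative sketch with hypothetical names; a practical implementation would use a distance transform rather than brute-force pairwise distances.

```python
import numpy as np

def contour_score(contour_pts, projection_pts):
    """Mean distance from each intra-operative contour point (N x 2) to the
    nearest point on a candidate projection's contour (M x 2); lower is
    better."""
    # Pairwise distances between the two point sets (N x M).
    d = np.linalg.norm(contour_pts[:, None, :] - projection_pts[None, :, :],
                       axis=2)
    return float(d.min(axis=1).mean())

def best_projection(contour_pts, candidate_projections):
    """Enumerate candidate projections and return the index of the
    global-minimum score together with all scores."""
    scores = [contour_score(contour_pts, p) for p in candidate_projections]
    return int(np.argmin(scores)), scores
```

The index of the global minimum identifies the two-dimensional projection, and hence the model orientation, that best fits the intra-operative image.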
- Method 2300 may include a step 2390 of outputting the orientation of the three-dimensional model as a final result.
- step 2390 may include outputting a transformation matrix for the two-dimensional projection having the global minimum score, the transformation matrix representing the orientation of the three-dimensional model relative to the neutral position when the two-dimensional projection having the global minimum score was rendered.
- a method may include a step of calculating an adjustment factor based on the transformation matrix.
- the calculated adjustment factor may be used to output a visual indication of an intra-operative adjustment to be made to a component based on the adjustment factor to achieve a target component orientation.
- FIG. 16 illustrates one embodiment of such a visual indication 2790 .
- an image of an ellipse 2790 may be superimposed onto radiographic image 2500 to illustrate how the opening 2792 of acetabular cup 2780 should appear when properly aligned.
- a method may include the step of applying the calculated adjustment factor to an intra-operative leg length measurement and outputting a visual indication of an intra-operative adjustment to be made to the patient to achieve a target leg length measurement.
- the radiographic image 2500 (and any visual indication discussed in a similar context) may be displayed on display 2108 from FIG. 15 .
- a visual indication may include an outline of the target component orientation; real-time inclination, anteversion, and tilt values of the component; target inclination, anteversion, and tilt values of the component; and/or combinations thereof.
- FIG. 16 illustrates a conceptual model of a two-dimensional projection 2410 from a three-dimensional model 2430 in accordance with an embodiment of the present disclosure.
- one or more two-dimensional projection(s) 2410 may be rendered based on the three-dimensional model 2430 onto a projected plan view 2420 .
- Projected plan view 2420 may be comparable to an x-ray image, where the two-dimensional projection 2410 may be comparable to an anatomical visualization on an x-ray image.
- Each two-dimensional projection may have a corresponding spatial orientation depending on the position of the x-ray source 16 a relative to the three-dimensional model 2430 .
- the two-dimensional projections may be rendered at a specified orientation and distance (e.g., an x-ray-source-to-detector distance 2440 or an object-to-detector distance 2450 ).
- the spatial relationship between the x-ray source 16 a and the three-dimensional model 2430, as well as distances 2440, 2450, may need to be taken into account in certain embodiments to ensure accurate measurements.
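The role of those distances can be illustrated with the standard point-source magnification relation, in which apparent size on the detector grows as the object moves away from the detector and toward the source. This is a generic pinhole-projection sketch with a hypothetical function name, not the disclosed rendering method.

```python
def projected_size(true_size_mm, source_to_detector_mm, object_to_detector_mm):
    """Apparent size on the detector under point-source (pinhole)
    projection: magnification = SID / (SID - ODD), where SID is the
    source-to-detector distance and ODD the object-to-detector distance."""
    source_to_object = source_to_detector_mm - object_to_detector_mm
    return true_size_mm * source_to_detector_mm / source_to_object
```

With a 1,000 mm source-to-detector distance, a 100 mm object lying 200 mm above the detector appears 125 mm wide, which is why measurements taken without accounting for these distances can be inaccurate.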
- systems and methods of the present disclosure may be used to ensure consistent measurements between radiographic images taken of a patient at a neutral position and radiographic images taken of a patient in a non-neutral (e.g., intra-operative) position, without having to ensure that the patient is precisely placed in a neutral position and, potentially, with less x-ray exposure. This is achieved by simulating movement of the patient back to the neutral position using the three-dimensional model and calculating an adjustment factor that takes into account the differences between the actual, non-neutral position of the patient and the neutral position.
- a C-arm apparatus 1001 may capture video or image signals using x-rays.
- C-arm apparatus 1001 may, for example, capture an intra-operative x-ray image.
- the C-arm apparatus 1001 may have a display 1002 directly connected to the apparatus to instantly view the images or video.
- Display 1002 may be configured with a number of various inputs, including, for example, an input to receive one or more data sets of three-dimensional image information.
- a wireless kit 1010 may, alternatively or additionally, be attached to the C-arm apparatus 1001 via video port 1003 to receive the video or image signal from the C-arm apparatus 1001 , the signal representing digital data of a radiographic image frame or plurality of frames.
- Video port 1003 may utilize a BNC connection, a VGA connection, a DVI-D connection, or an alternative connection known to those of skill in the art.
- the wireless kit 1010 may be the Radlink Wireless C-Arm Kit.
- Wireless kit 1010 may include a resolution converter 1011 to convert the image signal to proper resolution for further transmission, frame grabber 1012 to produce a pixel-by-pixel digital copy of each image frame, central processing unit 1013 , memory 1014 , and dual-band wireless-N adapter 1015 .
- the wireless kit 1010 may convert the received signal to one or more image files and can send the converted file(s), for example, by wireless connection 1018 to one or more computer(s) and operatively connected display(s) 1021 , 1022 .
- the computer and operatively connected display may, for example, be a Radlink Galileo Positioning System (“GPS”) 1021 or GPS Tablet 1022 .
- the Wireless C-Arm Kit 1010 may receive, convert, and transmit the file(s) in real time.
- the methods described in the present disclosure may be implemented, for example, as software running on the GPS 1021 or GPS Tablet 1022 units.
- the GPS 1021 and GPS Tablet 1022 may also incorporate additional functions, such as those provided in the Radlink Pro Imaging with Surgeon's Checklist software.
Abstract
Description
- This patent application is a continuation-in-part of U.S. patent application Ser. No. 15/708,821, filed Sep. 19, 2017, which claims the benefit of U.S. Provisional Application No. 62/396,611, which was filed on Sep. 19, 2016; and this application is a continuation-in-part of U.S. patent application Ser. No. 16/212,065, filed Dec. 6, 2018, which claims the benefit of U.S. Provisional Application No. 62/595,670, which was filed on Dec. 7, 2017. Each of the above-identified applications is, by this reference thereto, incorporated herein in its entirety.
- The present disclosure relates to systems and methods to create a composite radiographic image that corrects effects of parallax distortion and to help obtain proper placement and positioning of a component, such as an acetabular cup or a femoral component, during a surgical procedure.
- During certain surgical procedures, particularly orthopedic surgeries, it may be desired to obtain radiographic images before and even during surgery. Sometimes a radiographic image of a particularly wide or long body part, such as a pelvis or a femur, is taken. Often these images are used to take various measurements of the patient. For example, an orthopedic surgeon may take an intra-operative radiographic image of a patient's pelvis to, for example, measure a discrepancy in a patient's leg lengths. A common apparatus used to take such intra-operative images is a C-arm that includes an X-ray source and X-ray detector. However, a pelvis is often too wide to fit within the field of view of an image from a C-arm. Furthermore, distortions in the image, particularly parallax distortions near the ends of the field of view, may cause the accuracy of any measurements taken using such an image to be reduced.
- There is a need for a system that allows a surgeon to view intra-operatively, for example, a full body part, such as a pelvis, that corrects effects of parallax distortion to allow the surgeon to make accurate intra-operative measurements.
- The present disclosure is directed to methods and systems to improve the quality and accuracy of intra-operative radiographic images, particularly of wide-view objects, by generating a composite image that corrects effects of parallax distortion and then displaying a three-dimensional projection of a region of the patient relative to this composite image. The method of the present disclosure may comprise acquiring a sequence of digital radiographic image frames, identifying a region of interest on the image frames, cropping the identified region of interest from a plurality of the image frames, and stitching together a plurality of selected portions of cropped image frames to create a composite image. The plurality of selected portions of cropped image frames may be selected by (1) selecting a sequentially first selected portion from a first cropped image frame; (2) iteratively selecting a plurality of interior selected portions by (a) measuring a displacement between each pair of sequentially adjacent cropped image frames and (b) selecting a portion from the sequentially later cropped image frame corresponding to the measured displacement; and (3) selecting a sequentially last portion from a last cropped image frame. Each selected middle portion may have a dimension that corresponds to the measured displacement. In an embodiment, each selected middle portion may be selected from the interior 25% of the field of view of the respective cropped image frame.
- The sequence of radiographic images may be taken using a C-arm apparatus with an X-ray source and X-ray detector disposed thereon. In an embodiment, the sequence of radiographic images is a result of several (for example, five to ten) discrete exposures. In an embodiment, the sequence of radiographic images is a result of a fluoroscopic procedure generating a relatively larger number (for example, 25 to 75) of image frames. A surgeon may be able to use a composite image that corrects effects of parallax distortion according to an aspect of the present disclosure to take more accurate measurements, such as, for example, leg length discrepancy measurements, of a patient.
- Then, the composite image is used with an initial three-dimensional image of a patient in a neutral position to generate a three-dimensional model corresponding to the spatial orientation of the composite image. A plurality of two-dimensional projections are scored by calculating the distance from the bony edge contour in the intra-operative imaging information to a corresponding contour in each two-dimensional projection, and a global minimum score is identified. A transformation matrix is calculated for the two-dimensional projection having the global minimum score, the three-dimensional image of the patient established thereby is displayed, and, optionally, a visual indication of an intra-operative adjustment to be made to a component based on the adjustment factor is displayed to achieve a target component orientation.
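The scoring of candidate projections against the intra-operative bony edge can be sketched as a nearest-point contour distance. This toy version treats both contours as point lists and finds the global minimum by brute force; the disclosure's actual contour extraction and scoring may differ, and all names here are illustrative.

```python
import math

def contour_score(edge_points, projection_points):
    """Mean distance from each intra-operative bony-edge point to the
    nearest point on a candidate projection's contour (lower is better)."""
    total = 0.0
    for ex, ey in edge_points:
        total += min(math.hypot(ex - px, ey - py)
                     for px, py in projection_points)
    return total / len(edge_points)

def best_projection(edge_points, candidates):
    """Index of the candidate two-dimensional projection achieving the
    global minimum score."""
    scores = [contour_score(edge_points, c) for c in candidates]
    return scores.index(min(scores))
```

The projection selected this way is the one whose orientation best explains the intra-operative image, and its transformation matrix then drives the adjustment factor.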
- The foregoing and other features of the present disclosure will become more fully apparent from the following description, taken in conjunction with the accompanying drawings. Understanding that these drawings depict only several embodiments in accordance with the disclosure, they are, therefore, not to be considered limiting of its scope. The disclosure will be described with additional specificity and detail through use of the accompanying drawings.
- In the drawings:
-
FIG. 1 presents a conventional radiographic image illustrating parallax distortion; -
FIG. 2 depicts a flow chart presenting an exemplary method to generate a composite radiographic image that corrects effects of parallax distortion in accordance with an aspect of the present disclosure; -
FIG. 3A presents an exemplary method for selecting constituent images to be formed into a composite image in accordance with an aspect of the present disclosure; -
FIG. 3B presents an exemplary method for selecting constituent images to be formed into a composite image in accordance with an aspect of the present disclosure; -
FIG. 4A depicts a first portion of an exemplary method for generating a composite radiographic image that corrects effects of parallax distortion in accordance with an aspect of the present disclosure; -
FIG. 4B depicts a second portion of an exemplary method for generating a composite radiographic image that corrects effects of parallax distortion in accordance with an aspect of the present disclosure; -
FIG. 4C illustrates the iterative nature of at least a portion of an exemplary method for generating a composite radiographic image in accordance with an aspect of the present disclosure; -
FIG. 5 shows a composite radiographic image that corrects effects of parallax distortion in accordance with an aspect of the present disclosure; -
FIG. 6 shows a composite radiographic image that corrects effects of parallax distortion in accordance with an aspect of the present disclosure; -
FIG. 7 depicts a composite anteroposterior radiographic image of a pelvis that corrects effects of parallax distortion being used to measure a discrepancy in leg lengths of a patient in accordance with an aspect of the present disclosure; -
FIG. 8 depicts a composite anteroposterior radiographic image of a pelvis that corrects effects of parallax distortion being used to measure a discrepancy in leg lengths of a patient in accordance with an aspect of the present disclosure; -
FIG. 9 shows an embodiment of a computer architecture according to the present disclosure. -
FIG. 10 depicts a block diagram of an exemplary environment that may be used to implement systems and methods of the present disclosure; -
FIG. 11 shows a flow chart depicting an exemplary process of implementing intensity-based image registration in accordance with an aspect of the present disclosure; -
FIG. 12 shows an exemplary process that may be used in connection with measuring a displacement between two image frames generated from a plurality of discrete exposures according to an aspect of the present disclosure; -
FIG. 13 is a block diagram view of an exemplary system, an associated patient, and an x-ray source, showing an embodiment of exemplary system architecture in accordance with an embodiment of the present disclosure; -
FIG. 14A shows a portion of a patient at a neutral position with no tilt; -
FIG. 14B shows a portion of a patient at a non-neutral position with forward tilt; -
FIG. 14C shows a portion of a patient at a non-neutral position with backward tilt; -
FIG. 15 is an exemplary flow chart diagram illustrating steps that may be taken to render one or more two-dimensional projections from a three-dimensional model of a portion of a patient in accordance with an embodiment of the present disclosure; -
FIG. 16 is a diagram providing a conceptual model of a two-dimensional projection from a three-dimensional model in accordance with an embodiment of the present disclosure; -
FIG. 17 shows a projected circle rotated along three axes that may be used to model an acetabular cup component in accordance with an embodiment of the present disclosure; and -
FIG. 18 shows a screen shot of a display including an intra-operative radiographic image with a superimposed ellipse representing a target placement of an acetabular cup component in accordance with an embodiment of the present disclosure. - In the following detailed description, reference is made to the accompanying drawings, which form a part hereof. In the drawings, similar symbols typically identify similar components, unless context dictates otherwise. The illustrative embodiments described herein are not meant to be limiting. Other embodiments may be utilized, and other changes may be made, without departing from the spirit or scope of the subject matter presented here. It will be readily understood that the aspects of the present disclosure, as generally described herein and illustrated in the figures, may be arranged, substituted, combined, and designed in a wide variety of different configurations, all of which are explicitly contemplated and made part of this disclosure.
- Digital radiographic two-dimensional images can be acquired by exposing X-ray sensors at a detector to X-rays from an X-ray source. When a patient is disposed between the X-ray source and the detector, digital radiographic images may be acquired and transmitted to a computer (via wired or wireless connection) for review, archiving, and/or enhancement. Acquired images can be saved in a variety of formats, such as tiff, jpeg, png, bmp, dicom, and raw formatted images for both reading and writing.
- Digital radiography offers numerous advantages over traditional X-ray methods. For example, digital radiography generates lower radiation levels than the levels required by traditional X-ray techniques using radiographic or phosphor film. Further, two-dimensional radiographic images may be viewed much quicker than traditional X-ray film due to reduced image acquisition time. The ability to digitally enhance digital X-ray images may further reduce radiation exposure levels.
- The present disclosure is directed toward systems and methods for generating composite radiographic imaging information that corrects effects of parallax distortion. A sequence of radiographic images—including a series of discrete exposures or image frames from a fluoroscopic procedure—may be acquired using a C-arm apparatus. Conventional C-arm images have a limited view, precluding a member of a surgical team from viewing the entirety of a long structure, such as the width of a pelvis or length of a spine, in one image. The images that are captured often incorporate significant parallax distortions. Composite two-dimensional imaging information generated according to one of the exemplary methods described in fuller detail below, however, may allow a member of a surgical team to obtain a full view of a long structure in one image, and the composite image may correct effects of parallax distortion.
-
FIG. 1 presents a conventional radiographic image, and the effects of parallax distortion are clear from the image. The image 100 is an anteroposterior view of a portion of a pelvis in which a straight metal bar 101 was laid on the anterior side of a patient. In the image 100, the straight metal bar 101 appears rather curvy. The curves on straight metal bar 101 are the results of parallax distortion. Parallax distortion likewise affects the display of the patient structures being imaged and can cause inaccuracies in certain measurements that may be desired before or during an orthopedic surgery. More accurate measurements may allow members of the surgical team to complete certain aspects of orthopedic surgeries more efficiently, accurately, and confidently. - As used herein, the term "sequential" refers to the order in which a series of radiographic image frames are taken or selected, depending on the context. For clarity, a sequence of radiographic image frames, or sequential image frames, does not require that each and every image frame in a given set of image frames be selected or used but does indicate that the selected or used image frames maintain the relative order in which the image frames were taken or generated.
- As used herein, the term “displacement” refers to the amount of offset between sequential image frames along a particular axis. A second image frame may have a displacement with regard to a first image frame, for example, due to movement of a C-arm apparatus with respect to an anatomical part being imaged.
-
FIG. 2 depicts a flow chart presenting an exemplary method 200 to generate the intra-operative composite radiographic imaging information to correct effects of parallax distortion in accordance with an aspect of the present disclosure. In an embodiment, a system, such as a computer system communicatively coupled to a C-arm apparatus, receives a plurality of radiographic image frames pertaining to a patient in step 210. The radiographic image frames may comprise a sequence of discrete exposures, or the radiographic image frames may comprise frames from a fluoroscopy procedure. The system or a user thereof may identify a region of interest on the image frames in step 220. For example, the region of interest may be a structure too large for a C-arm to capture in a single image, such as a pelvis or a spine. The identified region of interest may be cropped from a plurality of the image frames in step 230. A plurality of sequential portions of cropped image frames may be selected in step 240. The selected plurality of sequential portions of cropped image frames may be stitched together in step 250 to create a composite radiographic image that displays the entire large structure, and the composite image may correct effects of parallax distortion. - In a given image frame, there may be less distortion, including parallax distortion, in an interior part of the image frame than in an edge part of the image frame. Combining a plurality of relatively small-width portions from a plurality of image frames, with each portion being taken from an interior part of an image frame, may result in a wide-view composite image that corrects effects of parallax distortion. The effects of parallax distortion may become more pronounced at a location farther from the middle of the image frame. In an embodiment, each selected interior portion may be taken from an interior 25% of the respective image frame.
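The region-of-interest cropping of steps 220-230 can be sketched with plain nested lists standing in for grayscale frames. The bounds and function names here are illustrative, not the disclosed implementation.

```python
def crop_roi(frame, bounds):
    """Crop the identified region of interest (top, bottom, left, right
    pixel bounds) from one radiographic image frame, represented here
    as a nested list of grayscale values."""
    top, bottom, left, right = bounds
    return [row[left:right] for row in frame[top:bottom]]

def crop_all(frames, bounds):
    """Apply the same crop to every frame, preserving sequential order."""
    return [crop_roi(f, bounds) for f in frames]
```

Applying the same bounds to every frame keeps the cropped frames mutually aligned, which simplifies the later displacement measurement and stitching steps.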
-
FIG. 3A presents a flow chart depicting an exemplary method 240 for selecting a plurality of sequential portions of cropped image frames generated in step 230. The portions of cropped image frames, once selected, become constituent images to be stitched together to form an intra-operative composite image, or composite two-dimensional imaging information. In an embodiment, method 240 may comprise selecting a sequentially first portion from a first cropped image frame 241. The selected sequentially first portion of step 241 may correspond to an edge part of a composite image. In an embodiment, the selected sequentially first portion of step 241 may comprise up to about half of the first cropped image frame. In an embodiment where the radiographic image frames are generated from a fluoroscopic procedure, the selected sequentially first portion of step 241 may comprise half of the first cropped image frame. In an embodiment where the radiographic image frames are generated from a series of discrete radiographic exposures, when the first cropped image frame and a sequentially next image frame are aligned to create partial overlap in the images, the selected sequentially first portion of step 241 may comprise a non-overlapped part of the first cropped image frame plus half of the overlapping region. - Still referring to
FIG. 3A, method 240 may further comprise measuring a displacement between each pair of sequentially adjacent cropped image frames 242 with respect to a longitudinal axis of the anatomical part being imaged. For clarity, the measuring step 242 could include measuring a displacement with respect to a longitudinal axis of the anatomical part being imaged between every pair of adjacent cropped image frames generated by a C-arm and received in step 210. Alternatively, measuring step 242 could occur after a step of selecting all of the constituent cropped image frames to be used to create a composite image, in which case only a displacement with respect to a longitudinal axis of the anatomical part being imaged between adjacent selected sequential cropped image frames that will be used to create the intra-operative composite imaging information will be measured. In an embodiment, measuring step 242 may include applying intensity-based image registration using a gradient descent algorithm. In an embodiment, measuring step 242 may measure a displacement between two cropped image frames with respect to a longitudinal axis of the anatomical part being imaged, an axis orthogonal to the longitudinal axis of the anatomical part being imaged, and/or a rotational axis about the anatomical part being imaged. Method 240 may further comprise selecting an interior portion from the sequentially later cropped image frame 243. The sequentially later cropped image frame refers to the cropped image frame from the pair of sequentially adjacent cropped image frames in step 242 that occurs relatively later in the pair. Each selected interior portion from step 243 may be selected from an interior part of the sequentially later cropped image frame, as the sequentially later cropped image frame may be less distorted in an interior part of the image frame than near an edge of the image frame.
Each selected interior portion from step 243, in a composite image, may be disposed in the interior part of the composite image, i.e., between the sequentially first portion from step 241 and a sequentially last portion. Each selected interior portion from step 243 may have a dimension corresponding to, and in an embodiment equal to, a respective measured displacement from step 242. For example, if measuring step 242 measures a displacement between two cropped image frames with respect to a longitudinal axis of an anatomical part being imaged, a selected interior portion may have a width equal to the measured displacement between the two cropped image frames. In such a case, the selected interior portion may have a height corresponding to the height of the sequentially later cropped image frame. A composite image may have a plurality of selected interior portions, with each selected interior portion being selected from a respective cropped image frame from a sequence of cropped image frames. Method 240 may further comprise selecting a sequentially last portion from a last cropped image frame 245. The selected sequentially last portion of step 245 may correspond to an edge part of a composite image. In an embodiment, the selected sequentially last portion of step 245 may comprise up to about half of the last cropped image frame. - In an embodiment, each selected interior portion may correspond to a measured displacement with respect to a longitudinal axis of an anatomical part being imaged from step 242. In an embodiment, each measured displacement may be about a few pixels of a digital image, and each selected interior portion may also be a few pixels in width. The width of the measured displacement may depend on the speed of movement of a C-arm apparatus and image-capturing frame rate. For example, if a C-arm apparatus is moved such that it takes about 10 seconds to move from one side of the region of interest to the other and the frame rate is set to 5 frames per second, the measured displacement may be between about 10-20 pixels. Accordingly, in such an embodiment, each selected interior portion may also be about 10-20 pixels wide. If, for example, each image frame is about 1,024 pixels wide, the width of each selected interior portion may correspond to about 1-2% of the width of the image frame from which it is taken. -
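The displacement measurement of step 242 can be approximated by minimizing an intensity difference over candidate shifts. The disclosure mentions intensity-based image registration using a gradient descent algorithm; the sketch below substitutes an exhaustive one-axis search for clarity and considers only horizontal displacement. All names are illustrative.

```python
def ssd(a, b):
    """Sum of squared intensity differences between equal-size tiles."""
    return sum((x - y) ** 2
               for ra, rb in zip(a, b)
               for x, y in zip(ra, rb))

def measure_displacement(prev_frame, curr_frame, max_shift):
    """Estimate the horizontal displacement (in pixels) between two
    sequentially adjacent cropped frames by minimizing the normalized
    SSD over candidate shifts along the longitudinal axis."""
    width = len(prev_frame[0])
    best_shift, best_cost = 0, float("inf")
    for shift in range(max_shift + 1):
        a = [row[shift:] for row in prev_frame]         # advance prev by shift
        b = [row[:width - shift] for row in curr_frame]
        cost = ssd(a, b) / (width - shift)              # normalize by overlap
        if cost < best_cost:
            best_shift, best_cost = shift, cost
    return best_shift
```

A gradient descent registration, as suggested in the disclosure, would replace the exhaustive loop with iterative updates of a continuous shift estimate, which scales better when rotation and vertical displacement are also estimated.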
FIG. 3B presents a flow chart depicting an alternate exemplary method 240 for selecting a plurality of sequential portions of cropped image frames generated in step 230. The portions of cropped image frames, once selected, become constituent images to be stitched together to form a composite image. In an embodiment, method 240 may comprise selecting a sequentially first portion from a first cropped image frame 241. The selected sequentially first portion of step 241 may correspond to an edge part of an intra-operative composite image. In an embodiment, the selected sequentially first portion of step 241 may comprise up to about half of the first cropped image frame. In an embodiment where the radiographic image frames are generated from a fluoroscopic procedure, the selected sequentially first portion of step 241 may comprise half of the first cropped image frame. In an embodiment where the two-dimensional radiographic image frames are generated from a series of discrete radiographic exposures, when the first cropped image frame and a sequentially next image frame are aligned to create partial overlap in the images, the selected sequentially first portion of step 241 may comprise a non-overlapped part of the first cropped image frame plus half of the overlapping region. - Still referring to
FIG. 3B, method 240 may further comprise measuring a displacement between each pair of sequentially adjacent cropped image frames 242 with respect to a longitudinal axis of the anatomical part being imaged. For clarity, the measuring step 242 could include measuring a displacement with respect to a longitudinal axis of the anatomical part being imaged between every pair of adjacent cropped image frames generated by a C-arm and received in step 210. Alternatively, measuring step 242 could occur after a step of selecting all of the constituent cropped image frames to be used to create a composite image, in which case only a displacement with respect to a longitudinal axis of the anatomical part being imaged between adjacent selected sequential cropped image frames that will be used to create the composite image will be measured. In an embodiment, measuring step 242 may include applying intensity-based image registration using a gradient descent algorithm. In an embodiment, measuring step 242 may measure a displacement between two cropped image frames with respect to a longitudinal axis of the anatomical part being imaged, an axis orthogonal to the longitudinal axis of the anatomical part being imaged, and/or a rotational axis about the anatomical part being imaged. Method 240 may further comprise selecting a plurality of interior portions from a plurality of cropped image frames occurring sequentially later than the first cropped image frame such that each interior portion may have a dimension corresponding to a measured displacement of step 242 between (a) the cropped image frame from which the immediately preceding selected portion was taken and (b) the cropped image frame providing the selected interior portion 244.
For clarity, the first selected interior portion may have a dimension corresponding to the measured displacement between the first cropped image frame and a sequentially second cropped image frame, and a second selected interior portion may have a dimension corresponding to the measured displacement between the sequentially second cropped image frame and a sequentially third cropped image frame. Each selected interior portion from step 244 may be selected from an interior part of the cropped image frame providing the selected interior portion, as there may be less distortion in the interior part of the image frame. Each selected interior portion from step 244, in a composite image, may be disposed in the interior part of the composite image, i.e., between the sequentially first portion from step 241 and a sequentially last portion. Each selected interior portion from step 244 may have a dimension corresponding to, and in an embodiment equal to, a respective measured displacement from step 242. For example, if measuring step 242 measures a displacement between two cropped image frames with respect to a longitudinal axis of an anatomical part being imaged, a selected interior portion may have a width equal to the measured displacement between the two cropped image frames. In such a case, the selected interior portion may have a height corresponding to the height of the sequentially later cropped image frame. A composite image may have a plurality of selected interior portions, with each selected interior portion being selected from a respective cropped image frame from a sequence of cropped image frames. Method 240 may further comprise selecting a sequentially last portion from a last cropped image frame 245. The selected sequentially last portion of step 245 may correspond to an edge part of a composite image. In an embodiment, the selected sequentially last portion of step 245 may comprise up to about half of the last cropped image frame. -
FIG. 4A depicts a first part 400A of an exemplary method for generating a composite radiographic image that corrects effects of parallax distortion. Method 400A may comprise receiving a plurality of radiographic image frames pertaining to a patient 410. As an example, block 410 depicts two anteroposterior radiographic image frames of a patient's pelvis taken from a C-arm apparatus as the X-ray source and X-ray detector were rotated relative to the patient. Method 400A may further comprise identifying a region of interest 421 on the image frames 420. The region of interest 421 on each frame in block 420 is signified with shading. -
Method 400A may further comprise cropping the identified region of interest 421 from a plurality of image frames 430 to generate cropped image frames 431, 432. In the example shown in block 430, cropped image frames 431, 432 are in a sequential order, with cropped image frame 431 being sequentially first and cropped image frame 432 being sequentially later relative to cropped image frame 431. Method 400A may further comprise measuring a displacement between each pair of sequentially adjacent cropped image frames 440 with respect to a longitudinal axis of the anatomical part being imaged. The displacement 441 between the cropped image frames 431, 432 in block 440, which have been aligned with respect to one another, can be seen. In an embodiment, measuring step 440 may include applying intensity-based image registration using a gradient descent algorithm. In an embodiment, measuring step 440 may measure a displacement between two cropped image frames with respect to a longitudinal axis of the anatomical part being imaged, an axis orthogonal to the longitudinal axis of the anatomical part being imaged, and/or a rotational axis about the anatomical part being imaged. -
FIG. 4B depicts a second part 400B of the method from FIG. 4A. Method 400B may further include selecting an interior portion 450 from the sequentially later cropped image frame 432, the selected interior portion 452 having a dimension equal to the measured displacement 441 between cropped image frames 431, 432. The selected interior portion 452 can be seen as a shaded region 452 in the sequentially later cropped image frame 432. In an embodiment, the selected interior portion 452 may be taken from the interior 25% of the sequentially later cropped image frame 432 from an axis orthogonal to the longitudinal axis of the anatomical part being imaged. For example, in block 450, the longitudinal axis of the anatomical part being imaged extends from left to right in the drawings. Selected interior portion 452 can be seen as being taken from the axis orthogonal to the longitudinal axis of the anatomical part being imaged. A sequentially first portion 451 from a sequentially first cropped image frame 431 can also be seen as a shaded region 451. Method 400B may further comprise stitching 460 selected interior portion 452 to the first portion 451 to generate a partial composite image 461. - The series of steps may be repeated to select the sequentially next interior portion.
FIG. 4C depicts an exemplary next iteration. For example, referring back to block 420, the method may repeat the step of selecting a region of interest 421 from the next sequential pair of image frames; in this case, the pair of image frames would be the image frame from which cropped image frame 432 was generated and the sequentially next image frame occurring after image frame 432. The pair of image frames may be cropped as was done in block 430. A displacement 441 between cropped image frame 432 and the sequentially next cropped image frame 433 may be measured with respect to a longitudinal axis of the anatomical part being imaged as was done in block 440. A sequentially next interior portion 453 may be selected from cropped image frame 433. The selected interior portion 453 may have a dimension equal to the measured displacement 441 between cropped image frames 432, 433. The selected interior portion 453 may be taken from the interior 25% of cropped image frame 433. The selected interior portion 453 may be aligned and stitched to partial composite image 461. -
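The iteration of blocks 420 through 460 can be sketched as a loop that appends, for each measured displacement, a center strip of matching width from the sequentially later frame, between roughly the first half of the first frame and the trailing half of the last frame. This is a simplified, hypothetical sketch; alignment and landmark matching are omitted, and the helper names are not from the disclosure:

```python
import numpy as np

def center_strip(frame, width):
    """Take a vertical strip of the given width from the middle of the
    frame, where parallax distortion is smallest."""
    mid = frame.shape[1] // 2
    half = width // 2
    return frame[:, mid - half : mid - half + width]

def build_composite(frames, displacements):
    """Stitch a composite from a sequence of cropped frames: the first
    half of the first frame, one center strip per measured displacement,
    and the trailing half of the last frame."""
    parts = [frames[0][:, : frames[0].shape[1] // 2]]
    for frame, d in zip(frames[1:], displacements):
        parts.append(center_strip(frame, d))
    parts.append(frames[-1][:, frames[-1].shape[1] // 2 :])
    return np.hstack(parts)
```

The composite width is then the two edge halves plus the sum of the measured displacements, so the region of interest appears once, without overlap.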
Method 400B may include an iterative process 470 to select and add additional portions to the partial composite image, as described above, until the entire region of interest is visible in the composite image. For example, if steps 420 through 460 were executed on a pair of image frames 1 and 2 from the set of image frames {1, 2, 3, 4, 5}, steps 420 through 460 would then be executed on image frames 2 and 3, again on image frames 3 and 4, and so on until the composite image is entirely generated. As a result of method 400A, 400B, a composite image 481 is obtained at 480. Composite image 481 may correct effects of parallax distortion. - As described above, in an embodiment, measuring
step 440 may include applying intensity-based image registration. Intensity-based image registration may be an iterative process that may include spatially registering a moving, or sequentially later, image frame with a reference, or sequentially earlier, image frame. Intensity-based image registration may involve comparing intensity patterns in images via correlation metrics, which may also be referred to as optimizers. The correlation metric or optimizer may define an image similarity metric for evaluating accuracy of a registration. The image similarity metric may take two images and return a scalar value describing the level of similarity between the images, and the optimizer may define the methodology for minimizing or maximizing the similarity metric. -
FIG. 11 shows an exemplary flow chart describing an intensity-based image registration step. The process may begin at 1100 with an image transformation defined by a pre-determined transformation matrix 1101 and a type of transformation 1102. A transformation may be applied to a moving image 1111 with bilinear interpolation at step 1110. At 1120 a metric may compare the transformed moving image 1112 to a fixed image 1121 to compute a metric value 1122. At 1130, an optimizer 1131 may check the metric value 1122 to determine if a terminating condition 1132 has been satisfied. If the terminating condition 1132 is not satisfied, the process will be repeated with a new transformation matrix generated with a gradient descent method until a defined number of iterations is reached and/or a terminating condition 1132 is satisfied. - When building a composite image, the constituent portions must be properly aligned and stitched together. Edge detection software, such as the software in Pro Imaging with Surgeon's Checklist offered by Radlink, may locate critical landmarks between the constituent portions and use the landmarks to properly align the constituent portions. The landmarks may be physically placed externally to the anatomical part being imaged within the image field of view, or the landmarks may comprise anatomical landmarks within the body. In this manner, selected portions may be aligned into true mathematical orientation with respect to one another. Once properly aligned, the constituent portions may be stitched together.
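The loop of FIG. 11 — transform, metric, optimizer, terminating condition — can be sketched for the simple case of a one-dimensional translation, with a mean-squared-error metric and a finite-difference gradient descent optimizer. The learning rate, tolerance, and interpolation scheme below are illustrative assumptions, not values from this disclosure:

```python
import numpy as np

def shift_rows(img, dx):
    """Resample each row at x - dx with linear interpolation (a 1-D
    stand-in for the bilinear interpolation step at 1110)."""
    x = np.arange(img.shape[1])
    return np.stack([np.interp(x - dx, x, row) for row in img])

def mse(fixed, moving_t):
    """Metric (1120): scalar similarity value between the two images."""
    return np.mean((fixed - moving_t) ** 2)

def register_shift(fixed, moving, lr=500.0, tol=1e-4, max_iter=300):
    """Optimizer (1130): gradient descent on the shift parameter until a
    terminating condition or the iteration limit is reached."""
    dx, eps = 0.0, 0.5
    for _ in range(max_iter):
        # finite-difference estimate of d(metric)/d(shift)
        g = (mse(fixed, shift_rows(moving, dx + eps))
             - mse(fixed, shift_rows(moving, dx - eps))) / (2 * eps)
        step = lr * g
        if abs(step) < tol:  # terminating condition satisfied
            break
        dx -= step
    return dx
```

A full implementation would optimize all parameters of the chosen transformation type rather than a single shift, but the iterate-until-terminating-condition structure is the same.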
-
FIG. 12 illustrates an exemplary process for implementing at least a part of measuring step 440 in an embodiment where the radiographic images are generated from several discrete exposures. Reference image frame 1201 and moving image frame 1202 may be provided, and one of several pre-defined regions of interest may be cropped from image frames 1201, 1202 at step 1210. The cropped image frames from step 1210 may undergo a Canny edge detector process at 1220 to detect reference frame region of interest edges 1221 and moving frame region of interest edges 1222. Edges 1221, 1222 may undergo an intensity-based image registration process, for example, as described in FIG. 11, at step 1230. At step 1240, reference image frame 1201 and moving image frame 1202 may undergo the same repeated process with a different, pre-defined region of interest cropped from image frames 1201, 1202. At step 1250, transformations may be calculated between reference image frame 1201 and moving image frame 1202 based on multiple region of interest combinations. At step 1260, the transformation with the highest metric value may be selected to produce a transformed image 1270. - A Canny edge detector process, such as in the exemplary process described above, may include the following steps: (1) apply a Gaussian filter to smooth the image in order to remove noise; (2) find the intensity gradients of the image; (3) apply non-maximum suppression to get rid of spurious responses to edge detection; (4) apply double threshold to determine potential edges; and (5) track by hysteresis to finalize the detection of edges by suppressing all the other edges that are weak and not connected to strong edges.
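The five Canny steps above can be sketched directly in NumPy. This is a hypothetical, minimal implementation for illustration only — a production system would use an optimized library version — and the kernel size, sigma, and thresholds are assumptions:

```python
import numpy as np

def xcorr2(img, k):
    """'Same'-size 2-D cross-correlation with zero padding."""
    kh, kw = k.shape
    p = np.pad(img.astype(float), ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros(img.shape, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def canny(img, low, high, sigma=1.0):
    # (1) Gaussian smoothing to remove noise (separable kernel)
    ax = np.arange(-3, 4, dtype=float)
    g = np.exp(-ax**2 / (2 * sigma**2))
    g /= g.sum()
    sm = xcorr2(xcorr2(img, g[None, :]), g[:, None])
    # (2) intensity gradients via Sobel operators
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    gx, gy = xcorr2(sm, kx), xcorr2(sm, kx.T)
    mag = np.hypot(gx, gy)
    ang = (np.rad2deg(np.arctan2(gy, gx)) + 180.0) % 180.0
    # (3) non-maximum suppression along the quantized gradient direction
    offs = {0: (0, 1), 45: (1, 1), 90: (1, 0), 135: (1, -1)}
    q = (np.round(ang / 45.0) % 4) * 45
    nms = np.zeros_like(mag)
    for y in range(1, mag.shape[0] - 1):
        for x in range(1, mag.shape[1] - 1):
            dy, dx = offs[int(q[y, x])]
            if mag[y, x] >= mag[y + dy, x + dx] and mag[y, x] >= mag[y - dy, x - dx]:
                nms[y, x] = mag[y, x]
    # (4) double threshold into strong and weak candidate edges
    strong = nms >= high
    weak = (nms >= low) & ~strong
    # (5) hysteresis: keep weak edges connected to strong ones
    edges = strong.copy()
    while True:
        grown = xcorr2(edges.astype(float), np.ones((3, 3))) > 0
        new = edges | (weak & grown)
        if new.sum() == edges.sum():
            return new
        edges = new
```

Applied to a bony-edge region of interest, the returned boolean mask marks the contour pixels that the registration step then aligns.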
- Using one of the exemplary methods described above, two-dimensional composite radiographic imaging information that corrects effects of parallax distortion may be generated. An image that displays a wide field of view and corrects effects of parallax distortion is a useful improvement in intra-operative imaging techniques and may allow members of a surgical team to make more accurate measurements during a procedure, resulting in improved outcomes for patients and more efficient surgical procedures.
-
FIG. 5 shows a composite radiographic image, generated according to a method of the present disclosure, that corrects effects of parallax distortion. Composite image 500 depicts an anteroposterior image of a patient's pelvic region. Straight metal bar 501 is accurately depicted in composite image 500—straight, and without any visible curves or defects caused by parallax distortion. Composite image 500 was generated from a plurality of discrete radiographic exposures taken on a C-arm apparatus. -
FIG. 6 shows a composite radiographic image, generated according to a method of the present disclosure, that corrects effects of parallax distortion. Composite image 600 depicts an anteroposterior image of a patient's pelvis. Straight metal bar 601 is accurately depicted in composite image 600—straight, and without any visible curves or defects caused by parallax distortion. Composite image 600 was generated from a plurality of radiographic image frames from a fluoroscopic procedure using a C-arm apparatus. -
FIG. 7 depicts a composite anteroposterior radiographic image of a pelvis 700 that corrects effects of parallax distortion. Because composite image 700 corrects effects of parallax distortion, it can be used intra-operatively to make accurate measurements, such as a discrepancy in leg lengths 702, 703. Straight metal bar 701 can be seen accurately depicted in composite image 700. Composite image 700 was generated from a plurality of radiographic image frames from a fluoroscopic procedure using a C-arm apparatus. -
FIG. 8 depicts a composite anteroposterior radiographic image of a pelvis 800 that corrects effects of parallax distortion. Because composite image 800 corrects effects of parallax distortion, it can be used intra-operatively to make accurate measurements, such as a discrepancy in leg lengths 802, 803. Straight metal bar 801 can be seen accurately depicted in composite image 800. Composite image 800 was generated from a plurality of discrete radiographic exposures taken on a C-arm apparatus. - The methods and systems described in the present disclosure may be implemented using certain hardware. For example, referring to
FIG. 10, a C-arm apparatus 1001 may capture video or image signals. The C-arm apparatus 1001 may have a display 1002 directly connected to the apparatus to instantly view the images or video. A wireless kit 1010 may, alternatively or additionally, be attached to the C-arm apparatus 1001 via video port 1003 to receive the video or image signal from the C-arm apparatus 1001, the intra-operative signal representing digital data of a radiographic image frame or plurality of frames. Video port 1003 may utilize a BNC connection, a VGA connection, a DVI-D connection, or an alternative connection known to those of skill in the art. The wireless kit 1010 may be the Radlink Wireless C-Arm Kit, unique in the field in its ability to convert any wired image acquisition device (such as a C-arm) into a wireless imaging device. Wireless kit 1010 may include a resolution converter 1011 to convert the image signal to proper resolution for further transmission, frame grabber 1012 to produce a pixel-by-pixel digital copy of each image frame, central processing unit 1013, memory 1014, and dual-band wireless-N adapter 1015. The wireless kit 1010 may convert the received signal to one or more image files and can send the converted file(s), for example, by wireless connection 1018 to one or more computer(s) and operatively connected display(s) 1021, 1022. - The computer and operatively connected display may, for example, be a Radlink Galileo Positioning System (“GPS”) 1021 or
GPS Tablet 1022. The Wireless C-Arm Kit 1010 may receive, convert, and transmit the file(s) in real time. The methods described in the present disclosure may be implemented, for example, as software running on the GPS 1021 or GPS Tablet 1022 units. The GPS 1021 and GPS Tablet 1022 may also incorporate additional functions, such as those provided in the Radlink Pro Imaging with Surgeon's Checklist software. Using such equipment, a composite image that corrects effects of parallax distortion may thus be generated and viewed intra-operatively in real time. -
FIG. 9 depicts exemplary hardware for a system to generate intra-operative composite radiographic imaging information that corrects effects of parallax distortion and to generate and utilize a three-dimensional patient model to determine a proper placement of a component during a surgery. The system, or part thereof, may take the form of a computer 900 that includes a processing unit 904, a system memory 906, and a system bus 920 that operatively couples various system components, including the system memory 906, to the processing unit 904. There may be only one or there may be more than one processing unit 904, such that the processor of computer 900 comprises a single central processing unit (CPU) or a plurality of processing units, commonly referred to as a parallel processing environment. The computer 900 may be a conventional computer, a distributed computer, a web server, a file server, a tablet or iPad, a smart phone, or any other type of computing device. - The
system bus 920 may be any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, a switched fabric, point-to-point connections, and a local bus using any of a variety of bus architectures. The system memory 906 may also be referred to as simply the memory, and includes read only memory (ROM) 908 and random access memory (RAM) 907. A basic input/output system (BIOS) 910, containing the basic routines that help to transfer information between elements within the computer 900, such as during start-up, is stored in ROM 908. The computer 900 may further include a hard disk drive 932 for reading from and writing to a hard disk, not shown, a magnetic disk drive 934 for reading from or writing to a removable magnetic disk 938, and/or an optical disk drive 936 for reading from or writing to a removable optical disk 940 such as a CD-ROM or other optical media. - The
hard disk drive 932, magnetic disk drive 934, and optical disk drive 936 may be connected to the system bus 920 by a hard disk drive interface 922, a magnetic disk drive interface 924, and an optical disk drive interface 926, respectively. The drives and their associated computer-readable media provide nonvolatile storage of computer-readable instructions; data structures, e.g., a catalog and a context-based index; program modules, e.g., a web service and an indexing robot; and other data for the computer 900. It should be appreciated by those skilled in the art that any type of computer-readable media that can store data that is accessible by a computer, for example, magnetic cassettes, flash memory cards, USB drives, digital video disks, RAM, and ROM, may be used in the exemplary operating environment. - A number of program modules may be stored on the
hard disk 932, magnetic disk 934, optical disk 936, ROM 908, or RAM 907, including an operating system 912, browser 914, stand-alone program 916, etc. A user may enter commands and information into the personal computer 900 through input devices such as a keyboard 942 and a pointing device 944, for example, a mouse. Other input devices (not shown) may include, for example, a microphone, a joystick, a game pad, a tablet, a touch screen device, a satellite dish, a scanner, a facsimile machine, and a video camera. These and other input devices are often connected to the processing unit 904 through a serial port interface 928 that is coupled to the system bus 920, but may be connected by other interfaces, such as a parallel port, game port, or a universal serial bus (USB). - A
monitor 946 or other type of display device is also connected to the system bus 920 via an interface, such as a video adapter 948. In addition to the monitor 946, computers typically include other peripheral output devices, such as speakers 960, connected to the system bus 920 via an audio adapter 96, and printers. These and other output devices are often connected to the processing unit 904 through the serial port interface 928 that is coupled to the system bus 920, but may be connected by other interfaces, such as a parallel port, game port, or a universal serial bus (USB). - The
computer 900 may operate in a networked environment using logical connections to one or more remote computers. These logical connections may be achieved by a communication device coupled to or integral with the computer 900; the application is not limited to a particular type of communications device. The remote computer may be another computer, a server, a router, a network personal computer, a client, a peer device, or other common network node, and typically includes many or all of the elements described above relative to the computer 900, although only a memory storage device has been illustrated in FIG. 9. The computer 900 can be logically connected to the Internet 972. The logical connection can include a local area network (LAN), wide area network (WAN), personal area network (PAN), campus area network (CAN), metropolitan area network (MAN), or global area network (GAN). Such networking environments are commonplace in office networks, enterprise-wide computer networks, intranets, and the Internet, which are all types of networks. - When used in a LAN environment, the
computer 900 may be connected to the local network through a network interface or adapter 930, which is one type of communications device. When used in a WAN environment, the computer 900 typically includes a modem 950, a network adapter 952, or any other type of communications device for establishing communications over the wide area network. The modem 950, which may be internal or external, is connected to the system bus 920 via the serial port interface 928. In a networked environment, program modules depicted relative to the personal computer 900, or portions thereof, may be stored in a remote memory storage device. It is appreciated that the network connections shown are exemplary and other means of, and communications devices for, establishing a communications link between the computers may be used. - The system can take the form of a
computer program product 916 accessible from a computer-usable or computer-readable medium providing program code for use by or in connection with a computer or any instruction execution system. For the purposes of this description, a computer-usable or computer-readable medium can be any apparatus that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. - The medium can be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a propagation medium. Examples of a computer-readable medium comprise a semiconductor or solid-state memory, magnetic tape, a removable computer diskette, a random access memory, a read-only memory, a rigid magnetic disk, and an optical disk. Current examples of optical disks comprise compact disk-read only memory (CD-ROM), compact disk-read/write (CD-R/W), and DVD formats.
- A data processing system suitable for storing and/or executing program code comprises at least one processor coupled directly or indirectly to memory elements through a system bus. The memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memory that provide temporary storage of at least some program code in order to reduce the number of times code is retrieved from bulk storage during execution.
- Input/output or I/O devices (including, but not limited to, keyboards, displays, pointing devices, etc.) can be coupled to the system either directly or through intervening I/O controllers.
- Network adapters may also be coupled to the system to enable the data processing system to become coupled to other data processing systems or remote printers or storage devices through intervening private or public networks. Modems, cable modems and Ethernet cards are just a few of the currently available types of network adapters.
- Furthermore, computers and other related electronic devices can be remotely connected to either the LANs or the WAN via a digital communications device, modem and temporary telephone, or a wireless link. It will be appreciated that the Internet comprises a vast number of such interconnected networks, computers, and routers.
- Referring to
FIG. 13, a computerized surgery assist computer 2102 may receive anatomic image information of a patient 10 or a portion of a patient 10 (e.g., a pelvis) taken by an anatomical scanning device, such as an x-ray scanner 16 (e.g., when receiving discrete images or fluoroscopic images), at a position of the patient 10 (lying on a patient table 14). - Alternatively, the computerized surgery assist
computer 2102 may receive anatomic image information of a patient 10 or a portion of a patient 10 obtained from a CT or MR scan. For example, in such an embodiment, the anatomic image information may be a data set of three-dimensional imaging information. In an embodiment, the computerized surgery assist computer 2102 may receive a data set of three-dimensional imaging information obtained while the patient 10 was in a neutral position. The anatomic image information may be received from an image processing computer server 18 positioned via wired or wireless data links 20, 22 between the x-ray scanner 16 (or, e.g., the CT or MR scanner) and the surgery assist computer 2102. - Optionally, the patient may have a three-dimensional
positional sensor 2100 affixed to the patient's body, and the surgery assist computer 2102 may receive positional information via wired or wireless data link 2110 from sensor 2100. The surgery assist computer 2102 may be programmed to display a visual representation of the anatomic image information on a computerized display 2108; determine a target positioning value of a component from the anatomic image information, either automatically or with input from a surgeon; and may make additional measurements as desired or programmed (e.g., measurements of one or more anatomical landmarks and/or ratios of anatomical landmarks), either automatically or with input from a surgeon. - The
surgery assist computer 2102 may further receive subsequent anatomic image information of the patient 10; display a visual representation of the subsequent anatomic image information on the display 2108; and may make additional measurements or display additional markers, either automatically or with input from a surgeon. - The
surgery assist computer 2102 may have a receiver to receive information and data, including image data from the x-ray scanner 16 and/or CT or MR scanner; a processor or microcontroller, such as a CPU, to process the received information and data and to execute other software instructions; system memory to store the received information and data, software instructions, and the like; and a display 2108 to display visual representations of received information and data as well as visual representations resulting from other executed system processes. - Such a system may allow a surgeon and/or other medical personnel to more accurately and consistently determine a proper placement of and position a component by helping a surgeon identify a target position for a component and making adjustments to the positioning value based on differences in initial anatomic image information and subsequent anatomic image information. Such differences may result, for example, when a patient and an imaging scanner are aligned differently with respect to each other when multiple sets of anatomic image information are acquired (e.g., pre-operatively at a neutral position of a patient and intra-operatively at a non-neutral position of the patient).
FIGS. 14A-14C provide examples of how a portion of a patient 2200 (in this case, the patient's pelvis) may appear when the patient is positioned in different orientations. For example, FIG. 14A shows a portion of the patient 2200 in a neutral position with no tilt, while FIG. 14B shows a portion of the patient 2210 with a forward tilt of about 20°, and FIG. 14C shows a portion of the patient 2220 having a backward tilt of about −20°. Of course, moving a patient may also cause the portion of the patient to have different inclinations and anteversions, as a patient is generally manipulated in three-dimensional space. Importantly, small differences in a patient's orientation relative to a neutral position may provide different measurements of anatomical or component orientations, which could affect the outcome of a surgical procedure. For example, an acetabular cup 2201 is positioned with an inclination of 40.0° and an anteversion of 20.0°. If pelvis 2200 is tilted 20.0°, as pelvis 2210 is in FIG. 14B, the acetabular cup 2211 is measured to have an inclination of 37.3° and an anteversion of 4.3°. If pelvis 2200 is tilted to −20.0°, as is pelvis 2220 in FIG. 14C, the acetabular cup 2221 is measured to have an inclination of 47.2° and an anteversion of 34.6°. Accordingly, when positioning a component in a patient during surgery, such as an acetabular cup during THA, a surgeon may need to account for the effects of the patient's orientation on positioning values such as tilt, inclination, and/or anteversion. - Adjustments to positional values of the acetabular cup, such as inclination, may be based on the study of a projected circle in three-dimensional space. The rotation of the circle in three-dimensional space may mimic the rotation of an acetabular cup. An acetabular cup may display shapes of ellipses under different angles of projection.
Three rotational factors may affect the shape of the projected ellipse: Inclination (I)—rotation about the Z axis, Anteversion (A)—rotation about the Y axis, and Tilt (T)—rotation about the X axis.
FIG. 17 illustrates an exemplary projection of a circle that may be used to model an opening 2792 of an acetabular cup 2780 with the X, Y, and Z axes labeled. - With reference to
FIG. 17 , the rotational matrices along the X, Y, and Z axes may be described as follows: -
Rx(T)=[1, 0, 0; 0, cos(T), −sin(T); 0, sin(T), cos(T)]; Ry(A)=[cos(A), 0, sin(A); 0, 1, 0; −sin(A), 0, cos(A)]; Rz(I)=[cos(I), −sin(I), 0; sin(I), cos(I), 0; 0, 0, 1], where each matrix is written row by row with rows separated by semicolons. These standard rotation matrices, applied in the order Rz(I), then Ry(A), then Rx(T), are consistent with the projected-ellipse and normal equations below.
- The following matrix may capture the initial circle lying on the X-Z plane:
-
- The normal of the circle may be in the direction of the Y-axis and may be described as follows:
-
- After three rotations, the parametric equations of the circle projected on the X-Y plane may be described as follows:
-
X=R*[sin(θ)*cos(I)*cos(A)+cos(θ)*sin(A)]; and -
Y=R*cos(T)*sin(θ)*sin(I)−R*[−sin(θ)*cos(I)*sin(A)*sin(T)+cos(θ)*cos(A)*sin(T)]. - where X and Y represent the coordinates of the projected ellipse on the X-Y plane, R represents the size of the acetabular cup, and θ represents the parameter.
- After three rotations along the three axes, the parametric equations of the normal of the circle surface may be described as follows:
-
X normal=sin(I)*cos(A) -
Y normal=−cos(I)*cos(T)+sin(I)*sin(A)*sin(T) - The normal of the circle has the property that it is always parallel to the minor diameter of the projected ellipse. Accordingly, the minor diameter of the projected ellipse may be derived and described as follows:
-
Minor Diameter=sin(acos(√(Xnormal²+Ynormal²)))*2*R
-
Major Diameter=2*R - Accordingly, the inclination value of the projected ellipse may be described as follows:
-
Inclination=atan(Xnormal/(−Ynormal))
- Therefore, if an acetabular cup is placed or has target positioning values with a known inclination and anteversion, the inclination resulting after the acetabular cup is tilted (e.g., when the pelvis is tilted) may be calculated. Other positioning values may similarly be calculated, as will be apparent to one of ordinary skill in the art.
- Unless otherwise expressly stated or obviously required by context, steps in methods described herein need not be performed in a particular order. Rather, an example order may be provided for ease of explanation.
- In an embodiment, the surgery assist
computer 2102 may be configured to implement one or more methods of the present disclosure. For example, with reference to FIG. 15, one method 2300 may be used to position a component intra-operatively. The example method 2300 may include a step 2310 of receiving a data set of imaging information representing at least a first portion of a patient (e.g., a data set of three-dimensional imaging information from a CT or MR scan) in a neutral position. Method 2300 may further include a step 2320 of generating a three-dimensional model of the first portion of the patient based on the data set of imaging information. The three-dimensional model may be reconstructed based on a region-growing algorithm, watershed algorithm, active contour algorithm, a combination of algorithms, or any algorithm that may be known to those of ordinary skill in the art for generating a three-dimensional model from a data set of imaging information. Method 2300 may additionally include a step 2330 of iteratively rendering a plurality of two-dimensional projections from the three-dimensional model, each two-dimensional projection having a corresponding spatial orientation. The two-dimensional projections may be made at a specified orientation and distance (e.g., an x-ray-source-to-detector distance or an object-to-detector distance). - Alternatively,
method 2300 may include a step 2330 of rendering a first two-dimensional projection from the three-dimensional model, the first two-dimensional projection having a corresponding spatial orientation, proceeding through step 2360 (described in example form below), then repeating step 2330 with a next sequential projection (or even an out-of-order projection). -
Method 2300 may include a step 2340 of receiving intra-operative imaging information (e.g., an intra-operative x-ray image) representing the first portion of the patient. Method 2300 may further include a step 2350 of identifying a bony edge contour in the intra-operative imaging information. In an embodiment, the bony edge contour in the intra-operative imaging information may be detected using a Canny edge detector algorithm, another edge-detection algorithm that may be known to those of ordinary skill in the art, a combination of algorithms, shape-based segmentation, or manual selection. In an embodiment, a Canny edge detector process, such as in the exemplary process described above, may include the following steps: (1) apply a Gaussian filter to smooth the image in order to remove noise; (2) find the intensity gradients of the image; (3) apply non-maximum suppression to get rid of spurious responses to edge detection; (4) apply double threshold to determine potential edges; and (5) track by hysteresis to finalize the detection of edges by suppressing all the other edges that are weak and not connected to strong edges. -
Method 2300 may further include the step 2360 of scoring each two-dimensional projection by calculating the distance of the bony edge contour in the intra-operative imaging information to a corresponding contour in each two-dimensional projection and identifying a global minimum score. Scoring step 2360 may be performed using best-fit techniques. In an alternate embodiment, such as when the system initially renders only a first two-dimensional projection before proceeding through the method, step 2360 may include scoring only the first two-dimensional projection, storing the score in memory, and repeating scoring step 2360 for subsequent two-dimensional projections as they are rendered, then selecting a global minimum score from the plurality of scores. A repetitive process such as this may be illustrated by steps 2330, 2360, and 2370 in FIG. 15. The process of repeating steps 2330, 2360, and 2370 may be referred to as an enumeration process 2380 based on the fitting of the two-dimensional projection and the detected bony edge contour from the intra-operative imaging information. -
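The scoring of step 2360 can be sketched as a nearest-point contour distance evaluated over enumerated candidate projections, with the orientation attaining the global minimum score selected. This is a hypothetical, minimal sketch in which contours are given as point sets; a real system would render each candidate contour from the three-dimensional model:

```python
import numpy as np

def contour_score(proj_pts, intra_pts):
    """Mean distance from each intra-operative bony-edge point to its
    nearest point on a candidate projection contour (lower is better)."""
    d = np.linalg.norm(intra_pts[:, None, :] - proj_pts[None, :, :], axis=2)
    return d.min(axis=1).mean()

def best_orientation(projections, intra_pts):
    """Enumerate (orientation, contour) candidates and return the
    orientation whose contour attains the global minimum score."""
    scores = {ori: contour_score(pts, intra_pts) for ori, pts in projections}
    return min(scores, key=scores.get)
```

Here `projections` is an iterable of (orientation, Nx2 point array) pairs; in the method described above, each orientation would correspond to a transformation matrix of the three-dimensional model.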
Method 2300 may include a step 2390 of outputting the orientation of the three-dimensional model as a final result. In an embodiment, step 2390 may include outputting a transformation matrix for the two-dimensional projection having the global minimum score, the transformation matrix representing the orientation of the three-dimensional model relative to the neutral position when the two-dimensional projection having the global minimum score was rendered. - In an embodiment, a method may include a step of calculating an adjustment factor based on the transformation matrix. The calculated adjustment factor may be used to output a visual indication of an intra-operative adjustment to be made to a component based on the adjustment factor to achieve a target component orientation. For example,
FIG. 16 illustrates one embodiment of such a visual indication 2790. Still referencing FIG. 16, for example, in a THA surgical procedure, an image of an ellipse 2790 may be superimposed onto radiographic image 2500 to illustrate how the opening 2792 of acetabular cup 2780 should appear when properly aligned. In an alternate embodiment, a method may include the step of applying the calculated adjustment factor to an intra-operative leg length measurement and outputting a visual indication of an intra-operative adjustment to be made to the patient to achieve a target leg length measurement. In an embodiment, the radiographic image 2500 (and any visual indication discussed in a similar context) may be displayed on display 2108 from FIG. 15. In an embodiment, a visual indication may include an outline of the target component orientation; real-time inclination, anteversion, and tilt values of the component; target inclination, anteversion, and tilt values of the component; and/or combinations thereof.
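As a worked illustration of why the cup opening projects as an ellipse: a circular opening tilted relative to the imaging plane foreshortens along one axis, and the short-to-long axis ratio of the resulting ellipse encodes the tilt. One classic approximation, widely used for radiographic cup anteversion (though not necessarily the adjustment-factor calculation used in this disclosure), is anteversion = arcsin(short axis / long axis):

```python
import math

def radiographic_anteversion(short_axis, long_axis):
    """Estimate cup anteversion (degrees) from the projected ellipse of the
    cup opening using the classic arcsin(short/long) approximation.
    An edge-on opening (short axis 0) gives 0 degrees; equal axes
    (a projected circle) give 90 degrees."""
    if not 0 <= short_axis <= long_axis:
        raise ValueError("expect 0 <= short_axis <= long_axis")
    return math.degrees(math.asin(short_axis / long_axis))
```

For instance, an ellipse measured at 20 mm by 40 mm on the radiograph corresponds to roughly 30 degrees of anteversion under this approximation.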
FIG. 16 illustrates a conceptual model of a two-dimensional projection 2410 from a three-dimensional model 2430 in accordance with an embodiment of the present disclosure. As discussed above, one or more two-dimensional projection(s) 2410 may be rendered based on the three-dimensional model 2430 onto a projected plan view 2420. Projected plan view 2420 may be comparable to an x-ray image, where the two-dimensional projection 2410 may be comparable to an anatomical visualization on an x-ray image. Each two-dimensional projection may have a corresponding spatial orientation depending on the position of the x-ray source 16a relative to the three-dimensional model 2430. Of course, FIG. 16 may represent a conceptualization of rendering two-dimensional projections, so there is not necessarily a physical x-ray source 16a or a physical three-dimensional model 2430 (though it may be possible to visualize the three-dimensional model 2430 on the display 2108 in some embodiments). The two-dimensional projections may be rendered at a specified orientation and distance (e.g., an x-ray-source-to-detector distance 2440 or an object-to-detector distance 2450). The spatial relationship of the x-ray source 16a and the three-dimensional model 2430, as well as distance(s) 2440, 2450, may need to be taken into account in certain embodiments to ensure accurate measurements.
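The reason distances 2440 and 2450 matter for accurate measurements can be illustrated with standard point-source projection geometry: an object between the source and detector is magnified by M = SDD / (SDD − ODD), where SDD is the source-to-detector distance and ODD the object-to-detector distance. The helpers below are an illustrative sketch, not code from the disclosure:

```python
def magnification(source_to_detector, object_to_detector):
    """Geometric magnification M = SDD / SOD for a point x-ray source,
    where SOD = SDD - ODD is the source-to-object distance."""
    sod = source_to_detector - object_to_detector
    if sod <= 0:
        raise ValueError("object must lie between source and detector")
    return source_to_detector / sod

def true_size(measured_on_detector, source_to_detector, object_to_detector):
    """Scale a length measured on the detector back to the object plane."""
    return measured_on_detector / magnification(
        source_to_detector, object_to_detector)
```

For example, with a 1000 mm source-to-detector distance and the anatomy 200 mm above the detector, the magnification is 1.25, so a feature measuring 50 mm on the detector is 40 mm at the object plane.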
- In an embodiment, systems and methods of the present disclosure may be used to ensure consistent measurements between radiographic images taken of a patient at a neutral position and radiographic images taken of a patient in a non-neutral (e.g., intra-operative) position, without requiring that the patient be precisely placed in a neutral position and, potentially, with less x-ray exposure. This may be accomplished by simulating movement of the patient back to the neutral position using the three-dimensional model and calculating an adjustment factor that accounts for the differences between the actual, non-neutral position of the patient and the neutral position.
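One plausible sketch of how an adjustment factor could be derived from the transformation matrix output in step 2390 is to decompose the rotation block of the 4x4 rigid transform into rotation angles about the anatomical axes. The Rz·Ry·Rx convention and all names below are assumptions for illustration; the disclosure does not publish its decomposition.

```python
import numpy as np

def orientation_from_transform(T):
    """Recover rotation angles (degrees) about x, y, z from a 4x4 rigid
    transform T = [[R, t], [0, 1]], assuming R = Rz @ Ry @ Rx and the
    middle angle is away from +/-90 degrees (no gimbal-lock handling)."""
    R = np.asarray(T, dtype=float)[:3, :3]
    ry = np.arcsin(-R[2, 0])            # R[2,0] = -sin(ry)
    rx = np.arctan2(R[2, 1], R[2, 2])   # sin(rx)cos(ry), cos(rx)cos(ry)
    rz = np.arctan2(R[1, 0], R[0, 0])   # sin(rz)cos(ry), cos(rz)cos(ry)
    return np.degrees([rx, ry, rz])
```

Building a transform from known angles and decomposing it recovers those angles, which could then be compared against the neutral position to produce per-axis corrections.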
- The methods and systems described in the present disclosure may be implemented, at least in part, using certain hardware. For example, referring to
FIG. 8, a C-arm apparatus 1001 may capture video or image signals using x-rays. C-arm apparatus 1001 may, for example, capture an intra-operative x-ray image. The C-arm apparatus 1001 may have a display 1002 directly connected to the apparatus to instantly view the images or video. Display 1002 may be configured with a number of various inputs, including, for example, an input to receive one or more data sets of three-dimensional image information. A wireless kit 1010 may, alternatively or additionally, be attached to the C-arm apparatus 1001 via video port 1003 to receive the video or image signal from the C-arm apparatus 1001, the signal representing digital data of a radiographic image frame or plurality of frames. Video port 1003 may utilize a BNC connection, a VGA connection, a DVI-D connection, or an alternative connection known to those of skill in the art. The wireless kit 1010 may be, for example, the Radlink Wireless C-Arm Kit, which converts a wired image acquisition device (such as a C-arm) into a wireless imaging device. Wireless kit 1010 may include a resolution converter 1011 to convert the image signal to the proper resolution for further transmission, a frame grabber 1012 to produce a pixel-by-pixel digital copy of each image frame, a central processing unit 1013, memory 1014, and a dual-band wireless-N adapter 1015. The wireless kit 1010 may convert the received signal to one or more image files and can send the converted file(s), for example, by wireless connection 1018 to one or more computer(s) and operatively connected display(s) 1021, 1022.
- The computer and operatively connected display may, for example, be a Radlink Galileo Positioning System ("GPS") 1021 or
GPS Tablet 1022. The Wireless C-Arm Kit 1010 may receive, convert, and transmit the file(s) in real time. The methods described in the present disclosure may be implemented, for example, as software running on the GPS 1021 or GPS Tablet 1022 units. The GPS 1021 and GPS Tablet 1022 may also incorporate additional functions, such as those provided in the Radlink Pro Imaging with Surgeon's Checklist software.
- While various aspects and embodiments have been disclosed herein, other aspects and embodiments will be apparent to those skilled in the art. The various aspects and embodiments disclosed herein are for purposes of illustration and are not intended to be limiting.
Claims (20)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US16/851,545 US10748319B1 (en) | 2016-09-19 | 2020-04-17 | Composite radiographic image that corrects effects of parallax distortion |
Applications Claiming Priority (5)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US201662396611P | 2016-09-19 | 2016-09-19 | |
| US15/708,821 US10650561B2 (en) | 2016-09-19 | 2017-09-19 | Composite radiographic image that corrects effects of parallax distortion |
| US201762595670P | 2017-12-07 | 2017-12-07 | |
| US16/212,065 US11257241B2 (en) | 2017-12-07 | 2018-12-06 | System and method for component positioning by registering a 3D patient model to an intra-operative image |
| US16/851,545 US10748319B1 (en) | 2016-09-19 | 2020-04-17 | Composite radiographic image that corrects effects of parallax distortion |
Related Parent Applications (2)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US15/708,821 Continuation-In-Part US10650561B2 (en) | 2016-09-19 | 2017-09-19 | Composite radiographic image that corrects effects of parallax distortion |
| US16/212,065 Continuation-In-Part US11257241B2 (en) | 2012-10-02 | 2018-12-06 | System and method for component positioning by registering a 3D patient model to an intra-operative image |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| US20200250796A1 true US20200250796A1 (en) | 2020-08-06 |
| US10748319B1 US10748319B1 (en) | 2020-08-18 |
Family
ID=71836018
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US16/851,545 Expired - Fee Related US10748319B1 (en) | 2016-09-19 | 2020-04-17 | Composite radiographic image that corrects effects of parallax distortion |
Country Status (1)
| Country | Link |
|---|---|
| US (1) | US10748319B1 (en) |
Families Citing this family (21)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| GB2536650A (en) | 2015-03-24 | 2016-09-28 | Augmedics Ltd | Method and system for combining video-based and optic-based augmented reality in a near eye display |
| US10991070B2 (en) | 2015-12-18 | 2021-04-27 | OrthoGrid Systems, Inc | Method of providing surgical guidance |
| US12521201B2 (en) | 2017-12-07 | 2026-01-13 | Augmedics Ltd. | Spinous process clamp |
| US12458411B2 (en) | 2017-12-07 | 2025-11-04 | Augmedics Ltd. | Spinous process clamp |
| US11980507B2 (en) | 2018-05-02 | 2024-05-14 | Augmedics Ltd. | Registration of a fiducial marker for an augmented reality system |
| US11540794B2 (en) | 2018-09-12 | 2023-01-03 | Orthogrid Systesm Holdings, LLC | Artificial intelligence intra-operative surgical guidance system and method of use |
| US11589928B2 (en) | 2018-09-12 | 2023-02-28 | Orthogrid Systems Holdings, Llc | Artificial intelligence intra-operative surgical guidance system and method of use |
| US11766296B2 (en) | 2018-11-26 | 2023-09-26 | Augmedics Ltd. | Tracking system for image-guided surgery |
| JP7497355B2 (en) * | 2018-12-05 | 2024-06-10 | ストライカー コーポレイション | Systems and methods for displaying medical imaging data - Patents.com |
| US11980506B2 (en) | 2019-07-29 | 2024-05-14 | Augmedics Ltd. | Fiducial marker |
| US12178666B2 (en) | 2019-07-29 | 2024-12-31 | Augmedics Ltd. | Fiducial marker |
| US11382712B2 (en) | 2019-12-22 | 2022-07-12 | Augmedics Ltd. | Mirroring in image guided surgery |
| US11564651B2 (en) * | 2020-01-14 | 2023-01-31 | GE Precision Healthcare LLC | Method and systems for anatomy/view classification in x-ray imaging |
| US11389252B2 (en) | 2020-06-15 | 2022-07-19 | Augmedics Ltd. | Rotating marker for image guided surgery |
| US12502163B2 (en) | 2020-09-09 | 2025-12-23 | Augmedics Ltd. | Universal tool adapter for image-guided surgery |
| US12239385B2 (en) | 2020-09-09 | 2025-03-04 | Augmedics Ltd. | Universal tool adapter |
| EP4086839A1 (en) * | 2021-05-04 | 2022-11-09 | Koninklijke Philips N.V. | Stitching multiple images to create a panoramic image |
| US12150821B2 (en) | 2021-07-29 | 2024-11-26 | Augmedics Ltd. | Rotating marker and adapter for image-guided surgery |
| WO2023021451A1 (en) | 2021-08-18 | 2023-02-23 | Augmedics Ltd. | Augmented reality assistance for osteotomy and discectomy |
| EP4511809A1 (en) | 2022-04-21 | 2025-02-26 | Augmedics Ltd. | Systems and methods for medical image visualization |
| WO2024057210A1 (en) | 2022-09-13 | 2024-03-21 | Augmedics Ltd. | Augmented reality eyewear for image-guided medical intervention |
Family Cites Families (32)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPH07220056A (en) | 1993-11-26 | 1995-08-18 | Philips Electron Nv | Image composition method and image apparatus for execution of said method |
| DE69738162T2 (en) | 1996-08-21 | 2008-06-26 | Koninklijke Philips Electronics N.V. | COMPOSITION OF A PICTURE OF PARTICULAR PICTURES |
| US6101238A (en) | 1998-11-25 | 2000-08-08 | Siemens Corporate Research, Inc. | System for generating a compound x-ray image for diagnosis |
| US6947038B1 (en) | 2000-04-27 | 2005-09-20 | Align Technology, Inc. | Systems and methods for generating an appliance with tie points |
| US6793390B2 (en) | 2002-10-10 | 2004-09-21 | Eastman Kodak Company | Method for automatic arrangement determination of partial radiation images for reconstructing a stitched full image |
| WO2005020790A2 (en) | 2003-08-21 | 2005-03-10 | Ischem Corporation | Automated methods and systems for vascular plaque detection and analysis |
| US7327865B2 (en) | 2004-06-30 | 2008-02-05 | Accuray, Inc. | Fiducial-less tracking with non-rigid image registration |
| US8090166B2 (en) | 2006-09-21 | 2012-01-03 | Surgix Ltd. | Medical image analysis |
| EP2358269B1 (en) | 2007-03-08 | 2019-04-10 | Sync-RX, Ltd. | Image processing and tool actuation for medical procedures |
| IL184151A0 (en) | 2007-06-21 | 2007-10-31 | Diagnostica Imaging Software Ltd | X-ray measurement method |
| US9109998B2 (en) | 2008-06-18 | 2015-08-18 | Orthopedic Navigation Ltd. | Method and system for stitching multiple images into a panoramic image |
| US8600193B2 (en) | 2008-07-16 | 2013-12-03 | Varian Medical Systems, Inc. | Image stitching and related method therefor |
| US10512451B2 (en) * | 2010-08-02 | 2019-12-24 | Jointvue, Llc | Method and apparatus for three dimensional reconstruction of a joint using ultrasound |
| JP2012070836A (en) | 2010-09-28 | 2012-04-12 | Fujifilm Corp | Image processing apparatus, image processing method, image processing program, and radiographic image capturing system |
| US8526700B2 (en) * | 2010-10-06 | 2013-09-03 | Robert E. Isaacs | Imaging system and method for surgical and interventional medical procedures |
| RU2594811C2 (en) | 2011-03-02 | 2016-08-20 | Конинклейке Филипс Н.В. | Visualisation for navigation instruction |
| EP2702449B1 (en) | 2011-04-25 | 2017-08-09 | Generic Imaging Ltd. | System and method for correction of geometric distortion of multi-camera flat panel x-ray detectors |
| US8948487B2 (en) * | 2011-09-28 | 2015-02-03 | Siemens Aktiengesellschaft | Non-rigid 2D/3D registration of coronary artery models with live fluoroscopy images |
| CA2851366C (en) | 2011-10-12 | 2021-01-12 | The Johns Hopkins University | Methods for evaluating regional cardiac function and dyssynchrony from a dynamic imaging modality using endocardial motion |
| US8831324B2 (en) | 2012-10-02 | 2014-09-09 | Brad L. Penenberg | Surgical method and workflow |
| EP2951528B1 (en) | 2013-01-29 | 2018-07-25 | Andrew Robert Korb | Methods for analyzing and compressing multiple images |
| US10433914B2 (en) | 2014-02-25 | 2019-10-08 | JointPoint, Inc. | Systems and methods for intra-operative image analysis |
| US10758198B2 (en) * | 2014-02-25 | 2020-09-01 | DePuy Synthes Products, Inc. | Systems and methods for intra-operative image analysis |
| US9196039B2 (en) | 2014-04-01 | 2015-11-24 | Gopro, Inc. | Image sensor read window adjustment for multi-camera array tolerance |
| EP3151750B1 (en) | 2014-06-06 | 2017-11-08 | Koninklijke Philips N.V. | Imaging system for a vertebral level |
| US9386222B2 (en) | 2014-06-20 | 2016-07-05 | Qualcomm Incorporated | Multi-camera system using folded optics free from parallax artifacts |
| US10271817B2 (en) * | 2014-06-23 | 2019-04-30 | Siemens Medical Solutions Usa, Inc. | Valve regurgitant detection for echocardiography |
| US20170360578A1 (en) | 2014-12-04 | 2017-12-21 | James Shin | System and method for producing clinical models and prostheses |
| US9582882B2 (en) | 2015-03-02 | 2017-02-28 | Nokia Technologies Oy | Method and apparatus for image registration in the gradient domain |
| WO2017106357A1 (en) * | 2015-12-14 | 2017-06-22 | Nuvasive, Inc. | 3d visualization during surgery with reduced radiation exposure |
| US10010249B1 (en) | 2017-03-23 | 2018-07-03 | Doheny Eye Institute | Systems, methods, and devices for optical coherence tomography multiple enface angiography averaging |
| US10022192B1 (en) | 2017-06-23 | 2018-07-17 | Auris Health, Inc. | Automatically-initialized robotic systems for navigation of luminal networks |
2020
- 2020-04-17 US US16/851,545 granted as patent US10748319B1 (status: not active, Expired - Fee Related)
Cited By (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20240031675A1 (en) * | 2021-03-29 | 2024-01-25 | Huawei Technologies Co., Ltd. | Image processing method and related device |
| US12437197B2 (en) * | 2021-03-29 | 2025-10-07 | Huawei Technologies Co., Ltd. | Image processing method and related device |
| US20240127467A1 (en) * | 2022-10-14 | 2024-04-18 | Okuma Corporation | Three-dimensional shape measuring system |
Also Published As
| Publication number | Publication date |
|---|---|
| US10748319B1 (en) | 2020-08-18 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US10748319B1 (en) | Composite radiographic image that corrects effects of parallax distortion | |
| US11257241B2 (en) | System and method for component positioning by registering a 3D patient model to an intra-operative image | |
| Weese et al. | Voxel-based 2-D/3-D registration of fluoroscopy images and CT scans for image-guided surgery | |
| US11020189B2 (en) | System and method for component positioning by registering a 3D patient model to an intra-operative image | |
| EP2849630B1 (en) | Virtual fiducial markers | |
| JP5243754B2 (en) | Image data alignment | |
| US8121380B2 (en) | Computerized imaging method for a three-dimensional reconstruction from two-dimensional radiological images; implementation device | |
| CN114642444B (en) | Oral implant accuracy evaluation method, system and terminal equipment | |
| US10078906B2 (en) | Device and method for image registration, and non-transitory recording medium | |
| CN102124320A (en) | Method and system for stitching multiple images into a panoramic image | |
| JP2011125568A (en) | Image processor, image processing method, program and image processing system | |
| JP6806655B2 (en) | Radiation imaging device, image data processing device and image processing program | |
| JP3910239B2 (en) | Medical image synthesizer | |
| US20210128243A1 (en) | Augmented reality method for endoscope | |
| KR101988531B1 (en) | Navigation system for liver disease using augmented reality technology and method for organ image display | |
| US9576353B2 (en) | Method for verifying the relative position of bone structures | |
| US10650561B2 (en) | Composite radiographic image that corrects effects of parallax distortion | |
| CN116612161B (en) | Method and device for constructing data set for medical image registration | |
| JP6429958B2 (en) | Image processing apparatus, image processing method, and program | |
| JP2017042247A (en) | Reference point evaluation device, method, and program, and positioning device, method, and program | |
| US20250363725A1 (en) | System and method for generation of registration transform for surgical navigation | |
| JP2006139782A (en) | Method of superimposing images | |
| JP6391544B2 (en) | Medical image processing apparatus, medical image processing method, and program | |
| CN114066947B (en) | Image registration method and image registration device | |
| WO2026017817A1 (en) | Method and system for obtaining a location and/or representation of a three-dimensional object on a two-dimensional image obtained by x-ray angiography |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |
|
| AS | Assignment |
Owner name: RADLINK, INC., CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:TAO, WENCHAO;XUAN, NING;REEL/FRAME:052458/0204 Effective date: 20200410 |
|
| FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO SMALL (ORIGINAL EVENT CODE: SMAL); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |
|
| STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
| CC | Certificate of correction | ||
| FEPP | Fee payment procedure |
Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |
|
| LAPS | Lapse for failure to pay maintenance fees |
Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |
|
| STCH | Information on status: patent discontinuation |
Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362 |
|
| FP | Lapsed due to failure to pay maintenance fee |
Effective date: 20240818 |