
WO2010052615A2 - Motion information extraction - Google Patents


Info

Publication number
WO2010052615A2
WO2010052615A2 (PCT/IB2009/054792)
Authority
WO
WIPO (PCT)
Prior art keywords
image data
data
projection data
volumetric image
slices
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/IB2009/054792
Other languages
French (fr)
Other versions
WO2010052615A3 (en)
Inventor
Peter Forthmann
Holger Schmitt
Udo Van Stevendaal
Thomas Koehler
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Philips Intellectual Property and Standards GmbH
Koninklijke Philips NV
Original Assignee
Philips Intellectual Property and Standards GmbH
Koninklijke Philips Electronics NV
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Philips Intellectual Property and Standards GmbH, Koninklijke Philips Electronics NV filed Critical Philips Intellectual Property and Standards GmbH
Publication of WO2010052615A2 publication Critical patent/WO2010052615A2/en
Anticipated expiration legal-status Critical
Publication of WO2010052615A3 publication Critical patent/WO2010052615A3/en
Ceased legal-status Critical Current

Classifications

    • G06T12/10
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/174Segmentation; Edge detection involving the use of two or more images
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/215Motion-based segmentation
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20224Image subtraction
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2211/00Image generation
    • G06T2211/40Computed tomography
    • G06T2211/412Dynamic

Definitions

  • CT computed tomography
  • a computed tomography (CT) scanner generally includes an x-ray tube mounted on a rotatable gantry opposite a detector array including one or more rows of detector pixels.
  • the x-ray tube rotates around an examination region located between the x-ray tube and the detector array and emits radiation that traverses the examination region and an object or subject disposed therein.
  • the detector array detects radiation that traverses the examination region and generates projection data indicative of the examination region and the object or subject disposed therein.
  • a reconstructor processes the projection data and generates volumetric image data indicative of the examination region and the object or subject disposed therein.
  • the volumetric image data can be processed to generate one or more images that include the scanned portion of the object or subject.
  • the scanned portion of the object or subject includes a moving structure such as the heart or lung, or anatomy affected by the movement of such an organ.
  • motion artifacts may be introduced into the projection data and hence the images thereof.
  • Motion compensation algorithms have been used to reduce motion artifacts.
  • some motion compensation algorithms require the temporal positions of the scanned anatomy during the various breathing phases.
  • Techniques for obtaining such information have included respiratory belts, light emitting landmarks, and the like.
  • the image data and the motion data need to be correlated or registered, and belts and other instrumentation and devices may increase procedure complexity and time, and cause patient discomfort. Aspects of the present application address the above-referenced matters and others.
  • a method includes creating a second set of projection data that includes substantially only selected structure of interest based on a first set of projection data that includes the selected structure of interest and other structure.
  • a method, in another aspect, includes generating a second plurality of sliding window slices for projection data corresponding to a last slice of a first plurality of slices. The method further includes selecting a second sliding window slice from the second plurality of sliding window slices based on the last slice of the first plurality of slices. The method further includes generating a second plurality of slices, including a first slice and a last slice, from a range of projection data around projection data corresponding to the last slice of the first plurality of slices.
  • a system in another aspect, includes a motion information extractor that extracts motion information from data from an imaging system.
  • the motion information extractor includes at least one of means for creating a set of projection data that includes substantially only selected structure of interest based on a different set of projection data that includes the selected structure of interest and other structure, and means for generating a volume of data for a moving structure based on a motion state obtained from the extracted motion information.
  • a method for correcting for motion in projection data includes obtaining first projection data indicative of both moving and static structures, generating second projection data, which is indicative substantially only of the moving structure, based on the first projection data, and using the second projection data to motion correct the first projection data.
  • FIGURE 1 illustrates an example imaging system in connection with a motion information extractor.
  • FIGURE 2 illustrates an example motion information extractor.
  • FIGURE 3 illustrates example projection data.
  • FIGURE 4 illustrates example image data.
  • FIGURE 5 illustrates example segmented image data.
  • FIGURE 6 illustrates example static only image data.
  • FIGURE 7 illustrates example motion only projection data.
  • FIGURE 8 illustrates an example method.
  • FIGURE 9 illustrates another example motion information extractor.
  • FIGURE 10 illustrates another example method.
  • FIGURE 1 illustrates an imaging system 100 such as a computed tomography (CT) scanner.
  • CT computed tomography
  • the imaging system 100 can be a positron emission tomography (PET) scanner, single photon emission computed tomography (SPECT) scanner, a combination PET/CT scanner, and/or other emission or transmission tomography scanner.
  • PET positron emission tomography
  • SPECT single photon emission computed tomography
  • the imaging system 100 includes a stationary gantry 102 and a rotating gantry 104, which is rotatably supported by the stationary gantry 102.
  • the rotating gantry 104 rotates around an examination region 106 about a longitudinal or z-axis.
  • a radiation source 108 such as an x-ray tube, is supported by and rotates with the rotating gantry 104, and emits radiation that traverses the examination region 106.
  • a source collimator 110 collimates the emitted radiation to form a generally fan, wedge, or cone shaped radiation that traverses the examination region 106.
  • a radiation sensitive detector array 112 detects radiation emitted by the radiation source 108 that traverses the examination region 106 and generates projection data indicative of the detected radiation.
  • the illustrated radiation sensitive detector array 112 includes one or more rows of radiation sensitive photosensor pixels along the z-axis.
  • a reconstructor 114 reconstructs the projection data and generates volumetric image data indicative of the examination region 106, including structure of an object or subject disposed therein. One or more images can be generated from the volumetric image data.
  • a motion information extractor 116 extracts motion information from the projection and/or volumetric image data.
  • extracted motion information is used to generate projection data that predominantly includes moving structure of interest and substantially no non-moving structure. This allows for characterization of the moving structure of interest, which may otherwise be obscured by other structure.
  • extracted motion information is used to identify image data corresponding to a motion state of interest and facilitate reconstructing a volume of data based on motion state.
  • the moving structure can be a heart, a lung, the diaphragm, and/or other periodically moving organ of a human or animal patient, a moving object, flowing fluid, etc.
  • by using the motion information extractor 116, devices such as respiratory belts, light emitting landmarks, and the like can be mitigated, which may decrease procedure complexity and time and patient discomfort relative to a configuration in which the motion information extractor 116 is not employed. It is to be understood that although the motion information extractor 116 is shown in connection with the imaging system 100, the motion information extractor 116 can be located remote from and/or employed without the imaging system 100.
  • a support 118 such as a couch, supports the object or subject in the examination region 106.
  • the support 118 is movable along the z-axis in coordination with the rotation of the rotating gantry 104 to facilitate helical, axial, or other desired scanning trajectories.
  • a general purpose computing system serves as an operator console 120, which includes human readable output devices such as a display and/or printer and input devices such as a keyboard and/or mouse.
  • Software resident on the console 120 allows the operator to control the operation of the system 100, for example, by allowing the operator to initiate scanning to obtain data that can be processed by the motion information extractor 116.
  • FIGURE 2 illustrates an example motion information extractor 116.
  • the motion information extractor 116 can receive volumetric image data from the reconstructor 114.
  • the motion information extractor 116 may include a reconstructor and/or employ another reconstructor and generate volumetric image data from projection data generated by the imaging system 100 and/or another system.
  • FIGURES 3 and 4 respectively show example projection data and volumetric image data for an area of the abdomen of a human patient.
  • a segmentor 204 segments the volumetric image data. In one instance, this may include segmenting the volumetric image data to extract substantially only structure of interest, such as moving structure, from the volumetric image data. The resulting volumetric image data is indicative substantially of the extracted moving structure of interest, and is generally referred to herein as motion only volumetric image data.
  • FIGURE 5 shows example segmented volumetric image data for the area of the abdomen shown in FIGURE 4.
  • an image data subtracter 206 subtracts the segmented or motion only volumetric image data from the original volumetric image data.
  • the resulting difference volumetric image data is indicative substantially of static only volumetric image data, which represents volumetric image data with substantially no extracted moving structure of interest, and is generally referred to herein as static only volumetric image data.
  • FIGURE 6 shows example difference volumetric image data for the area of the abdomen shown in FIGURE 4.
  • a forward-projector 208 forward projects the static only volumetric image data to generate projection data therefor, which includes substantially only the non-extracted structure and not the extracted moving structure of interest, and is generally referred to herein as static only projection data.
  • a projection data subtracter 210 subtracts the static only projection data from the original projection data.
  • the resulting difference projection data provides projection data generally referred to herein as motion only projection data.
  • This data represents projection data substantially corresponding to the moving structure of interest. With this data, the static parts of the scanned subject are effectively removed from the original projection data.
  • FIGURE 7 shows example motion only projection data.
  • the reconstructor 114 can be used to reconstruct the motion only projection data to generate motion only volumetric image data with substantially only the extracted structure and substantially no non-extracted structure.
  • This data can be further processed to determine information such as quantitative and/or qualitative information about the moving structure of interest, including, but not limited to, one or more images of the moving structure of interest, information about the motion state of the structure, etc.
  • the motion only projection data is also processed to determine such information.
  • FIGURE 8 illustrates a method in connection with the motion information extractor of FIGURE 2.
  • an object or subject such as a patient is scanned.
  • the projection data is reconstructed to generate a first set of volumetric image data.
  • a second set of volumetric image data is generated by segmenting the first set of volumetric image data to generate volumetric image data indicative of the segmented portion, which may only include moving structure of interest.
  • a third set of volumetric image data is generated by subtracting the second set of volumetric image data from the first set of volumetric image data. This data only includes the static structure.
  • the third set of volumetric image data is forward projected to generate a second set of projection data, which provides an estimate of the projection data that would render the third set of volumetric image data.
  • a third set of projection data is generated by subtracting the second set of projection data from the first set of projection data. The resulting data provides projection data indicative substantially only of the moving structure of interest.
  • the third set of projection data is reconstructed to generate a fourth set of volumetric image data, which substantially only includes the moving structure of interest. This data can be further processed as discussed herein.
  • FIGURE 9 illustrates another motion information extractor 116.
  • the motion information is used to track motion states (phases) for reconstruction purposes.
  • the motion information extractor 116 includes a motion state or phase selector 902, which selects data for reconstruction based on a motion state or phase of interest.
  • the phase selector 902 selects data based on image data generated by the reconstructor 114 and/or other reconstructor.
  • the image data includes a plurality of sliding window reconstructions or slices for a series of short scan segments. It is to be appreciated that the reconstructions can be generated with projection data corresponding to half rotations, or one hundred eighty degrees (180°) plus a fan angle, or other amounts of projection data.
  • the reconstructed slices correspond to the same z-axis position within the scan volume, but different motion states.
  • the phase selector 902 presents the slices, for example, via a monitor or other display device, and receives user input indicative of the motion state of interest. In one instance, this can be achieved by having the user manually select one of the slices, for example, by entering a slice number, clicking on a slice with a mouse or other pointing device, and/or otherwise. In another instance, the user enters indicia that identifies the state, and the phase selector 902 automatically selects a slice corresponding to the state. For example, where the data includes the abdomen and the user enters selection indicia corresponding to the highest inhalation state of the lung, the phase selector 902 identifies the slice in which the lung is the largest. This can be done based on Hounsfield units or otherwise. Of course, the user can override the phase selector 902.
  • the phase selector 902 generates a signal based on the selected sliding window slice.
  • the signal or another signal may also provide information such as the selected motion state.
  • the signal(s) is provided to the reconstructor 114, which reconstructs a plurality of contiguous slices for a range of projection data around the projection data corresponding to the selected sliding window slice (the start slice), for example, the slices for projections in the range of the start slice projection ± π/2.
  • the reconstructor 114 reconstructs a second set of sliding window slices. These slices are based on the last slice of the above noted plurality of contiguous slices. Likewise, these sliding window slices can be generated with projection data corresponding to half rotations or otherwise, and correspond to the same z-axis position within the scan volume, but different motion states.
  • a motion state identifier 904 identifies which of the sliding window slices corresponds to the motion state of the last slice in the above noted plurality of contiguous slices.
  • a similarity metric determiner 906 is used to identify such a slice.
  • the similarity metric determiner 906 can determine a similarity between each of the sliding window slices and the last slice. In one instance, this includes determining a correlation value for each of the sliding window slices with respect to the last slice. Other metrics such as cross-correlation, root mean square, deviation, and/or other metrics can additionally or alternatively be used.
  • the sliding window slice having the highest degree of correlation with the last slice is deemed to correspond to the same motion state of the last slice.
  • the projection data corresponding to this slice can be used as the start data for the next set of contiguous slices to be generated.
  • the start projection data is provided to the reconstructor 114, which reconstructs the next plurality of contiguous slices for a range of projection data around the projection that marks the center of the reconstruction interval of the next start slice.
  • the above can be repeated until enough sets of slices (e.g., 2 to 100) for a desired volume (e.g., 1 mm to 100 mm) or field of view along the z-axis have been generated.
  • the resulting volume of data can then be used for various reconstructions, including, but not limited to, multi-cycle reconstructions based on phase or motion states in which data from adjacent motion cycles is combined, which can improve image quality.
  • FIGURE 10 illustrates a method in connection with the motion information extractor of FIGURE 9.
  • a plurality of sliding window slices 1100, 1102, 1104, ..., 1106 for a series of short scan segments are generated.
  • the sliding window slice corresponding to a motion state of interest is selected as discussed herein.
  • slice 1104 is selected and referred to as the start slice.
  • a first plurality of contiguous slices 1108, including a first slice 1110 and a last slice 1112, is generated for a range of projection data around the projection data corresponding to the selected sliding window slice 1104.
  • a second plurality of sliding window slices 1114, 1116, ..., 1118 is generated based on the last slice 1112 of the first plurality of contiguous slices 1108.
  • the projection data for the sliding window slice of the second plurality with the highest degree of similarity (slice 1114 in the illustrated example) with the last slice 1112 of the first plurality of contiguous slices 1108 is selected as the next start projection data for the next set of contiguous slices, and acts 1006 to 1008 are repeated. If no more sets of contiguous slices are to be generated, then at 1014, the method ends.
  • the central projections corresponding to the first slice in the stacks of slices, or their associated points in time, can be recorded for use as so-called phase points for other reconstructions like gated reconstruction, for example.
  • the techniques described herein may be implemented by way of computer readable instructions, which when executed by a computer processor(s), cause the processor(s) to carry out the described acts.
  • the instructions are stored in a computer readable storage medium associated with or otherwise accessible to a relevant computer, such as a dedicated workstation, a home computer, a distributed computing system, the console 120, and/or other computer.
  • the acts need not be performed concurrently with data acquisition.
  • the term “substantially” as used herein means that the data is at least 99% structure of interest or 99% free of other structure. In another instance, the term “substantially” as used herein means that the data is at least 95% structure of interest or 95% free of other structure. In another instance, the term “substantially” as used herein means that the data is at least 90% structure of interest or 90% free of other structure. In another instance, the term “substantially” as used herein means that the data is at least 80% structure of interest or 80% free of other structure. In another instance, the term “substantially” as used herein means that the data is at least 50% structure of interest or 50% free of other structure.
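
The correlation-based matching performed by the similarity metric determiner 906 can be sketched as follows. This is a minimal illustrative sketch, not the patent's actual implementation: the function name `select_matching_slice` and the choice of Pearson correlation (one of the metrics mentioned above) as the similarity score are assumptions.

```python
import numpy as np

def select_matching_slice(reference, candidates):
    """Return the index of the candidate sliding window slice whose
    motion state best matches the reference slice, scored here with
    the Pearson correlation coefficient."""
    ref = reference.ravel().astype(float)
    ref = (ref - ref.mean()) / (ref.std() + 1e-12)
    best_idx, best_score = -1, -np.inf
    for i, cand in enumerate(candidates):
        c = cand.ravel().astype(float)
        c = (c - c.mean()) / (c.std() + 1e-12)
        score = float(ref @ c) / ref.size  # Pearson correlation in [-1, 1]
        if score > best_score:
            best_idx, best_score = i, score
    return best_idx
```

The winning candidate is deemed to be in the same motion state as the reference slice, and its projection data can serve as the start data for the next set of contiguous slices.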

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Image Analysis (AREA)

Abstract

A method includes creating a second set of projection data that includes substantially only selected structure of interest based on a first set of projection data that includes the selected structure of interest and other structure. Another method includes generating a second plurality of sliding window slices for a last slice of a first plurality of slices, selecting a second sliding window slice from the second plurality of sliding window slices based on the last slice of the first plurality of slices, and generating a second plurality of slices, including a first slice and a last slice, from a range of projection data around projection data corresponding to the last slice of the first plurality of slices.

Description

MOTION INFORMATION EXTRACTION
DESCRIPTION
The following generally relates to motion information extraction in connection with imaging. Although it is described herein in connection with computed tomography (CT), it is also amenable to other imaging applications, including other transmission tomography applications and/or emission tomography applications.
A computed tomography (CT) scanner generally includes an x-ray tube mounted on a rotatable gantry opposite a detector array including one or more rows of detector pixels. The x-ray tube rotates around an examination region located between the x-ray tube and the detector array and emits radiation that traverses the examination region and an object or subject disposed therein. The detector array detects radiation that traverses the examination region and generates projection data indicative of the examination region and the object or subject disposed therein. A reconstructor processes the projection data and generates volumetric image data indicative of the examination region and the object or subject disposed therein. The volumetric image data can be processed to generate one or more images that include the scanned portion of the object or subject. In some instances, the scanned portion of the object or subject includes a moving structure such as the heart or lung, or anatomy affected by the movement of such an organ. In such instances, motion artifacts may be introduced into the projection data and hence the images thereof. Motion compensation algorithms have been used to reduce motion artifacts. Unfortunately, some motion compensation algorithms require the temporal positions of the scanned anatomy during the various breathing phases. Techniques for obtaining such information have included respiratory belts, light emitting landmarks, and the like. However, the image data and the motion data need to be correlated or registered, and belts and other instrumentation and devices may increase procedure complexity and time, and cause patient discomfort. Aspects of the present application address the above-referenced matters and others.
In one aspect, a method includes creating a second set of projection data that includes substantially only selected structure of interest based on a first set of projection data that includes the selected structure of interest and other structure.
In another aspect, a method includes generating a second plurality of sliding window slices for projection data corresponding to a last slice of a first plurality of slices. The method further includes selecting a second sliding window slice from the second plurality of sliding window slices based on the last slice of the first plurality of slices. The method further includes generating a second plurality of slices, including a first slice and a last slice, from a range of projection data around projection data corresponding to the last slice of the first plurality of slices.
In another aspect, a system includes a motion information extractor that extracts motion information from data from an imaging system. The motion information extractor includes at least one of means for creating a set of projection data that includes substantially only selected structure of interest based on a different set of projection data that includes the selected structure of interest and other structure, and means for generating a volume of data for a moving structure based on a motion state obtained from the extracted motion information. In another aspect, a method for correcting for motion in projection data includes obtaining first projection data indicative of both moving and static structures, generating second projection data, which is indicative substantially only of the moving structure, based on the first projection data, and using the second projection data to motion correct the first projection data.
The invention may take form in various components and arrangements of components, and in various steps and arrangements of steps. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. FIGURE 1 illustrates an example imaging system in connection with a motion information extractor.
FIGURE 2 illustrates an example motion information extractor. FIGURE 3 illustrates example projection data. FIGURE 4 illustrates example image data. FIGURE 5 illustrates example segmented image data. FIGURE 6 illustrates example static only image data. FIGURE 7 illustrates example motion only projection data.
FIGURE 8 illustrates an example method.
FIGURE 9 illustrates another example motion information extractor. FIGURE 10 illustrates another example method.
FIGURE 1 illustrates an imaging system 100 such as a computed tomography (CT) scanner. However, it is to be appreciated that in other embodiments the imaging system 100 can be a positron emission tomography (PET) scanner, single photon emission computed tomography (SPECT) scanner, a combination PET/CT scanner, and/or other emission or transmission tomography scanner. The imaging system 100 includes a stationary gantry 102 and a rotating gantry 104, which is rotatably supported by the stationary gantry 102. The rotating gantry 104 rotates around an examination region 106 about a longitudinal or z-axis. A radiation source 108, such as an x-ray tube, is supported by and rotates with the rotating gantry 104, and emits radiation that traverses the examination region 106. A source collimator 110 collimates the emitted radiation to form a generally fan, wedge, or cone shaped radiation that traverses the examination region 106.
A radiation sensitive detector array 112 detects radiation emitted by the radiation source 108 that traverses the examination region 106 and generates projection data indicative of the detected radiation. The illustrated radiation sensitive detector array 112 includes one or more rows of radiation sensitive photosensor pixels along the z-axis. A reconstructor 114 reconstructs the projection data and generates volumetric image data indicative of the examination region 106, including structure of an object or subject disposed therein. One or more images can be generated from the volumetric image data.
A motion information extractor 116 extracts motion information from the projection and/or volumetric image data. As described in greater detail below, in one instance extracted motion information is used to generate projection data that predominantly includes moving structure of interest and substantially no non-moving structure. This allows for characterization of the moving structure of interest, which may otherwise be obscured by other structure. In another instance, extracted motion information is used to identify image data corresponding to a motion state of interest and facilitate reconstructing a volume of data based on motion state. It is to be appreciated that the moving structure can be a heart, a lung, the diaphragm, and/or other periodically moving organ of a human or animal patient, a moving object, flowing fluid, etc. In addition, by using the motion information extractor 116, devices such as respiratory belts, light emitting landmarks, and the like can be mitigated, which may decrease procedure complexity and time and patient discomfort relative to a configuration in which the motion information extractor 116 is not employed. It is to be understood that although the motion information extractor 116 is shown in connection with the imaging system 100, the motion information extractor 116 can be located remote from and/or employed without the imaging system 100.
A support 118, such as a couch, supports the object or subject in the examination region 106. The support 118 is movable along the z-axis in coordination with the rotation of the rotating gantry 104 to facilitate helical, axial, or other desired scanning trajectories. A general purpose computing system serves as an operator console 120, which includes human readable output devices such as a display and/or printer and input devices such as a keyboard and/or mouse. Software resident on the console 120 allows the operator to control the operation of the system 100, for example, by allowing the operator to initiate scanning to obtain data that can be processed by the motion information extractor 116.
FIGURE 2 illustrates an example motion information extractor 116. As shown, the motion information extractor 116 can receive volumetric image data from the reconstructor 114. In other embodiments, the motion information extractor 116 may include a reconstructor and/or employ another reconstructor and generate volumetric image data from projection data generated by the imaging system 100 and/or another system. FIGURES 3 and 4 respectively show example projection data and volumetric image data for an area of the abdomen of a human patient. Returning to FIGURE 2, a segmentor 204 segments the volumetric image data. In one instance, this may include segmenting the volumetric image data to extract substantially only structure of interest, such as moving structure, from the volumetric image data. The resulting volumetric image data is indicative substantially of the extracted moving structure of interest, and is generally referred to herein as motion only volumetric image data.
It is to be appreciated that a manual segmentation technique employing user input, an automatic segmentation technique, and/or a semi-automatic segmentation technique can be employed to segment the volumetric image data. The technique may include using anatomical models, patient information, fitting algorithms, Hounsfield units, etc. FIGURE 5 shows example segmented volumetric image data for the area of the abdomen shown in FIGURE 4. Returning to FIGURE 2, an image data subtracter 206 subtracts the segmented or motion only volumetric image data from the original volumetric image data. In this example, the resulting difference volumetric image data represents volumetric image data with substantially no extracted moving structure of interest, and is generally referred to herein as static only volumetric image data. FIGURE 6 shows example difference volumetric image data for the area of the abdomen shown in FIGURE 4.
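As a loose illustration (not the patented implementation), a Hounsfield-window segmentation and the subtraction performed by the image data subtracter 206 might be sketched as follows; the window bounds and the 3x3 array of HU values are arbitrary assumptions for demonstration only:

```python
import numpy as np

def segment_moving_structure(volume_hu, lo=-400.0, hi=200.0):
    """Toy Hounsfield-window segmentation: keep voxels whose HU value
    falls inside a window assumed to bracket the moving soft tissue;
    zero everything else to obtain a "motion only" volume."""
    mask = (volume_hu >= lo) & (volume_hu <= hi)
    return np.where(mask, volume_hu, 0.0)

# Hypothetical slice of HU values (air, soft tissue, contrast, bone, ...).
volume = np.array([[-900.0,  50.0,  100.0],
                   [ 300.0,  20.0, -500.0],
                   [  40.0,  60.0,  700.0]])

motion_only = segment_moving_structure(volume)
# Difference image: original minus segmented part -> "static only" volume.
static_only = volume - motion_only
```

Because the two volumes are produced by subtraction, they sum exactly back to the original volumetric image data, which is what allows the later projection-domain subtraction to isolate the moving structure.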
Returning to FIGURE 2, a forward-projector 208 forward projects the static only volumetric image data to generate projection data therefor, which includes substantially only the non-extracted structure, i.e., substantially none of the extracted moving structure of interest, and is generally referred to herein as static only projection data. Again, the term "substantially" can cover various ranges including those discussed herein.
A projection data subtracter 210 subtracts the static only projection data from the original projection data. The resulting difference projection data provides projection data generally referred to herein as motion only projection data. This data represents projection data substantially corresponding to the moving structure of interest. With this data, the static parts of the scanned subject are effectively removed from the original projection data. FIGURE 7 shows example motion only projection data.
Returning to FIGURE 2, the reconstructor 114 can be used to reconstruct the motion only projection data to generate motion only volumetric image data with substantially only the extracted structure and substantially no non-extracted structure. This data can be further processed to determine information such as quantitative and/or qualitative information about the moving structure of interest, including, but not limited to, one or more images of the moving structure of interest, information about the motion state of the structure, etc. In another embodiment, the motion only projection data is additionally or alternatively processed to determine such information.
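The segment/subtract/forward-project/subtract chain described above can be illustrated with a minimal parallel-beam sketch. The two-angle forward projector and the phantom below are illustrative assumptions only; a real system would forward project over many angles and reconstruct the result with the reconstructor 114:

```python
import numpy as np

def forward_project(volume):
    """Minimal parallel-beam forward projector: line integrals along
    rows and columns only (views at 0 and 90 degrees)."""
    return np.stack([volume.sum(axis=0), volume.sum(axis=1)])

# Hypothetical phantom: one static row of attenuation plus one moving voxel.
static = np.zeros((4, 4)); static[0, :] = 1.0
moving = np.zeros((4, 4)); moving[2, 1] = 5.0

# "Original" projection data of the full object (moving plus static).
p_original = forward_project(static + moving)

# Assume segmentation and subtraction recovered the static only volume;
# forward project it to obtain static only projection data.
p_static = forward_project(static)

# Motion only projection data by subtraction (projection data subtracter 210).
p_motion = p_original - p_static
```

Because forward projection is linear, the difference equals the projections of the moving structure alone, which is why the static parts of the scanned subject are effectively removed from the original projection data.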
FIGURE 8 illustrates a method in connection with the motion information extractor of FIGURE 2. At 802, an object or subject such as a patient is scanned. At 804, the projection data is reconstructed to generate a first set of volumetric image data. At 806, a second set of volumetric image data is generated by segmenting the first set of volumetric image data to generate volumetric image data indicative of the segmented portion, which may include substantially only moving structure of interest. At 808, a third set of volumetric image data is generated by subtracting the second set of volumetric image data from the first set of volumetric image data. This data includes substantially only the static structure.
At 810, the third set of volumetric image data is forward projected to generate a second set of projection data, which provides an estimate of the projection data that would yield the third set of volumetric image data upon reconstruction. At 812, a third set of projection data is generated by subtracting the second set of projection data from the first set of projection data. The resulting data provides projection data indicative substantially only of the moving structure of interest. At 814, the third set of projection data is reconstructed to generate a fourth set of volumetric image data, which includes substantially only the moving structure of interest. This data can be further processed as discussed herein. FIGURE 9 illustrates another motion information extractor 116. In this example, the motion information is used to track motion states (phases) for reconstruction purposes.
The motion information extractor 116 includes a motion state or phase selector 902, which selects data for reconstruction based on a motion state or phase of interest. The phase selector 902 selects data based on image data generated by the reconstructor 114 and/or another reconstructor. In one instance, the image data includes a plurality of sliding window reconstructions or slices for a series of short scan segments. It is to be appreciated that the reconstructions can be generated with projection data corresponding to half rotations, or one hundred eighty degrees (180°) plus a fan angle, or other amounts of projection data. The reconstructed slices correspond to the same z-axis position within the scan volume, but different motion states. In the illustrated embodiment, the phase selector 902 presents the slices, for example, via a monitor or other display device, and receives user input indicative of the motion state of interest. In one instance, this can be achieved by having the user manually select one of the slices, for example, by entering a slice number, clicking on a slice with a mouse or other pointing device, and/or otherwise. In another instance, the user enters indicia that identifies the state, and the phase selector 902 automatically selects a slice corresponding to the state. For example, where the data includes the abdomen and the user enters selection indicia corresponding to the highest inhalation state of the lung, the phase selector 902 identifies the slice in which the lung is the largest. This can be done based on Hounsfield units or otherwise. Of course, the user can override the phase selector 902.
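For illustration, the angular extent and spacing of such sliding window short scans can be indexed as below; the view counts (720 views per rotation, a fan angle worth 60 views, a new window every 30 views) are assumed values, not taken from the application:

```python
def short_scan_windows(n_views, fan_angle_views, step):
    """Return (start, end) view-index ranges for successive short scan
    windows, each spanning 180 degrees (half of n_views per rotation)
    plus the fan angle, with each window offset by `step` views."""
    width = n_views // 2 + fan_angle_views
    last_start = n_views - width
    return [(s, s + width) for s in range(0, last_start + 1, step)]

# Each window yields one sliding window slice at the same z position
# but a different motion state.
windows = short_scan_windows(n_views=720, fan_angle_views=60, step=30)
```

Each index range would feed one short scan reconstruction, producing the stack of slices from which the phase selector 902 picks the motion state of interest.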
The phase selector 902 generates a signal based on the selected sliding window slice. The signal or another signal may also provide information such as the selected motion state. The signal(s) is provided to the reconstructor 114, which reconstructs a plurality of contiguous slices for a range of projection data around the projection data corresponding to the selected sliding window slice (the start slice). For example, the slices may be reconstructed for projections in the range of the start slice projection ± π/2.
The reconstructor 114 reconstructs a second set of sliding window slices. These slices are based on the last slice of the above noted plurality of contiguous slices. Likewise, these sliding window slices can be generated with projection data corresponding to half rotations or otherwise, and correspond to the same z-axis position within the scan volume, but different motion states.
A motion state identifier 904 identifies which of the sliding window slices corresponds to the motion state of the last slice in the above noted plurality of contiguous slices. In the illustrated embodiment, a similarity metric determiner 906 is used to identify such a slice. For example, the similarity metric determiner 906 can determine a similarity between each of the sliding window slices and the last slice. In one instance, this includes determining a correlation value for each of the sliding window slices with respect to the last slice. Other metrics such as cross-correlation, root mean square deviation, and/or other metrics can additionally or alternatively be used. The sliding window slice having the highest degree of correlation with the last slice is deemed to correspond to the same motion state as the last slice. The projection data corresponding to this slice can be used as the start data for the next set of contiguous slices to be generated. In this instance, the start projection data is provided to the reconstructor 114, which reconstructs the next plurality of contiguous slices for a range of projection data around the projection that marks the center of the reconstruction interval of the next start slice. The above can be repeated until enough sets of slices (e.g., 2 to 100) for a desired volume (e.g., 1 mm to 100 mm) or field of view along the z-axis are generated. The resulting volume of data can then be used for various reconstructions, including, but not limited to, multi-cycle reconstructions based on phase or motion states in which data from adjacent motion cycles is combined, which can improve image quality. FIGURE 10 illustrates a method in connection with the motion information extractor of FIGURE 9. At 1002, a plurality of sliding window slices 1100, 1102, 1104, ..., 1106 for a series of short scan segments are generated. At 1004, the sliding window slice corresponding to a motion state of interest is selected as discussed herein.
In this example, slice 1104 is selected and referred to as the start slice. At 1006, a first plurality of contiguous slices 1108, including a first slice 1110 and a last slice 1112, is generated for a range of projection data around the projection data corresponding to the selected sliding window slice 1104.
At 1008, it is determined whether another set of contiguous slices is to be generated. In one instance, this determination is based on the desired z-axis field of view. If so, then at 1010 a second plurality of sliding window slices 1114, 1116, ..., 1118 is generated based on the last slice 1112 of the first plurality of contiguous slices 1108. At 1012, the projection data for the sliding window slice of the second plurality with the highest degree of similarity (slice 1114 in the illustrated example) with the last slice 1112 of the first plurality of contiguous slices 1108 is selected as the next start projection data for the next set of contiguous slices, and acts 1006 to 1008 are repeated. If no more sets of contiguous slices are to be generated, then at 1014, the method ends.
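One way the similarity metric determiner 906 could score candidate sliding window slices against the last slice is the Pearson correlation coefficient, sketched below; the slice sizes and the synthetic data are stand-ins for reconstructed image slices:

```python
import numpy as np

def most_similar_slice(candidates, reference):
    """Return the index of, and the scores for, the candidate slice with
    the highest correlation coefficient relative to the reference slice."""
    ref = reference.ravel()
    scores = [np.corrcoef(c.ravel(), ref)[0, 1] for c in candidates]
    return int(np.argmax(scores)), scores

rng = np.random.default_rng(0)
reference = rng.normal(size=(8, 8))           # last slice of previous stack
candidates = [rng.normal(size=(8, 8)) for _ in range(3)]
# Candidate 1 is nearly identical, mimicking the same motion state.
candidates[1] = reference + 0.01 * rng.normal(size=(8, 8))

best, scores = most_similar_slice(candidates, reference)
```

The projection data associated with the winning slice would then serve as the start data for the next set of contiguous slices, as described at 1012.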
It is to be appreciated that the central projections corresponding to the first slice in the stacks of slices, or their associated points in time, can be recorded for use as so-called phase points for other reconstructions like gated reconstruction, for example. The techniques described herein may be implemented by way of computer readable instructions, which when executed by a computer processor(s), cause the processor(s) to carry out the described acts. In such a case, the instructions are stored in a computer readable storage medium associated with or otherwise accessible to a relevant computer, such as a dedicated workstation, a home computer, a distributed computing system, the console 120, and/or other computer. The acts need not be performed concurrently with data acquisition. It is to be appreciated that in one instance the term "substantially" as used herein means that the data is at least 99% structure of interest or 99% free of other structure. In other instances, the term "substantially" as used herein means that the data is at least 95%, 90%, 80%, or 50% structure of interest or free of other structure.
The invention has been described herein with reference to the various embodiments. Modifications and alterations may occur to others upon reading the description herein. It is intended that the invention be construed as including all such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims

What is claimed is:
1. A method for processing imaging data from an imaging system (100), comprising: creating a second set of projection data that includes substantially only selected structure of interest based on a first set of projection data that includes the selected structure of interest and other structure.
2. The method of claim 1, further including: reconstructing the second set of projection data to generate second volumetric image data that includes substantially only the selected structure of interest.
3. The method of any of claims 1 to 2, further including: reconstructing the first set of projection data to generate first volumetric image data; generating third volumetric image data based on the first volumetric image data; generating fourth volumetric image data based on the third volumetric image data; forward projecting the fourth volumetric image data to generate a fourth set of projection data; and creating the second set of projection data based on the first and fourth sets of projection data.
4. The method of claim 3, wherein the third volumetric image data includes substantially only selected structure of interest.
5. The method of any of claims 3 to 4, wherein generating the third set of volumetric image data includes extracting substantially only the selected structure of interest from the first volumetric image data.
6. The method of any of claims 3 to 4, wherein generating the third set of volumetric image data includes segmenting substantially only the selected structure of interest from the first volumetric image data.
7. The method of any of claims 3 to 6, wherein the fourth volumetric image data includes substantially only non-selected structure of interest.
8. The method of any of claims 3 to 7, wherein generating fourth volumetric image data includes subtracting the third set of volumetric image data from the first set of volumetric image data.
9. The method of any of claims 1 to 8, wherein the second set of projection data is at least 95% free of non-selected structure of interest.
10. The method of any of claims 1 to 9, wherein the selected structure of interest includes a periodically moving organ of a human or animal patient.
11. A method, comprising: generating a second plurality of sliding window slices (1114, 1116, 1118) for projection data corresponding to a last slice (1112) of a first plurality of slices (1108); selecting a second sliding window slice from the second plurality of sliding window slices (1114, 1116, 1118) based on the last slice (1112) of the first plurality of slices (1108); and generating a second plurality of slices, including a first slice and a last slice, from a range of projection data around the projection data corresponding to the last slice (1112) of the first plurality of slices (1108).
12. The method of claim 11, wherein the first plurality of slices (1108) includes a first slice (1110) and a last slice (1112), which are generated from a range of projection data around projection data corresponding to a first selected sliding window slice (1100, 1102, 1104, 1106) that is selected from a first plurality of sliding window slices (1100, 1102, 1104, 1106), wherein the first selected sliding window slice is used to generate the first plurality of slices (1108).
13. The method of claim 12, further including: generating a first plurality of sliding window slices (1100, 1102, 1104, 1106) from image data of an object or a subject from an imaging system; and selecting the first sliding window slice (1100, 1102, 1104, 1106) by selecting a sliding window (1100, 1102, 1104, 1106) from the first plurality of sliding window slices (1100, 1102, 1104, 1106) that corresponds to a motion state of interest.
14. The method of any of claims 11 to 13, wherein selecting the second sliding window slice includes selecting a second sliding window slice that has a highest degree of similarity with the last slice (1112).
15. The method of any of claims 11 to 13, wherein selecting the second sliding window slice includes selecting a second sliding window slice that corresponds to a same motion phase of the last slice (1112).
16. The method of any of claims 11 to 15, further comprising: generating a third plurality of sliding window slices for the last slice of the second plurality of slices; selecting a third sliding window slice from the third plurality of sliding window slices based on the last slice of the second plurality of slices; and generating a third plurality of slices, including a first slice and a last slice, from a range of projection data around projection data corresponding to the last slice of the second plurality of slices, wherein the first, second and third plurality of slices form the volume data set.
17. A system, comprising: a motion information extractor (116) that extracts motion information from data from an imaging system (100); wherein the motion information extractor (116) includes at least one of: means (204, 206, 208, 210) for creating a set of projection data that includes substantially only selected structure of interest based on a different set of projection data that includes the selected structure of interest and other structure; and means (902, 904, 906) for generating a volume of data for a moving structure based on a motion state obtained from the extracted motion information.
18. The system of claim 17, wherein the set of projection data is created based on estimated projection data that is estimated from derived image data that does not include the selected structure of interest.
19. The system of claim 18, wherein the derived image data is generated from image data that includes substantially only the selected structure of interest and image data that includes the selected structure of interest and the other structure.
20. The system of claim 19, wherein the image data that includes substantially only the selected structure of interest is generated from the image data that includes the selected structure of interest and the other structure.
21. The system of claim 17, wherein the volume of data includes a plurality of sub- volumes of data, wherein a sub-volume of data is based on projection data for a last slice in a previous sub-volume of data.
22. The system of claim 21, wherein the sub-volume of data is based on a sliding window slice from a plurality of sliding window slices that has a highest degree of similarity with the last slice.
23. The system of claim 22, wherein the plurality of sliding window slices is generated based on a range of projection data around projection data corresponding to the last slice.
24. A method for correcting for motion in projection data, comprising: obtaining first projection data indicative of both moving and static structures; generating second projection data, which is indicative substantially only of the moving structure, based on the first projection data; and using the second projection data to motion correct the first projection data.
25. The method of claim 24, further including: reconstructing the motion corrected projection data to generate motion corrected volumetric image data.
26. The method of any of claims 24 to 25, further including: reconstructing the first projection data to generate first volumetric image data; generating third volumetric image data based on the first volumetric image data, wherein the third volumetric image data includes substantially only the moving structure; generating fourth volumetric image data based on the first and third volumetric image data, wherein the fourth volumetric image data includes substantially only the static structure; forward projecting the fourth volumetric image data to generate fourth projection data, wherein the fourth projection data includes substantially only the static structure; and generating the second projection data based on the first and fourth projection data.
27. The method of claim 26, wherein generating fourth volumetric image data includes subtracting the third set of volumetric image data from the first set of volumetric image data.
28. The method of any of claims 24 to 27, wherein the moving structure includes a periodically moving organ of a human or animal patient.
PCT/IB2009/054792 2008-11-07 2009-10-28 Motion information extraction Ceased WO2010052615A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US11230008P 2008-11-07 2008-11-07
US61/112,300 2008-11-07

Publications (2)

Publication Number Publication Date
WO2010052615A2 true WO2010052615A2 (en) 2010-05-14
WO2010052615A3 WO2010052615A3 (en) 2011-05-12

Family

ID=42153347

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2009/054792 Ceased WO2010052615A2 (en) 2008-11-07 2009-10-28 Motion information extraction

Country Status (1)

Country Link
WO (1) WO2010052615A2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103198497A (en) * 2011-09-28 2013-07-10 西门子公司 Method and system for determining a motion field and for motion-compensated reconstruction using said motion field

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE10320233B4 (en) * 2003-05-07 2012-11-08 "Stiftung Caesar" (Center Of Advanced European Studies And Research) Method for reducing artifact-induced noise

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 09796065

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 09796065

Country of ref document: EP

Kind code of ref document: A2