
GB2638129A - Method and system for image reconstruction - Google Patents

Method and system for image reconstruction

Info

Publication number
GB2638129A
GB2638129A
Authority
GB
United Kingdom
Prior art keywords
region
image
subset
reconstruction method
image reconstruction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
GB2401379.9A
Other versions
GB202401379D0 (en)
Inventor
Joyce Thomas
Mason Jonathan
Stancanello Joseph
Lachaine Martin
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Elekta Ltd
Original Assignee
Elekta Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Elekta Ltd filed Critical Elekta Ltd
Priority to GB2401379.9A priority Critical patent/GB2638129A/en
Publication of GB202401379D0 publication Critical patent/GB202401379D0/en
Priority to PCT/GB2025/050176 priority patent/WO2025163320A1/en
Publication of GB2638129A publication Critical patent/GB2638129A/en
Pending legal-status Critical Current

Classifications

    • G06T12/00
    • G06T12/10
    • G06T12/20
    • G06T12/30
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2211/00 Image generation
    • G06T2211/40 Computed tomography
    • G06T2211/412 Dynamic
    • G06T2211/444 Low dose acquisition or reduction of radiation dose

Landscapes

  • Apparatus For Radiation Diagnosis (AREA)

Abstract

A method for reconstructing images in a way that reduces motion artifacts. The method comprises: receiving a projection set 202 comprising a plurality of two-dimensional projections of a patient volume; reconstructing 204 a volumetric image 206 using the projection set; selecting a region 208, such as the central region of the volumetric image, and for the selected region, choosing a subset 210 of the projection set that minimizes motion artifacts within the region while maintaining a minimum image quality within the region, and reconstructing 212 the region using the chosen subset; repeating, until the whole volumetric image has been included in a selected region, the step of selecting another region 214 of the volumetric image and repeating the choosing and reconstructing steps, possibly choosing a different subset 218 (the subsets may or may not overlap); and combining the regions of the volumetric image to obtain a final volumetric image 230 of the patient. The combining step may use a merge-mask.

Description

METHOD AND SYSTEM FOR IMAGE RECONSTRUCTION
Field of the Invention
Embodiments of the present invention described herein relate to methods and systems for image reconstruction. More specifically, the present invention relates to a computer-implemented method for image reconstruction, and data processing apparatuses, computer programs, and non-transitory computer-readable storage mediums configured to execute methods of image reconstruction.
Background of the Invention
Radiotherapy (RT) is one of the cornerstones of cancer treatment, using ionizing radiation to eradicate tumor cells. A total radiation dose for a patient is typically divided into 3-30 daily fractions to optimize its effect. As the surrounding normal tissue is also sensitive to radiation, highly accurate delivery is a key part of effective RT. Image guided RT (IGRT) is a technique to capture the anatomy of the patient at the time of dose fraction delivery, using in-room imaging in order to align the treatment beam with the tumor location. Cone Beam Computed Tomography (CBCT) is the most widely used imaging modality for IGRT.
Standard CBCT reconstruction algorithms assume the scanned object is static. When the object changes during scanning, this intra-scan motion induces blurring and streak artifacts. These artifacts hamper IGRT applications, as they reduce the success rate and accuracy of automatic registration techniques and impede visual inspection. For periodic motions, such as breathing, motion artifacts can be managed by respiratory-correlated imaging techniques, i.e., exploiting the periodic nature of such motion through retrospective sorting of projection images into phase bins, yielding four-dimensional datasets. Non-periodic or sporadic motion presents a difficulty.
In the field, previous work on CBCT motion artifact correction includes: faster image acquisition, which often requires specialised hardware; binning of projection data into several discrete bins corresponding to different phases of the breathing cycle and then reconstructing these bins independently, which generally requires a long scan time; applying corrections to the reconstructed volume, i.e., attempting to correct motion artifacts in image space after reconstruction; iterative solutions, which suffer from prolonged reconstruction times; methods leveraging registration; using a patient-specific respiration model; and 4D-CBCT-specific approaches.
In the context of medical imaging and treatment, motion artifacts are one of the most critical limitations to the clinical use of CBCT, especially in the context of adaptive radiation therapy. That is, although existing techniques yield promising results in motion artifact correction, these techniques often require specialised hardware, longer scans, prolonged reconstruction times and/or patient-specific models. In medical imaging, high image quality during radiation therapy, such as high contrast, homogeneity, and resolution, is an important aid to accurately adjusting, for instance, cancer treatments according to the current medical state of the patient.
Summary of the Invention
It is an aim of the present disclosure to at least partially address one or more of the challenges mentioned above. The invention is defined in the independent claims, to which reference should now be made. Further features are set out in the dependent claims.
According to an aspect of the invention, there is provided a method of image processing, so as to reduce motion artifacts. The method comprises receiving a projection set comprising a plurality of two-dimensional (2D) projections of a patient volume. The projection data may be acquired or received from external computational means, such as a data storage server, or directly from an imaging or treatment system. The projection data comprises projections, which are representative of measured ray intensities corresponding to attenuated rays of radiation.
Beams of rays are emitted from some radiation source, passed through the region of the patient, and subsequently detected at a detector. The detector and the radiation source may be configured on or in a gantry, and may be rotated about an axis of the patient. Equivalently this rotation is about a radiation isocentre, which is the point (or centre of mass of a small bound region) in space through which rays intersect when the detector and the radiation source are rotated during beam-on. The techniques herein are equally applicable to static detectors and static radiation sources, arranged circumferentially about an axis of the patient. Similarly, the techniques herein are equally applicable to a combination of a static detector and rotating radiation sources, or a rotating detector and static radiation sources.
The projection set is then used to reconstruct a volumetric image. A region of the volumetric image is identified, and for that selected region, a subset of the projection set is chosen. The chosen subset minimizes motion artifacts within the region, while maintaining a minimum image quality within the region. In some examples, prior to the choosing of a subset, the projection set is divided into a plurality of subsets which may be overlapping subsets. The subset may be chosen from this plurality of subsets. Choosing the subset such that motion artefacts within the region are minimised may comprise ensuring the subset comprises less than a threshold number of characteristics operative to cause one or more motion artefacts in a volume reconstructed from the subset. Choosing the subset such that a minimum image quality is maintained within the region may comprise ensuring the region is present in at least a threshold number of projections of the chosen subset (i.e., the region is sufficiently sampled).
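The two conditions on a candidate subset can be expressed as a simple acceptance test. The following is a minimal sketch with illustrative names, assuming per-projection flags for region coverage and suspected motion are available (the patent does not prescribe this representation):

```python
def subset_acceptable(region_hits, subset, min_hits, artifact_flags, max_flagged):
    """Illustrative acceptance test for a candidate projection subset.

    region_hits[i] is True if the selected region is visible in projection i
    (used for the minimum-image-quality condition); artifact_flags[i] is True
    if projection i is suspected of containing motion (used for the
    artifact-minimisation condition). All names are assumptions, not from
    the patent.
    """
    hits = sum(1 for i in subset if region_hits[i])
    flagged = sum(1 for i in subset if artifact_flags[i])
    # The region must be sufficiently sampled, and the subset must contain
    # no more than the allowed number of motion-flagged projections.
    return hits >= min_hits and flagged <= max_flagged
```

A search over candidate subsets would then keep only those passing this test, preferring those with the fewest flagged projections.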
The method further comprises reconstructing (at least) the selected region using the chosen subset (i.e., the subset which ideally will not contain any motion artifacts, once reconstructed). In some embodiments, the first selected region of the volumetric image may be defined as being present in at least a threshold number of projections in the projection set. For example, the first region may be defined as being present in all of the projections in the projection set. In other examples, the first region may be defined as being present in 50%, 60%, 70%, 80%, 90%, 95%, or 99% of the projections in the projection set. Typically, the first selected region is a central region of the final image.
The method further comprises repeating, until the whole volumetric image has been included in a selected region, the step of selecting another region of the volumetric image and repeating the choosing and reconstructing steps above. This may involve reconstructing (at least) a second selected region of the volumetric image of the patient using a different chosen subset to that chosen for the first selected region. The subset chosen for the second selected region may use a greater number of projections to reconstruct the second selected region than were used to reconstruct the first selected region, to fulfil the requirement of maintaining a minimum image quality within the region. In some examples, the second selected region of the volumetric image may be defined as being present in less than the threshold number of projections in the projection set, e.g., the second region may be defined as being present in less than 50%, 60%, 70%, 80%, 90%, 95%, 99% or 100% of the projections in the projection set. Typically, the second selected region is an outer region of the final image and may constitute the remaining part of a patient volume that is not encompassed by the first selected region. In other examples, the method may select more than two regions.
For the above reconstruction steps, any conventional image reconstruction technique suitable for use with projection data may be used, so as to obtain the first and second regions of the volumetric image of the patient. Examples of known suitable reconstruction techniques include the Feldkamp, Davis, and Kress (FDK) reconstruction method, an iterative reconstruction method, an AI reconstruction method, a deep-learning-based reconstruction method and a Polyquant reconstruction method.
The method further comprises combining the regions of the volumetric image to obtain a final volumetric image of the patient.
Several advantages are obtained from embodiments according to the above-described aspect.
For example, the method allows for the mitigation of motion artifacts when located at the centre of the FOV, providing users with the possibility of visualizing critical structures such as bladder, prostate and rectum even when a gas bubble moves unfortunately during image acquisition, for example. The method can be run post-hoc on standard scans (i.e., no special or longer scan is required or additional x-ray dose), and focuses on correcting the centre of the FOV, which is generally the most relevant anatomical area where a radiation target usually lies (the region of interest, ROI). The invention can be used on any standard CBCT scan, and doesn't require any registration, models of respiration, or slow iterative processes. This approach is particularly useful for sporadic motion such as movement of gas bubbles. Whilst the invention is useful for selecting and using projections free from movement, the invention could equally be used for selecting and using projections where movement is happening, for example, swallowing or following a gas bubble.
Optionally, the image reconstruction method may be applied in adaptive radiotherapy, which enables a patient's treatment to be changed, or adapted, to respond to a signal that additional information is known about the patient or that the patient has changed from the original state at the time of planning.
Embodiments of another aspect include a data processing apparatus comprising a memory storing computer-readable instructions and a processor. The processor (or controller circuitry) is configured to execute the instructions to carry out the image reconstruction method.
Embodiments of another aspect include a computer program comprising instructions which, when executed by a computer, cause the computer to execute the image reconstruction method.
Embodiments of another aspect include a non-transitory computer-readable storage medium comprising instructions, which, when executed by a computer, cause the computer to execute the image reconstruction method.
Other features of the disclosure are described below and recited in the appended claims.
The invention may be implemented in digital electronic circuitry, or in computer hardware, firmware, software, or in combinations thereof. The invention may be implemented as a computer program or a computer program product, i.e., a computer program tangibly embodied in a non-transitory information carrier, e.g., in a machine-readable storage device or in a propagated signal, for execution by, or to control the operation of, one or more hardware modules.
A computer program may be in the form of a stand-alone program, a computer program portion, or more than one computer program, and may be written in any form of programming language, including compiled or interpreted languages, and it may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a data processing environment.
The invention is described in terms of particular embodiments. Other embodiments are within the scope of the following claims. For example, the steps of the invention may be performed in a different order and still achieve desirable results.
Elements of the invention have been described using the terms "processor", "input device" etc. The skilled person will appreciate that such functional terms and their equivalents may refer to parts of the system that are spatially separate but combine to serve the function defined.
Equally, the same physical parts of the system may provide two or more of the functions defined. For example, separately defined means may be implemented using the same memory and/or processor as appropriate.
Brief Description of the Drawings
Embodiments of the invention will now be further described by way of example only and with reference to the accompanying drawings, wherein like reference numerals refer to like parts, and wherein:
Figure 1 is a flow chart of a method of image reconstruction, according to an embodiment;
Figure 2 schematically illustrates a method of image reconstruction, according to an embodiment;
Figure 3 is a diagram of the geometric arrangement for cone beam projection scanning and reconstruction;
Figure 4 is a diagram of a cone beam CT scanner;
Figure 5 is a collection of six axial slices from volumes reconstructed from projection subsets (top row) and slices from their corresponding mask volumes (bottom row);
Figure 6 is a confidence circle with values between 0 and 1;
Figure 7 shows how the final volumetric image (right) is reconstructed using a reconstruction from the full projection set (top left), its corresponding weight mask (bottom left), a reconstruction from clean projection subsets (centre top) and its corresponding weight mask (centre bottom);
Figure 8 is a radiotherapy system, suitable for implementing image reconstruction according to embodiments; and
Figure 9 is a radiotherapy device or apparatus, suitable for implementing image reconstruction according to embodiments.
Detailed Description
Motion artifacts in 3D CBCT images represent one of the most critical limitations to the clinical use of CBCT, especially in adaptive radiation therapy. Motion artifacts arise from patient motion (e.g., quasi-periodic motions such as respiratory or cardiac correlated motions, or unpredictable/sporadic motions such as gas bubbles or swallowing motions) during the CBCT image acquisition. This disclosure aims to remove, or at least reduce, motion artifacts from 3D CBCT images by using different subsets of projection data to reconstruct different regions of the final 3D image. Due to the nature of the CBCT acquisition process, in which the patient is imaged from a range of angles, some regions of the anatomy are imaged more often than other regions. In particular, the region at the centre of the field of view is always imaged (if using standard scan settings). This means that there is potentially more than enough projection data to reconstruct this central region of the image, and we can thus select only a "clean" subset to use, improving results.
Thus, the central region of the image may be reconstructed based on a first number of projections, while the outer region of the image may be reconstructed based on a second, larger number of projections. These different regions can then be merged/blended to create the final 3D image. The central region of the image can be constructed from fewer projections than the outer region because the central region of the image ideally is present in all of the projections (where the x-ray cone beams of CBCT imaging for example present an overlap), whereas the outer region is not. This arises from the geometry of the field of view (FOV), as such, the size and shape of the first region is dependent on the FOV. As the central region of the image has lots of projection data, the projection data containing motion artifacts can be discarded such that reconstruction of the final image uses clean projection data without motion artifacts. However, if this approach was taken across the whole of the image, the outer regions of the image would not have enough clean projection data to form a complete image.
Therefore, the outer regions of the image instead may use all of the projection data (including projection data with motion artifacts). The central region and the outer region are then blended to produce a final image. As the central region contains the patient's region of interest, this ensures that the final image is clear of motion artifacts in the region of interest, while still producing a complete image for the entire FOV. In some embodiments, more than two regions could be used.
A CT image may be reconstructed from multiple projections that are acquired as an X-ray source rotates around the object (a patient). The acquisition geometry may be defined by the acquisition FOV, which may be determined by the beam angle, and may determine the maximum possible size of reconstructed image. For head CT scans, a FOV may be on the order of 250 mm. A typical large FOV (LFOV), which is commonly used for whole-body scanning for example, may be of a diameter on the order of 500 mm. More specifically, a system for which the techniques herein are particularly applicable has a FOV diameter of around 510 to 520 mm. A medium FOV (MFOV) system may have a FOV of a diameter on the order of 400 mm.
Various aspects and details of these principal concepts will be described below with reference to Figures 1 to 9.
Figure 1 shows a general method for image processing which addresses motion induced image artifacts in CBCT imaging. At step 102, a projection set comprising a plurality of two-dimensional projections of a patient volume is received. At step 104, a volumetric image is reconstructed using the projection set. At step 106, a region of the volumetric image is selected. It should be noted here that the notion of "selecting regions" is used to help describe the invention; selecting a region during these first stages is not necessarily a physical step. The reconstructions are always of the same full volume size, although later on, in step 114, the regions are 'selected' and combined via masking operations. At step 108, a subset of the projection set is chosen that minimises motion artifacts within the region, while maintaining a minimum image quality within the region. Minimising motion artefacts within the region may comprise ensuring the subset comprises less than a threshold number of characteristics operative to cause one or more motion artefacts in a volume reconstructed from the subset. For example, this may comprise analysing the projection set to determine which subsets will contain motion artifacts, once reconstructed. This may be done by analysing the projection set in the projection space (i.e., looking at the 2D projections themselves), or the analysing may be carried out in the reconstructed image space (i.e., possible subsets are reconstructed to check whether the reconstructed image contains motion artifacts).
Maintaining a minimum image quality may comprise meeting a minimum threshold of a metric for measuring image quality. This may be done by ensuring the region is present in at least a threshold number of projections of the chosen subset, i.e., the region is sufficiently sampled. At step 110, the region is reconstructed using the chosen subset. At step 112, the step of selecting another region of the volumetric image and repeating the choosing and reconstructing steps is repeated until the whole volumetric image has been included in a selected region. In some examples, the regions may not overlap with each other. Finally, at step 114, the regions of the volumetric image are combined to obtain a final volumetric image of the patient.
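The loop of Figure 1 can be sketched at a high level as follows. All helper callables (`select_region`, `choose_subset`, `reconstruct`, `combine`) are placeholders for the steps described in the text, not implementations from the patent:

```python
import numpy as np

def reconstruct_with_region_subsets(projections, select_region, choose_subset,
                                    reconstruct, combine):
    """Illustrative sketch of steps 102-114 of Figure 1.

    `projections` is a stack of 2D projections; the callables stand in for
    the region selection, subset choice, reconstruction and combination
    steps described in the text.
    """
    # Step 104: full reconstruction from the complete projection set.
    full_volume = reconstruct(projections)

    regions, partial_volumes = [], []
    covered = np.zeros(full_volume.shape, dtype=bool)
    # Step 112: repeat until the whole volume has been included in a region.
    while not covered.all():
        region = select_region(full_volume, covered)       # step 106
        subset = choose_subset(projections, region)        # step 108
        partial = reconstruct(projections[subset])         # step 110
        regions.append(region)
        partial_volumes.append(partial)
        covered |= region
    # Step 114: combine the per-region reconstructions into the final image.
    return combine(regions, partial_volumes)
```

In practice the combination would use the merge-masks described later; here `combine` is left abstract.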
Figure 2 schematically illustrates the image processing method of Figure 1. At step 102, the projection set 202 is received. At step 104, the projection set 202 is used to reconstruct 204 a volumetric image 206. At step 106, a region 208 of the volumetric image 206 is selected. In this example, the region 208 is the central region of the volumetric image 206. At step 108, a subset 210 of the projection set 202 is chosen. The chosen subset 210 is chosen to minimise motion artifacts within the region 208, while maintaining a minimum image quality within the region 208. While the chosen subset 210 is illustrated as a plurality of consecutive projections, this is not required. In some examples, the chosen subset may comprise non-consecutive projections, e.g., the first, seventh and eighth projections in the set (numbers simplified for the sake of example). At step 110, the chosen subset 210 is used to reconstruct 212 at least the region 208 in a corresponding reconstruction volume 216. At step 112, if the whole of the volumetric image 206 has not yet been included in a region, another region is selected and the choosing and reconstructing steps are repeated. In the example shown in Figure 2, region 214 is selected as the next region. For region 214, a subset 218 is chosen which minimises motion artifacts within the region, while maintaining a minimum image quality within the region 214.
As can be seen schematically in the reconstructed image 216, the central region 208 has more data than the outer region 214. This is because the central region 208 is present in a greater number of the projections than the outer region 214 due to the geometry of the FOV of the scan. Therefore, in some cases it may be necessary for the subset 218 chosen for region 214 to be the entirety of the projection set 202 in order to maintain a minimum image quality. In other cases, the subset chosen for region 214 may have more projections than subset 210, but less projections than the entire subset 202. The subsets 210, 218 may be overlapping subsets. The regions 208, 214 may be non-overlapping regions. The chosen subset 218 for region 214 is then used to reconstruct 220 at least the region 214 in a corresponding reconstruction volume 222. It should be noted that in both reconstructions 216 and 222, the whole area of the volumetric image may be reconstructed, but in the first reconstruction 216, only the region 208 has enough data to produce a meaningful reconstruction. In the second reconstruction 218, the whole area will have enough data for a meaningful reconstruction because more projections have been used. However, the second reconstructed image 220 will be likely to contain some motion artifacts because more projections 218 have been used in the reconstruction to meet the minimum image quality requirement. At step 114, the regions are combined to obtain a final volumetric image of the patient. To combine the regions, each region 208, 214 may be isolated 224 from its corresponding reconstruction volume 216, 222. This may be done using a mask as described below. This leaves the isolated regions 226, 228. In the example shown in Figure 2, region 214 is the whole image except for the region 208. The isolated regions 226, 228 are then combined to obtain a final volumetric image 230 of the patient. The combining step may use a merge-mask as described below.
This approach is particularly useful for sporadic motion, for example, if a gas bubble appeared and disappeared within a number of consecutive projections, then the subset for the reconstruction could be chosen to avoid those projections.
In another scenario, if a gas bubble appeared in a projection at position A, moved to position B and then remained at position B throughout the rest of the projections, then there would be three "states": a first state where the gas bubble is not present, a second state where the gas bubble is moving between positions A and B, and a third state where the gas bubble is stationary at position B. For consistency, the chosen subsets may be selected where the state is consistent. Whilst there is no motion in the projections where the gas bubble is stationary at position B, these projections would not be consistent with the projections before the gas bubble appeared because of the introduction of the gas bubble. The example of a gas bubble is merely exemplary; the same approach applies to all types of sporadic motion, e.g., muscular motion.
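As a sketch of the state-consistency idea: if each projection has been labelled with a motion "state" (how that labelling is obtained is outside the scope of this example), one natural candidate subset is the longest run of consecutive projections sharing a single state. The function name and labels are illustrative only:

```python
def longest_consistent_run(states):
    """Return the index range of the longest run of consecutive projections
    sharing one state label.

    In the gas-bubble example, labels might be 0 = no bubble, 1 = bubble
    moving, 2 = bubble stationary; a consistent subset must come from a
    single state. This helper is an illustration, not from the patent.
    """
    best = (0, 0)
    start = 0
    for i in range(1, len(states) + 1):
        # A run ends at the end of the list or when the label changes.
        if i == len(states) or states[i] != states[start]:
            if i - start > best[1] - best[0]:
                best = (start, i)
            start = i
    return range(*best)
```

For the three-state example in the text, this would pick whichever consistent state spans the most projections.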
Figure 3 illustrates the geometry of a typical cone beam projection acquisition setup, as may be used in image reconstruction techniques according to embodiments. For simplicity, only the x-y plane is illustrated; the skilled reader will appreciate that the beam of rays extends also into the z-axis, orthogonal to the x-y plane. In this example, the acquisition setup captures projection data of object 302. X-ray source 304 generates and emits X-rays 306 towards object 302. In CBCT, X-rays may be considered as a beam of rays, emitted from a point source. Detector (or detectors) 308 capture projections, which are sets of line integrals along paths that radiate from the source 304. Multiple projections of the image may be acquired from different angles by rotating the source 304 and detector 308 around the centre of the image 310. In the present example, the source 304 and detector 308 may be rotated arcuately along orbital path 312. In this way, the x-y plane may be rotated counterclockwise around the point of origin (or centre of the image 310) in a manner that keeps the mutual positional relationship between source 304 and detector 308 when passing through the orbital path 312. Other configurations of imaging systems are, of course, feasible; for instance, the source may be configured to rotate and a complete ring of detectors may be configured to capture projections, or there may be multiple sources arranged circumferentially around a complete ring of detectors. Further, the imaging system may rotate the source along a helical path, so as to capture projection data along the axis of the object 302 of interest (along the z-axis in the present example).
The attenuation of the intensity of the rays that pass through the object 302 may be measured by processing signals received from the detector 308. By making projective measurements at a series of different projection angles through the object 302, a sinogram may be constructed from the projection data, mapping the spatial dimension of the detector array to the projection angle dimension. The intensity attenuation resulting from a particular volume within the object will trace out a sine wave for the spatial dimension along the detector perpendicular to the rotation axis of the system. Volumes of the object farther from the centre of rotation correspond to sine waves with greater amplitudes than those corresponding to volumes nearer the centre of rotation. The phase of each sine wave in the sinogram corresponds to the relative angular positions with respect to the rotation axis. By performing an image reconstruction technique (such as an inverse Radon transform) on the projection data in the sinogram, one may reconstruct an image, where the reconstructed image corresponds to a cross-sectional slice of the object 302. The detector 308 may comprise underlying detector elements (such as pixels or bins of pixels), rather than a single active region. The measured ray intensities may then be taken as a function of underlying intensity signals acquired at detector elements of the detector. This is advantageous in that one reduces the impact of random noise, which may plague an intensity (particularly a low intensity) measured at a single detector element. For example, the measured ray intensities may be an average (mean, mode or median) of underlying single detector element intensities.
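The sine-wave property of the sinogram can be illustrated with a short parallel-beam sketch (a simplification of the cone-beam geometry; the helper name is an assumption for illustration):

```python
import numpy as np

def detector_trace(x, y, thetas):
    """Detector coordinate traced by a point at (x, y) as the system
    rotates, in parallel-beam geometry: t(theta) = x*cos(theta) + y*sin(theta).

    This is a sine wave whose amplitude equals the point's distance from
    the rotation centre, matching the sinogram property described above.
    """
    return x * np.cos(thetas) + y * np.sin(thetas)

thetas = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
near = detector_trace(1.0, 0.0, thetas)  # point near the rotation centre
far = detector_trace(5.0, 5.0, thetas)   # point farther from the centre
# The farther point traces a sine wave of greater amplitude (sqrt(50) vs 1).
```

Cone-beam geometry adds magnification and out-of-plane terms, but the qualitative amplitude-versus-radius behaviour is the same.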
Figure 4 illustrates the geometry of a cone beam projection acquisition setup, as incorporated into a medical CT scanner 400. Radiation sources 404 emit beams of X-ray radiation, which may pass through the patient supported on a couch within the CT scanner (not depicted). Detectors 408 capture attenuated X-rays. The radiation sources 404 and detectors 408 may be configured to rotate within a gantry of the CT scanner, so as to acquire projective measurements at a series of different projection angles. As shown, the angular separation between measurement orientations need not be equal. The image reconstruction method may account for flex within (or of) the gantry. For instance, in one implementation, the flex may be considered when determining where the isocentral ray hits the detector. In turn, the method may determine the intensity at the isocentral location through interpolation.
Referring back to the method of Figures 1 and 2, the received projection set may be acquired using the set up described above in relation to Figures 3 and 4. As such, the projections of the received projection set may be acquired at a series of different projection angles.
In more detail, prior to the choice of subset in step 108, the projection set may be thought of as being divided into a plurality of subsets. The projection set may be divided into any number of subsets, for example, two, three, four, five, six, seven, eight, nine or ten subsets. As an example, for a projection set containing N projections, a number K < N is chosen as the number of projections per subset, and then J (potentially overlapping) subsets of K projections are chosen. For example, if a projection set has N=700 projections, and K=400, then J may equal 4 subsets comprising projection numbers: [1,400], [101,500], [201,600], [301,700]. This may correspond to ranges of projection angles, e.g., a first subset may have projections from angles 0 to 120 degrees, a second subset may have projections from angles 80 to 200 degrees, a third subset may have projections from angles 160 to 280 degrees, and a fourth subset may have projections from angles 240 to 360 degrees.
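The subset construction in the example above can be sketched as a short helper (the function name and the fixed-stride parameterisation are illustrative assumptions):

```python
def make_subsets(n_projections, k, stride):
    """Overlapping subsets of K consecutive projections, as 1-indexed
    projection numbers, starting every `stride` projections."""
    starts = range(1, n_projections - k + 2, stride)
    return [list(range(s, s + k)) for s in starts]
```

With n_projections=700, k=400 and stride=100 this yields the four subsets [1,400], [101,500], [201,600] and [301,700] given in the example.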
The key idea here is that one or more subsets of projections may be created by selecting subsets of the projections from the full projection set. Any individual projection can occur in multiple subsets (i.e., they do not need to be non-overlapping subsets) and some projections might not be in any of the subsets. The example subset creation strategy used throughout this example implementation is to select K consecutive projections to create a subset, and to do this from a few different starting projections. Alternative selection criteria may include but are not limited to: making a single subset by discarding projections that have been identified as 'bad' or of unacceptable quality, taking all projections before (or after) a given time point, and using a breathing surrogate to select projections within a given angle range and also within a given range of breathing states.
As described above in relation to step 108, choosing a subset that minimizes motion artifacts may comprise analysing the plurality of subsets to determine which subsets will contain motion artifacts, once reconstructed. This may be done by analysing the projection set in the projection space (i.e., looking at the 2D projections themselves), or the analysis may be carried out in the reconstructed image space (i.e., possible subsets are reconstructed to check whether the reconstructed image contains motion artifacts). For example, if analysing in the reconstructed image space, each of the J subsets may be reconstructed, creating J reconstructed subset volumes. J 'mask volumes' may be created by back-projecting the same subsets, but with all projection values replaced by 1. The mask volumes are the same size as the reconstructed volume, but each voxel contains a value indicating how much the projections in the subset contributed to that voxel's value in the reconstruction (higher values meaning a greater contribution). The reconstruction defaults to zero (i.e. black) for any voxels for which there is no information in the projections. Figure 5 shows a slice from the reconstructions and mask volumes for a real case where J=6. In Figure 5, the top row has six example axial slices from volumes reconstructed from projection subsets and the bottom row has slices from their corresponding mask-volumes. Brighter voxels mean higher voxel values.
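The mask-volume idea can be sketched in 2D: back-projecting all-ones projections counts how often each pixel is covered by the subset's projections, so central pixels (covered at every angle) accumulate higher values than peripheral ones. The following is a minimal parallel-beam sketch, not the embodiments' cone-beam implementation; the function name and detector size are assumptions:

```python
import numpy as np

def backproject_ones(shape, angles_deg):
    """Back-project all-ones projections to form a 2D coverage ('mask') image."""
    ny, nx = shape
    y, x = np.mgrid[0:ny, 0:nx]
    xc, yc = x - nx / 2, y - ny / 2          # coordinates about the rotation centre
    ndet = max(ny, nx)                        # assumed detector width in pixels
    mask = np.zeros(shape)
    for a in np.deg2rad(angles_deg):
        # detector coordinate onto which each pixel projects at this angle
        t = xc * np.cos(a) + yc * np.sin(a)
        mask += np.abs(t) < ndet / 2          # +1 where the pixel hits the detector
    return mask
```

The centre pixel is covered by every projection, while corner pixels fall off the detector at some angles, reproducing the bright-centre appearance of the mask-volume slices in Figure 5.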
As can be seen in Figure 5, one result of reconstructing with only a small arc of projections (i.e., a limited angle range) in a subset is bright artifacts near the centre of the volume. As an optional step, these (non-motion) artifacts can be corrected/reduced. In order to reduce the artifacts, 'weighting masks' can be used. Based on the observation that, for a given subset-volume, the artifact that presents is similar across all (axial) slices, whereas any anatomical differences (e.g., a temporary gas bubble) between the subset-volume and the volume reconstructed using the full set of projections are present only in some axial slices or some regions of the axial slices, the artifacts may be identified by taking the full reconstruction divided by the subset-volume reconstruction and averaging it across all axial slices. This produces a single average axial slice of 'weights' or a "bias image". To approximately correct the artifacts seen in the subset-volume, all axial slices in the subset-volume may be multiplied by this slice of weights. This correction may be restricted to only the regions maximally covered by the volume mask, as this is the region that will be used later in the processing method. Alternative approaches for correcting these above-mentioned (non-motion) artifacts could be used. For example, instead of or additionally to using weighting masks or bias images, the artifacts may be removed by trimming. For example, if a different subset selection method was used (e.g., one that selected from a large range of angles) then different (or no) non-motion artifacts might be present. The nature of the artifacts also depends on the reconstruction algorithm used and will also likely be influenced by the kind of images being reconstructed (e.g., field of view, anatomy imaged, noise levels, etc.).
As a specific example, if the projection subset came from a wide range of angles, but contained few projections, then it might instead be appropriate to try to correct artifacts resulting from under-sampling, rather than from a limited angle acquisition. The corrections applied here are to correct (as far as possible) the volumes resulting from the subsets. The priority is to clean/correct the central area, which will be used later (as the first selected region) to produce the final volumetric image.
One or more of the J (optionally (non-motion) artifact corrected) subset-volumes may then be used to correct the full reconstruction volume. The first step is to choose which subset volume(s) to use to make the correction (i.e., step 108). This may be done either fully manually, semi-automatically, or fully automatically. Fully or semi-automatic approaches might, for example, auto select the volume(s) in which gas bubbles are not present in the centre of the volume (gas bubbles appear as dark pixels in the reconstructed image and can be automatically detected with thresholding-based approaches). Alternatively, the volume could be selected by re-projecting the reconstructed volume and measuring the agreement of these re-projections with the real projections. When this error is higher it suggests that the projections may come from differing motion states (for example some may have been acquired whilst a bubble was present), and thus the subset volume that has the lowest re-projection error could be automatically chosen. In a manual approach, the reconstructions may be shown to a user who then selects the reconstruction with the fewest motion artifacts at the centre of the FOV.
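The fully automatic gas-bubble criterion mentioned above can be sketched with a simple thresholding score: for each subset-volume, count the fraction of dark voxels in a central region and choose the volume with the fewest. The threshold, the central-region fraction and the function name are illustrative assumptions:

```python
import numpy as np

def select_subset_volume(subset_vols, dark_thresh=0.2, centre_frac=0.5):
    """Return the index of the subset-volume with the fewest dark
    ('gas bubble') voxels in its central region."""
    scores = []
    for v in subset_vols:
        nz, ny, nx = v.shape
        # crop to the central centre_frac of the volume in each dimension
        cz, cy, cx = (int(n * (1 - centre_frac) / 2) for n in (nz, ny, nx))
        centre = v[cz:nz - cz, cy:ny - cy, cx:nx - cx]
        scores.append(np.mean(centre < dark_thresh))  # fraction of dark voxels
    return int(np.argmin(scores))
```

The re-projection-error criterion would follow the same pattern, with the score replaced by the disagreement between re-projections of each subset-volume and the real projections.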
More generally, any method that can automatically predict which subset volumes are most free from artifacts in the central region may be used. It could also be the case that in the original subset selection process at the start of the method, one subset is already known to be the one to use for the correction (for example, if the subset was created by explicitly removing the projections deemed 'bad' or unacceptable in some way).
Once the subset-volumes of interest have been chosen (step 108) then the final step (step 114) is to merge them with the full reconstruction volume. To achieve this, merge-masks may be calculated for each subset-volume that are the same size as the volume and are 0 everywhere except where there is high confidence in the values in the subset-volume. These are similar to, and derived from, the mask-volumes defined above. Specifically, the mask volume is binarized, setting values to 1 if they equal the highest value seen in the mask volume, and to 0 otherwise. Next, the edge of the resulting binary mask may be softened (i.e., the boundary between the first and second regions is blurred). This softening may be achieved by applying a Gaussian blur to the mask. Alternatively, the softening may be achieved by taking a strip of pixels (e.g. 20 pixels) around the edge of the binary mask (i.e. around the region where the values are 1) and setting these pixels' values so that they vary smoothly from 0 to 1 as they get further from the mask's edge. For example, for a strip of 20 pixels, the pixels right at the edge of the mask would be set to have value 1/20 (rather than their previous value of 1), pixels 10 pixels in from the edge of the mask would have value 10/20, and any pixels 20 or more pixels in from the edge of the mask would have values of 1. Finally, the resulting mask may be multiplied by a 'confidence circle' to produce the final merge-mask. This confidence circle has values between 0 and 1 and is shown in Figure 6. Multiplying by this confidence circle ensures that the final merge-mask does not extend too far from the centre of the image. In Figure 6, the brighter the image, the higher the values.
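The strip-based softening and the confidence circle can be sketched in 2D with numpy alone. The erosion-based ramp reproduces the 1/20, 10/20, 20/20 example above; the linear confidence falloff and its radius are assumptions of this sketch (the embodiments only require values between 0 and 1 that decay away from the centre), and the mask is assumed not to touch the array border (np.roll wraps around):

```python
import numpy as np

def soften(binary_mask, width=20):
    """Replace the binary mask's hard edge with a linear 0->1 ramp over
    `width` pixels: a pixel d pixels inside the edge gets min(d, width)/width."""
    cur = binary_mask.astype(bool)
    dist = np.zeros(binary_mask.shape)
    for _ in range(width):
        dist += cur  # accumulate distance-to-edge, capped at `width`
        # erode by one pixel (4-neighbourhood)
        cur = (cur & np.roll(cur, 1, 0) & np.roll(cur, -1, 0)
                   & np.roll(cur, 1, 1) & np.roll(cur, -1, 1))
    return dist / width

def merge_mask(mask_volume_slice, width=20, radius_frac=0.45):
    """Binarize a mask-volume slice at its maximum, soften the edge, then
    multiply by a centre-weighted 'confidence circle'."""
    binary = (mask_volume_slice == mask_volume_slice.max())
    soft = soften(binary, width)
    ny, nx = binary.shape
    y, x = np.mgrid[0:ny, 0:nx]
    r = np.hypot(x - nx / 2, y - ny / 2)
    confidence = np.clip(1 - r / (radius_frac * min(ny, nx)), 0, 1)
    return soft * confidence
```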
The exact method for creating a suitable mask will also depend on the subset selection criteria, image quality, etc. However, the overarching idea is that the centre of the volume is oversampled in the full projection set, and a subset can still have good image quality in the centre whilst potentially avoiding artifacts/movement issues. As a result, suitable selection masks will likely focus on a central region (the first selected region).
Using the merge-mask, a weighted sum of the full reconstructed volume and a volume from a projection subset may be computed. If m is the merge-mask for the subset-volume, s is the subset-volume, and f is the full volume, then the final reconstruction may be expressed as: s*m + f*(1-m) (where * means voxel-wise multiplication). This is illustrated in Figure 7. Figure 7 illustrates one approach for combining one sub-volume with the full volume. However, a few such volumes could instead be combined, or at this point the current subset-volumes could be used to guide a new round of subset selection. Figure 7 shows how the reconstruction from the full projection set (top left) and its corresponding weight mask (1-m, bottom left) is combined with the (corrected) reconstruction from the projection subset (centre top) with corresponding mask m (centre bottom), to produce the final fused reconstruction (right).
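The weighted sum above is a one-liner in numpy; this sketch simply restates the formula s*m + f*(1-m) as code:

```python
import numpy as np

def fuse(full_vol, subset_vol, merge_mask):
    """Voxel-wise blend: subset-volume where the merge-mask is 1,
    full reconstruction where it is 0, linear mix in between."""
    return subset_vol * merge_mask + full_vol * (1.0 - merge_mask)
```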
Figure 8 depicts a radiotherapy apparatus, suitable for acquiring projection data for image reconstruction according to embodiments. The cross-section through radiotherapy apparatus 800 includes a radiation head 810 and a beam receiving apparatus (detector) 802, both of which are attached to a gantry 804. The radiation head 810 includes a radiation source 812, which emits a beam of radiation 806. The radiation head 810 also includes a beam shaping apparatus 818, which controls the size and shape of the radiation field associated with the beam.
The beam receiving apparatus 802 is configured to receive radiation emitted from the radiation head 810, for the purpose of absorbing and/or measuring the beam of radiation. In the view shown, the radiation head 810 and the beam receiving apparatus 802 are positioned diametrically opposed to one another.
The gantry 804 is rotatable, and supports the radiation head 810 and the beam receiving apparatus 802 such that they are rotatable around an axis of rotation 808, which may coincide with the patient longitudinal axis. The gantry provides rotation of the radiation head 810 and the beam receiving apparatus 802 in a plane perpendicular to the patient longitudinal axis (e.g., a sagittal plane). Three gantry directions xG, yG, zG may be defined such that the yG direction is perpendicular to the gantry axis of rotation. The yG direction extends from a point on the gantry corresponding to the radiation head 810, towards the axis of rotation of the gantry.
Therefore, from the patient frame of reference, the yG direction rotates around as the gantry rotates.
The radiotherapy apparatus 800 also includes a support surface or couch 820 on which a subject (or patient) is supported during radiotherapy treatment or image acquisition. The radiation head 810 is configured to rotate around the axis of rotation 808 such that the radiation head 810 directs radiation towards the subject from various angles around the subject in order to spread out the radiation dose received by healthy tissue to a larger region of healthy tissue while building up a prescribed dose of radiation at a target region.
The radiotherapy apparatus 800 is configured to deliver a radiation beam towards a radiation isocentre, which is substantially located on the axis of rotation 808 at the centre of the gantry 804 regardless of the angle at which the radiation head 810 is placed.
The rotatable gantry 804 and radiation head 810 are dimensioned so as to allow a central bore 822 to exist. The central bore 822 provides an opening, sufficient to allow a subject to be positioned therethrough without the possibility of being incidentally contacted by the radiation head 810 or other mechanical components as the gantry rotates the radiation head 810 about the subject.
The radiation head 810 emits the radiation beam 806 along a beam axis 824 (or radiation axis or beam path), where the beam axis 824 is used to define the direction in which the radiation is emitted by the radiation head 810. The radiation beam 806 is incident on the beam receiving apparatus 802, which may include at least one of a beam stopper and a radiation detector.
The beam receiving apparatus 802 is attached to the gantry 804 on a diametrically opposite side to the radiation head 810 to attenuate and/or detect a beam of radiation after the beam has passed through the subject.
The radiation beam axis 824 may be defined as, for example, a centre of the radiation beam 806 or a point of maximum intensity.
The beam shaping apparatus 818 delimits the spread of the radiation beam 806. The beam shaping apparatus 818 is configured to adjust the shape and/or size of a field of radiation produced by the radiation source. The beam shaping apparatus 818 does this by defining an aperture (also referred to as a window or an opening) of variable shape to collimate the radiation beam 806 to a chosen cross-sectional shape. In this example, the beam shaping apparatus 818 may be provided by a combination of a diaphragm and an MLC. Beam shaping apparatus 818 may also be referred to as a beam modifier.
The radiotherapy apparatus 800 may be configured to deliver both coplanar and non-coplanar (also referred to as tilted) modes of radiotherapy treatment. In coplanar treatment, radiation is emitted in a plane that is perpendicular to the axis of rotation of the radiation head 810. In non-coplanar treatment, radiation is emitted at an angle that is not perpendicular to the axis of rotation. In order to deliver coplanar and non-coplanar treatment, the radiation head 810 may move between at least two positions, one in which the radiation is emitted in a plane which is perpendicular to the axis of rotation (coplanar configuration) and one in which radiation is emitted in a plane which is not perpendicular to the axis of rotation (non-coplanar configuration).
In the coplanar configuration, the radiation head 810 is positioned to rotate about a rotation axis and in a first plane. In the non-coplanar configuration, the radiation head is tilted with respect to the first plane such that a field of radiation produced by the radiation head is directed at an oblique angle relative to the first plane and the rotation axis. In the non-coplanar configuration, the radiation head 810 is positioned to rotate in a respective second plane parallel to and displaced from the first plane. The radiation beam is emitted at an oblique angle with respect to the second plane, and therefore as the radiation head rotates the beam sweeps out a cone shape.
In one configuration, the beam receiving apparatus 802 may remain in the same place relative to the rotatable gantry when the radiotherapy apparatus is in both the coplanar and non-coplanar modes. Therefore, the beam receiving apparatus 802 is configured to rotate about the rotation axis in the same plane in both coplanar and non-coplanar modes. This may be the same plane as the plane in which the radiation head rotates. In alternative configurations, the beam receiving apparatus 802 may also rotate.
The beam shaping apparatus 818 is configured to reduce the spread of the field of radiation in the non-coplanar configuration in comparison to the coplanar configuration.
The radiotherapy apparatus 800 includes a controller 830, which is programmed to control the radiation source 812, beam receiving apparatus 802 and the gantry 804. Controller 830 may perform functions or operations such as treatment planning, treatment execution, image acquisition, image processing, motion tracking, motion management, and/or other tasks involved in a radiotherapy process.
Controller 830 is programmed to control various components of apparatus 800, such as gantry 804, radiation head 810, beam receiving apparatus 802, and support surface 820, so as to acquire projection data suitable for image reconstruction.
Hardware components of controller 830 may include one or more computers (e.g., general purpose computers, workstations, servers, terminals, portable/mobile devices, etc.); processors (e.g., central processing units (CPUs), graphics processing units (GPUs), microprocessors, digital signal processors (DSPs), field programmable gate arrays (FPGAs), special-purpose or specially-designed processors, etc.); memory/storage devices such as a memory (e.g., read-only memories (ROMs), random access memories (RAMs), flash memories, hard drives, optical disks, solid-state drives (SSDs), etc.); input devices (e.g., keyboards, mice, touch screens, microphones, buttons, knobs, trackballs, levers, handles, joysticks, etc.); output devices (e.g., displays, printers, speakers, vibration devices, etc.); circuitries; printed circuit boards (PCBs); or other suitable hardware. Software components of controller 830 may include operation device software, application software, etc. The radiation head 810 may be connected to a head actuator 814, which is configured to actuate the radiation head 810, for example between a coplanar configuration and one or more non-coplanar configurations, or for example to actuate the radiation source 812 and/or detector 802 in response to detection of flex. This may involve translation and rotation of the radiation head 810 relative to the gantry. In some implementations, the head actuator may include a curved rail along which the radiation head 810 may be moved to adjust the position and angle of the radiation head 810. The controller 830 may control the configuration of the radiation head 810 via the head actuator 814.
The beam shaping apparatus 818 includes a shaping actuator 816. The shaping actuator is configured to control the position of one or more elements in the beam shaping apparatus 818 in order to shape the radiation beam 806. In some implementations, the beam shaping apparatus 818 includes an MLC, and the shaping actuator 816 includes means for actuating leaves of the MLC. The beam shaping apparatus 818 may further comprise a diaphragm, and the shaping actuator 816 may include means for actuating blocks of the diaphragm. The controller 830 may control the beam shaping apparatus 818 via the shaping actuator 816.
Figure 9 is a block diagram of an implementation of a radiotherapy system 900, suitable for executing methods for image reconstruction according to embodiments. The example radiotherapy system 900 comprises a computing system 910 within which a set of instructions, for causing the computing system 910 to perform the method (or steps thereof) discussed herein, may be executed. The computing system 910 may implement an image reconstruction system. The computing system 910 may also be referred to as a computer. In particular, the methods described herein may be implemented by a processor or controller circuitry 911 of the computing system 910.
The computing system 910 shall be taken to include any number or collection of machines, e.g., computing device(s), that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methods discussed herein. That is, hardware and/or software may be provided in a single computing device, or distributed across a plurality of computing devices in the computing system. In some implementations, one or more elements of the computing system may be connected (e.g., networked) to other machines, for example in a Local Area Network (LAN), an intranet, an extranet, or the Internet. One or more elements of the computing system may operate in the capacity of a server or a client machine in a client-server network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. One or more elements of the computing system may be a personal computer (PC), a tablet computer, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine.
The computing system 910 includes controller circuitry 911 and a memory 913 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.). The memory 913 may comprise a static memory (e.g., flash memory, static random access memory (SRAM), etc.), and/or a secondary memory (e.g., a data storage device), which communicate with each other via a bus (not shown). Memory 913 may be used to store or buffer projection data until required for image processing.
Controller circuitry 911 represents one or more general-purpose processors such as a microprocessor, central processing unit, accelerated processing units, or the like. More particularly, the controller circuitry 911 may comprise a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, processor implementing other instruction sets, or processors implementing a combination of instruction sets. Controller circuitry 911 may also include one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. One or more processors of the controller circuitry may have a multicore design. Controller circuitry 911 is configured to execute the processing logic for performing the operations and steps discussed herein.
The computing system 910 may further include network interface circuitry 915. The computing system 910 may be communicatively coupled to an input device 920 and/or an output device 930, via input/output circuitry 916. In some implementations, the input device 920 and/or the output device 930 may be elements of the computing system 910. The input device 920 may include an alphanumeric input device (e.g., a keyboard or touchscreen), a cursor control device (e.g., a mouse or touchscreen), an audio device such as a microphone, and/or a haptic input device. The output device 930 may include an audio device such as a speaker, a video display unit (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)), and/or a haptic output device. In some implementations, the input device 920 and the output device 930 may be provided as a single device, or as separate devices.
In some implementations, the computing system 910 may comprise image processing circuitry 914. Image processing circuitry 914 may be configured to process image data 970 (e.g., images, imaging data, projections, projection data), such as medical images obtained from one or more imaging data sources, a treatment device 950 and/or an image acquisition device 940. Image processing circuitry 914 may be configured to process, or pre-process, image data 970.
For example, image processing circuitry 914 may convert received image data into a particular format, size, resolution or the like. Image processing circuitry 914 may be configured to perform image reconstruction. In some implementations, image processing circuitry 914 may be combined with controller circuitry 911.
In some implementations, the radiotherapy system 900 may further comprise an image acquisition device 940 and/or a treatment device 950. The image acquisition device 940 and the treatment device 950 may be provided as a single device. In some implementations, treatment device 950 is configured to perform imaging, for example in addition to providing treatment and/or during treatment.
Image acquisition device 940 may be configured to perform positron emission tomography (PET), computed tomography (CT), magnetic resonance imaging (MRI), single positron emission computed tomography (SPECT), X-ray, and the like.
Image acquisition device 940 may be configured to output image data 970, which may be accessed by computing system 910. Treatment device 950 may be configured to output treatment data 960, which may be accessed by computing system 910. Treatment data 960 may be obtained from an internal data source (e.g., from memory 913) or from an external data source, such as treatment device 950 or an external database.
The various methods described above may be implemented by a computer program. The computer program may include computer code (e.g., instructions) arranged to instruct a computer to perform the functions of one or more of the various methods described above. For example, the steps of the methods described in relation to Figure 1 may be performed by the computer code. The steps of the methods described above may be performed in any suitable order. The computer program and/or the code for performing such methods may be provided to an apparatus, such as a computer, on one or more computer readable media or, more generally, a computer program product. The computer readable media may be transitory or non-transitory. The one or more computer readable media could be, for example, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, or a propagation medium for data transmission, for example for downloading the code over the Internet.
Alternatively, the one or more computer readable media could take the form of one or more physical computer readable media such as semiconductor or solid state memory, magnetic tape, a removable computer diskette, a random access memory (RAM), a read-only memory (ROM), a rigid magnetic disc, and an optical disk, such as a CD-ROM, CD-R/W or DVD. The instructions may also reside, completely or at least partially, within the memory 913 and/or within the controller circuitry 911 during execution thereof by the computing system 910, the memory 913 and the controller circuitry 911 also constituting computer-readable storage media.
In an implementation, the modules, components and other features described herein may be implemented as discrete components or integrated in the functionality of hardware components such as ASICs, FPGAs, DSPs or similar devices.
A "hardware component" is a tangible (e.g., non-transitory) physical component (e.g., a set of one or more processors) capable of performing certain operations and may be configured or arranged in a certain physical manner. A hardware component may include dedicated circuitry or logic that is permanently configured to perform certain operations. A hardware component may comprise a special-purpose processor, such as an FPGA or an ASIC. A hardware component may also include programmable logic or circuitry that is temporarily configured by software to perform certain operations.
In addition, the modules and components may be implemented as firmware or functional circuitry within hardware devices. Further, the modules and components may be implemented in any combination of hardware devices and software components, or only in software (e.g., code stored or otherwise embodied in a machine-readable medium or in a transmission medium).
Unless specifically stated otherwise, as apparent from the following discussion, it is appreciated that throughout the description, discussions utilizing terms such as "receiving", "determining", "comparing", "enabling", "maintaining", "identifying", "obtaining", "accessing", or the like, refer to the actions and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed, the novel methods and apparatuses described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of methods and apparatus described herein may be made.

Claims (18)

  1. Claims 1. An image reconstruction method, the method comprising: receiving a projection set comprising a plurality of two-dimensional projections of a patient volume; reconstructing a volumetric image using the projection set; selecting a region of the volumetric image, and for the selected region: choosing a subset of the projection set that minimizes motion artifacts within the region, while maintaining a minimum image quality within the region; and reconstructing the region using the chosen subset; repeating, until the whole volumetric image has been included in a selected region, the step of: selecting another region of the volumetric image and repeating the choosing and reconstructing steps above; and combining the regions of the volumetric image to obtain a final volumetric image of the patient.
2. The image reconstruction method of claim 1, wherein the projection set serves for cone-beam computed tomography, CBCT, reconstruction.
3. The image reconstruction method of claim 1 or 2, wherein the chosen subsets overlap.
4. The image reconstruction method of any preceding claim, wherein each of the chosen subsets is corrected for non-motion artifacts.
5. The image reconstruction method of claim 4, wherein the correction uses a bias image.
6. The image reconstruction method of any preceding claim, wherein maintaining a minimum image quality comprises ensuring the region is present in at least a threshold number of projections of the chosen subset.
7. The image reconstruction method of any preceding claim, wherein minimizing motion artifacts within the region comprises ensuring the subset comprises fewer than a threshold number of characteristics operative to cause one or more motion artifacts in a volume reconstructed from the subset.
8. The image reconstruction method of any preceding claim, wherein the regions do not overlap with one another.
9. The image reconstruction method of any preceding claim, wherein the first selected region is a central region of the volumetric image.
10. The image reconstruction method of any preceding claim, wherein the chosen subset for at least one of the regions comprises the whole of the projection set.
11. The image reconstruction method of any preceding claim, wherein reconstructing each region produces a corresponding reconstruction volume comprising the region, and wherein the combining step comprises: isolating each region from its corresponding reconstruction volume; and combining the isolated regions together to obtain the final volumetric image of the patient.
12. The image reconstruction method of claim 11, wherein the isolating comprises using a mask-volume.
13. The image reconstruction method of claim 11 or 12, wherein combining the isolated regions comprises using a merge-mask.
14. The image reconstruction method of any preceding claim, wherein combining the regions of the volumetric image comprises blurring the boundary between the regions.
15. The image reconstruction method of any preceding claim, for use in adaptive radiotherapy.
16. A data processing apparatus comprising a memory storing computer-executable instructions, and a processor configured to execute the instructions to carry out the method of any of claims 1 to 15.
17. A computer program comprising instructions which, when the program is executed by a computer, cause the computer to carry out the method of any of claims 1 to 15.
18. A computer-readable medium comprising instructions which, when executed by a computer, cause the computer to carry out the method of any of claims 1 to 15.
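As an illustration only, and not the applicant's actual implementation, the region-wise loop of claim 1 can be sketched in Python. Here `reconstruct` is a deliberately trivial stand-in for a real tomographic reconstruction (for CBCT this would typically be something like FDK), and projections are reduced to scalars so the control flow stays visible; all names are hypothetical.

```python
def reconstruct(projections):
    # Placeholder reconstruction: the mean of the chosen projection
    # values. A real system would backproject 2-D projections here.
    return sum(projections) / len(projections)

def reconstruct_by_region(projection_set, regions, choose_subset):
    """Reconstruct each region from its own projection subset, then
    return the per-region results for the combining step of claim 1."""
    final = {}
    for region in regions:
        # Per claim 1: pick a subset that trades off motion artifacts
        # against a minimum image quality for this region.
        subset = choose_subset(projection_set, region)
        final[region] = reconstruct(subset)
    # With non-overlapping regions (claim 8), the combination is simply
    # one reconstructed value (here, a scalar) per region.
    return final

# Toy usage: the last three projections are "motion-corrupted", so a
# motion-sensitive region uses only the first half of the set, while a
# static region can use the whole projection set (claim 10).
projections = [1.0, 1.0, 1.0, 5.0, 5.0, 5.0]
regions = ["static", "moving"]

def choose_subset(ps, region):
    return ps if region == "static" else ps[:3]

image = reconstruct_by_region(projections, regions, choose_subset)
```

The static region averages all six projections, while the moving region is reconstructed from the artifact-free subset only.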
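Claims 6 and 7 constrain the subset choice in opposite directions: enough projections must see the region (minimum image quality), while motion-related characteristics are kept low. A hypothetical brute-force selector over contiguous windows, assuming a per-projection motion score and a per-projection visibility flag (both names invented for the sketch), might look like:

```python
def choose_subset(motion_scores, sees_region, min_count):
    """Pick a contiguous window of projection indices that minimizes the
    summed motion score (claim 7, cast as a minimization) while holding
    at least `min_count` projections that see the region (claim 6)."""
    n = len(motion_scores)
    best = None  # (motion, start, end)
    for start in range(n):
        for end in range(start + 1, n + 1):
            # Claim 6: the region must appear in enough projections.
            coverage = sum(1 for i in range(start, end) if sees_region[i])
            if coverage < min_count:
                continue
            # Claim 7: keep motion-causing characteristics low.
            motion = sum(motion_scores[i] for i in range(start, end))
            if best is None or motion < best[0]:
                best = (motion, start, end)
    if best is None:
        raise ValueError("no subset meets the minimum image-quality constraint")
    return list(range(best[1], best[2]))
```

With uniform visibility the selector picks the quietest window; when only the middle projections see the region, the motion-heavy middle must be accepted to satisfy the coverage constraint.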
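Claims 11 to 14 describe isolating each region from its reconstruction volume with a mask-volume, then combining the isolated regions with a merge-mask whose boundary is blurred. A 1-D toy sketch of that combining step, with a simple box blur standing in for whatever smoothing the method actually uses (all names hypothetical):

```python
def box_blur(xs, radius=1):
    # Moving-average blur used to soften the merge-mask boundary
    # (claim 14); edges use a shrunken window.
    n = len(xs)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(xs[lo:hi]) / (hi - lo))
    return out

def merge_regions(volumes, masks, blur_radius=1):
    """Isolate each region with its mask-volume (claim 12) and combine
    the regions with normalised, blurred merge-masks (claims 13-14).
    1-D toy version; the masks must jointly cover every voxel."""
    n = len(masks[0])
    weights = [box_blur(m, blur_radius) for m in masks]
    merged = []
    for i in range(n):
        total = sum(w[i] for w in weights)  # non-zero if masks cover voxel i
        merged.append(sum(v[i] * w[i] for v, w in zip(volumes, weights)) / total)
    return merged

# Two region reconstructions over the same 6-voxel line, with binary
# masks meeting at the midpoint; blurring feathers the transition.
merged = merge_regions([[1.0] * 6, [3.0] * 6],
                       [[1, 1, 1, 0, 0, 0], [0, 0, 0, 1, 1, 1]])
```

Away from the boundary each region's own value survives unchanged; at the boundary the blurred weights produce a smooth cross-fade rather than a hard seam.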
GB2401379.9A 2024-02-02 2024-02-02 Method and system for image reconstruction Pending GB2638129A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
GB2401379.9A GB2638129A (en) 2024-02-02 2024-02-02 Method and system for image reconstruction
PCT/GB2025/050176 WO2025163320A1 (en) 2024-02-02 2025-01-30 Method and system for image reconstruction

Publications (2)

Publication Number Publication Date
GB202401379D0 GB202401379D0 (en) 2024-03-20
GB2638129A true GB2638129A (en) 2025-08-20

Family

ID=90236301

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200034999A1 (en) * 2018-07-28 2020-01-30 Varian Medical Systems, Inc. Adaptive image filtering for volume reconstruction using partial image data

Also Published As

Publication number Publication date
GB202401379D0 (en) 2024-03-20
WO2025163320A1 (en) 2025-08-07

Similar Documents

Publication Publication Date Title
JP7234064B2 (en) Iterative image reconstruction framework
US11216992B2 (en) System and method for computed tomography
US20110044559A1 (en) Image artifact reduction
US10111626B2 (en) X-ray CT apparatus
CN108352077B (en) System and method for image reconstruction
KR20100133950A (en) Reduced amount and image enhancement in tomography through use of objects around objects due to dynamic constraints
CN112204607B (en) Scatter correction for X-ray imaging
JP2016152916A (en) X-ray computer tomographic apparatus and medical image processing apparatus
US10984564B2 (en) Image noise estimation using alternating negation
WO2025233613A1 (en) Method and system for projection binning for image reconstruction
WO2016132880A1 (en) Arithmetic device, x-ray ct device, and image reconstruction method
GB2638129A (en) Method and system for image reconstruction
EP4510078A1 (en) Method and system for image reconstruction
US20260030819A1 (en) Cone beam artifact reduction
US20250037329A1 (en) Ct image generating method and image data reconstruction device
US20240346717A1 (en) Method and apparatus for reconstructing image acquisitions for extended fields-of-view
US20250359837A1 (en) Simulating x-ray from low dose ct
Sun Rigid motion correction for head CT imaging
WO2025233618A1 (en) Method and system for obtaining a motion surrogate signal
WO2024214017A1 (en) Method and apparatus for reconstructing image acquisitions for extended fields-of-view
CN121101613A (en) Cone beam computed tomography system and image reconstruction method, apparatus and article of manufacture
CN114868152A (en) Device for acquiring computed tomography X-ray data at high relative helical pitch