WO2024108203A1 - Patch-based denoising diffusion probabilistic model for sparse tomographic imaging - Google Patents
- Publication number: WO2024108203A1
- Application: PCT/US2023/080441
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- tomographic
- circuitry
- tomographic data
- full
- subset
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- A61B6/5205 — Devices using data or image processing specially adapted for radiation diagnosis, involving processing of raw data to produce diagnostic data
- A61B6/032 — Transmission computed tomography [CT]
- G01N23/046 — Investigating or analysing materials by transmitting wave or particle radiation through the material and forming images using tomography, e.g. computed tomography [CT]
- G01N2223/304 — Accessories, mechanical or electrical features; electric circuits, signal processing
- G01R33/5608 — Data processing and visualization specially adapted for MR, e.g. noise filtering, deblurring, or generation of images from measured MR data
- G01R33/5611 — Parallel magnetic resonance imaging, e.g. sensitivity encoding [SENSE], simultaneous acquisition of spatial harmonics [SMASH], UNFOLD, k-t-BLAST, k-t-SENSE
- G06N20/00 — Machine learning
- G06N3/045 — Neural networks; combinations of networks
- G06N3/08 — Neural network learning methods
- G06N3/084 — Backpropagation, e.g. using gradient descent
- G06T12/30
- G06T2211/432 — Computed tomography; truncation
- G06T2211/441 — Computed tomography; AI-based methods, deep learning or artificial neural networks
Definitions
- the present disclosure relates to sparse tomographic imaging, in particular to, a patch-based denoising diffusion probabilistic model for sparse tomographic imaging.
- tomographic (i.e., measurement) data is acquired, then an image may be reconstructed from the tomographic data.
- CT: computed tomography
- projection data: sinogram
- FBP: filtered back projection
- MRI: magnetic resonance imaging
- k-space data is acquired and a corresponding image may then be reconstructed using, for example, a Fourier transform.
- the full tomographic data estimation circuitry includes a tomographic data preprocessing circuitry, a trained score model circuitry, a parallel subset estimation circuitry, and a subset assembly circuitry.
- the tomographic data preprocessing circuitry is configured to divide an input sparse tomographic dataset into a number, N, input sparse tomographic data subsets.
- the parallel subset estimation circuitry is configured to estimate a respective estimated full tomographic data subset for each input sparse tomographic data subset in the input sparse tomographic dataset.
- the estimating is performed in parallel.
- the subset assembly circuitry is configured to combine the N estimated full tomographic data subsets to form an estimated full tomographic dataset.
- each subset corresponds to a two-dimensional (2D) patch.
- the estimating includes determining a solution to an ordinary differential equation.
- the estimating each estimated full tomographic data subset is conditioned on a respective pseudo full tomographic data subset.
- each pseudo full tomographic data subset is generated based, at least in part, on a respective input sparse tomographic data subset, and based, at least in part, on a respective reconstructed image dataset corresponding to the input sparse tomographic dataset.
- a method for estimating full tomographic data includes receiving, by a full tomographic estimation circuitry, an input sparse tomographic dataset.
- the method further includes dividing, by a tomographic data preprocessing circuitry, the input sparse tomographic dataset into a number, N, input sparse tomographic data subsets.
- the method further includes estimating, by a parallel subset estimation circuitry that includes a trained score model circuitry, a respective estimated full tomographic data subset for each input sparse tomographic data subset in the input sparse tomographic dataset. The estimating is performed in parallel.
- the method further includes combining, by a subset assembly circuitry, the N estimated full tomographic data subsets to form an estimated full tomographic dataset.
- the method further includes reconstructing, by an image reconstruction circuitry, an estimated image dataset based, at least in part, on the estimated full tomographic dataset.
- each subset corresponds to a two-dimensional (2D) patch.
- the estimating includes determining a solution to an ordinary differential equation.
- the trained score model circuitry is trained based, at least in part, on a training full tomographic dataset that has been divided into N training full tomographic data subsets.
- the training includes training a score model, $s_\theta$.
- the training is unsupervised.
- the training includes training an artificial neural network.
- the estimating each estimated full tomographic data subset is conditioned on a respective pseudo full tomographic data subset.
- each pseudo full tomographic data subset is generated based, at least in part, on a respective input sparse tomographic data subset, and based, at least in part, on a respective reconstructed image dataset corresponding to the input sparse tomographic dataset.
- the full tomographic data estimation system includes a training circuitry, and a full tomographic data estimation circuitry.
- the training circuitry is configured to train a score model based, at least in part, on a training full tomographic dataset.
- the full tomographic data estimation circuitry includes a tomographic data preprocessing circuitry, a trained score model circuitry, a parallel subset estimation circuitry, and a subset assembly circuitry.
- the tomographic data preprocessing circuitry is configured to divide an input sparse tomographic dataset into a number, N, input sparse tomographic data subsets.
- the parallel subset estimation circuitry is configured to estimate a respective estimated full tomographic data subset for each input sparse tomographic data subset in the input sparse tomographic dataset. The estimating is performed in parallel.
- the subset assembly circuitry is configured to combine the N estimated full tomographic data subsets to form an estimated full tomographic dataset.
- the estimating includes determining a solution to an ordinary differential equation.
- the training is unsupervised.
- the estimating each estimated full tomographic data subset is conditioned on a respective pseudo full tomographic data subset.
- a computer readable storage device has stored thereon instructions that, when executed by one or more processors, result in operations including any embodiment of the method.
BRIEF DESCRIPTION OF DRAWINGS
- the drawings show embodiments of the disclosed subject matter for the purpose of illustrating features and advantages of the disclosed subject matter.
- FIG. 1 is a graphic illustrating training a score model and sampling a full tomographic data subset for one example (i.e., a full-view sinogram), according to the present disclosure
- FIG. 2 is a graphic illustrating a conditioning technique, according to the present disclosure
- FIG. 3 illustrates a functional block diagram of a system that includes a full tomographic data estimation system, according to several embodiments of the present disclosure
- FIG. 4 is a flowchart of operations for training a score model, according to various embodiments of the present disclosure
- FIG. 5 is a flowchart of operations for estimating full tomographic data, according to various embodiments of the present disclosure.
- Sparse tomographic imaging may be utilized to reduce radiation dose for CT or to reduce data acquisition time for MRI.
- tomographic data acquisitions are reduced relative to full tomographic imaging.
- in sparse CT imaging, a subset of possible views may be acquired.
- Image reconstruction from sparse tomographic data (i.e., sparse image reconstruction) may be challenging.
- Deep learning techniques, e.g., artificial neural networks (ANNs), may be applied to sparse image reconstruction.
- an apparatus, method and/or system may include a patch-based denoising diffusion probabilistic model (DDPM) for sparse image reconstruction.
- tomographic data may include, but is not limited to, projection data associated with a CT scan, or k-space data associated with MRI.
- Tomographic data may further include other imaging modalities where acquired measurement data may be sparse.
- An apparatus, system, and/or method, according to the present disclosure is configured to receive sparse tomographic data and, using a trained score model (related to DDPM), estimate corresponding full tomographic data.
- Full image data may then be generated by an appropriate image reconstruction technique.
- CT image data may be generated by filtered back projection using the estimated full tomographic data as input.
- the score model may be trained based, at least in part, on full tomographic data. In an embodiment, the training may be unsupervised.
- the trained score model may then be used by or included in a full tomographic data estimation circuitry to estimate the full tomographic data.
- the training and/or estimating may be performed on subsets of tomographic data, in parallel.
- the tomographic data (training or sparse input tomographic data) may be divided into a number, N, subsets of tomographic data. Operations may then be performed, in parallel, utilizing N processing units, as will be described in more detail below.
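The patch-parallel layout described above can be sketched as follows. This is an illustrative NumPy example, not the patent's implementation: the helper names `split_into_patches` and `assemble` are hypothetical, and a thread pool stands in for the per-GPU workers.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def split_into_patches(data, patch=64, stride=64):
    """Divide a 2-D tomographic dataset into N patches with a fixed stride."""
    H, W = data.shape
    coords = [(i, j) for i in range(0, H - patch + 1, stride)
                     for j in range(0, W - patch + 1, stride)]
    return coords, [data[i:i + patch, j:j + patch] for i, j in coords]

def assemble(coords, patches, shape, patch=64):
    """Recombine estimated patches into the full dataset (non-overlapping case)."""
    out = np.zeros(shape)
    for (i, j), p in zip(coords, patches):
        out[i:i + patch, j:j + patch] = p
    return out

data = np.random.default_rng(1).normal(size=(256, 256))
coords, patches = split_into_patches(data)
# Each patch is processed independently; a real system would run the reverse
# diffusion per patch on its own processing unit. Here the "estimator" is identity.
with ThreadPoolExecutor() as pool:
    est = list(pool.map(lambda p: p.copy(), patches))
recon = assemble(coords, est, data.shape)
```

Because the patches are independent, the same structure maps directly onto N GPUs, with each worker holding only one patch in memory.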
- the parallel processing over a plurality of processing units is configured to reduce a computational load, and/or memory usage of each processing unit.
- training the score model circuitry in an unsupervised manner is configured to address a lack of actual training data pairs.
- a denoising diffusion probabilistic model (DDPM) gradually perturbs data with noise (e.g., Gaussian noise) through a sequence of latent spaces.
- the DDPM technique is then configured to train a network to learn the denoising process, tracing the latent spaces backward to generate an image from the original distribution.
- DDPM may not be subject to mode collapse, for example, and may exhibit better stability than selected other generative techniques in image processing tasks.
- a patch-based DDPM technique is configured to enhance a resulting image reconstruction when the tomographic data is sparse.
- the DDPM technique is configured to operate in the tomographic domain (e.g., projection domain for CT, and k-space for MRI).
- the DDPM technique includes two stages: training and sampling (i.e., inference).
- the training stage corresponds to a (forward) diffusion process and the sampling stage corresponds to a reverse diffusion process.
- an artificial neural network (ANN), e.g., a U-Net, may be trained to learn the perturbations involved in the diffusion process.
- a fully sampled Radon transform may be applied to sparse images to obtain pseudo fully sampled tomographic data.
- the pseudo fully sampled tomographic data may then be divided (e.g., cropped) into subsets (e.g., patches) and implemented as a condition for the reverse diffusion process.
- the patches may be restored by ordinary differential equation (ODE) sampling.
- the restored patches may then be combined to form a final tomographic dataset.
- a relatively high-quality image may then be directly reconstructed using a reconstruction technique, e.g., filtered back-projection (FBP) for CT, or Fourier transform for MRI.
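The final reconstruction step can be illustrated with a toy filtered back-projection. The sketch below is a minimal NumPy/SciPy FBP over a simulated sinogram, not the patent's reconstruction pipeline; the `radon`/`fbp` helpers and the geometry are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import rotate

def radon(image, angles_deg):
    """Toy forward projection: rotate, then sum columns into one view per angle."""
    return np.stack([rotate(image, -a, reshape=False, order=1).sum(axis=0)
                     for a in angles_deg])

def fbp(sinogram, angles_deg):
    """Minimal filtered back-projection: ramp-filter each view, smear it back."""
    n_views, n_det = sinogram.shape
    ramp = np.abs(np.fft.fftfreq(n_det))                    # ideal ramp filter
    filt = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))
    recon = np.zeros((n_det, n_det))
    for view, a in zip(filt, angles_deg):
        # Back-project: tile the filtered view, rotate it to its acquisition angle.
        recon += rotate(np.tile(view, (n_det, 1)), a, reshape=False, order=1)
    return recon * np.pi / (2 * n_views)

phantom = np.zeros((64, 64)); phantom[30:34, 30:34] = 1.0   # small central block
angles = np.arange(0.0, 180.0, 10.0)                        # 18 sparse views
recon = fbp(radon(phantom, angles), angles)
```

With only 18 views the reconstruction is streaky, which is exactly the artifact regime the estimated full sinogram is meant to avoid.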
- the image reconstruction with the estimated full tomographic data may reduce or eliminate image artifacts while preserving relatively important clinical details.
- an apparatus, method and/or system, according to the present disclosure are configured to be relatively clinically friendly.
- the apparatus, method and/or system do not require paired data. Both training and sampling operations may be done in an unsupervised mode.
- the apparatus, method and/or system is configured to operate in parallel, on subsets of tomographic data acquisition domain datasets, i.e., is patch-based.
- a relatively large-scale dataset may be divided into a number of independent subsets (e.g., two- dimensional patches or three-dimensional cubes), facilitating parallel processing, possibly on a plurality of processing units (e.g., graphics processing units).
- the apparatus, method and/or system, according to the present disclosure may be configured to solve relatively large scale deep reconstruction tasks, for example, relatively high resolution breast cone-beam CT.
- dataset corresponds to tomographic data or a reconstructed image
- data subset corresponds to a portion of a dataset.
- patch corresponds to a portion of a two-dimensional dataset.
- a tomographic dataset may be divided into a number, N, tomographic data subsets.
- a score-based DDPM may be utilized to generate a plurality of tomographic data subsets, as will be described in more detail below using CT as an example.
- this disclosure is not limited in this regard, and a similar technique may apply to sparse MRI data.
- a fully sampled projection dataset (i.e., a full tomographic dataset) may be denoted as $Y \in \mathbb{R}^{N_v \times N_d}$, where $N_v$ represents the number of projection views and $N_d$ represents the number of detector elements, in a CT scan.
- a down-sampled (i.e., sparse) tomographic dataset, $Z$, can be obtained by a linear transform: $$Z = \Lambda(M \odot Y), \quad (1)$$ where $Z \in \mathbb{R}^{N_s \times N_d}$, in this example, denotes the sub-sampled (i.e., sparse-view or down-sampled) projection dataset, and $M$ is a mask that implements the down-sampling.
- the symbol $\odot$ represents element-wise multiplication.
- $\Lambda: \mathbb{R}^{N_v \times N_d} \to \mathbb{R}^{N_s \times N_d}$ corresponds to an operation configured to extract selected tomographic data from an original tomographic dataset, i.e., to remove the views of $M \odot Y$ that have been set to zero by the mask. It may be appreciated that these masked tomographic dataset portions (i.e., projection views) may not be captured in sparse tomographic data (i.e., sparse-view sinograms).
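The masking and view-extraction relationship can be sketched in NumPy; the sizes (360 full views, every sixth view kept) are illustrative assumptions.

```python
import numpy as np

n_views, n_det = 360, 64
Y = np.arange(n_views * n_det, dtype=float).reshape(n_views, n_det)  # full sinogram Y
M = np.zeros_like(Y); M[::6] = 1.0                                   # binary view mask M

Z_masked = M * Y                      # element-wise masking: kept views, zeros elsewhere
Z = Z_masked[M[:, 0] == 1.0]          # Λ: drop the zeroed views → sparse sinogram Z
```

The masked rows are genuinely absent from a sparse-view acquisition; `Z` is what the scanner actually delivers.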
- Eq. (1) is configured to describe a relationship between a full tomographic dataset, $Y$, and a corresponding sparse tomographic dataset, $Z$.
- during training, input data includes full tomographic datasets, $Y$, and the sparse counterpart, $Z$, is generated.
- during estimation, input data corresponds to sparse tomographic data, $Z$, and a corresponding full tomographic dataset, $Y$, is estimated.
- in patch-based diffusion, a patch $x_0 \in \mathbb{R}^{p \times p}$ is randomly extracted from the full tomographic dataset, $Y$.
- a forward process of DDPM is a Markov chain configured to gradually add Gaussian noise to a clean patch $x_0$ with a predefined variance schedule $\{\beta_t\}_{t=1}^{T}$, where: $$q(x_t \mid x_{t-1}) = \mathcal{N}\!\left(x_t;\ \sqrt{1-\beta_t}\,x_{t-1},\ \beta_t I\right). \quad (3)$$ Due to the properties of the Gaussian distribution, an iteratively perturbed patch at any time step $t$ can be sampled directly: $$x_t = \sqrt{\bar{\alpha}_t}\,x_0 + \sqrt{1-\bar{\alpha}_t}\,\epsilon, \qquad \epsilon \sim \mathcal{N}(0, I), \quad (4)$$ where $\alpha_t = 1 - \beta_t$ and $\bar{\alpha}_t = \prod_{s=1}^{t} \alpha_s$. A final result of the forward process is configured to approach the normal distribution $\mathcal{N}(0, I)$. It may be appreciated that Eq. (4) allows perturbing a patch to any time step without iterating the full chain.
- a U-Net may be used to learn the Gaussian perturbations involved in the diffusion process, with a loss function: $$L = \mathbb{E}_{t,\,x_0,\,\epsilon}\left[\left\|\epsilon - \epsilon_\theta(x_t, t)\right\|^2\right].$$
- $x_{t-1}$ may be iteratively determined as: $$x_{t-1} = \frac{1}{\sqrt{\alpha_t}}\left(x_t - \frac{1-\alpha_t}{\sqrt{1-\bar{\alpha}_t}}\,\epsilon_\theta(x_t, t)\right) + \sigma_t z, \qquad z \sim \mathcal{N}(0, I).$$ It may be appreciated that a method to construct the diffusion process for the continuous time variable $t \in [0, 1]$ may be configured to allow a tractable form for relatively more efficient sampling.
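One reverse (ancestral sampling) step can be sketched as below. The `eps_theta` function is a zero-valued placeholder for the trained U-Net, so the loop demonstrates only the update rule, not actual image recovery; the schedule is an assumption.

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 2e-2, T)      # assumed linear schedule
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)

def eps_theta(x, t):
    """Placeholder for the trained noise-prediction network ε_θ(x_t, t)."""
    return np.zeros_like(x)

def reverse_step(x, t, rng):
    """One ancestral step x_t → x_{t-1}: remove predicted noise, re-inject σ_t z."""
    coef = (1.0 - alphas[t]) / np.sqrt(1.0 - alpha_bar[t])
    mean = (x - coef * eps_theta(x, t)) / np.sqrt(alphas[t])
    sigma = np.sqrt(betas[t])
    z = rng.normal(size=x.shape) if t > 0 else 0.0
    return mean + sigma * z

rng = np.random.default_rng(0)
x = rng.normal(size=(64, 64))           # start the chain from pure noise x_T
for t in range(T - 1, -1, -1):
    x = reverse_step(x, t, rng)
```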
- a stochastic differential equation (SDE) may be used to describe the diffusion process: $$\mathrm{d}x = f(t)\,x\,\mathrm{d}t + g(t)\,\mathrm{d}w,$$ where $w \in \mathbb{R}^n$ is a Wiener process, $f: \mathbb{R} \to \mathbb{R}$ is a scalar function that defines a drift component, and $g: \mathbb{R} \to \mathbb{R}$ is another scalar function that defines a diffusion coefficient.
- the reverse diffusion process can also be modeled as a solution to an SDE: $$\mathrm{d}x = \left[f(t)\,x - g(t)^2\,\nabla_x \log p_t(x)\right]\mathrm{d}t + g(t)\,\mathrm{d}\bar{w},$$ where $\bar{w} \in \mathbb{R}^n$ is another Wiener process for the time-reversed SDE, and $\nabla_x \log p_t(x)$ is referred to as the score.
- a relatively high-quality patch can be obtained by time-reversed SDE sampling, starting from $x_T \sim \mathcal{N}(0, I)$.
- a score-based model may be configured to use an artificial neural network, i.e., a time-dependent score estimation model, $s_\theta$, to estimate the score.
- $s_\theta$ may be trained with a corresponding loss function: $$L = \mathbb{E}_t\left\{\lambda(t)\,\mathbb{E}_{x_0}\,\mathbb{E}_{x_t \mid x_0}\left[\big\| s_\theta(x_t, t) - \nabla_{x_t} \log p_{0t}(x_t \mid x_0) \big\|^2\right]\right\}, \quad (10)$$ where $\lambda(t)$ is a positive weighting function and $p_{0t}(x_t \mid x_0)$ is the Gaussian perturbation kernel of the forward process. It may be appreciated that, after training, $s_\theta(x_t, t) \approx \nabla_x \log p_t(x)$.
- a step size should be relatively small to reflect the Wiener process.
- the time-reversed SDE sampling may share a same marginal probability density with an ordinary differential equation (ODE) sampling process.
- the flow ODE may be formulated as: $$\mathrm{d}x = \left[f(t)\,x - \tfrac{1}{2}\,g(t)^2\,\nabla_x \log p_t(x)\right]\mathrm{d}t.$$ Based, at least in part, on the ODE sampling, the reverse diffusion process may be configured to reduce the noise generated by the Wiener process and may thus allow a larger step size. The larger step size may improve sampling efficiency.
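Euler integration of the probability-flow ODE for the VP-SDE can be sketched as below. To keep the example self-checking, the score is the analytic one for a standard-normal marginal (for which the ODE drift is exactly zero), a toy stand-in for the trained $s_\theta$; the β endpoints are assumptions.

```python
import numpy as np

# VP-SDE: f(t) = -0.5*β(t), g(t) = sqrt(β(t)); assumed linear β(t) on t ∈ [0, 1].
beta_1, beta_T = 0.1, 20.0
beta = lambda t: beta_1 + t * (beta_T - beta_1)

def score(x, t):
    """Analytic score for the toy case p_t = N(0, I): ∇_x log p_t(x) = -x."""
    return -x

def flow_ode_sample(x, n_steps=500):
    """Euler integration of dx = [f(t)x - 0.5 g(t)^2 score] dt from t = 1 to 0."""
    dt = 1.0 / n_steps
    for i in range(n_steps, 0, -1):
        t = i * dt
        drift = -0.5 * beta(t) * x - 0.5 * beta(t) * score(x, t)
        x = x - drift * dt              # deterministic step backward in time
    return x

rng = np.random.default_rng(0)
x1 = rng.normal(size=(32, 32))
x0 = flow_ode_sample(x1.copy())
```

Unlike the reverse SDE, no Wiener noise is injected per step, which is what permits the larger step sizes mentioned above.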
- a DDPM may correspond to a special form of SDE.
- the diffusion process of Eq. (3) may be approximately equivalent to an SDE: $$\mathrm{d}x = -\tfrac{1}{2}\,\beta(t)\,x\,\mathrm{d}t + \sqrt{\beta(t)}\,\mathrm{d}w,$$ where $\beta(t)$ corresponds to a continuous form of the parameter sequence $\{\beta_t\}$.
- $\beta(t)$, for $t \in [0, 1]$, may be written as: $$\beta(t) = \beta_1 + t\,(\beta_T - \beta_1).$$ It may be appreciated that the continuous version of Eq. (4) can be obtained by solving this SDE.
- a trained perturbation prediction network may be trained on a plurality of patches extracted from a full tomographic dataset.
- Table 1 is pseudocode for one example algorithm (Algorithm 1) that describes the training procedure for the estimation model, $s_\theta$.
- Example 100 is a graphic illustrating training a score model and sampling a full tomographic data subset for one example full tomographic dataset (i.e., full-view projection data (full-view sinogram)) 100.
- Example 100 includes a full tomographic dataset, i.e., sinogram 102, a forward diffusion process 106, and a reverse diffusion process 110.
- a full tomographic data subset (i.e., patch 104) is randomly selected from sinogram 102 and corresponds to a starting tomographic data subset for the forward diffusion process 106.
- the forward diffusion process 106 results in a noise data (e.g., Gaussian noise) subset 108.
- the noise data subset 108 is the input to the reverse diffusion process 110.
- the reverse diffusion process 110 results in an estimate of patch 104 that is configured to be within the probability distribution of patch 104.
- a score-based DDPM may be utilized to generate a tomographic data subset.
- a plurality of data patches may be sampled in parallel.
- estimating (and/or training) may be performed on subsets of tomographic data, in parallel.
- the tomographic data (training or sparse input tomographic data) may be divided into a number, N, subsets of tomographic data.
- an estimate of a full tomographic dataset may include a condition.
- the condition may be related to a sparse image reconstructed from sparse tomographic data, as will be described in more detail below.
- the process to sample each patch may be written as the time-reversed SDE of Eq. (16), which can be solved via annealed Langevin dynamics. Starting from $x_T \sim \mathcal{N}(0, I)$, such a sampling process is configured to generate a random tomographic patch.
- an actual down-sampled tomographic patch may be included as the condition to the reverse diffusion process.
- an image reconstruction technique (e.g., FBP for CT, or Fourier transform for MRI) may be applied to the sparse tomographic data to produce an intermediate reconstructed image, $\tilde{x}$.
- a full Radon transform of $\tilde{x}$ may then be performed to obtain a noisy fully sampled tomographic dataset, $\tilde{Y}$.
- the down-sampled tomographic dataset, $Z$, may then be used to rectify the noisy fully sampled tomographic data by inserting the actual tomographic values of $Z$ into $\tilde{Y}$: $$\hat{Y} = \Lambda^{-1}(Z) + (\mathbf{1} - M) \odot \tilde{Y},$$ where $\Lambda^{-1}: \mathbb{R}^{N_s \times N_d} \to \mathbb{R}^{N_v \times N_d}$ is the operation (for CT) that reshapes the down-sampled data into the fully sampled counterpart by inserting zeros into the pixels corresponding to the discarded views, and $\hat{Y}$ corresponds to the final pseudo fully sampled tomographic dataset. Then, with a fixed stride, N patches may be extracted from the pseudo full dataset and the full down-sampling mask to obtain two sets, $\{\hat{y}_i\}_{i=1}^{N}$ and $\{m_i\}_{i=1}^{N}$, respectively.
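The rectification step can be sketched in NumPy; the shapes, the stride, and the random stand-ins for $Z$ and $\tilde{Y}$ are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_views, n_det, stride = 360, 128, 4          # hypothetical geometry
mask = np.zeros((n_views, n_det)); mask[::stride] = 1.0   # down-sampling mask M
Z = rng.normal(size=(n_views // stride, n_det))           # measured sparse views
Y_noisy = rng.normal(size=(n_views, n_det))   # stand-in for re-projected Ỹ

def rectify(Z, Y_noisy, mask, stride):
    """Pseudo full sinogram: keep the measured views, fill the rest from Ỹ."""
    Z_full = np.zeros_like(Y_noisy)
    Z_full[::stride] = Z                      # Λ⁻¹: zero-fill the discarded views
    return mask * Z_full + (1.0 - mask) * Y_noisy

Y_hat = rectify(Z, Y_noisy, mask, stride)
```

The measured rows of `Y_hat` are exact, so every downstream patch carries some ground-truth data as an anchor for the reverse diffusion.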
- a conditioned diffusion technique may be configured to restore tomographic data for sparse input tomographic data, to yield an estimated full tomographic dataset based, at least in part, on the sparse tomographic input data.
- FIG. 2 is a graphic illustrating a conditioning technique 200, according to the present disclosure. The conditioning technique corresponds to Eq. (19).
- a first conditioning parameter, $\lambda$, and a second conditioning parameter, $\mu$, may each have a value between zero and one.
- the values of the first conditioning parameter, $\lambda$, and the second conditioning parameter, $\mu$, are configured to adjust a relative contribution of the forward diffusion distribution, i.e., the pseudo full tomographic data subset, to the estimated full tomographic dataset.
- the forward diffusion distribution $q(\hat{x}_t \mid \hat{x}_0)$ may be used to condition the reverse diffusion sampling as: $$x_t \leftarrow \left[\lambda\,m + \mu\,(\mathbf{1} - m)\right] \odot \hat{x}_t + \left[\mathbf{1} - \lambda\,m - \mu\,(\mathbf{1} - m)\right] \odot x_t, \quad (19)$$ where $\mathbf{1} \in \mathbb{R}^{p \times p}$ corresponds to a matrix with all elements being one, $m$ is the corresponding patch of the down-sampling mask, and $\hat{x}_t$ is the pseudo patch perturbed to time step $t$.
- the conditioning illustrated in Eq. (19) may be applied before implementing Eq. (16).
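One plausible form of this conditioning step can be sketched as below. The exact blending of Eq. (19) is not fully recoverable from the extracted text, so the weighting here is an assumption: measured locations are pulled strongly toward the pseudo data, unmeasured locations only weakly.

```python
import numpy as np

def condition(x_t, x_hat_t, m, lam=0.9, mu=0.1):
    """Blend the reverse-diffusion sample x_t with the forward-diffused pseudo
    patch x̂_t (assumed weighting: λ on measured locations m, μ elsewhere)."""
    w = lam * m + mu * (1.0 - m)
    return w * x_hat_t + (1.0 - w) * x_t

rng = np.random.default_rng(0)
m = np.zeros((8, 8)); m[::2] = 1.0            # mask patch: every other view measured
x_t = rng.normal(size=(8, 8))                 # current reverse-diffusion sample
x_hat_t = rng.normal(size=(8, 8))             # pseudo patch perturbed to step t
out = condition(x_t, x_hat_t, m)
```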
- Table 2 is pseudocode for one example algorithm (Algorithm 2) that describes a reverse diffusion process conditioned by the pseudo fully sampled tomographic data, using CT as an example.
- Algorithm 2 may be applied to MRI tomographic data by replacing the FBP operation with a corresponding Fourier transform operation. It may be appreciated that the SDE time reversal may perturb tomographic data with the Wiener process. Although the tomographic data perturbations may be indistinguishable on visual inspection, the tomographic data perturbations may spread over the image domain after reconstruction, resulting in degraded image quality.
- An ODE sampling method is configured to reduce or eliminate the perturbations by the Wiener process.
- a relatively efficient ODE solver, as described herein, may be used to improve the sampling performance. It may be appreciated that, with the scalar functions of the VP-SDE substituted in, the flow ODE corresponds to a semi-linear ODE, Eq. (21).
- a variation-of-constants technique may be used to determine a solution to Eq. (21): $$x_t = \frac{\alpha_t}{\alpha_s}\,x_s - \alpha_t \int_{\lambda_s}^{\lambda_t} e^{-\lambda}\,\hat{\epsilon}_\theta\!\left(x_{t_\lambda(\lambda)},\, t_\lambda(\lambda)\right)\mathrm{d}\lambda.$$ It may be appreciated that this form may be obtained with $\lambda_t = \log(\alpha_t / \sigma_t)$, a strictly decreasing function of $t$, which has an inverse function $t = t_\lambda(\lambda)$.
- the solvers are labeled DPM-Solver-1, DPM-Solver-2, and DPM-Solver-3, respectively. It may be appreciated that DPM-Solver-k corresponds to k functional evaluations of computational complexity per step. It may be further appreciated that a relatively higher-order solver has a faster convergence speed, so that it takes fewer steps to achieve satisfactory results. For a selected number of functional evaluations (NFE), DPM-Solver-3 is recommended. Tables 3, 4, and 5 are pseudocode for DPM-Solver-1, DPM-Solver-2, and DPM-Solver-3, respectively. Table 6 is pseudocode for one example algorithm (Algorithm 6) that illustrates the complete workflow with ODE sampling for sparse CT reconstruction.
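A first-order update (DPM-Solver-1) can be sketched as below, following the published DPM-Solver recursion for the VP-SDE; the β endpoints, time grid, and the zero-valued placeholder network are illustrative assumptions.

```python
import numpy as np

beta_1, beta_T = 0.1, 20.0
def alpha(t):
    """α_t = exp(-½∫₀ᵗ β(s) ds) for the assumed linear schedule."""
    return np.exp(-0.5 * (beta_1 * t + 0.5 * (beta_T - beta_1) * t * t))
def sigma(t):
    return np.sqrt(1.0 - alpha(t) ** 2)
def lam(t):
    """λ_t = log(α_t / σ_t): strictly decreasing in t, hence invertible."""
    return np.log(alpha(t) / sigma(t))

def eps_theta(x, t):
    """Placeholder for the trained noise-prediction network."""
    return np.zeros_like(x)

def dpm_solver_1(x, ts):
    """First-order DPM-Solver steps along the time grid ts (from t=1 toward 0)."""
    for s, t in zip(ts[:-1], ts[1:]):
        h = lam(t) - lam(s)                                  # step in λ, h > 0
        x = (alpha(t) / alpha(s)) * x - sigma(t) * np.expm1(h) * eps_theta(x, s)
    return x

rng = np.random.default_rng(0)
x = rng.normal(size=(16, 16))
ts = np.linspace(1.0, 1e-3, 20)
out = dpm_solver_1(x, ts)
```

Higher-order variants add one or two intermediate evaluations of `eps_theta` per step to cancel more terms of the exponential integral, which is the k-evaluations-per-step trade-off described above.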
- an apparatus, method and/or system may be applied to sparse MRI reconstruction by replacing the FBP function with a corresponding Fourier transform.
- an apparatus, method and/or system may include a patch-based DDPM for sparse image reconstruction.
- the apparatus, method and/or system yield relatively good anti-artifact performance while preserving structural details and textural perception.
- the apparatus, method and/or system utilize unsupervised learning, overcoming the difficulty in the acquisition of clinical paired data.
- the apparatus, method and/or system are configured to divide the tomographic dataset into subsets (i.e., patches) so that a plurality of patch-based reverse diffusion processes can proceed in parallel, enabling deep reconstruction in the cases of large-scale datasets such as for high-resolution breast CT and photon-counting CT.
- a full tomographic data estimation circuitry includes a tomographic data preprocessing circuitry, a trained score model circuitry, a parallel subset estimation circuitry, and a subset assembly circuitry.
- the tomographic data preprocessing circuitry is configured to divide an input sparse tomographic dataset into a number, N, input sparse tomographic data subsets.
- FIG. 3 illustrates a functional block diagram 300 of a system that includes a full tomographic data estimation system 302, according to several embodiments of the present disclosure.
- Full tomographic data estimation system 302 includes a full tomographic data estimation circuitry 304, a training circuitry 308, and a score model circuitry 310-1.
- System 300 may further include a computing device 306.
- system 300 may include an image reconstruction circuitry 352.
- the score model circuitry 310-1 may include or correspond to an artificial neural network (ANN) 312.
- the ANN 312 may correspond to a U-Net architecture.
- a U-Net is a convolutional neural network that includes only convolutional layers, and is a relatively popular architecture for medical image segmentation.
- Each U-Net is an asymmetrical network that includes one or more Encoder and Decoder units.
- the training circuitry 308 includes a subset training circuitry 320, training data 322 and may include a loss function 323.
- the loss function 323 may correspond to Eq. (10), as described herein.
- training circuitry 308 is configured to receive training input data 307, and to determine network parameters, $\theta$, associated with score model circuitry 310-1, i.e., score estimation model, $s_\theta$.
- the training input data 307 may include one or more training full tomographic datasets, Y, and other training-related parameters, as described herein, including, e.g., diffusion parameters $\beta_1$ and $\beta_T$.
- the training circuitry 308, e.g., subset training circuitry 320, is configured to train ANN 312 (corresponding to the score estimation model), based, at least in part, on training full tomographic data subsets of the training full tomographic dataset, Y.
- Full tomographic data estimation circuitry 304 includes a tomographic data preprocessing circuitry 330, a parallel subset estimation circuitry 332 (that includes N subset estimation circuitries 332-1, ..., 332-N), a parallel subset solver circuitry 334 (that includes N subset solver circuitries 334-1, ..., 334-N), and a subset assembly circuitry 336.
- Full tomographic data estimation circuitry 304 may further include a trained score model circuitry 310-2 (that includes trained ANN 312).
- the trained score model circuitry 310-2 may be coupled to or included in parallel subset estimation circuitry 332.
- the trained score model circuitry 310-2 corresponds to score model circuitry 310-1, after training, as described herein.
- full tomographic data estimation circuitry 304 is configured to receive the network parameters, ⁇ , corresponding to score model circuitry 310-1, after training, thus, corresponding to trained score model circuitry 310-2.
- Full tomographic data estimation circuitry 304 is further configured to receive input data and parameters 311.
- input data may include a selected under-sampled tomographic dataset, Z.
- the input parameters may include one or more of: a number of time steps, T, the diffusion parameters β1 and βT, and an under-sampling mask, M, that corresponds to the selected under-sampled tomographic dataset, Z.
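- As a nonlimiting illustrative sketch (not the claimed implementation), the under-sampling mask, M, for a sparse-view acquisition may be represented as a binary vector over projection views; the uniform sub-sampling pattern and the helper name `view_mask` below are assumptions for illustration only.

```python
import numpy as np

def view_mask(n_views: int, keep_every: int) -> np.ndarray:
    """Binary under-sampling mask M over projection views.

    Views where M is True are acquired in the sparse dataset Z;
    the remaining views are missing. Uniform angular sub-sampling
    is assumed here purely for illustration.
    """
    mask = np.zeros(n_views, dtype=bool)
    mask[::keep_every] = True
    return mask

# e.g., keep 1 of every 6 of 736 full views
M = view_mask(736, 6)
n_sparse = int(M.sum())
```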
- Full tomographic data estimation circuitry 304 (e.g., subset assembly circuitry 336) is configured to provide the estimated full tomographic data 351 as output.
- Computing device 306 may include, but is not limited to, a computing system (e.g., a server, a workstation computer, a desktop computer, a laptop computer, a tablet computer, an ultraportable computer, an ultramobile computer, a netbook computer and/or a subnotebook computer, etc.), and/or a smart phone.
- Computing device 306 includes processor circuitry 340, a memory circuitry 342, input/output (I/O) circuitry 344, a user interface (UI) 346, and data store 348.
- Processor circuitry 340 includes a plurality of processing units 350-1, ..., 350- p. One or more of the processing units 350-1, ..., 350-p may include and/or may correspond to a graphics processing unit (GPU).
- GPU graphics processing unit
- Processor circuitry 340 is configured to perform operations of full tomographic data estimation system 302, including, for example, full tomographic data estimation circuitry 304, training circuitry 308, and/or score model circuitry 310-1.
- Processor circuitry 340 may be further configured to perform operations of image reconstruction circuitry 352.
- Memory circuitry 342 may include a plurality of memory units 342-1, ..., 342-p, similar to the processing units 350-1, ..., 350-p of processor circuitry 340.
- each processing unit (e.g., processing unit 350-1) may be paired with a corresponding memory unit (e.g., memory unit 342-1).
- each pair of processing unit 350-1 and memory unit 342-1 may be distributed.
- Memory circuitry 342 may be configured to store data associated with full tomographic data estimation circuitry 304 (including tomographic data preprocessing circuitry 330, parallel subset estimation circuitry 332, parallel subset solver circuitry 334, and trained score model circuitry 310-2), score model circuitry 310-1, and/or training circuitry 308. Memory circuitry 342 may be further configured to store training input data 307, input data and parameters 311, and estimated full tomographic data 351. I/O circuitry 344 may be configured to provide wired and/or wireless communication functionality for full tomographic data estimation system 302. For example, I/O circuitry 344 may be configured to receive training input data 307, and/or input data and parameters 311.
- UI 346 may include a user input device (e.g., keyboard, mouse, microphone, touch sensitive display, etc.) and/or a user output device, e.g., a display.
- Data store 348 may be configured to store one or more of training input data 307, input data and parameters 311, training data 322, network parameters 309, and/or other data associated with score model circuitry 310-1, and/or training circuitry 308, as described herein. Other data may include, for example, function parameters related to loss function 323, etc.
- the operation of full tomographic data estimation system 302 may be best understood when considered in combination with Tables 1 through 6. Operation of full tomographic data estimation system 302 may include two portions: training and estimating.
- Training operations are configured to train score model circuitry 310-1, i.e., score model, sθ, as described herein.
- Estimating operations are configured to estimate a full tomographic dataset, for a sparse tomographic dataset input based, at least in part, on the trained score model, sθ, as described herein.
- score model circuitry is designated 310-1 prior to and during training, and is designated 310-2 after training.
- the training operations are configured to be performed on subsets, i.e., patches, of a training full tomographic dataset.
- the estimating operations are configured to be performed in parallel across the N patches of a sparse tomographic dataset.
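- As a nonlimiting illustrative sketch of such parallel patch-wise estimation (assumed names and a trivial stand-in for the per-patch estimator; not the claimed embodiment), the N patches may be dispatched concurrently, e.g., with a thread pool:

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def estimate_patch(patch):
    # Stand-in for one patch-wise reverse-diffusion estimate;
    # a real system would run the trained score model here.
    return 2.0 * patch

# N = 8 example patches of the sparse tomographic dataset
patches = [np.full((64, 64), float(i)) for i in range(8)]
with ThreadPoolExecutor(max_workers=4) as pool:
    estimates = list(pool.map(estimate_patch, patches))
```

In practice each worker could instead be bound to a separate processing unit (e.g., a GPU), consistent with the distributed processing-unit/memory-unit pairs described herein.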
- full tomographic data estimation system 302, e.g., training circuitry 308, is configured to manage training operations of score model circuitry 310-1 (and ANN 312), which corresponds to the score estimation model, sθ, as described herein. It may be appreciated that training circuitry 308 is configured to conduct training operations on subsets (y) of a training full tomographic dataset. Thus, for each training full tomographic dataset, Y, included in the training input data 307, training circuitry 308 (e.g., subset training circuitry 320) may be configured to divide a selected training full tomographic dataset into N training full tomographic data subsets.
- training circuitry 308 is configured to receive and/or retrieve training input data 307.
- the training input data 307 may include one or more training full tomographic dataset(s), Y, as described herein.
- Training circuitry 308 may be configured to determine/retrieve a probability distribution of the received/retrieved training full tomographic dataset, p(Y).
- Training circuitry 308 may be further configured to retrieve configuration (e.g., diffusion) parameters (β1, βT). In one nonlimiting example, the diffusion parameters may be included in training input data 307, and may be stored in training data 322.
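- In one nonlimiting illustrative sketch, the diffusion parameters β1 and βT may define a discrete noise schedule over T time steps; a linear schedule (an assumption consistent with common DDPM practice, not a statement of the claimed schedule) may be generated as:

```python
import numpy as np

T = 1000
beta_1, beta_T = 1e-4, 0.02          # example diffusion parameters
betas = np.linspace(beta_1, beta_T, T)  # beta_t for t = 1..T
```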
- Parameters (e.g., ⁇ ) of the score model may be initialized randomly by training circuitry 308 (e.g., subset training circuitry 320).
- Training circuitry 308 is configured to perform training operations on a training full tomographic data subset (i.e., patch). Training operations may thus include dividing the training full tomographic dataset into N subsets. Training operations on subsets of training full tomographic data are configured to be performed iteratively, and may be repeated until convergence is achieved.
- the iterative training operations include selecting a training full tomographic dataset, Y, that has a probability distribution p(Y), and dividing Y into N subsets (e.g., patches).
- Subset training circuitry 320 may be configured to extract a random patch y(0) from the training full tomographic dataset.
- Training circuitry 308, e.g., subset training circuitry 320 may then be configured to generate/determine t, based, at least in part, on a uniform distribution.
- Subset training circuitry 320 may then be configured to update θ with a gradient related to the loss function of Eq. (10), as described herein.
- the iterative operations may be performed/repeated until convergence is achieved.
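- One such iteration may be sketched, in a nonlimiting example, as drawing a random patch, sampling t uniformly, and perturbing the patch with the standard closed-form forward diffusion (the helper name `training_draw` and the closed-form marginal are illustrative assumptions based on common DDPM formulations, not the claimed Eq. (10)):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bar = np.cumprod(1.0 - betas)   # cumulative product of (1 - beta_t)

def training_draw(y0):
    """Sample t uniformly, then perturb the clean patch y0 with the
    closed-form forward diffusion; the returned noise eps is the
    regression target the score network would be fit to."""
    t = int(rng.integers(T))
    eps = rng.standard_normal(y0.shape)
    y_t = np.sqrt(alpha_bar[t]) * y0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return y_t, eps, t

y0 = rng.standard_normal((64, 64))    # random 64x64 training patch
y_t, eps, t = training_draw(y0)
```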
- the parameters of score model, sθ, may be set, and correspond to an output of training operations.
- the score model parameters may then be provided to or may correspond to trained score model circuitry 310-2 of full tomographic data estimation circuitry 304.
- Full tomographic data estimation circuitry 304 may be configured to perform estimation operations, as described herein, e.g., Table 2 (Algorithm 2).
- full tomographic data estimation circuitry 304, e.g., tomographic data preprocessing circuitry 330, may be configured to receive and/or retrieve configuration data (T, parameters (β1, βT), score model, sθ, and associated parameters).
- the score model may be loaded to trained score model circuitry 310-2.
- the received/retrieved data may be included in input data and parameters 311.
- NFE data as described herein, may be similarly received and/or retrieved by, for example, tomographic data preprocessing circuitry 330.
- Full tomographic data estimation circuitry 304 e.g., tomographic data preprocessing circuitry 330, may be configured to receive and/or retrieve input data (Z, M), included in input data and parameters 311.
- Intermediate parameters, including sparse image data, noisy fully sampled tomographic data, and pseudo fully sampled tomographic data, may be determined (as described herein) by, for example, tomographic data preprocessing circuitry 330, based, at least in part, on sparse tomographic data, Z, and under-sampling mask, M. Sparse tomographic data, Z, and under-sampling mask, M, may then be divided into patches 331 by tomographic data preprocessing circuitry 330.
- the patches 331 may then be provided to the parallel subset estimation circuitry 332.
- Parallel subset estimation circuitry 332 may then be configured to estimate a respective estimated full tomographic data subset for each input sparse tomographic data subset.
- the estimating may include determining an initial estimate that has a normal distribution and, for each time interval, iteratively updating the estimate using Eq. (19), and using Eq. (16), conditioned by Eq. (19), as described herein.
- Parallel subset estimation circuitry 332 and trained score model circuitry 310-2 may exchange patch-based data 333 that includes score model input data and corresponding score model results.
- the resulting set of patches 337 may then be provided to subset assembly circuitry 336.
- Subset assembly circuitry 336 may then be configured to combine the set of estimated full tomographic data subsets to form a corresponding estimated full tomographic dataset, Y.
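- As a nonlimiting illustrative sketch of the divide/assemble round trip (the helper names `to_patches` and `assemble` are assumptions; non-overlapping tiling is used here for simplicity, whereas overlapped patches are also described herein):

```python
import numpy as np

def to_patches(Y, p):
    """Divide a 2D dataset into non-overlapping p x p patches,
    scanned row-major over the patch grid."""
    H, W = Y.shape
    return [Y[i:i + p, j:j + p]
            for i in range(0, H, p) for j in range(0, W, p)]

def assemble(patches, shape, p):
    """Tile patches back into a full array (inverse of to_patches)."""
    Y = np.empty(shape)
    k = 0
    for i in range(0, shape[0], p):
        for j in range(0, shape[1], p):
            Y[i:i + p, j:j + p] = patches[k]
            k += 1
    return Y

Y = np.arange(256 * 256, dtype=float).reshape(256, 256)
patches = to_patches(Y, 64)       # N = 16 patches of 64 x 64
Y_hat = assemble(patches, Y.shape, 64)
```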
- the estimated full tomographic dataset, Y may then be provided as output 351 to image reconstruction circuitry 352.
- image reconstruction circuitry 352 may be configured to reconstruct an estimated full image 353 based, at least in part, on the estimated full tomographic dataset, Y.
- full tomographic data estimation circuitry 304 may further include a parallel subset solver circuitry 334.
- the parallel subset solver circuitry 334 may include N subset solver circuitries 334-1,..., 334-N.
- the parallel subset solver circuitry 334 may include one or more ordinary differential equation solvers, as described herein.
- Parallel subset solver circuitry 334 and parallel subset estimation circuitry 332 are configured to exchange patch-based data 335 during estimation operations.
- a full tomographic data estimation system may be configured to estimate a full tomographic dataset based, at least in part, on a sparse tomographic dataset, and based, at least in part, on a trained score model.
- FIG. 4 is a flowchart 400 of operations for training a score model, according to various embodiments of the present disclosure.
- the flowchart 400 illustrates training a score model based, at least in part, on a full tomographic dataset, unsupervised.
- the operations may be performed, for example, by the full tomographic data estimation system 302 (e.g., training circuitry 308) of FIG. 3. Operations of this embodiment may begin with receiving and/or retrieving a training full tomographic dataset at operation 402.
- a probability distribution of training full tomographic dataset(s) may be determined or retrieved at operation 404.
- Operation 406 may include retrieving configuration (e.g., diffusion) parameters (β1, βT).
- a score model (e.g., model parameters) may be initialized randomly at operation 408.
- a training full tomographic dataset, Y that has a probability distribution p(Y) may be selected at operation 410.
- the training full tomographic dataset, Y may be divided into N subsets (e.g., patches) at operation 412.
- a random patch y(0) may be extracted from the training full tomographic dataset at operation 414.
- Operation 416 may include generating/determining t, from a uniform distribution.
- Operation 418 may include updating θ with a gradient related to the loss function.
- Operation 420 may include determining whether the model parameters have converged. If convergence is achieved, the parameters of score model, s ⁇ , may be set at operation 422, and program flow may continue at operation 424. If convergence is not achieved, program flow may proceed to operation 410, and operations 412 through 420 may be repeated.
- a score model, sθ, may be trained based, at least in part, on a full tomographic dataset, unsupervised.
- FIG. 5 is a flowchart 500 of operations for estimating full tomographic data, according to various embodiments of the present disclosure.
- the flowchart 500 illustrates estimating a full tomographic dataset, for a sparse tomographic dataset input.
- the operations may be performed, for example, by the full tomographic data estimation system 302 (e.g., full tomographic data estimation circuitry 304) of FIG. 3.
- Operations of this embodiment may begin with receiving and/or retrieving configuration data (e.g., T, parameters (β1, βT), and score model, sθ, parameters) at operation 502.
- Operation 504 may include receiving and/or retrieving input data (Z, M).
- Operation 506 may include determining intermediate parameters, e.g., the sparse image data, noisy fully sampled tomographic data, and pseudo fully sampled tomographic data.
- Operation 508 may include dividing Z, and M into subsets (e.g., patches).
- Operation 510 may include generating a set of time steps at evenly spaced time intervals.
- Operation 512 may include estimating a respective estimated full tomographic data subset for each input sparse tomographic data subset. The estimating is performed in parallel, for the N subsets. The estimating may include determining an initial estimate that has a normal distribution and, for each time interval, iteratively updating the estimate using Eq. (19), and using Eq. (16), conditioned by Eq. (19), as described herein.
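- As a nonlimiting illustrative sketch of one patch's reverse process (not the claimed Eqs. (16) and (19)): starting from Gaussian noise, evenly spaced Euler steps of a variance-preserving probability-flow ODE may be taken from t = 1 down to t = 0; the dummy `score` argument stands in for the trained score network, and the linear β(t) schedule is an assumption:

```python
import numpy as np

def reverse_sample(score, shape, steps=50, beta_1=1e-4, beta_T=0.02, seed=0):
    """Euler integration of dy = [-0.5*beta(t)*y - 0.5*beta(t)*score(y, t)] dt
    from t = 1 to t = 0, starting from standard Gaussian noise."""
    rng = np.random.default_rng(seed)
    y = rng.standard_normal(shape)
    ts = np.linspace(1.0, 0.0, steps + 1)  # evenly spaced time steps
    for k in range(steps):
        t, dt = ts[k], ts[k + 1] - ts[k]   # dt < 0 (reverse time)
        beta = beta_1 + t * (beta_T - beta_1)
        drift = -0.5 * beta * y - 0.5 * beta * score(y, t)
        y = y + drift * dt
    return y

# With the exact score of a standard normal, s(y) = -y, the drift vanishes,
# so the sampler leaves its Gaussian start unchanged.
sample = reverse_sample(lambda y, t: -y, (64, 64))
```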
- Operation 514 may include combining the set of estimated full tomographic data subsets to form a corresponding estimated full tomographic dataset, Y.
- operation 516 may include reconstructing an estimated full tomographic dataset into estimated image data. Program flow may then continue at operation 520.
- a full tomographic dataset may be estimated from a sparse view projection dataset input.
- the estimated full tomographic dataset may then be reconstructed into an estimated full image.
- an apparatus, method and/or system may include a patch-based denoising diffusion probabilistic model (DDPM) for sparse image reconstruction.
- the apparatus, method and/or system yield relatively good anti-artifact performance while preserving structural details and textural perception.
- the apparatus, method and/or system utilize unsupervised learning, overcoming the difficulty in the acquisition of clinical paired data.
- the apparatus, method and/or system are configured to divide the tomographic dataset into subsets (i.e., patches) so that a plurality of patch-based reverse diffusion processes can proceed in parallel, enabling the deep reconstruction in the cases of large-scale datasets such as for high-resolution breast CT and photon-counting CT.
Experimental data
- To evaluate the performance of the patch-based technique for sparse image reconstruction, according to the present disclosure, the 2016 NIH-AAPM-Mayo Clinic Low-Dose CT Grand Challenge dataset was used.
- the dataset has 2,378 paired CT images with a slice thickness of 3 mm from 10 patients.
- 1,923 paired images were selected from 8 patients as the training set, and 455 paired images from the remaining 2 patients as the test set.
- the image size was 512 x 512.
- Simulated projection datasets were obtained with a distance-driven algorithm. The distance from the x-ray source focal spot to the isocenter of the imaging field of view was 595 mm. The distance from the detector to the source was 1085.6 mm. The number of detector elements was 736, each of which had a pitch of 1.2854 mm. The image pixel size was 0.6641 mm. In total, 736 projection views were uniformly collected as a full projection dataset. In the experiments, the patch size was 64x64.
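- As a nonlimiting sanity check of this geometry (the derived quantities below are not stated in the disclosure; a flat-detector, small-angle approximation is assumed), the fan-beam magnification and the approximate field-of-view coverage follow directly:

```python
# Stated scan geometry
SID = 595.0       # source-to-isocenter distance, mm
SDD = 1085.6      # source-to-detector distance, mm
n_det = 736       # number of detector elements
pitch = 1.2854    # detector element pitch, mm

magnification = SDD / SID              # geometric magnification, ~1.825
pitch_at_iso = pitch / magnification   # effective sampling at isocenter, mm
fov_width = n_det * pitch_at_iso       # approximate transaxial coverage, mm
```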
- the continuous time range was t ∈ [0, 1]. β1 and βT were 10⁻⁴ and 0.02, respectively.
- the model was trained with the Adam optimizer at a learning rate of 1×10⁻⁴.
- the training process converged relatively well after 2×10⁵ iterations on a computing server equipped with an Nvidia RTX A5000 GPU.
- the stride for extracting patches was set to 32 for overlapped patches.
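- With a 64×64 patch size and a stride of 32, each interior pixel is covered by several patches; a nonlimiting sketch of merging such overlapped patches by per-pixel averaging (helper name `assemble_overlapping` is an assumption) follows:

```python
import numpy as np

PATCH, STRIDE = 64, 32

def assemble_overlapping(patches, positions, shape):
    """Accumulate overlapping patches and normalize by the per-pixel
    overlap count, so overlapped regions are averaged."""
    acc = np.zeros(shape)
    cnt = np.zeros(shape)
    for (i, j), p in zip(positions, patches):
        acc[i:i + PATCH, j:j + PATCH] += p
        cnt[i:i + PATCH, j:j + PATCH] += 1.0
    return acc / cnt

shape = (128, 128)
positions = [(i, j)
             for i in range(0, shape[0] - PATCH + 1, STRIDE)
             for j in range(0, shape[1] - PATCH + 1, STRIDE)]
patches = [np.ones((PATCH, PATCH)) for _ in positions]
merged = assemble_overlapping(patches, positions, shape)
```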
- the number of functional evaluations (NFE) was set to 1000.
- the two conditioning parameters were set to 1.0 and 0.1, respectively.
- the reconstructed full CT image according to the present disclosure had a texture relatively very similar to the ground truth.
- the noise power spectrum (NPS) may be determined as: NPS(f_x, f_y) = (Δ_x Δ_y / (N_x N_y)) · E[|DFT_2D{n_i}|²], where Δ_x × Δ_y is the physical size of the pixel, N_x = N_y = 127 are the height and width of a region of interest (ROI), and n_i represents the noise-only realization obtained by subtracting the ground truth from each reconstructed ROI.
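- A nonlimiting numerical sketch of this ensemble NPS estimate (the function name `nps_2d` is an assumption; unit-variance white noise is used as a stand-in for the noise-only realizations):

```python
import numpy as np

def nps_2d(noise_rois, dx, dy):
    """Ensemble-averaged 2D noise power spectrum of noise-only ROIs
    (ground truth already subtracted): (dx*dy)/(Nx*Ny) * mean |DFT2{n_i}|^2."""
    rois = np.asarray(noise_rois, dtype=float)
    _, Nx, Ny = rois.shape
    spectra = np.abs(np.fft.fft2(rois)) ** 2
    return (dx * dy) / (Nx * Ny) * spectra.mean(axis=0)

rng = np.random.default_rng(0)
rois = rng.standard_normal((10, 127, 127))   # white, unit-variance noise
nps = nps_2d(rois, 0.6641, 0.6641)           # pixel size from the experiments
```

For white noise of variance σ², the frequency-averaged NPS reduces (by Parseval's theorem) to Δ_x·Δ_y·σ², a useful consistency check.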
- Each ANN may include, but is not limited to, a deep NN (DNN), a convolutional neural network (CNN), a deep CNN (DCNN), a multilayer perceptron (MLP), etc.
- Training generally corresponds to “optimizing” the ANN, according to a defined metric, e.g., minimizing a cost (e.g., loss) function.
- the term “logic” may refer to an app, software, firmware and/or circuitry configured to perform any of the aforementioned operations.
- Software may be embodied as a software package, code, instructions, instruction sets and/or data recorded on non-transitory computer readable storage medium.
- Firmware may be embodied as code, instructions or instruction sets and/or data that are hard-coded (e.g., nonvolatile) in memory devices.
- Circuitry may include, for example, singly or in any combination, hardwired circuitry, programmable circuitry such as computer processors comprising one or more individual instruction processing cores (including, but not limited to, graphics processing units), state machine circuitry, and/or firmware that stores instructions executed by programmable circuitry.
- the logic may, collectively or individually, be embodied as circuitry that forms part of a larger system, for example, an integrated circuit (IC), an application-specific integrated circuit (ASIC), a system on-chip (SoC), desktop computers, laptop computers, tablet computers, servers, smart phones, etc.
- Memory circuitry 342 may include one or more of the following types of memory: semiconductor firmware memory, programmable memory, non-volatile memory, read only memory, electrically programmable memory, random access memory, flash memory, magnetic disk memory, solid state memory, and/or optical disk memory. Either additionally or alternatively system memory may include other and/or later-developed types of computer- readable memory.
- Embodiments of the operations described herein may be implemented in a computer-readable storage device having stored thereon instructions that when executed by one or more processors perform the methods.
- the processor may include, for example, a processing unit and/or programmable circuitry.
- the storage device may include a machine readable storage device including any type of tangible, non-transitory storage device, for example, any type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic and static RAMs, erasable programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), flash memories, magnetic or optical cards, or any type of storage devices suitable for storing electronic instructions.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Theoretical Computer Science (AREA)
- General Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- General Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- Software Systems (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Mathematical Physics (AREA)
- Radiology & Medical Imaging (AREA)
- Biophysics (AREA)
- Biomedical Technology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computational Linguistics (AREA)
- High Energy & Nuclear Physics (AREA)
- Medical Informatics (AREA)
- Pathology (AREA)
- Condensed Matter Physics & Semiconductors (AREA)
- Signal Processing (AREA)
- Analytical Chemistry (AREA)
- Immunology (AREA)
- Biochemistry (AREA)
- Chemical & Material Sciences (AREA)
- Pulmonology (AREA)
- Optics & Photonics (AREA)
- Heart & Thoracic Surgery (AREA)
- Surgery (AREA)
- Animal Behavior & Ethology (AREA)
- Public Health (AREA)
- Veterinary Medicine (AREA)
- Magnetic Resonance Imaging Apparatus (AREA)
Abstract
In an embodiment, there is provided a full tomographic data estimation circuitry. The full tomographic data estimation circuitry includes a tomographic data preprocessing circuitry, a trained score model circuitry, a parallel subset estimation circuitry, and a subset assembly circuitry. The tomographic data preprocessing circuitry is configured to divide an input sparse tomographic dataset into a number, N, input sparse tomographic data subsets. The parallel subset estimation circuitry is configured to estimate a respective estimated full tomographic data subset for each input sparse tomographic data subset in the input sparse tomographic dataset. The estimating is performed in parallel. The subset assembly circuitry is configured to combine the N estimated full tomographic data subsets to form an estimated full tomographic dataset.
Description
104795-101
PATCH-BASED DENOISING DIFFUSION PROBABILISTIC MODEL FOR SPARSE TOMOGRAPHIC IMAGING
CROSS REFERENCE TO RELATED APPLICATION(S)
This application claims the benefit of U.S. Provisional Application No. 63/426,415, filed November 18, 2022, which is incorporated by reference as if disclosed herein in its entirety.
GOVERNMENT LICENSE RIGHTS
This invention was made with government support under award numbers CA237267, EB032716, and EB031102, all awarded by the National Institutes of Health (NIH). The government has certain rights in the invention.
FIELD
The present disclosure relates to sparse tomographic imaging, in particular to, a patch-based denoising diffusion probabilistic model for sparse tomographic imaging.
BACKGROUND
In tomographic imaging, tomographic (i.e., measurement) data is acquired, then an image may be reconstructed from the tomographic data. For example, in computed tomography (CT), projection data (a sinogram) is acquired and a corresponding image may then be reconstructed using, for example, filtered back projection (FBP). In another example, in magnetic resonance imaging (MRI), k-space data is acquired and a corresponding image may then be reconstructed using, for example, a Fourier transform. As is known, CT exposes a test subject to ionizing radiation, and test times associated with MRI are relatively long.
SUMMARY
In an embodiment, there is provided a full tomographic data estimation circuitry. The full tomographic data estimation circuitry includes a tomographic data preprocessing circuitry, a trained score model circuitry, a parallel subset estimation circuitry, and a subset assembly circuitry. The tomographic data preprocessing circuitry is configured to divide an input sparse tomographic dataset into a number, N, input sparse tomographic data subsets. The parallel subset estimation circuitry is configured to estimate a respective estimated full tomographic data subset for each input sparse tomographic data subset in the input sparse
tomographic dataset. The estimating is performed in parallel. The subset assembly circuitry is configured to combine the N estimated full tomographic data subsets to form an estimated full tomographic dataset. In some embodiments of the full tomographic data estimation circuitry, each subset corresponds to a two-dimensional (2D) patch. In some embodiments of the full tomographic data estimation circuitry, the estimating includes determining a solution to an ordinary differential equation. In some embodiments of the full tomographic data estimation circuitry, the estimating each estimated full tomographic data subset is conditioned on a respective pseudo full tomographic data subset. In some embodiments of the full tomographic data estimation circuitry, each pseudo full tomographic data subset is generated based, at least in part, on a respective input sparse tomographic data subset, and based, at least in part, on a respective reconstructed image dataset corresponding to the input sparse tomographic dataset. In some embodiments, there is provided a method for estimating full tomographic data. The method includes receiving, by a full tomographic estimation circuitry, an input sparse tomographic dataset. The method further includes dividing, by a tomographic data preprocessing circuitry, the input sparse tomographic dataset into a number, N, input sparse tomographic data subsets. The method further includes estimating, by a parallel subset estimation circuitry that includes a trained score model circuitry, a respective estimated full tomographic data subset for each input sparse tomographic data subset in the input sparse tomographic dataset. The estimating is performed in parallel. The method further includes combining, by a subset assembly circuitry, the N estimated full tomographic data subsets to form an estimated full tomographic dataset. 
In some embodiments, the method further includes reconstructing, by an image reconstruction circuitry, an estimated image dataset based, at least in part, on the estimated full tomographic dataset. In some embodiments of the method, each subset corresponds to a two-dimensional (2D) patch. In some embodiments of the method, the estimating includes determining a solution to an ordinary differential equation. In some embodiments of the method, the trained score model circuitry is trained based, at least in part, on a training full tomographic dataset that has been divided into N training full tomographic data subsets.
In some embodiments of the method, the training includes training a score model, sθ. In some embodiments of the method, the training is unsupervised. In some embodiments of the method, the training includes training an artificial neural network. In some embodiments of the method, the estimating each estimated full tomographic data subset is conditioned on a respective pseudo full tomographic data subset. In some embodiments of the method, each pseudo full tomographic data subset is generated based, at least in part, on a respective input sparse tomographic data subset, and based, at least in part, on a respective reconstructed image dataset corresponding to the input sparse tomographic dataset. In an embodiment, there is provided a full tomographic data estimation system. The full tomographic data estimation system includes a training circuitry, and a full tomographic data estimation circuitry. The training circuitry is configured to train a score model based, at least in part, on a training full tomographic dataset. The full tomographic data estimation circuitry includes a tomographic data preprocessing circuitry, a trained score model circuitry, a parallel subset estimation circuitry, and a subset assembly circuitry. The tomographic data preprocessing circuitry is configured to divide an input sparse tomographic dataset into a number, N, input sparse tomographic data subsets. The parallel subset estimation circuitry is configured to estimate a respective estimated full tomographic data subset for each input sparse tomographic data subset in the input sparse tomographic dataset. The estimating is performed in parallel. The subset assembly circuitry is configured to combine the N estimated full tomographic data subsets to form an estimated full tomographic dataset. In some embodiments of the full tomographic data estimation system, the estimating includes determining a solution to an ordinary differential equation. 
In some embodiments of the full tomographic data estimation system, the training is unsupervised. In some embodiments of the full tomographic data estimation system, the estimating each estimated full tomographic data subset is conditioned on a respective pseudo full tomographic data subset. In some embodiments, there is provided a computer readable storage device. The device has stored thereon instructions that when executed by one or more processors result in the following operations including: any embodiment of the method.
BRIEF DESCRIPTION OF DRAWINGS
The drawings show embodiments of the disclosed subject matter for the purpose of illustrating features and advantages of the disclosed subject matter. However, it should be understood that the present application is not limited to the precise arrangements and instrumentalities shown in the drawings, wherein:
FIG. 1 is a graphic illustrating training a score model and sampling a full tomographic data subset for one example (i.e., a full-view sinogram), according to the present disclosure;
FIG. 2 is a graphic illustrating a conditioning technique, according to the present disclosure;
FIG. 3 illustrates a functional block diagram of a system that includes a full tomographic data estimation system, according to several embodiments of the present disclosure;
FIG. 4 is a flowchart of operations for training a score model, according to various embodiments of the present disclosure; and
FIG. 5 is a flowchart of operations for estimating full tomographic data, according to various embodiments of the present disclosure.
Although the following Detailed Description will proceed with reference being made to illustrative embodiments, many alternatives, modifications, and variations thereof will be apparent to those skilled in the art.
DETAILED DESCRIPTION
Sparse tomographic imaging may be utilized to reduce radiation dose for CT or to reduce data acquisition time for MRI. In sparse tomographic imaging, tomographic data acquisitions are reduced relative to full tomographic imaging. For example, in sparse CT imaging, a subset of possible views may be acquired. Image reconstruction from sparse tomographic data (i.e., sparse image reconstruction) may result in image artifacts. Deep learning techniques (e.g., artificial neural networks (ANNs)) may be applied to sparse image reconstruction. The ability of ANNs to remove the artifacts when working only in the image domain may be relatively limited. 
Deep learning-based tomographic data processing can achieve a better anti-artifact performance, but typically utilizes feature maps of an entire image in a video memory, which makes handling large-scale or three-dimensional (3D) images relatively challenging. Generally, this disclosure relates to a full tomographic data estimation system with sparse tomographic data as input. In an embodiment, an apparatus, method and/or system
may include a patch-based denoising diffusion probabilistic model (DDPM) for sparse image reconstruction. As used herein, tomographic data may include, but is not limited to, projection data associated with a CT scan, and k-space data associated with MRI. Tomographic data may further include other imaging modalities where acquired measurement data may be sparse. An apparatus, system, and/or method, according to the present disclosure, is configured to receive sparse tomographic data and, using a trained score model (related to DDPM), estimate corresponding full tomographic data. Full image data may then be generated by an appropriate image reconstruction technique. In one nonlimiting example, CT image data may be generated by filtered back projection using the estimated full tomographic data as input. The score model may be trained based, at least in part, on full tomographic data. In an embodiment, the training may be unsupervised. The trained score model may then be used by or included in a full tomographic data estimation circuitry to estimate the full tomographic data. The training and/or estimating may be performed on subsets of tomographic data, in parallel. In an embodiment, the tomographic data (training or sparse input tomographic data) may be divided into a number, N, subsets of tomographic data. Operations may then be performed, in parallel, utilizing N processing units, as will be described in more detail below. It may be appreciated that the parallel processing over a plurality of processing units is configured to reduce a computational load, and/or memory usage of each processing unit. It may be further appreciated that training the score model circuitry in an unsupervised manner is configured to address a lack of actual training data pairs. By way of background, a denoising diffusion probabilistic model (DDPM) is a generative technique configured to generate sample datasets that correspond to a given probability distribution. 
In a DDPM process, noise (e.g., Gaussian noise) is gradually added to an image, transforming the image through a plurality of latent spaces into a Gaussian distribution. The DDPM technique is then configured to train a network to learn the denoising process, tracing the latent spaces backward to generate an image from the original distribution. It may be appreciated that DDPM may not be subject to mode collapse, for example, and may exhibit better stability than selected other generative techniques in image processing tasks. A patch-based DDPM technique, according to the present disclosure, is configured to enhance a resulting image reconstruction when the tomographic data is sparse. The DDPM technique is configured to operate in the tomographic domain (e.g., the projection domain for CT, and k-space for MRI). The DDPM technique includes two stages: training and sampling
(i.e., inference). The training stage corresponds to a (forward) diffusion process and the sampling stage corresponds to a reverse diffusion process. In the training stage, an artificial neural network (ANN), e.g., a U-Net, is trained to learn the reverse diffusion process for generating fully sampled tomographic patches. In the sampling stage, a fully sampled Radon transform may be applied to sparse images to obtain pseudo fully sampled tomographic data. The pseudo fully sampled tomographic data may then be divided (e.g., cropped) into subsets (e.g., patches) and used as a condition for the reverse diffusion process. The patches may be restored by ordinary differential equation (ODE) sampling. The restored patches may then be combined to form a final tomographic dataset. A relatively high-quality image may then be directly reconstructed using a reconstruction technique, e.g., filtered back-projection (FBP) for CT, or a Fourier transform for MRI. The image reconstruction with the estimated full tomographic data may reduce or eliminate image artifacts while preserving relatively important clinical details.

Advantageously, an apparatus, method and/or system, according to the present disclosure, is configured to be relatively clinically friendly. The apparatus, method and/or system do not require paired data. Both training and sampling operations may be done in an unsupervised mode. The apparatus, method and/or system is configured to operate in parallel, on subsets of tomographic data acquisition domain datasets, i.e., is patch-based. Thus, a relatively large-scale dataset may be divided into a number of independent subsets (e.g., two-dimensional patches or three-dimensional cubes), facilitating parallel processing, possibly on a plurality of processing units (e.g., graphics processing units).
In one nonlimiting example, the apparatus, method and/or system, according to the present disclosure, may be configured to solve relatively large-scale deep reconstruction tasks, for example, relatively high-resolution breast cone-beam CT. As used herein, "dataset" corresponds to tomographic data or a reconstructed image, and "data subset" corresponds to a portion of a dataset. As further used herein, "patch" corresponds to a portion of a two-dimensional dataset. A tomographic dataset may be divided into a number, N, tomographic data subsets. It should be noted that "patch" and "data subset" are used interchangeably. By way of background, a score-based DDPM may be utilized to generate a plurality of tomographic data subsets, as will be described in more detail below using CT as an example. However, this disclosure is not limited in this regard, and a similar technique may apply to sparse MRI data.

A fully sampled projection dataset (i.e., a full tomographic dataset) may be denoted as $Y \in \mathbb{R}^{N_v \times N_d}$, where $N_v$ represents the number of projection views and $N_d$ represents the number of detector elements, in a CT scan. A down-sampled (i.e., sparse) tomographic dataset, $Z$, can be obtained by a linear transform:

$$Z = P(M \odot Y) \quad (1)$$

where $Z \in \mathbb{R}^{N_v' \times N_d}$, in this example, denotes the sub-sampled (i.e., sparse-view or down-sampled) projection dataset, and $M \in \mathbb{R}^{N_v \times N_d}$ is a mask that implements the down-sampling. The mask is configured with an element $M_{ij} = 1$ if the i-th view ($j = 1, \ldots, N_d$) is sampled, and $M_{ij} = 0$ otherwise. The symbol $\odot$ represents element-wise multiplication. $P : \mathbb{R}^{N_v \times N_d} \rightarrow \mathbb{R}^{N_v' \times N_d}$ corresponds to an operation configured to extract selected tomographic data from an original tomographic dataset. In other words, $P$ is configured to remove the views of $Y$ that have been set to zero by the mask. It may be appreciated that these masked tomographic dataset portions (i.e., projection views) may not be captured in sparse tomographic data (i.e., sparse-view sinograms). Eq. (1) is configured to describe a relationship between a full tomographic dataset ($Y$) and a corresponding sparse tomographic dataset ($Z$). During training, as will be described in more detail below, training input data includes full tomographic datasets, and $Z$ is generated. In actual use (i.e., for estimating full tomographic data), input data corresponds to sparse tomographic data, $Z$, and a corresponding full tomographic dataset, $Y$, is estimated.

To perform the patch-based diffusion, a patch $y \in \mathbb{R}^{d \times d}$ is randomly extracted from the full tomographic dataset $Y$. It may be appreciated that a forward process of DDPM is a Markov chain configured to gradually add Gaussian noise to a clean patch $y_0 = y$ with a predefined variance schedule $\beta_1, \ldots, \beta_T$:

$$q(y_{1:T} \mid y_0) = \prod_{t=1}^{T} q(y_t \mid y_{t-1}) \quad (2)$$

where:

$$q(y_t \mid y_{t-1}) = \mathcal{N}\!\left(y_t;\ \sqrt{1-\beta_t}\, y_{t-1},\ \beta_t I\right) \quad (3)$$

Due to the properties of the Gaussian distribution, an iteratively perturbed patch at any time step t can be directly sampled as:

$$q(y_t \mid y_0) = \mathcal{N}\!\left(y_t;\ \sqrt{\bar{\alpha}_t}\, y_0,\ (1-\bar{\alpha}_t) I\right) \quad (4)$$

where $\alpha_t = 1 - \beta_t$ and $\bar{\alpha}_t = \prod_{s=1}^{t} \alpha_s$. A final result of the forward process is configured to approach the normal distribution $y_T \sim \mathcal{N}(0, I)$. It may be appreciated that a specific implementation of Eq. (4) may correspond to:

$$y_t = \sqrt{\bar{\alpha}_t}\, y_0 + \sqrt{1-\bar{\alpha}_t}\; \epsilon, \quad \epsilon \sim \mathcal{N}(0, I) \quad (5)$$
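The closed-form implementation of Eq. (4) can be sketched in a few lines of NumPy. This is an illustrative toy: the schedule values, patch size, and function name are assumptions, not taken from the disclosure.

```python
import numpy as np

def perturb_patch(y0, t, betas, rng):
    """Draw one sample of the iteratively perturbed patch at time step t
    (1-indexed), using the closed-form Gaussian perturbation of Eq. (4)."""
    alphas = 1.0 - betas                      # alpha_t = 1 - beta_t
    alpha_bar = np.prod(alphas[:t])           # cumulative product up to step t
    eps = rng.standard_normal(y0.shape)       # unit Gaussian noise
    return np.sqrt(alpha_bar) * y0 + np.sqrt(1.0 - alpha_bar) * eps

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)         # example variance schedule
y0 = rng.standard_normal((64, 64))            # stand-in 64x64 sinogram patch
yT = perturb_patch(y0, 1000, betas, rng)      # late-step sample
```

Because the cumulative product is nearly zero at the end of a long schedule, the final sample is statistically indistinguishable from unit Gaussian noise, matching the stated limit $y_T \sim \mathcal{N}(0, I)$.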
In one nonlimiting example, a U-Net may be used to learn the Gaussian perturbations involved in the diffusion process, with a loss function:

$$L(\theta) = \mathbb{E}_{t,\, y_0,\, \epsilon}\!\left[\left\| \epsilon - \epsilon_\theta(y_t, t) \right\|^2\right] \quad (6)$$

In the inference stage, a reverse diffusion process $y_{t-1} \sim p_\theta(y_{t-1} \mid y_t)$ may be iteratively determined as:

$$y_{t-1} = \frac{1}{\sqrt{\alpha_t}}\left(y_t - \frac{\beta_t}{\sqrt{1 - \bar{\alpha}_t}}\, \epsilon_\theta(y_t, t)\right) + \sqrt{\beta_t}\; z, \quad z \sim \mathcal{N}(0, I) \quad (7)$$

It may be appreciated that a method to construct the diffusion process for the continuous time variable $t \in [0, 1]$ may be configured to allow a tractable form for relatively more efficient sampling. It may be further appreciated that a stochastic differential equation (SDE) may be used to describe the diffusion process:

$$dy = f(t)\, y\, dt + g(t)\, dw \quad (8)$$

where $w \in \mathbb{R}^{d \times d}$ is a Wiener process, $f : \mathbb{R} \rightarrow \mathbb{R}$ is a scalar function to define a drift component, and $g : \mathbb{R} \rightarrow \mathbb{R}$ is another scalar function to define a diffusion coefficient. It may be appreciated that the reverse diffusion process can also be modeled as a solution to an SDE:

$$dy = \left[f(t)\, y - g(t)^2\, \nabla_y \log p_t(y)\right] dt + g(t)\, d\bar{w} \quad (9)$$

where $\bar{w} \in \mathbb{R}^{d \times d}$ is another Wiener process for the time-reversed SDE, and $\nabla_y \log p_t(y)$ is referred to as the score. Once the score of each marginal distribution $\nabla_y \log p_t(y)$ is known, a relatively high-quality patch can be obtained by time-reversed SDE sampling from $y_T \sim \mathcal{N}(0, I)$. Similar to a DDPM that is configured to learn each incremental noise perturbation, a score-based model, according to the present disclosure, may be configured to use an artificial neural network to estimate the score. It may thus be appreciated that a time-dependent score estimation model, $s_\theta$, may be trained with a corresponding loss function:

$$L(\theta) = \mathbb{E}_t\!\left\{ \lambda(t)\, \mathbb{E}_{y_0}\, \mathbb{E}_{y_t \mid y_0}\!\left[\left\| s_\theta(y_t, t) - \nabla_{y_t} \log p_t(y_t \mid y_0) \right\|^2\right] \right\} \quad (10)$$

where $\lambda(t)$ is a positive weighting function. It may be appreciated that $\lambda(t) \propto 1 / \mathbb{E}\!\left[\left\| \nabla_{y_t} \log p_t(y_t \mid y_0) \right\|^2\right]$, and $\lambda(t)$ may be determined as $\lambda(t) = g(t)^2$.
It may be appreciated that in time-reversed SDE sampling, a step size should be relatively small to reflect the Wiener process. It may be further appreciated that the time-reversed SDE sampling may share a same marginal probability density with an ordinary differential equation (ODE) sampling process. A corresponding ODE, describing the probability flow, may be termed a "flow ODE". The flow ODE may be formulated as:

$$\frac{dy}{dt} = f(t)\, y - \frac{1}{2}\, g(t)^2\, \nabla_y \log p_t(y) \quad (11)$$

Based, at least in part, on the ODE sampling, the reverse diffusion process may be configured to reduce the noise generated by the Wiener process and may thus allow a larger step size. The larger step size may improve sampling efficiency. It may be appreciated that a DDPM may correspond to a special form of SDE. It may be further appreciated that the diffusion process of Eq. (3) may be approximately equivalent to an SDE as:

$$dy = -\frac{1}{2}\, \beta(t)\, y\, dt + \sqrt{\beta(t)}\; dw \quad (12)$$

where $\beta(t)$ corresponds to a continuous form of the parameter sequence $\beta_1, \ldots, \beta_T$. In one nonlimiting example, $\beta(t)$, for $t \in [0, 1]$, may be written as:

$$\beta(t) = \beta_1 + t\, (\beta_T - \beta_1) \quad (13)$$

It may be appreciated that the continuous version of Eq. (4) can be obtained by solving Eq. (12), that may then correspond to an iteratively perturbed patch at any time instant, t:

$$y_t = \sqrt{\bar{\alpha}(t)}\; y_0 + \sqrt{1 - \bar{\alpha}(t)}\; \epsilon, \quad \epsilon \sim \mathcal{N}(0, I) \quad (14)$$

where:

$$\bar{\alpha}(t) = e^{-\int_0^t \beta(s)\, ds} \quad (15)$$

It should be noted that the perturbation prediction network of a DDPM, $\epsilon_\theta$, may be regarded as estimating a scaled score, $\epsilon_\theta(y_t, t) = -\sqrt{1 - \bar{\alpha}_t}\; \nabla_{y_t} \log p_t(y_t)$.
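The stated equivalence between the perturbation prediction and the score can be verified numerically for the Gaussian kernel of Eq. (4), whose score has a closed form. A minimal NumPy check (array sizes and names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
alpha_bar = 0.7                                # example cumulative product
y0 = rng.standard_normal((8, 8))               # toy clean patch
eps = rng.standard_normal((8, 8))
yt = np.sqrt(alpha_bar) * y0 + np.sqrt(1 - alpha_bar) * eps   # perturbed patch

# Closed-form score of the Gaussian perturbation kernel q(y_t | y_0)
score = -(yt - np.sqrt(alpha_bar) * y0) / (1 - alpha_bar)

# The noise target equals the negatively scaled score, as stated above
scaled_score = -np.sqrt(1 - alpha_bar) * score
assert np.allclose(scaled_score, eps)
```

This identity is what allows a trained perturbation prediction network and a score prediction network to be used interchangeably, up to the scale factor.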
Thus, a trained perturbation prediction network and a corresponding score prediction network may be considered equivalent. In an embodiment, the score estimation network may be trained on a plurality of patches extracted from a full tomographic dataset. Table 1 is pseudocode for one example algorithm (Algorithm 1) that describes the training procedure for the estimation model, $s_\theta$. Table 1
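One iteration of Algorithm 1 might be sketched as follows. This is a hedged illustration, not the disclosed implementation: `predict_eps` stands in for the U-Net, the loss is written in the equivalent noise-prediction (MSE) form, and a real training loop would backpropagate this loss into the network parameters θ.

```python
import numpy as np

def ddpm_training_step(Y, d, betas, predict_eps, rng):
    """One training iteration (sketch): draw a random d x d patch y0 from the
    full sinogram Y, a uniform time step t, perturb the patch in closed form,
    and score the noise prediction with a mean-squared-error loss."""
    Nv, Nd = Y.shape
    r, c = rng.integers(0, Nv - d), rng.integers(0, Nd - d)
    y0 = Y[r:r + d, c:c + d]                      # random patch
    t = int(rng.integers(1, len(betas) + 1))      # t ~ Uniform{1, ..., T}
    alpha_bar = np.prod(1.0 - betas[:t])
    eps = rng.standard_normal((d, d))
    yt = np.sqrt(alpha_bar) * y0 + np.sqrt(1 - alpha_bar) * eps
    loss = np.mean((eps - predict_eps(yt, t)) ** 2)   # noise-prediction loss
    return loss

rng = np.random.default_rng(2)
Y = rng.standard_normal((360, 512))               # stand-in full-view sinogram
betas = np.linspace(1e-4, 0.02, 100)
# Zero predictor stands in for the untrained network; loss is near E[eps^2] = 1
loss = ddpm_training_step(Y, 64, betas, lambda yt, t: np.zeros_like(yt), rng)
```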
FIG. 1 is a graphic illustrating training a score model and sampling a full tomographic data subset for one example full tomographic dataset (i.e., full-view projection data (full-view sinogram)) 100. Example 100 includes a full tomographic dataset, i.e., sinogram 102, a forward diffusion process 106, and a reverse diffusion process 110. In operation, a full tomographic data subset, i.e., patch 104, may be selected from sinogram 102. The selected patch 104 corresponds to a starting tomographic data subset for the forward diffusion process 106. The forward diffusion process 106 results in a noise data (e.g., Gaussian noise) subset 108. The noise data subset 108 is the input to the reverse diffusion process 110. The reverse diffusion process 110 results in an estimate of patch 104 that is configured to be within the probability distribution of patch 104. Thus, a score-based DDPM may be utilized to generate a tomographic data subset. In an embodiment, when sampling (e.g., estimating) tomographic data via time reversal, a plurality of data patches may be sampled in parallel. In other words, estimating (and/or training) may be performed on subsets of tomographic data, in parallel. In an embodiment, the tomographic data (training or sparse input tomographic data) may be divided into a number, N, subsets of tomographic data. Operations may then be performed, in parallel, utilizing N processing units, as described herein.
By way of further background, an estimate of a full tomographic dataset may include a condition. For example, the condition may be related to a sparse image reconstructed from sparse tomographic data, as will be described in more detail below. The process to sample each patch may be written as:

$$y_{t_{j+1}}^{(i)} \sim p_\theta\!\left(y_{t_{j+1}}^{(i)} \,\middle|\, y_{t_j}^{(i)}\right) \quad (16)$$

that can be implemented via Langevin dynamics. Starting from $y^{(i)}(1) \sim \mathcal{N}(0, I)$, such a sampling process is configured to generate a random tomographic patch. To restore a down-sampled tomographic patch (i.e., to estimate a full tomographic patch from the down-sampled tomographic patch), an actual down-sampled tomographic patch may be included as the condition to the reverse diffusion process. For example, for the down-sampled tomographic data Z, an image reconstruction technique (e.g., FBP for CT or Fourier transform for MRI) may be used to obtain a corresponding noisy image $\bar{X}$. A full Radon transform of $\bar{X}$ may then be performed to obtain a noisy fully sampled tomographic dataset $\bar{Y}$. The down-sampled tomographic dataset Z may then be used to rectify the noisy fully sampled tomographic data by inserting the actual tomographic values of Z into $\bar{Y}$:

$$\tilde{Y} = (1 - M) \odot \bar{Y} + P^{-1}(Z) \quad (17)$$

where $P^{-1} : \mathbb{R}^{N_v' \times N_d} \rightarrow \mathbb{R}^{N_v \times N_d}$ is the operation (for CT) that reshapes the down-sampled data into the fully sampled counterpart by inserting zero into the pixels corresponding to the discarded views, and $\tilde{Y}$ corresponds to the final pseudo fully sampled tomographic dataset. Then, with a fixed stride, N patches may be extracted from the pseudo full dataset and the full down-sampling mask to obtain two sets: $\{\tilde{y}^{(i)}\}_{i=1}^{N}$ and $\{m^{(i)}\}_{i=1}^{N}$, respectively. In one example, the N patches may overlap. In another example, the N patches may not overlap. For the reverse diffusion process at time $t_j$, the forward diffusion results $\{\tilde{y}_{t_j}^{(i)}\}_{i=1}^{N}$ may be first obtained:

$$\tilde{y}_{t_j}^{(i)} = \sqrt{\bar{\alpha}(t_j)}\; \tilde{y}^{(i)} + \sqrt{1 - \bar{\alpha}(t_j)}\; \epsilon, \quad \epsilon \sim \mathcal{N}(0, I) \quad (18)$$

These forward diffusion results may then be used to condition the reverse process. In this manner, a conditioned diffusion technique may be configured to restore tomographic data for sparse
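The down-sampling operator P of Eq. (1), its inverse, and the rectification that inserts the actual values of Z into the noisy fully sampled data can be sketched with NumPy masks. The geometry is hypothetical, and a random array stands in for the Radon transform of the FBP image that the disclosure computes from the sparse reconstruction.

```python
import numpy as np

Nv, Nd, stride = 360, 512, 4                     # hypothetical CT geometry
rng = np.random.default_rng(3)
Y = rng.standard_normal((Nv, Nd))                # (unknown) full sinogram
M = np.zeros((Nv, Nd)); M[::stride] = 1.0        # sample every 4th view

P = lambda A: A[::stride]                        # extract sampled views
Z = P(M * Y)                                     # sparse-view measurement

def P_inv(Z):
    """Reshape the down-sampled data into the fully sampled counterpart,
    inserting zeros at the discarded views."""
    full = np.zeros((Nv, Nd)); full[::stride] = Z
    return full

Y_bar = rng.standard_normal((Nv, Nd))            # stand-in for Radon(FBP(Z))
Y_tilde = (1.0 - M) * Y_bar + P_inv(Z)           # rectified pseudo full data
```

After the rectification, the measured views of the pseudo full dataset agree exactly with the acquired data, while the discarded views carry the (noisy) Radon-transformed estimate.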
input tomographic data, to yield an estimated full tomographic dataset based, at least in part, on the sparse tomographic input data.

FIG. 2 is a graphic illustrating a conditioning technique 200, according to the present disclosure. The conditioning technique corresponds to Eq. (19). A first conditioning parameter, $\lambda$, and a second conditioning parameter, $\mu$, may each have a value between zero and one. It may be appreciated that the values of the first conditioning parameter $\lambda$ and the second conditioning parameter $\mu$ are configured to adjust a relative contribution of the forward diffusion distribution, i.e., the pseudo full tomographic data subset, to the estimated full tomographic dataset. The forward diffusion result $\tilde{y}_{t_j}^{(i)}$ may be used to condition the reverse diffusion sampling as:

$$y_{t_j}^{(i)} \leftarrow \left[\lambda\, m^{(i)} + \mu \left(J - m^{(i)}\right)\right] \odot \tilde{y}_{t_j}^{(i)} + \left[(1-\lambda)\, m^{(i)} + (1-\mu)\left(J - m^{(i)}\right)\right] \odot y_{t_j}^{(i)} \quad (19)$$

where $J \in \mathbb{R}^{d \times d}$ corresponds to a matrix with all elements being one. It may be appreciated that the conditioning illustrated in Eq. (19) may be applied before implementing Eq. (16). Table 2 is pseudocode for one example algorithm (Algorithm 2) that describes a reverse diffusion process conditioned by the pseudo fully sampled tomographic data, using CT as an example. It may be appreciated that Algorithm 2 may be applied to MRI tomographic data by replacing the FBP operation with a corresponding Fourier transform operation.
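A sketch of the conditioning step, under one plausible reading of Eq. (19): a convex combination of the forward-diffused pseudo patch and the current reverse-diffusion sample, weighted separately on measured and unmeasured entries. The parameter names and this exact blending form are assumptions for illustration.

```python
import numpy as np

def condition_patch(y_t, y_tilde_t, m, lam, mu):
    """Blend the current reverse-diffusion sample y_t with the forward-
    diffused pseudo patch y_tilde_t: weight lam on measured entries
    (m == 1) and mu on unmeasured entries (m == 0)."""
    J = np.ones_like(m)                      # all-ones matrix J
    w = lam * m + mu * (J - m)               # per-pixel weight on pseudo data
    return w * y_tilde_t + (J - w) * y_t

rng = np.random.default_rng(4)
m = (rng.random((64, 64)) < 0.25).astype(float)   # toy binary mask patch
y_t = rng.standard_normal((64, 64))
y_tilde_t = rng.standard_normal((64, 64))
# lam = 1 pins the measured entries to the pseudo data exactly
y_cond = condition_patch(y_t, y_tilde_t, m, lam=1.0, mu=0.1)
```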
It may be appreciated that the SDE time reversal may perturb tomographic data with the Wiener process. Although the tomographic data perturbations may be indistinguishable on visual inspection, the tomographic data perturbations may spread over the image domain after reconstruction, resulting in degraded image quality. An ODE sampling method, as described herein, is configured to reduce or eliminate the perturbations by the Wiener process. A relatively efficient ODE solver, as described herein, may be used to improve the sampling performance. It may be appreciated that the scalar functions of Eq. (8) may be defined as:

$$f(t) = \frac{d \log \alpha_t}{dt}, \qquad g(t)^2 = \frac{d \sigma_t^2}{dt} - 2\, \frac{d \log \alpha_t}{dt}\, \sigma_t^2 \quad (20)$$

where $\alpha_t = \sqrt{\bar{\alpha}_t}$ and $\sigma_t = \sqrt{1 - \bar{\alpha}_t}$. Substituting the definitions of Eq. (20) into Eq. (11), a reverse ODE process may be obtained:

$$\frac{dy_t}{dt} = f(t)\, y_t + \frac{g(t)^2}{2 \sigma_t}\, \epsilon_\theta(y_t, t) \quad (21)$$

It may be appreciated that Eq. (21) corresponds to a semi-linear ODE. In one nonlimiting example, a variation of constants technique may be used to determine a solution to Eq. (21) as:

$$y_t = e^{\int_s^t f(\tau)\, d\tau}\, y_s + \int_s^t e^{\int_\tau^t f(r)\, dr}\; \frac{g(\tau)^2}{2 \sigma_\tau}\, \epsilon_\theta(y_\tau, \tau)\, d\tau \quad (22)$$

It may be appreciated that a simplified form of Eq. (22) may be obtained with a strictly decreasing function of t, denoted as $\lambda(t)$ (e.g., the half log signal-to-noise ratio, $\lambda(t) = \log(\alpha_t / \sigma_t)$), which has an inverse function $t = t_\lambda(\lambda)$. Then, by changing the time variable t into the parameter variable λ, and denoting $\hat{y}_\lambda = y_{t_\lambda(\lambda)}$ and $\hat{\epsilon}_\theta(\hat{y}_\lambda, \lambda) = \epsilon_\theta(y_{t_\lambda(\lambda)}, t_\lambda(\lambda))$, Eq. (22) may be rewritten as:

$$y_t = \frac{\alpha_t}{\alpha_s}\, y_s - \alpha_t \int_{\lambda_s}^{\lambda_t} e^{-\lambda}\, \hat{\epsilon}_\theta(\hat{y}_\lambda, \lambda)\, d\lambda \quad (23)$$

where the integral $\int e^{-\lambda}\, \hat{\epsilon}_\theta\, d\lambda$ is called the exponentially weighted integral of $\hat{\epsilon}_\theta$. This integral may be numerically determined using a Taylor expansion. It may be appreciated that a solver for the flow ODE may be selected according to the order of the Taylor expansion. Tables 3, 4, and 5 are pseudocode for respective example algorithms (Algorithms 3, 4, and 5) that describe a plurality of DPM solvers. Algorithms 3, 4 and 5 correspond to first-order, second-order and third-order Taylor expansions, respectively. The solvers are labeled DPM-Solver-1, DPM-Solver-2, and DPM-Solver-3, respectively. It may be appreciated that DPM-Solver-k requires k function evaluations per step, i.e., k times the computational complexity per step. It may be further appreciated that a relatively higher-order solver has a faster convergence speed, so that it takes fewer steps to achieve satisfactory results. For a selected number of function evaluations (NFE), DPM-Solver-3 is recommended. Table 3
Table 4
Table 5
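A first-order solver step (in the spirit of DPM-Solver-1) can be sketched from the exponentially weighted integral: over one step the noise prediction is held constant, which gives a closed-form update $y_t = (\alpha_t/\alpha_s)\, y_s - \sigma_t (e^h - 1)\, \epsilon_\theta$, with h the change in the half log signal-to-noise ratio. The schedule function and the zero-noise stand-in model below are hypothetical.

```python
import numpy as np

def dpm_solver_1_step(y_s, s, t, alpha_bar_fn, eps_model):
    """One first-order solver step: alpha_bar_fn(t) gives the continuous
    cumulative product of the variance schedule; eps_model stands in for
    the trained perturbation prediction network."""
    a_s, a_t = np.sqrt(alpha_bar_fn(s)), np.sqrt(alpha_bar_fn(t))
    sig_s, sig_t = np.sqrt(1 - a_s**2), np.sqrt(1 - a_t**2)
    lam_s, lam_t = np.log(a_s / sig_s), np.log(a_t / sig_t)  # half log-SNR
    h = lam_t - lam_s
    return (a_t / a_s) * y_s - sig_t * np.expm1(h) * eps_model(y_s, s)

alpha_bar_fn = lambda t: np.exp(-10.0 * t**2)    # hypothetical VP schedule
rng = np.random.default_rng(5)
y = rng.standard_normal((64, 64))                # start near N(0, I) at t = 1
for s, t in [(1.0, 0.5), (0.5, 0.1), (0.1, 0.01)]:
    y = dpm_solver_1_step(y, s, t, alpha_bar_fn,
                          lambda y_, t_: np.zeros_like(y_))
```

With only three function evaluations, the step sizes here are far larger than a Wiener-process discretization would tolerate, which is the efficiency argument made above for ODE sampling.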
Table 6 is pseudocode for one example algorithm (Algorithm 6) that illustrates the complete workflow with ODE sampling for sparse CT reconstruction. It may be appreciated that Algorithm 6 may be applied to sparse MRI reconstruction by replacing the FBP function with a corresponding Fourier transform.
Thus, an apparatus, method and/or system, according to the present disclosure, relates to a full tomographic data estimation system with sparse tomographic data as input. In an embodiment, an apparatus, method and/or system may include a patch-based DDPM for sparse image reconstruction. The apparatus, method and/or system yield relatively good anti-artifact performance while preserving structural details and textural perception. The apparatus, method and/or system utilize unsupervised learning, overcoming the difficulty in the acquisition of clinical paired data. The apparatus, method and/or system are configured to divide the tomographic dataset into subsets (i.e., patches) so that a plurality of patch-based reverse diffusion processes can proceed in parallel, enabling deep reconstruction for large-scale datasets such as high-resolution breast CT and photon-counting CT.
In an embodiment, there is provided a full tomographic data estimation circuitry. The full tomographic data estimation circuitry includes a tomographic data preprocessing circuitry, a trained score model circuitry, a parallel subset estimation circuitry, and a subset assembly circuitry. The tomographic data preprocessing circuitry is configured to divide an input sparse tomographic dataset into a number, N, input sparse tomographic data subsets. The parallel subset estimation circuitry is configured to estimate a respective estimated full tomographic data subset for each input sparse tomographic data subset in the input sparse tomographic dataset. The estimating is performed in parallel. The subset assembly circuitry is configured to combine the N estimated full tomographic data subsets to form an estimated full tomographic dataset.

FIG. 3 illustrates a functional block diagram 300 of a system that includes a full tomographic data estimation system 302, according to several embodiments of the present disclosure. Full tomographic data estimation system 302 includes a full tomographic data estimation circuitry 304, a training circuitry 308, and a score model circuitry 310-1. System 300 may further include a computing device 306. In some embodiments, system 300 may include an image reconstruction circuitry 352. The score model circuitry 310-1 may include or correspond to an artificial neural network (ANN) 312. In one non-limiting example, the ANN 312 may correspond to a U-Net architecture. However, this disclosure is not limited in this regard. Other ANN architectures may be implemented consistent with the present disclosure. As is known, a U-Net is a fully convolutional neural network, and is a relatively popular architecture for medical image segmentation. Each U-Net is an encoder-decoder network that includes one or more Encoder and Decoder units.
The training circuitry 308 includes a subset training circuitry 320, training data 322 and may include a loss function 323. In one nonlimiting example, the loss function 323 may correspond to Eq. (10), as described herein. Generally, training circuitry 308 is configured to receive training input data 307, and to determine network parameters, θ, associated with score model circuitry 310-1, i.e., the score estimation model, sθ. The training input data 307 may include one or more training full tomographic datasets, Y, and other training-related parameters, as described herein, including, e.g., diffusion parameters β1 and βT. The training circuitry 308, e.g., subset training circuitry 320, is configured to train ANN 312
(corresponding to the score estimation model), based, at least in part, on training full tomographic data subsets of the training full tomographic dataset, Y. Full tomographic data estimation circuitry 304 includes a tomographic data preprocessing circuitry 330, a parallel subset estimation circuitry 332 (that includes N subset estimation circuitries 332-1, …, 332-N), a parallel subset solver circuitry 334 (that includes N subset solver circuitries 334-1, …, 334-N), and a subset assembly circuitry 336. Full tomographic data estimation circuitry 304 may further include a trained score model circuitry 310-2 (that includes trained ANN 312). The trained score model circuitry 310-2 may be coupled to or included in parallel subset estimation circuitry 332. The trained score model circuitry 310-2 corresponds to score model circuitry 310-1, after training, as described herein. Generally, full tomographic data estimation circuitry 304 is configured to receive the network parameters, θ, corresponding to score model circuitry 310-1, after training, thus, corresponding to trained score model circuitry 310-2. Full tomographic data estimation circuitry 304 is further configured to receive input data and parameters 311. For example, input data may include a selected under-sampled tomographic dataset, Z. The input parameters may include one or more of: a number of time steps, T, the diffusion parameters β1, and βT, and an under-sampling mask, M, that corresponds to the selected under-sampled tomographic dataset, Z. Full tomographic data estimation circuitry 304, e.g., subset assembly circuitry 336, is configured to provide as output, estimated full tomographic dataset, Y, that corresponds to the input under-sampled tomographic dataset, Z.
Computing device 306 may include, but is not limited to, a computing system (e.g., a server, a workstation computer, a desktop computer, a laptop computer, a tablet computer, an ultraportable computer, an ultramobile computer, a netbook computer and/or a subnotebook computer, etc.), and/or a smart phone. Computing device 306 includes processor circuitry 340, a memory circuitry 342, input/output (I/O) circuitry 344, a user interface (UI) 346, and data store 348. Processor circuitry 340 includes a plurality of processing units 350-1, …, 350- p. One or more of the processing units 350-1, …, 350-p may include and/or may correspond to a graphics processing unit (GPU). Processor circuitry 340 is configured to perform operations of full tomographic data estimation system 302, including, for example, full tomographic data estimation circuitry 304, training circuitry 308, and/or score model circuitry 310-1. Processor circuitry 340 may be further configured to perform operations of image reconstruction circuitry 352.
Memory circuitry 342 may include a plurality of memory units 342-1, …, 342-p, similar to the processing units 350-1, …, 350-p of processor circuitry 340. Thus, each processing unit, e.g., processing unit 350-1, may have an associated memory unit, e.g., memory unit 342-1, configured to store a respective subset of data, as described herein. In one nonlimiting example, each pair of processing unit 350-1 and memory unit 342-1, may be distributed. Memory circuitry 342 may be configured to store data associated with full tomographic data estimation circuitry 304 (including tomographic data preprocessing circuitry 330, parallel subset estimation circuitry 332, parallel subset solver circuitry 334, and trained score model circuitry 310-2), score model circuitry 310-1, and/or training circuitry 308. Memory circuitry 342 may be further configured to store training input data 307, input data and parameters 311, and estimated full tomographic data 351. I/O circuitry 344 may be configured to provide wired and/or wireless communication functionality for full tomographic data estimation system 302. For example, I/O circuitry 344 may be configured to receive training input data 307, and/or input data and parameters 311. UI 346 may include a user input device (e.g., keyboard, mouse, microphone, touch sensitive display, etc.) and/or a user output device, e.g., a display. Data store 348 may be configured to store one or more of training input data 307, input data and parameters 311, training data 322, network parameters 309, and/or other data associated with score model circuitry 310-1, and/or training circuitry 308, as described herein. Other data may include, for example, function parameters related to loss function 323, etc.

The operation of full tomographic data estimation system 302 may be best understood when considered in combination with Tables 1 through 6. Operation of full tomographic data estimation system 302 may include two portions: training and estimating.
Training operations are configured to train score model circuitry 310-1, i.e., score model, sθ, as described herein. Estimating operations are configured to estimate a full tomographic dataset, for a sparse tomographic dataset input based, at least in part, on the trained score model, sθ, as described herein. As used herein, score model circuitry is designated 310-1 prior to and during training, and is designated 310-2 after training. The training operations are configured to be performed on subsets, i.e., patches, of a training full tomographic dataset. The estimating operations are configured to be performed in parallel across the N patches of a sparse tomographic dataset. During training, full tomographic data estimation system 302, e.g., training circuitry 308, is configured to manage training operations of score model circuitry 310-1 (and ANN
312), that corresponds to the score estimation model, sθ, as described herein. It may be appreciated that training circuitry 308 is configured to conduct training operations on subsets (y) of a training full tomographic dataset. Thus, for each training full tomographic dataset, Y, included in the training input data 307, training circuitry 308 (e.g., subset training circuitry 320) may be configured to divide a selected training full tomographic dataset into N training full tomographic data subsets. The ANN 312 of score model circuitry 310-1 may then be trained to yield a trained score estimation model. Initially, training circuitry 308 is configured to receive and/or retrieve training input data 307. The training input data 307 may include one or more training full tomographic dataset(s), Y, as described herein. Training circuitry 308 may be configured to determine/retrieve a probability distribution of the received/retrieved training full tomographic dataset, p(Y). Training circuitry 308 may be further configured to retrieve configuration (e.g., diffusion) parameters (β1, βT). In one nonlimiting example, the diffusion parameters may be included in training input data 307, and may be stored in training data 322. Parameters (e.g., θ) of the score model, e.g., score model circuitry 310-1 (including ANN 312), may be initialized randomly by training circuitry 308 (e.g., subset training circuitry 320). Training circuitry 308 is configured to perform training operations on a training full tomographic data subset (i.e., patch). Training operations may thus include dividing the training full tomographic dataset into N subsets. Training operations on subsets of training full tomographic data are configured to be performed iteratively, and may be repeated until convergence is achieved. The iterative training operations include selecting a training full tomographic dataset, Y, that has a probability distribution p(Y), and dividing Y into N subsets (e.g., patches).
Subset training circuitry 320 may be configured to extract a random patch y(0) from the training full tomographic dataset. Training circuitry 308, e.g., subset training circuitry 320, may then be configured to generate/determine t, based, at least in part, on a uniform distribution. Subset training circuitry 320 may then be configured to update θ with a gradient related to the loss function of Eq. (10), as described herein. The iterative operations may be performed/repeated until convergence is achieved. Once convergence is achieved, the parameters of score model, sθ, may be set, and correspond to an output of training operations. The score model parameters may then be provided to or may correspond to trained score model circuitry 310-2 of full tomographic data estimation circuitry 304. Full
tomographic data estimation circuitry 304 may be configured to perform estimation operations, as described herein, e.g., Table 2 (Algorithm 2). Initially, full tomographic data estimation circuitry 304, e.g., tomographic data preprocessing circuitry 330, may be configured to receive and/or retrieve configuration data (T, parameters (β1, βT), score model, sθ, and associated parameters). The score model may be loaded to trained score model circuitry 310-2. The received/retrieved data may be included in input data and parameters 311. In some embodiments, NFE data, as described herein, may be similarly received and/or retrieved by, for example, tomographic data preprocessing circuitry 330. Full tomographic data estimation circuitry 304, e.g., tomographic data preprocessing circuitry 330, may be configured to receive and/or retrieve input data (Z, M), included in input data and parameters 311. Intermediate parameters, including sparse image data, X̄, noisy fully sampled tomographic data, Ȳ, and pseudo fully sampled tomographic data, Ỹ, may be determined (as described herein) by, for example, tomographic data preprocessing circuitry 330, based, at least in part, on sparse tomographic data, Z, and under-sampling mask, M. Pseudo fully sampled tomographic data, Ỹ, and under-sampling mask, M, may then be divided into patches 331 by tomographic data preprocessing circuitry 330. A set of time steps at evenly spaced time intervals, tj, for j = 1,…, T, from 1 to 0, exclusive, may then be generated by tomographic data preprocessing circuitry 330. The patches 331 may then be provided to the parallel subset estimation circuitry 332. Parallel subset estimation circuitry 332 may then be configured to estimate a respective estimated full tomographic data subset for each input sparse tomographic data subset. The estimating is configured to be performed in parallel, across the subsets (i.e., i = 1, …, N).
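Dividing a tomographic-domain array into N patches 331 with a fixed stride might be sketched as follows. The patch size and stride are illustrative; with a stride smaller than the patch size, the patches overlap, as in one of the examples above.

```python
import numpy as np

def extract_patches(A, d, stride):
    """Divide a sinogram-domain array into d x d patches at a fixed stride,
    as done for both the pseudo full dataset and the down-sampling mask.
    Returns the stacked patches and their top-left coordinates."""
    patches, coords = [], []
    for r in range(0, A.shape[0] - d + 1, stride):
        for c in range(0, A.shape[1] - d + 1, stride):
            patches.append(A[r:r + d, c:c + d])
            coords.append((r, c))
    return np.stack(patches), coords

rng = np.random.default_rng(6)
Y_tilde = rng.standard_normal((128, 128))        # toy pseudo full sinogram
patches, coords = extract_patches(Y_tilde, d=64, stride=32)
```

Each patch (together with its mask patch) can then be dispatched to a separate processing unit, which is the basis for the parallel estimation described above.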
The estimating may include determining y(i)(1), that has a normal distribution, and, for each time interval, iteratively determining the conditioned patch using Eq. (19), and the next sample, y(i)(tj+1), using Eq. (16), conditioned by Eq. (19), as described herein. Parallel subset estimation circuitry 332 and trained score model circuitry 310-2 may exchange patch-based data 333 that includes score model input data and corresponding score model results. At the completion of the estimation operations, the resulting set of patches 337, {y(i)(tT)}, i = 1, …, N, may then be provided to subset assembly circuitry 336. Subset assembly circuitry 336 may then be configured to combine the set of estimated full tomographic data subsets to form a corresponding estimated full tomographic dataset, Y. The estimated full tomographic dataset, Y, may then be provided as output 351 to image reconstruction circuitry 352. In some
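Reassembly by subset assembly circuitry 336 can be sketched by placing the restored patches back at their coordinates; averaging overlapped pixels is one simple combination rule, an assumption rather than the disclosed method.

```python
import numpy as np

def assemble_patches(patches, coords, shape):
    """Combine restored patches into a full sinogram-domain array,
    averaging wherever patches overlap."""
    out = np.zeros(shape)
    count = np.zeros(shape)
    for p, (r, c) in zip(patches, coords):
        d0, d1 = p.shape
        out[r:r + d0, c:c + d1] += p
        count[r:r + d0, c:c + d1] += 1.0
    return out / np.maximum(count, 1.0)

# Round trip: splitting and reassembling a toy sinogram reproduces it exactly,
# since overlapping regions then hold identical values.
rng = np.random.default_rng(7)
Y = rng.standard_normal((128, 128))
coords = [(r, c) for r in range(0, 65, 32) for c in range(0, 65, 32)]
patches = [Y[r:r + 64, c:c + 64] for (r, c) in coords]
Y_rec = assemble_patches(patches, coords, Y.shape)
```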
embodiments, image reconstruction circuitry 352 may be configured to reconstruct an estimated full image 353 based, at least in part, on the estimated full tomographic dataset, Y. In some embodiments, full tomographic data estimation circuitry 304 may further include a parallel subset solver circuitry 334. The parallel subset solver circuitry 334 may include N subset solver circuitries 334-1,…, 334-N. In one nonlimiting example, the parallel subset solver circuitry 334 may include one or more ordinary differential equation solvers, as described herein. Parallel subset solver circuitry 334 and parallel subset estimation circuitry 332 are configured to exchange patch-based data 335, during estimation operations. Thus, a full tomographic data estimation system may be configured to estimate a full tomographic dataset based, at least in part, on a sparse tomographic dataset, and based, at least in part, on a trained score model.

FIG. 4 is a flowchart 400 of operations for training a score model, according to various embodiments of the present disclosure. In particular, the flowchart 400 illustrates training a score model based, at least in part, on a full tomographic dataset, unsupervised. The operations may be performed, for example, by the full tomographic data estimation system 302 (e.g., training circuitry 308) of FIG. 3. Operations of this embodiment may begin with receiving and/or retrieving a training full tomographic dataset at operation 402. A probability distribution of training full tomographic dataset(s) may be determined or retrieved at operation 404. Operation 406 may include retrieving configuration (e.g., diffusion) parameters (β1, βT). A score model (e.g., model parameters) may be initialized randomly at operation 408. A training full tomographic dataset, Y, that has a probability distribution p(Y) may be selected at operation 410. The training full tomographic dataset, Y, may be divided into N subsets (e.g., patches) at operation 412.
A random patch y(0) may be extracted from the training full tomographic dataset at operation 414. Operation 416 may include generating/determining t from a uniform distribution. Operation 418 may include updating θ with a gradient related to the loss function. Operation 420 may include determining whether the model parameters have converged. If convergence is achieved, the parameters of the score model, sθ, may be set at operation 422, and program flow may continue at operation 424. If convergence is not achieved, program flow may proceed to operation 410, and operations 412 through 420 may be repeated. Thus, a score model, sθ, may be trained, unsupervised, based, at least in part, on a full tomographic dataset.
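The forward-diffusion portion of the training loop above (operations 414 through 418) can be sketched in NumPy. This is a minimal illustration under standard DDPM assumptions (linear β schedule, noise-prediction parameterization); the function name and defaults are illustrative rather than taken from the disclosure, and the gradient update on θ (operation 418) is omitted:

```python
import numpy as np

def ddpm_forward_sample(y0, t, beta_1=1e-4, beta_T=0.02, T=1000, rng=None):
    """Noise a clean patch y0 to diffusion step t (cf. operations 414-416).

    Returns the noised patch y_t and the Gaussian noise eps that produced it;
    a score model s_theta would then be trained to predict eps from (y_t, t).
    """
    rng = np.random.default_rng() if rng is None else rng
    betas = np.linspace(beta_1, beta_T, T)   # linear beta schedule (assumed)
    alpha_bar = np.cumprod(1.0 - betas)      # cumulative product of (1 - beta)
    eps = rng.standard_normal(y0.shape)
    y_t = np.sqrt(alpha_bar[t]) * y0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return y_t, eps
```

At t near T, alpha_bar is close to zero, so y_t is nearly pure Gaussian noise, which is what makes sampling from a normal distribution a valid starting point for the reverse process.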
FIG. 5 is a flowchart 500 of operations for estimating full tomographic data, according to various embodiments of the present disclosure. In particular, the flowchart 500 illustrates estimating a full tomographic dataset for a sparse tomographic dataset input. The operations may be performed, for example, by the full tomographic data estimation system 302 (e.g., full tomographic data estimation circuitry 304) of FIG. 3.

Operations of this embodiment may begin with receiving and/or retrieving configuration data (e.g., T, parameters (β1, βT), and score model, sθ, parameters) at operation 502. Operation 504 may include receiving and/or retrieving input data (Z, M). Operation 506 may include determining intermediate parameters, e.g., ᾱt, β̄t, and σt. Operation 508 may include dividing Z and M into subsets (e.g., patches). Operation 510 may include generating a set of time steps at evenly spaced time intervals. Operation 512 may include estimating a respective estimated full tomographic data subset for each input sparse tomographic data subset. The estimating is performed in parallel for the N subsets. The estimating may include determining yT, which has a normal distribution, and, for each time interval, iteratively determining
using Eq. (19), and using Eq. (16), conditioned by Eq. (19), as described herein. Operation 514 may include combining the set of estimated full tomographic data subsets to form a corresponding estimated full tomographic dataset, Y. In some embodiments, operation 516 may include reconstructing the estimated full tomographic dataset into estimated image data. Program flow may then continue at operation 520. Thus, a full tomographic dataset may be estimated from a sparse-view projection dataset input. The estimated full tomographic dataset may then be reconstructed into an estimated full image.

Generally, the present disclosure relates to a full tomographic data estimation system with sparse tomographic data as input. In an embodiment, an apparatus, method and/or system may include a patch-based denoising diffusion probabilistic model (DDPM) for sparse image reconstruction. The apparatus, method and/or system yield relatively good anti-artifact performance while preserving structural details and textural perception. The apparatus, method and/or system utilize unsupervised learning, overcoming the difficulty of acquiring clinical paired data. The apparatus, method and/or system are configured to divide the tomographic dataset into subsets (i.e., patches) so that a plurality of patch-based reverse diffusion processes can proceed in parallel, enabling deep reconstruction for large-scale datasets such as high-resolution breast CT and photon-counting CT.
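The per-patch reverse process of operation 512 can be sketched as an ancestral DDPM sampler. This is an illustrative stand-in only: the disclosure may instead use an ordinary differential equation solver and the conditioning of Eqs. (16) and (19), and `denoiser` is a hypothetical placeholder for the trained score model sθ:

```python
import numpy as np

def ddpm_reverse(sample_shape, denoiser, beta_1=1e-4, beta_T=0.02, T=1000,
                 rng=None):
    """Reverse diffusion for one patch: start from Gaussian noise y_T and
    iterate t = T-1, ..., 0 with the standard ancestral DDPM update.

    `denoiser(y_t, t)` predicts the noise eps at step t (illustrative
    signature; in the disclosure this role is played by s_theta).
    """
    rng = np.random.default_rng() if rng is None else rng
    betas = np.linspace(beta_1, beta_T, T)
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)
    y = rng.standard_normal(sample_shape)          # y_T ~ N(0, I)
    for t in range(T - 1, -1, -1):
        eps_hat = denoiser(y, t)
        # Posterior mean of the ancestral sampler.
        y = (y - betas[t] / np.sqrt(1.0 - alpha_bar[t]) * eps_hat) \
            / np.sqrt(alphas[t])
        if t > 0:                                  # add noise except at t = 0
            y = y + np.sqrt(betas[t]) * rng.standard_normal(sample_shape)
    return y
```

Because each patch is sampled independently, N such loops can run concurrently, which is the parallelism exploited by the parallel subset estimation circuitry.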
Experimental data

To evaluate the performance of the patch-based technique for sparse image reconstruction, according to the present disclosure, the 2016 NIH-AAPM-Mayo Clinic Low-Dose CT Grand Challenge dataset was used. The dataset has 2,378 paired CT images with a slice thickness of 3 mm from 10 patients. In the experiments, 1,923 paired images were selected from 8 patients as the training set, and 455 paired images from the remaining 2 patients as the test set. The image size was 512 × 512. Simulated projection datasets were obtained with a distance-driven algorithm. The distance from the x-ray source focal spot to the isocenter of the imaging field of view was 595 mm. The distance from the detector to the source was 1085.6 mm. The number of detector elements was 736, each of which had a pitch of 1.2854 mm. The image pixel size was 0.6641 mm. In total, 736 projection views were uniformly collected as a full projection dataset. In the experiments, the patch size was 64 × 64. However, this disclosure is not limited in this regard.

For the forward and backward diffusion processes, the continuous time range was t ∈ [0, 1]. β1 and βT were 10⁻⁴ and 0.02, respectively. The model was trained with the Adam optimizer at a learning rate of 1 × 10⁻⁴. The training process converged relatively well after 2 × 10⁵ iterations on a computing server equipped with an Nvidia RTX A5000 GPU. When sampling, the projection data was uniformly down-sampled to 92 projection views. The stride for extracting patches was set to 32 for overlapped patches. The number of function evaluations (NFE) was set to 1000. The conditioning parameters γ and η were set to 1.0 and 0.1, respectively.

Results indicated that image artifacts contained in an FBP sparse CT reconstruction were substantially eliminated. It may be further appreciated that structural features were relatively well-preserved, with neither significant blurring nor false features.
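The overlapping-patch extraction (64 × 64 patches at stride 32) and the reassembly of per-patch estimates can be sketched as follows. The function names are illustrative, and averaging overlapped estimates is one plausible combination rule, not necessarily the one used in the experiments:

```python
import numpy as np

def extract_patches(data, patch=64, stride=32):
    """Divide a 2-D dataset into overlapping patches (cf. operation 508).

    Returns a list of ((row, col), patch) pairs so each patch remembers
    where it came from.
    """
    H, W = data.shape
    return [((i, j), data[i:i + patch, j:j + patch])
            for i in range(0, H - patch + 1, stride)
            for j in range(0, W - patch + 1, stride)]

def assemble_patches(patches, shape, patch=64):
    """Recombine estimated patches into a full dataset (cf. operation 514),
    averaging wherever overlapping patches cover the same element."""
    acc = np.zeros(shape)
    cnt = np.zeros(shape)
    for (i, j), p in patches:
        acc[i:i + patch, j:j + patch] += p
        cnt[i:i + patch, j:j + patch] += 1.0
    return acc / np.maximum(cnt, 1.0)   # guard against uncovered elements
```

With stride equal to half the patch size, every interior element is covered by up to four patches, which smooths seams between independently estimated patches.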
Regarding textural analysis, the reconstructed full CT image according to the present disclosure had a texture very similar to that of the ground truth. To analyze the statistics of reconstruction quality, the noise power spectrum (NPS) may be determined as:

NPS(fx, fy) = (Δx Δy / (Nx Ny)) 〈|DFT2D[Δμ(x, y)]|²〉
where Δx = Δy is the physical size of the pixel, Nx = Ny = 127 are the height and width of a region of interest (ROI), and Δμ(x, y) represents the noise-only realization obtained by subtracting the ground truth from each reconstructed ROI. A total of 129 × 129 = 16,641 ROIs were extracted to calculate the NPS. The operator 〈∙〉 denotes the ensemble average over all
the ROIs. Experimental data indicated that the reconstruction error of a method consistent with the present disclosure is mainly concentrated in the mid-frequency band; thus, reconstruction results are relatively good.

As used in any embodiment herein, “network”, “model”, “ANN”, and “neural network” (NN) may be used interchangeably, and all refer to an artificial neural network that has an appropriate network architecture. Network architectures may include one or more layers that may be sparse, dense, linear, convolutional, and/or fully connected. It may be appreciated that deep learning includes training an ANN. Each ANN may include, but is not limited to, a deep NN (DNN), a convolutional neural network (CNN), a deep CNN (DCNN), a multilayer perceptron (MLP), etc. Training generally corresponds to “optimizing” the ANN according to a defined metric, e.g., minimizing a cost (e.g., loss) function.

As used in any embodiment herein, the term “logic” may refer to an app, software, firmware and/or circuitry configured to perform any of the aforementioned operations. Software may be embodied as a software package, code, instructions, instruction sets and/or data recorded on a non-transitory computer-readable storage medium. Firmware may be embodied as code, instructions or instruction sets and/or data that are hard-coded (e.g., nonvolatile) in memory devices.

“Circuitry”, as used in any embodiment herein, may include, for example, singly or in any combination, hardwired circuitry, programmable circuitry such as computer processors comprising one or more individual instruction processing cores (including, but not limited to, graphics processing units), state machine circuitry, and/or firmware that stores instructions executed by programmable circuitry.
The logic may, collectively or individually, be embodied as circuitry that forms part of a larger system, for example, an integrated circuit (IC), an application-specific integrated circuit (ASIC), a system-on-chip (SoC), desktop computers, laptop computers, tablet computers, servers, smart phones, etc.

Memory circuitry 342 may include one or more of the following types of memory: semiconductor firmware memory, programmable memory, non-volatile memory, read-only memory, electrically programmable memory, random access memory, flash memory, magnetic disk memory, solid state memory, and/or optical disk memory. Either additionally or alternatively, system memory may include other and/or later-developed types of computer-readable memory.

Embodiments of the operations described herein may be implemented in a computer-readable storage device having stored thereon instructions that when executed by one or more
processors perform the methods. The processor may include, for example, a processing unit and/or programmable circuitry. The storage device may include a machine-readable storage device including any type of tangible, non-transitory storage device, for example, any type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic and static RAMs, erasable programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), flash memories, magnetic or optical cards, or any type of storage device suitable for storing electronic instructions.

The terms and expressions which have been employed herein are used as terms of description and not of limitation, and there is no intention, in the use of such terms and expressions, of excluding any equivalents of the features shown and described (or portions thereof), and it is recognized that various modifications are possible within the scope of the claims. Accordingly, the claims are intended to cover all such equivalents. Various features, aspects, and embodiments have been described herein. The features, aspects, and embodiments are susceptible to combination with one another as well as to variation and modification, as will be understood by those having skill in the art. The present disclosure should, therefore, be considered to encompass such combinations, variations, and modifications.
Claims
CLAIMS

What is claimed is:

1. A full tomographic data estimation circuitry comprising:
a tomographic data preprocessing circuitry configured to divide an input sparse tomographic dataset into a number, N, of input sparse tomographic data subsets;
a trained score model circuitry;
a parallel subset estimation circuitry configured to estimate a respective estimated full tomographic data subset for each input sparse tomographic data subset in the input sparse tomographic dataset, the estimating performed in parallel; and
a subset assembly circuitry configured to combine the N estimated full tomographic data subsets to form an estimated full tomographic dataset.

2. The full tomographic data estimation circuitry of claim 1, wherein each subset corresponds to a two-dimensional (2D) patch.

3. The full tomographic data estimation circuitry of claim 1, wherein the estimating comprises determining a solution to an ordinary differential equation.

4. The full tomographic data estimation circuitry according to any one of claims 1 to 3, wherein the estimating each estimated full tomographic data subset is conditioned on a respective pseudo full tomographic data subset.

5. The full tomographic data estimation circuitry of claim 4, wherein each pseudo full tomographic data subset is generated based, at least in part, on a respective input sparse tomographic data subset, and based, at least in part, on a respective reconstructed image dataset corresponding to the input sparse tomographic dataset.

6. A method for estimating full tomographic data, the method comprising:
receiving, by a full tomographic estimation circuitry, an input sparse tomographic dataset;
dividing, by a tomographic data preprocessing circuitry, the input sparse tomographic dataset into a number, N, of input sparse tomographic data subsets;
estimating, by a parallel subset estimation circuitry comprising a trained score model circuitry, a respective estimated full tomographic data subset for each input sparse
tomographic data subset in the input sparse tomographic dataset, the estimating performed in parallel; and
combining, by a subset assembly circuitry, the N estimated full tomographic data subsets to form an estimated full tomographic dataset.

7. The method of claim 6, further comprising reconstructing, by an image reconstruction circuitry, an estimated image dataset based, at least in part, on the estimated full tomographic dataset.

8. The method of claim 6, wherein each subset corresponds to a two-dimensional (2D) patch.

9. The method of claim 6, wherein the estimating comprises determining a solution to an ordinary differential equation.

10. The method of claim 6, wherein the trained score model circuitry is trained based, at least in part, on a training full tomographic dataset that has been divided into N training full tomographic data subsets.

11. The method of claim 10, wherein the training comprises training a score model, sθ.

12. The method of claim 10, wherein the training is unsupervised.

13. The method of claim 10, wherein the training comprises training an artificial neural network.

14. The method of claim 6, wherein the estimating each estimated full tomographic data subset is conditioned on a respective pseudo full tomographic data subset.

15. The method of claim 14, wherein each pseudo full tomographic data subset is generated based, at least in part, on a respective input sparse tomographic data subset, and based, at least in part, on a respective reconstructed image dataset corresponding to the input sparse tomographic dataset.

16. A full tomographic data estimation system comprising:
a training circuitry configured to train a score model based, at least in part, on a training full tomographic dataset; and
a full tomographic data estimation circuitry comprising:
a tomographic data preprocessing circuitry configured to divide an input sparse tomographic dataset into a number, N, of input sparse tomographic data subsets,
a trained score model circuitry,
a parallel subset estimation circuitry configured to estimate a respective estimated full tomographic data subset for each input sparse tomographic data subset in the input sparse tomographic dataset, the estimating performed in parallel, and
a subset assembly circuitry configured to combine the N estimated full tomographic data subsets to form an estimated full tomographic dataset.

17. The full tomographic data estimation system of claim 16, wherein the estimating comprises determining a solution to an ordinary differential equation.

18. The full tomographic data estimation system of claim 16, wherein the training is unsupervised.

19. The full tomographic data estimation system of claim 16, wherein the estimating each estimated full tomographic data subset is conditioned on a respective pseudo full tomographic data subset.

20. A computer-readable storage device having stored thereon instructions that when executed by one or more processors result in the following operations comprising: the method according to any one of claims 6 to 15.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202263426415P | 2022-11-18 | 2022-11-18 | |
| US63/426,415 | 2022-11-18 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2024108203A1 true WO2024108203A1 (en) | 2024-05-23 |
Family
ID=91085564
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2023/080441 Ceased WO2024108203A1 (en) | 2022-11-18 | 2023-11-20 | Patch-based denoising diffusion probabilistic model for sparse tomographic imaging |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2024108203A1 (en) |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20160005169A1 (en) * | 2013-03-15 | 2016-01-07 | Synaptive Medical (Barbados) Inc. | System and method for detecting tissue and fiber tract deformation |
| US20190206095A1 (en) * | 2017-12-29 | 2019-07-04 | Tsinghua University | Image processing method, image processing device and storage medium |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 23892715; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 23892715; Country of ref document: EP; Kind code of ref document: A1 |