CN110599528A - Unsupervised three-dimensional medical image registration method and system based on neural network - Google Patents
- Publication number: CN110599528A
- Application number: CN201910828807.7A
- Authority
- CN
- China
- Prior art keywords
- image
- layer
- feature map
- slice
- upsampling
- Legal status: Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
Abstract
Description
Technical Field
The present invention relates to the field of medical image registration, and in particular to an unsupervised three-dimensional medical image registration method and system based on a neural network, mainly used for the registration of three-dimensional human brain images.
Background Art
Medical image registration refers to seeking one or a series of spatial transformations for a medical image so that it becomes spatially consistent with the corresponding points of another medical image: the same anatomical point of the human body occupies the same spatial position in the two matched images. Conventionally, the image to be registered is called the moving image (Moving Image), and the transformation target is called the fixed image or reference image (Fixed Image).
Many open-source tools are available for the segmentation and registration of medical images. FreeSurfer analyzes and visualizes structural and functional neuroimaging data from slices or time series, and performs well at skull stripping, B1 bias-field correction, gray/white matter segmentation, and morphometric measurement. FSL, similar to FreeSurfer, provides comprehensive analysis of fMRI, MRI, and DTI brain imaging data. The ITK toolkit supports segmentation and registration of multi-dimensional images. NiftyReg implements rigid, affine, and nonlinear registration of NIfTI images and supports GPU execution. elastix, an open-source package built on ITK, collects common algorithms for medical image registration. ANTs, currently among the better registration tools, achieves diffeomorphic deformable registration. All of these tools fit image deformations with traditional registration methods. In addition, many deep-learning registration methods have been proposed, such as DIRNet, BIRNet, and VoxelMorph, which use a neural network to obtain the image transformation parameters and then warp the moving image through a transformation network to obtain the registration result.
Although the above registration tools and methods achieve good results, the following problems remain:
(1) Some methods require manual labeling and supervision information, demand a high level of registration expertise, and are relatively slow. Marking the feature points and feature regions of medical images, and obtaining image supervision information, must be done by professional medical imaging physicians, which is extremely difficult for registration practitioners without relevant medical experience. Moreover, the feature marks produced by different physicians, or by the same physician at different times, may differ; the manual marking process is time-consuming and laborious; and the physician's subjective judgment strongly affects the registration result.
(2) Registration accuracy is relatively low. Although methods proposed in recent years have made considerable progress on registration results, accuracy still needs to improve.
The present invention therefore provides an unsupervised three-dimensional medical image registration method and system based on a neural network to solve the above problems.
Summary of the Invention
In view of the above deficiencies of the prior art, the present invention provides an unsupervised three-dimensional medical image registration method and system based on a neural network, for achieving fast registration of medical images and for improving registration accuracy.
In a first aspect, the present invention provides an unsupervised three-dimensional medical image registration method based on a neural network, comprising the steps of:
L1, image acquisition: obtain three-dimensional medical images from the public datasets OASIS and ADNI, and/or from the DICOM interface of a CT, MRI, or ultrasound scanner;
L2, preprocess the acquired three-dimensional medical images, including image segmentation, cropping, normalization, and affine alignment; select any one of the affine-aligned images as the fixed image I_F and use the remaining images as moving images I_M; the cropped images all share the same size;
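As a concrete illustration of step L2, the sketch below min-max normalizes a volume to [0, 1] and center-crops or zero-pads it to a common shape so that all images match in size. The target shape, the min-max scheme, and the function name are illustrative assumptions; the patent does not fix any of them.

```python
import numpy as np

def preprocess(vol, target=(160, 192, 224)):
    """Min-max normalize a 3D volume and center-crop/pad it to `target`.

    Illustrative sketch of the normalization and size-unification part of
    step L2 (segmentation and affine alignment are not shown).
    """
    lo, hi = float(vol.min()), float(vol.max())
    vol = (vol - lo) / (hi - lo) if hi > lo else np.zeros_like(vol, dtype=float)
    out = np.zeros(target, dtype=float)
    src, dst = [], []
    for v, t in zip(vol.shape, target):
        if v >= t:                       # crop centrally
            a = (v - t) // 2
            src.append(slice(a, a + t)); dst.append(slice(0, t))
        else:                            # pad centrally with zeros
            a = (t - v) // 2
            src.append(slice(0, v)); dst.append(slice(a, a + v))
    out[tuple(dst)] = vol[tuple(src)]
    return out
```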
L3, train the neural network on the preprocessed fixed image I_F and moving images I_M to obtain a trained neural network model;
L4, input the medical image to be registered into the trained neural network model for registration, and obtain and output the registered image of that medical image;
In step L3, training the neural network on the preprocessed fixed image I_F and moving images I_M to obtain a trained neural network model comprises:
S1. Input the preprocessed fixed image I_F and moving image I_M into the neural network through its input layer; each group of input data comprises the fixed image I_F and one moving image I_M;
S2. Downsample the fixed image I_F and moving image I_M from the input layer and output a feature map. The downsampling comprises three downsampling stages, followed by one convolution with a 3×3×3 kernel and one LeakyReLU activation. The three stages correspond to three downsampling layers, denoted the first, second, and third downsampling layers in execution order; each downsampling layer comprises a convolutional layer with a 3×3×3 kernel, a LeakyReLU activation layer, and a max-pooling layer;
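Assuming size-preserving ("same"-padded) 3×3×3 convolutions and 2×2×2 max pooling with stride 2 (the patent fixes only the kernel size, so the pooling parameters are an assumption), each of the three downsampling stages halves every spatial dimension. A small helper makes the shape progression explicit:

```python
def unet_shapes(input_shape, levels=3):
    """Spatial sizes along the downsampling path, assuming 'same'-padded
    convolutions (size-preserving) and 2x2x2 max pooling with stride 2
    (size-halving). Returns the input shape plus one shape per level."""
    shapes = [tuple(input_shape)]
    size = list(input_shape)
    for _ in range(levels):
        size = [d // 2 for d in size]
        shapes.append(tuple(size))
    return shapes
```

For a 160×192×224 brain volume this gives 80×96×112, 40×48×56, and 20×24×28 at the three levels.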
S3. Apply feature reweighting to the feature maps output by the LeakyReLU activation layers of the first, second, and third downsampling layers, obtaining three weighted feature maps, namely the first weighted feature map, the second weighted feature map, and the third weighted feature map;
S4. Apply a 1×1×1 convolution to the feature map output in step S2 and output a deformation field (denoted φ_1 below) from the moving image I_M to the fixed image I_F;
S5. Feed the feature map output in step S2 into the upsampling path. The upsampling path comprises three upsampling layers; each comprises an UpSampling layer and a convolutional layer with a 3×3×3 kernel, and each such convolutional layer is followed by a LeakyReLU activation layer. The three upsampling layers correspond to the three upsampling stages and are denoted the first, second, and third upsampling layers in execution order.
The feature map output by the UpSampling layer of the first upsampling layer is fused with the third weighted feature map and fed into the 3×3×3 convolutional layer of the first upsampling layer; the feature map output by the UpSampling layer of the second upsampling layer is fused with the second weighted feature map and fed into the 3×3×3 convolutional layer of the second upsampling layer; the feature map output by the UpSampling layer of the third upsampling layer is fused with the first weighted feature map and fed into the 3×3×3 convolutional layer of the third upsampling layer;
S6. Apply a 1×1×1 convolution to the feature maps output by the first, second, and third upsampling layers, producing the corresponding deformation fields from the moving image I_M to the fixed image I_F, denoted φ_2, φ_3, and φ_4 respectively;
S7. Input the moving image I_M together with each of the four deformation fields output above (denoted φ_1, φ_2, φ_3, and φ_4) into a spatial transformation network; through the spatial transformations of that network, obtain the corresponding warped moving images, denoted I_M(φ_1), I_M(φ_2), I_M(φ_3), and I_M(φ_4);
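The spatial transformation of step S7 resamples the moving image at the positions given by the identity grid plus the deformation field. Below is a minimal numpy sketch of this trilinear sampling (the sampling step of an STN); the function name, border clamping, and voxel-unit displacements are illustrative choices, not the patent's network.

```python
import numpy as np

def warp3d(vol, disp):
    """Warp a 3D volume by a dense displacement field of shape (H, W, D, 3)
    using trilinear interpolation. Out-of-range samples are clamped to the
    border. Illustrative sketch of STN-style sampling."""
    H, W, D = vol.shape
    gi, gj, gk = np.meshgrid(np.arange(H), np.arange(W), np.arange(D),
                             indexing="ij")
    # sampling coordinates = identity grid + displacement (voxel units)
    x = np.clip(gi + disp[..., 0], 0, H - 1)
    y = np.clip(gj + disp[..., 1], 0, W - 1)
    z = np.clip(gk + disp[..., 2], 0, D - 1)
    x0, y0, z0 = np.floor(x).astype(int), np.floor(y).astype(int), np.floor(z).astype(int)
    x1, y1, z1 = np.minimum(x0 + 1, H - 1), np.minimum(y0 + 1, W - 1), np.minimum(z0 + 1, D - 1)
    fx, fy, fz = x - x0, y - y0, z - z0
    out = np.zeros_like(vol, dtype=float)
    # accumulate the eight trilinear corner contributions
    for cx, wx in ((x0, 1 - fx), (x1, fx)):
        for cy, wy in ((y0, 1 - fy), (y1, fy)):
            for cz, wz in ((z0, 1 - fz), (z1, fz)):
                out += wx * wy * wz * vol[cx, cy, cz]
    return out
```

With a zero displacement field the warp is the identity, which is a convenient sanity check.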
S8. Based on the output deformation fields φ_1, φ_2, φ_3, and φ_4 and on the warped images I_M(φ_1), I_M(φ_2), I_M(φ_3), and I_M(φ_4), compute the loss value between the fixed image I_F and the warped images with the loss function, and optimize the neural network by back-propagation until the computed loss value no longer decreases or training reaches the preset number of iterations; training is then complete and the trained neural network model is obtained;
The loss function takes the form of expression ①:

L = −α·S(I_F, I_M(φ_4)) − β·[S(F_1, I_M(φ_3)) + S(F_2, I_M(φ_2)) + S(F_3, I_M(φ_1))] + λ·R  ①

where L is the computed loss value (with φ_1 denoting the deformation field of step S4 and φ_2, φ_3, φ_4 those of the first, second, and third upsampling layers); α and β are constants with α+β=1; R is a regularization term and λ a constant regularization-control parameter; F_1, F_2, and F_3 are pre-specified three-dimensional medical images obtained by downsampling the fixed image I_F, equal in size to the warped images I_M(φ_3), I_M(φ_2), and I_M(φ_1) respectively, with successively decreasing resolutions that are all lower than the resolution of I_F; S(I_F, I_M(φ_4)) is the similarity measure between the fixed image and the full-resolution warped image, and S(F_1, I_M(φ_3)), S(F_2, I_M(φ_2)), and S(F_3, I_M(φ_1)) are the similarity measures at the lower resolutions; all four terms use the same similarity function.
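Assuming the cross-correlation similarity named later in this document (used here in its global rather than windowed form) and a squared-gradient smoothness regularizer R, the multi-level loss of expression ① can be sketched as follows; all function and parameter names, and the default α, β, λ values, are illustrative.

```python
import numpy as np

def ncc(a, b, eps=1e-8):
    """Global normalized cross-correlation between two volumes (in [-1, 1])."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.sqrt((a * a).sum() * (b * b).sum()) + eps))

def smoothness(phi):
    """Regularizer R: mean squared forward difference of the field phi,
    an (H, W, D, 3) displacement array. An assumed choice of R."""
    total = 0.0
    for axis in range(3):
        d = np.diff(phi, axis=axis)
        total += float((d * d).mean())
    return total

def registration_loss(fixed, warped_full, fixed_pyramid, warped_pyramid,
                      phi, alpha=0.5, beta=0.5, lam=0.01):
    """L = -alpha*NCC(I_F, full-res warp) - beta*sum_k NCC(F_k, warp_k) + lam*R."""
    sim = alpha * ncc(fixed, warped_full)
    sim += beta * sum(ncc(f, w) for f, w in zip(fixed_pyramid, warped_pyramid))
    return -sim + lam * smoothness(phi)
```

A perfectly aligned pair with a zero deformation field gives the minimal value −(α + β·k) for k pyramid levels.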
Further, the feature reweighting of step S3 is implemented as follows:
Step S31. Denote each feature map output during downsampling that is to be reweighted as X, with X ∈ R^(H×W×D). Slice X along its D dimension and apply global average pooling to each resulting slice x ∈ R^(H×W), yielding a slice descriptor z for each slice x along the D dimension of X:

z = (1/(H×W)) · Σ_{i=1}^{H} Σ_{j=1}^{W} x(i, j),

where (i, j) denotes a pixel of slice x and x(i, j) is the gray value of slice x at pixel (i, j);
Step S32. Obtain the weight s of each slice x along the D dimension of X as

s = σ(δ(z)),

where σ is the sigmoid activation function, δ is the ReLU activation function, and z is the slice descriptor of slice x obtained in step S31;
Step S33. Load each weight s obtained in step S32 onto its corresponding slice, yielding the reweighted slice x̃ for each slice x along the D dimension of X:

x̃ = F_scale(x, s) = s·x,

where F_scale(x, s) denotes the multiplication of slice x by its corresponding weight s;
Step S34. From the reweighted slices x̃ of all slices x along the D dimension of X obtained in step S33, form the reweighted feature map X̃ corresponding to X.
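Steps S31 to S34 can be sketched directly: each depth slice is pooled to a scalar descriptor z, mapped to a weight s = σ(δ(z)), and rescaled. Applying σ∘δ to the raw descriptor with no learned parameters follows the stated formulas literally; a learned squeeze-and-excitation variant would insert fully connected layers between δ and σ, which the text does not specify.

```python
import numpy as np

def reweight_slices(X):
    """Re-weight each depth slice of a feature map X of shape (H, W, D),
    following steps S31-S34 literally (no learned parameters)."""
    H, W, D = X.shape
    out = np.empty_like(X, dtype=float)
    for d in range(D):
        x = X[:, :, d]
        z = x.mean()                                  # S31: global average pool
        s = 1.0 / (1.0 + np.exp(-max(z, 0.0)))        # S32: s = sigmoid(relu(z))
        out[:, :, d] = s * x                          # S33: F_scale(x, s)
    return out                                        # S34: reweighted map
```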
Further, the similarity measure function is a cross-correlation function.
Further, the spatial transformation network is an STN (spatial transformer network).
Further, the preprocessing also includes data augmentation: each obtained moving image is subjected to a bending (warping) transformation to obtain a corresponding warped image, and each resulting warped image is added as a new moving image.
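One way to realize the bending-transform augmentation is to draw a smooth random displacement field and warp each moving image with it, adding the result as a new moving image. The uniform sampling, the box-filter smoothing, and all names below are illustrative choices; the patent does not specify how the bending transform is generated.

```python
import numpy as np

def random_smooth_field(shape, magnitude=3.0, smooth_iters=4, seed=0):
    """Generate a smooth random 3D displacement field of shape (*shape, 3)
    for warp-based data augmentation. Smoothing by repeated 3-point box
    filtering along each axis keeps the field spatially coherent."""
    rng = np.random.default_rng(seed)
    disp = rng.uniform(-magnitude, magnitude, size=(*shape, 3))
    for _ in range(smooth_iters):
        for axis in range(3):
            disp = (disp + np.roll(disp, 1, axis=axis)
                    + np.roll(disp, -1, axis=axis)) / 3.0
    return disp
```

Each moving image would then be resampled under this field (e.g. with the trilinear warp sketched earlier) to produce a new floating image.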
In a second aspect, the present invention provides an unsupervised three-dimensional medical image registration system based on a neural network, comprising:
an image acquisition unit, which obtains three-dimensional medical images from the public datasets OASIS and ADNI, and/or from the DICOM interface of a CT, MRI, or ultrasound scanner;
an image preprocessing unit, which preprocesses the acquired three-dimensional medical images, including image segmentation, cropping, normalization, and affine alignment, selects any one of the affine-aligned images as the fixed image I_F, and uses the remaining images as moving images I_M, the cropped images all sharing the same size;
a neural network training unit, which trains the neural network on the preprocessed fixed image I_F and moving images I_M to obtain a trained neural network model;
an image registration unit, which inputs the medical image to be registered into the trained neural network model for registration, and obtains and outputs the registered image of that medical image;
The neural network training unit comprises:
an input module, which inputs the preprocessed fixed image I_F and moving image I_M into the neural network through its input layer, each group of input data comprising the fixed image I_F and one moving image I_M;
a downsampling module, which downsamples the fixed image I_F and moving image I_M from the input layer and outputs a feature map; the downsampling comprises three downsampling stages, followed by one convolution with a 3×3×3 kernel and one LeakyReLU activation; the three stages correspond to three downsampling layers, denoted the first, second, and third downsampling layers in execution order, each comprising a convolutional layer with a 3×3×3 kernel, a LeakyReLU activation layer, and a max-pooling layer;
a reweighting module, which applies feature reweighting to the feature maps output by the LeakyReLU activation layers of the first, second, and third downsampling layers, obtaining three weighted feature maps, namely the first, second, and third weighted feature maps;
a first deformation field output module, which applies a 1×1×1 convolution to the feature map output by the downsampling module and outputs a deformation field (denoted φ_1 below) from the moving image I_M to the fixed image I_F;
an upsampling module, which feeds the feature map output by the downsampling module into the upsampling path; the upsampling path comprises three upsampling layers, each comprising an UpSampling layer and a convolutional layer with a 3×3×3 kernel, each such convolutional layer being followed by a LeakyReLU activation layer; the three upsampling layers correspond to the three upsampling stages and are denoted the first, second, and third upsampling layers in execution order; the feature map output by the UpSampling layer of the first upsampling layer is fused with the third weighted feature map and fed into the 3×3×3 convolutional layer of the first upsampling layer; the feature map output by the UpSampling layer of the second upsampling layer is fused with the second weighted feature map and fed into the 3×3×3 convolutional layer of the second upsampling layer; the feature map output by the UpSampling layer of the third upsampling layer is fused with the first weighted feature map and fed into the 3×3×3 convolutional layer of the third upsampling layer;
a second deformation field output module, which applies a 1×1×1 convolution to the feature maps output by the first, second, and third upsampling layers, producing the corresponding deformation fields from the moving image I_M to the fixed image I_F, denoted φ_2, φ_3, and φ_4 respectively;
a spatial transformation module, which inputs the moving image I_M together with each of the deformation fields φ_1, φ_2, φ_3, and φ_4 into the spatial transformation network, and through its spatial transformations obtains the corresponding warped moving images I_M(φ_1), I_M(φ_2), I_M(φ_3), and I_M(φ_4);
a neural network optimization module, which, based on the deformation fields φ_1, φ_2, φ_3, and φ_4 and on the warped images I_M(φ_1), I_M(φ_2), I_M(φ_3), and I_M(φ_4), computes the loss value between the fixed image I_F and the warped images with the loss function, and optimizes the neural network by back-propagation until the computed loss value no longer decreases or training reaches the preset number of iterations, whereupon training is complete and the trained neural network model is obtained;
The loss function takes the form of expression ①:

L = −α·S(I_F, I_M(φ_4)) − β·[S(F_1, I_M(φ_3)) + S(F_2, I_M(φ_2)) + S(F_3, I_M(φ_1))] + λ·R  ①

where L is the computed loss value; α and β are constants with α+β=1; R is a regularization term and λ a constant regularization-control parameter; F_1, F_2, and F_3 are pre-specified three-dimensional medical images obtained by downsampling the fixed image I_F, equal in size to the warped images I_M(φ_3), I_M(φ_2), and I_M(φ_1) respectively, with successively decreasing resolutions that are all lower than the resolution of I_F; S(I_F, I_M(φ_4)) is the similarity measure between the fixed image and the full-resolution warped image, and S(F_1, I_M(φ_3)), S(F_2, I_M(φ_2)), and S(F_3, I_M(φ_1)) are the similarity measures at the lower resolutions; all four terms use the same similarity function.
Further, the reweighting module comprises:
a descriptor acquisition module, which denotes each feature map output during downsampling that is to be reweighted as X, X ∈ R^(H×W×D), slices X along its D dimension, and applies global average pooling to each resulting slice x ∈ R^(H×W), obtaining the slice descriptor z of each slice x along the D dimension of X:

z = (1/(H×W)) · Σ_{i=1}^{H} Σ_{j=1}^{W} x(i, j),

where (i, j) denotes a pixel of slice x and x(i, j) is the gray value of slice x at pixel (i, j);
a slice weight calculation module, which obtains the weight s of each slice x along the D dimension of X as

s = σ(δ(z)),

where σ is the sigmoid activation function, δ is the ReLU activation function, and z is the slice descriptor obtained by the descriptor acquisition module;
a weighting module, which loads each weight s obtained by the slice weight calculation module onto its corresponding slice, yielding the reweighted slice x̃ = F_scale(x, s) = s·x for each slice x along the D dimension of X, where F_scale(x, s) denotes the multiplication of slice x by its corresponding weight s;
a weighted-image acquisition module, which forms, from the reweighted slices x̃ obtained by the weighting module, the reweighted feature map X̃ corresponding to X.
Further, the similarity measure function is a cross-correlation function.
Further, the spatial transformation network is an STN (spatial transformer network).
Further, the preprocessing performed by the image preprocessing unit also includes data augmentation: each obtained moving image is subjected to a bending (warping) transformation to obtain a corresponding warped image, and each resulting warped image is added as a new moving image.
The beneficial effects of the present invention are as follows.
(1) The method and system use a fully unsupervised registration scheme: no marker information or registration supervision is needed during registration. This reduces the demand for labeled data and the errors of subjective human judgment, helps medical workers without relevant medical experience perform image registration, increases registration speed, saves registration time, and saves manpower and material resources.
(2) Through feature-reweighted fusion and a loss function with multi-level supervision, the feature maps from the downsampling path are weighted by their contribution and fused into the upsampling path, while the model loss is supervised at several resolutions. This achieves more effective feature reuse and model supervision and improves registration accuracy.
In addition, the design principle of the present invention is reliable and its structure simple, giving it very broad application prospects.
Description of the drawings
In order to illustrate more clearly the technical solutions in the embodiments of the present invention or in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, a person of ordinary skill in the art can obtain other drawings from these drawings without creative work.
图1是本发明一个实施例的方法的示意性流程图。Fig. 1 is a schematic flowchart of a method according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of the process of obtaining the deformation fields in the method shown in Fig. 1.
图3是本发明一个实施例的系统的示意性框图。Fig. 3 is a schematic block diagram of a system according to one embodiment of the present invention.
Detailed description of the embodiments
In order to enable those skilled in the art to better understand the technical solutions of the present invention, the technical solutions in the embodiments are described below clearly and completely in conjunction with the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention; all other embodiments obtained by a person of ordinary skill in the art without creative work based on these embodiments shall fall within the protection scope of the present invention.
下面对本发明中出现的关键术语进行解释。Key terms appearing in the present invention are explained below.
Figures 1 and 2 illustrate a method according to an embodiment of the present invention. This embodiment takes the registration of 3D human-brain images as an example.
如图1所示,该方法100包括:As shown in Figure 1, the method 100 includes:
第一步,图像采集。The first step is image acquisition.
Human-brain images are downloaded from the public datasets OASIS and ADNI.
具体实现时,本领域技术人员还可以从CT、MRI或超声成像仪的DICOM接口获取所需的三维医学图像。During specific implementation, those skilled in the art can also obtain the required three-dimensional medical image from the DICOM interface of the CT, MRI or ultrasonic imager.
第二步,预处理:The second step, preprocessing:
对第一步中获取到的三维医学图像进行预处理。The 3D medical image obtained in the first step is preprocessed.
The human-brain data collected in the first step contain redundant parts such as the neck, oral cavity, nasal cavity and skull, and vary in size and gray level. These image data are therefore given standard preprocessing. First, the images are segmented to separate the brain from the original data, and the resulting brain images are cropped to a uniform size. The voxel values are then normalized to [0, 1], after which affine alignment is performed. Finally, one image is selected from the affine-aligned data as the fixed image I_F, and the rest serve as floating images I_M.
In addition, to strengthen the robustness and generalization ability of the neural network model, the preprocessing in this step also includes data augmentation: each floating image obtained above is subjected to a bending transformation, and all images produced by these transformations are added as new floating images, i.e. they too belong to the floating images produced by the preprocessing of this step.
The bending transformation in this embodiment may use three different strengths of warping to realize the augmentation; in a concrete implementation, the number of warping strengths can be increased or decreased according to the actual situation.
至此,本步骤中预处理得到的所有的浮动图像构成神经网络的训练集。So far, all the floating images obtained by preprocessing in this step constitute the training set of the neural network.
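The patent does not specify the exact form of the bending transformation; the following NumPy sketch illustrates one plausible augmentation of this kind, sampling a coarse random displacement field and resampling the volume with nearest-neighbour interpolation. The parameters `max_disp` and `grid`, and the nearest-neighbour choice, are illustrative assumptions, not taken from the text.

```python
import numpy as np

def random_warp(volume, max_disp=3.0, grid=4, seed=0):
    """Sketch of a bending-transform augmentation: sample coarse random
    control-point displacements, upsample them to the full volume size, and
    resample the volume with nearest-neighbour interpolation."""
    rng = np.random.default_rng(seed)
    D, H, W = volume.shape
    # one 3-vector displacement per coarse control point
    coarse = rng.uniform(-max_disp, max_disp, size=(3, grid, grid, grid))
    # upsample the coarse field by repetition (a smooth spline upsampling
    # would be used in practice)
    reps = (-(-D // grid), -(-H // grid), -(-W // grid))
    field = np.stack([np.kron(c, np.ones(reps))[:D, :H, :W] for c in coarse])
    # build displaced sampling coordinates, clipped to the volume bounds
    zz, yy, xx = np.meshgrid(np.arange(D), np.arange(H), np.arange(W),
                             indexing="ij")
    z = np.clip(np.rint(zz + field[0]).astype(int), 0, D - 1)
    y = np.clip(np.rint(yy + field[1]).astype(int), 0, H - 1)
    x = np.clip(np.rint(xx + field[2]).astype(int), 0, W - 1)
    return volume[z, y, x]
```

Calling this three times with different `max_disp` values on each floating image would realize the three warping strengths mentioned above.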
第三步,训练神经网络,得到训练好的神经网络模型。基于预处理后得到的固定图像IF和训练集训练神经网络,得到训练好的神经网络模型。The third step is to train the neural network to obtain the trained neural network model. The neural network is trained based on the fixed image IF obtained after preprocessing and the training set, and a trained neural network model is obtained.
第四步,将待配准三维医学图像输入上述神经网络模型进行配准,最终得到并输出该待配准三维医学图像的配准图像。The fourth step is to input the 3D medical image to be registered into the neural network model for registration, and finally obtain and output the registration image of the 3D medical image to be registered.
In use, the present invention first acquires images, then preprocesses them, then trains the neural network on the fixed image I_F and the training set obtained by the preprocessing, and finally feeds each 3D medical image to be registered into the trained neural network model, which outputs the corresponding registered image.
The method is most effective when the number of related images to be registered (here, human-brain images) is relatively large. Concretely, once the neural network model is trained, the images to be registered are fed into it one at a time: each input image yields one registered image, and after a registered image has been output the next image to be registered can be fed in, until all images have been registered.
其中在第三步中,所述的训练神经网络,得到训练好的神经网络模型,包括:Wherein in the third step, the training neural network is obtained to obtain a trained neural network model, including:
S1、将预处理后得到的固定图像IF和浮动图像IM作为神经网络的输入层输入神经网络,每一组输入数据均包括所述的固定图像IF和一个所述的浮动图像IM。S1. Input the fixed image I F and the floating image I M obtained after preprocessing into the neural network as the input layer of the neural network, and each set of input data includes the fixed image I F and a floating image I M .
每组输入数据IF和IM在被无损拼接成为2通道的3D图像后送入神经网络输入层。Each set of input data I F and I M is sent to the neural network input layer after being losslessly spliced into a 2-channel 3D image.
S2、对输入层中输入的固定图像IF和浮动图像IM进行下采样,输出所输入的固定图像IF和浮动图像IM的特征图。S2. Down-sampling the input fixed image IF and floating image IM in the input layer, and output the feature maps of the input fixed image IF and floating image IM .
The downsampling comprises three downsampling stages followed by one convolution with kernel size 3×3×3 and one LeakyReLU activation. The three stages correspond to three downsampling layers, denoted, in execution order, the first, second and third downsampling layers; each consists of a 3×3×3 convolution, a LeakyReLU activation and a max-pooling layer.
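Assuming "same"-padded convolutions, so that only the max-pooling changes the spatial size (with the 2×2×2 pooling factor stated later for this embodiment), the feature-map sizes along the downsampling path can be traced as follows:

```python
def downsampling_shapes(in_shape, stages=3, pool=2):
    """Trace the spatial size of the feature map through the downsampling
    path: each stage is a 3x3x3 convolution (assumed 'same' padding, so it
    keeps the size), a LeakyReLU, and a max-pool that divides each spatial
    dimension by `pool`."""
    shapes = [tuple(in_shape)]
    s = list(in_shape)
    for _ in range(stages):
        s = [d // pool for d in s]  # conv keeps size; pooling divides it
        shapes.append(tuple(s))
    return shapes
```

For a 160×192×224 input this yields 80×96×112, 40×48×56 and 20×24×28 after the three stages; the input size here is only an example.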
S3. Feature reweighting is applied separately to the feature maps output by the LeakyReLU activations of the first, second and third downsampling layers, producing three weighted feature maps, in order: the first weighted feature map, the second weighted feature map and the third weighted feature map.
该步骤S3中所述特征重加权的实现步骤包括:The implementation steps of feature reweighting described in the step S3 include:
Step S31. Denote each feature map to be reweighted, as output by the downsampling path, by X ∈ R^(H×W×D). X is sliced along its D dimension, and each slice x ∈ R^(H×W) is reduced by global average pooling to a slice descriptor z:

z = (1 / (H × W)) · Σ_{i=1}^{H} Σ_{j=1}^{W} x(i, j),

where (i, j) is a pixel of slice x and x(i, j) is the gray value of slice x at pixel (i, j);
步骤S32、获取所述特征图X其D维度上的每个切片x的权重s,其中每个切片x的权重s的计算公式如下:Step S32, obtaining the weight s of each slice x on the D dimension of the feature map X, wherein the calculation formula of the weight s of each slice x is as follows:
s=σ(δ(z)),s=σ(δ(z)),
其中,σ表示sigmoid激活函数,δ是ReLU激活函数,z为步骤S31中得到的切片x的切片描述符;Wherein, σ represents the sigmoid activation function, δ is the ReLU activation function, and z is the slice descriptor of the slice x obtained in step S31;
Step S33. Each weight s obtained in step S32 is applied to its corresponding slice, giving for every slice x along the D dimension of X the reweighted slice

x̃ = F_scale(x, s) = s · x,

where F_scale(x, s) denotes the multiplication of slice x by its corresponding weight s;
Step S34. The reweighted slices x̃ obtained in step S33 for all slices x along the D dimension of X together form the reweighted feature map X̃ corresponding to X.
For example, for a feature map X ∈ R^(H×W×D) whose slices along the D dimension are x_1, x_2, …, x_D, with reweighted slices x̃_1, x̃_2, …, x̃_D from step S33, the reweighted feature map corresponding to X is X̃ = [x̃_1, x̃_2, …, x̃_D].
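Steps S31 to S33 can be sketched in a few lines of NumPy for a feature map X ∈ R^(H×W×D). The direct z → s mapping below (sigmoid of ReLU, with no learned layers in between) follows the formulas exactly as written; a learned squeeze-and-excitation-style variant would insert fully connected layers between them.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def reweight_slices(X):
    """Slice-wise feature reweighting for X of shape (H, W, D):
    a global-average-pooled descriptor z per slice along the D dimension,
    a weight s = sigmoid(relu(z)), and the rescaled slice s * x."""
    z = X.mean(axis=(0, 1))            # one descriptor per slice, shape (D,)
    s = sigmoid(np.maximum(z, 0.0))    # one weight per slice, shape (D,)
    return X * s[None, None, :]        # broadcast each weight over its slice
```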
S4. A 1×1×1 convolution is applied to the feature map output in step S2, producing a deformation field from the floating image I_M to the fixed image I_F.
S5. The feature map output in step S2 is fed into the upsampling path, which comprises three upsampling layers. Each upsampling layer contains an UpSampling layer and a 3×3×3 convolution, and each such convolution is followed by a LeakyReLU activation. The three upsampling layers correspond to the three upsampling stages and are denoted, in order of execution, the first, second and third upsampling layers.
The feature map output by the UpSampling layer of the first upsampling layer is fused with the third weighted feature map and fed to the 3×3×3 convolution of the first upsampling layer; the output of the UpSampling layer of the second upsampling layer is fused with the second weighted feature map and fed to the 3×3×3 convolution of the second upsampling layer; and the output of the UpSampling layer of the third upsampling layer is fused with the first weighted feature map and fed to the 3×3×3 convolution of the third upsampling layer.
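The fusion step above, assuming nearest-neighbour UpSampling and the U-Net-style channel concatenation mentioned later in the text, can be sketched as follows; the channels-first layout is an illustrative choice.

```python
import numpy as np

def upsample_and_fuse(decoder_feat, skip_feat, factor=2):
    """One fusion step of the upsampling path: nearest-neighbour upsampling
    of the decoder feature map (channels-first, shape (C, D, H, W)) followed
    by channel-dimension concatenation with the reweighted skip feature map."""
    up = decoder_feat.repeat(factor, axis=1) \
                     .repeat(factor, axis=2) \
                     .repeat(factor, axis=3)
    assert up.shape[1:] == skip_feat.shape[1:], "spatial sizes must match"
    return np.concatenate([up, skip_feat], axis=0)
```

The fused map then goes through the 3×3×3 convolution of that upsampling layer; additive fusion would replace the concatenation with an element-wise sum.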
S6. A 1×1×1 convolution is applied to the feature maps output by the first, second and third upsampling layers, producing three further deformation fields from the floating image I_M to the fixed image I_F, one per upsampling layer.
S7. The floating image I_M is input to the spatial transformation network together with each of the deformation fields output above (the field from step S4 and the three fields from step S6); each spatial transformation yields a corresponding warped version of I_M, giving four warped images in total.
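A minimal stand-in for the spatial transformation step: resampling the moving image at positions displaced by a deformation field. Nearest-neighbour sampling keeps the sketch short; the STN actually used would apply differentiable trilinear interpolation.

```python
import numpy as np

def warp_volume(moving, field):
    """Resample `moving` (shape (D, H, W)) at the positions displaced by
    `field` (shape (3, D, H, W), one displacement component per axis),
    using nearest-neighbour sampling clipped to the volume bounds."""
    D, H, W = moving.shape
    zz, yy, xx = np.meshgrid(np.arange(D), np.arange(H), np.arange(W),
                             indexing="ij")
    z = np.clip(np.rint(zz + field[0]).astype(int), 0, D - 1)
    y = np.clip(np.rint(yy + field[1]).astype(int), 0, H - 1)
    x = np.clip(np.rint(xx + field[2]).astype(int), 0, W - 1)
    return moving[z, y, x]
```

A zero field returns the moving image unchanged, which is a useful sanity check when wiring up the transformation.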
S8. Based on the deformation fields output above and the warped images obtained from them, the loss function is used to compute the loss between the fixed image I_F (and its downsampled versions) and the warped images, and the neural network is optimized by back-propagation until the computed loss no longer decreases or the preset number of training iterations is reached; training is then complete and the trained neural network model is obtained.
The loss function is computed as

L = −α · S(I_F, Î) − β · [ S(I_F^(1), Î_1) + S(I_F^(2), Î_2) + S(I_F^(3), Î_3) ] + λ · R   ①

In formula ①, L is the computed loss value; α and β are constants with α + β = 1; R is the regularization term and λ is its constant control parameter; I_F^(1), I_F^(2) and I_F^(3) are pre-specified 3D medical images obtained by downsampling the fixed image I_F, each equal in size to the warped image it is compared with, with resolutions decreasing in that order and all below the resolution of I_F; S(I_F, Î) is the similarity between the fixed image I_F and the full-resolution warped image Î, and each S(I_F^(k), Î_k) is the similarity between the downsampled reference I_F^(k) and the warped image Î_k of matching size; the same similarity metric S is used for all four terms.
可选地,本实施例中所述的相似度度量函数,采用互相关函数;所述的空间变换网络,采用STN空间变换网络。Optionally, the similarity measurement function described in this embodiment uses a cross-correlation function; the space transformation network uses an STN space transformation network.
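Under the cross-correlation choice above, the multi-level loss can be sketched as follows. The exact grouping of the β-weighted terms and the finite-difference form of the regularizer R are assumptions, since the formula image is not reproduced in the text; a local windowed cross-correlation is also common in practice.

```python
import numpy as np

def ncc(a, b, eps=1e-8):
    """Global normalized cross-correlation between two volumes."""
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() /
                 (np.sqrt((a * a).sum() * (b * b).sum()) + eps))

def smoothness(field):
    """Simple regularizer R: mean squared forward difference of the
    deformation field along each spatial axis."""
    return float(sum(np.mean(np.diff(field, axis=ax) ** 2)
                     for ax in range(1, field.ndim)))

def multilevel_loss(fixed_pyramid, warped_pyramid, field,
                    alpha=0.5, beta=0.5, lam=0.01):
    """Sketch of loss (1): alpha weights the full-resolution similarity,
    beta the (averaged) lower-resolution similarities, lambda the
    regularizer; similarities are negated so minimizing maximizes them."""
    sim_full = ncc(fixed_pyramid[0], warped_pyramid[0])
    sims = [ncc(f, w) for f, w in zip(fixed_pyramid[1:], warped_pyramid[1:])]
    return (-alpha * sim_full
            - beta * sum(sims) / len(sims)
            + lam * smoothness(field))
```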
可选地,本实施例中所述的方法100,可采用U-Net神经网络作为基本神经网络结构进行实现。Optionally, the method 100 described in this embodiment may be implemented using a U-Net neural network as a basic neural network structure.
Optionally, the fusion of feature maps in the present invention may be splicing fusion; this embodiment uses U-Net-style concatenation along the channel dimension. A person skilled in the art may also choose other fusion methods according to the actual situation, for example additive fusion (element-wise addition of corresponding points).
In this embodiment the max-pooling downsampling factor is 2×2×2. Correspondingly, the reference images can be pre-set so that I_F^(1) is obtained by shrinking the fixed image I_F to 1/2 size, I_F^(2) by shrinking it to 1/4 size, and I_F^(3) by shrinking it to 1/8 size, so that resolution(I_F^(1)) > resolution(I_F^(2)) > resolution(I_F^(3)) > 0.
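One way to pre-compute the reference images at 1/2, 1/4 and 1/8 size is block averaging; the text does not fix the downsampling method, so this is an illustrative choice.

```python
import numpy as np

def shrink_half(volume):
    """Average-pool a 3D volume by a factor of 2 in every dimension,
    trimming odd trailing voxels first."""
    D, H, W = volume.shape
    v = volume[:D - D % 2, :H - H % 2, :W - W % 2]
    return v.reshape(D // 2, 2, H // 2, 2, W // 2, 2).mean(axis=(1, 3, 5))

def fixed_image_pyramid(fixed, levels=3):
    """Build the fixed image plus its 1/2-, 1/4- and 1/8-size references
    used by the multi-level loss."""
    pyramid = [fixed]
    for _ in range(levels):
        pyramid.append(shrink_half(pyramid[-1]))
    return pyramid
```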
In summary, through its loss function the neural-network-based unsupervised 3D medical image registration method provided by the present invention achieves unsupervised registration of 3D medical images: no annotation or registration-supervision information is needed during registration, which reduces the demand for labeled data and the errors of subjective human judgment, helps medical workers without relevant experience to perform registration, increases registration speed, saves registration time, and to a certain extent saves manpower and material resources.
参见图3,本发明的一种基于神经网络的无监督三维医学图像配准系统200,包括:Referring to FIG. 3 , a neural network-based unsupervised three-dimensional medical image registration system 200 of the present invention includes:
图像获取单元201,从公开数据集OASIS和ADNI获取三维医学图像;The image acquisition unit 201 acquires three-dimensional medical images from the public data sets OASIS and ADNI;
The image preprocessing unit 202 preprocesses the acquired 3D medical images, including image segmentation, cropping, normalization and affine alignment, and selects one image from the affine-aligned images as the fixed image I_F, the rest serving as floating images I_M; the cropped images have a uniform size;
神经网络训练单元203,基于预处理后得到的固定图像IF和浮动图像IM训练神经网络,得到训练好的神经网络模型;The neural network training unit 203 trains the neural network based on the fixed image IF and the floating image IM obtained after preprocessing to obtain a trained neural network model;
图像配准单元204,将待配准的医学图像输入上述训练好的神经网络模型进行配准,得到并输出该待配准的医学图像的配准图像;Image registration unit 204, input the medical image to be registered into the above-mentioned trained neural network model for registration, obtain and output the registration image of the medical image to be registered;
其中,所述的神经网络训练单元203,包括:Wherein, the neural network training unit 203 includes:
an input module, which feeds the fixed image I_F and the floating images I_M obtained by preprocessing into the input layer of the neural network, each group of input data comprising the fixed image I_F and one floating image I_M;
a downsampling module, which downsamples the fixed image I_F and floating image I_M supplied to the input layer and outputs a feature map; the downsampling comprises three downsampling stages followed by one 3×3×3 convolution and one LeakyReLU activation; the three stages correspond to three downsampling layers, denoted in execution order the first, second and third downsampling layers, each consisting of a 3×3×3 convolution, a LeakyReLU activation and a max-pooling layer;
a reweighting module, which applies feature reweighting separately to the feature maps output by the LeakyReLU activations of the first, second and third downsampling layers, producing three weighted feature maps, in order: the first, second and third weighted feature maps;
a first deformation-field output module, which applies a 1×1×1 convolution to the feature map output by the downsampling module and outputs a deformation field from the floating image I_M to the fixed image I_F;
an upsampling module, which feeds the feature map output by the downsampling module into the upsampling path; the upsampling path comprises three upsampling layers, each containing an UpSampling layer and a 3×3×3 convolution followed by a LeakyReLU activation; the three upsampling layers correspond to the three upsampling stages and are denoted, in order of execution, the first, second and third upsampling layers; the feature map output by the UpSampling layer of the first upsampling layer, fused with the third weighted feature map, is the input of the 3×3×3 convolution of the first upsampling layer; the output of the UpSampling layer of the second upsampling layer, fused with the second weighted feature map, is the input of the 3×3×3 convolution of the second upsampling layer; and the output of the UpSampling layer of the third upsampling layer, fused with the first weighted feature map, is the input of the 3×3×3 convolution of the third upsampling layer;
a second deformation-field output module, which applies a 1×1×1 convolution to the feature maps output by the first, second and third upsampling layers and outputs three further deformation fields from the floating image I_M to the fixed image I_F, one per upsampling layer;
a spatial transformation module, which inputs the floating image I_M together with each of the deformation fields output above into the spatial transformation network; each spatial transformation yields a corresponding warped version of I_M, giving four warped images in total;
a neural-network optimization module, which, based on the deformation fields output above and the warped images obtained from them, uses the loss function to compute the loss between the fixed image I_F (and its downsampled versions) and the warped images, and optimizes the network by back-propagation until the computed loss no longer decreases or the preset number of training iterations is reached, yielding the trained neural network model;
The loss function is computed as

L = −α · S(I_F, Î) − β · [ S(I_F^(1), Î_1) + S(I_F^(2), Î_2) + S(I_F^(3), Î_3) ] + λ · R   ①

In formula ①, L is the computed loss value; α and β are constants with α + β = 1; R is the regularization term and λ is its constant control parameter; I_F^(1), I_F^(2) and I_F^(3) are pre-specified 3D medical images obtained by downsampling the fixed image I_F, each equal in size to the warped image it is compared with, with resolutions decreasing in that order and all below the resolution of I_F; S(I_F, Î) is the similarity between the fixed image I_F and the full-resolution warped image Î, and each S(I_F^(k), Î_k) is the similarity between the downsampled reference I_F^(k) and the warped image Î_k of matching size; the same similarity metric S is used for all four terms.
其中,所述的重加权模块,包括:Wherein, the reweighting module includes:
a descriptor acquisition module, which denotes each feature map to be reweighted, as output by the downsampling path, by X ∈ R^(H×W×D), slices X along its D dimension, and reduces each slice x ∈ R^(H×W) by global average pooling to a slice descriptor z:

z = (1 / (H × W)) · Σ_{i=1}^{H} Σ_{j=1}^{W} x(i, j),

where (i, j) is a pixel of slice x and x(i, j) is the gray value of slice x at pixel (i, j);
切片权重计算模块,获取所述特征图X其D维度上的每个切片x的权重s,其中每个切片x的权重s的计算公式如下:The slice weight calculation module obtains the weight s of each slice x on the D dimension of the feature map X, wherein the calculation formula of the weight s of each slice x is as follows:
s=σ(δ(z)),s=σ(δ(z)),
其中,σ表示sigmoid激活函数,δ是ReLU激活函数,z为描述符获取模块得到的切片x的切片描述符;Among them, σ represents the sigmoid activation function, δ is the ReLU activation function, and z is the slice descriptor of the slice x obtained by the descriptor acquisition module;
a weighting module, which applies each weight s obtained by the slice weight calculation module to its corresponding slice, giving for every slice x along the D dimension of X the reweighted slice

x̃ = F_scale(x, s) = s · x,

where F_scale(x, s) denotes the multiplication of slice x by its corresponding weight s;
a weighted-image acquisition module, which assembles the reweighted slices x̃ produced by the weighting module for all slices x along the D dimension of X into the reweighted feature map X̃ corresponding to X.
可选地,所述的相似度度量函数,采用互相关函数。Optionally, the similarity measurement function adopts a cross-correlation function.
Optionally, the spatial transformation network is an STN spatial transformer network.
Optionally, the preprocessing in the image preprocessing unit 202 further includes data augmentation, in which each obtained floating image is subjected to a bending (warp) transformation to produce a corresponding warped image; each warped image so obtained is added as a new floating image.
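The patent only says each floating image is "bent" to create new floating images, without specifying the transform. The sketch below is one plausible realization under stated assumptions: a low-frequency random displacement field applied to a 2D slice with nearest-neighbour resampling, and image sides divisible by 4:

```python
import numpy as np

def random_bending_augment(image, amplitude=2.0, seed=None):
    """Create one extra 'floating image' via a random smooth warp.

    A coarse 4x4 displacement field is upsampled by block repetition
    (np.kron) so the warp is smooth rather than per-pixel noise.
    Assumes a 2D image whose height and width are divisible by 4.
    """
    rng = np.random.default_rng(seed)
    H, W = image.shape
    # Coarse random displacements in y and x, then block-upsample
    coarse = rng.uniform(-amplitude, amplitude, size=(2, 4, 4))
    dy = np.kron(coarse[0], np.ones((H // 4, W // 4)))
    dx = np.kron(coarse[1], np.ones((H // 4, W // 4)))
    # Sample source coordinates (nearest neighbour, clipped to bounds)
    yy, xx = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    ys = np.clip(np.round(yy + dy).astype(int), 0, H - 1)
    xs = np.clip(np.round(xx + dx).astype(int), 0, W - 1)
    return image[ys, xs]
```

A production pipeline would more likely use a B-spline or Gaussian-smoothed elastic deformation with linear interpolation, applied in 3D; the structure (random smooth field, then resample) is the same.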
For identical or similar parts among the embodiments in this specification, the embodiments may refer to one another. In particular, the system embodiment is described relatively briefly because it is essentially similar to the method embodiment; for the relevant parts, refer to the description of the method embodiment.
Although the present invention has been described in detail with reference to the accompanying drawings and in conjunction with preferred embodiments, the invention is not limited thereto. Without departing from the spirit and essence of the present invention, those skilled in the art may make various equivalent modifications or substitutions to its embodiments, and all such modifications or substitutions fall within the scope of the invention. Therefore, the protection scope of the present invention shall be defined by the claims.
Claims (10)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201910828807.7A CN110599528B (en) | 2019-09-03 | 2019-09-03 | Unsupervised three-dimensional medical image registration method and system based on neural network |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201910828807.7A CN110599528B (en) | 2019-09-03 | 2019-09-03 | Unsupervised three-dimensional medical image registration method and system based on neural network |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN110599528A true CN110599528A (en) | 2019-12-20 |
| CN110599528B CN110599528B (en) | 2022-05-27 |
Family
ID=68857220
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201910828807.7A Active CN110599528B (en) | 2019-09-03 | 2019-09-03 | Unsupervised three-dimensional medical image registration method and system based on neural network |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN110599528B (en) |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20160093050A1 (en) * | 2014-09-30 | 2016-03-31 | Samsung Electronics Co., Ltd. | Image registration device, image registration method, and ultrasonic diagnosis apparatus having image registration device |
| US20190012567A1 (en) * | 2010-05-03 | 2019-01-10 | Mim Software Inc. | Systems and methods for contouring a set of medical images |
| CN109584283A (en) * | 2018-11-29 | 2019-04-05 | 合肥中科离子医学技术装备有限公司 | A kind of Medical Image Registration Algorithm based on convolutional neural networks |
| CN109767459A (en) * | 2019-01-17 | 2019-05-17 | 中南大学 | A Novel Fundus Map Registration Method |
| CN110021037A (en) * | 2019-04-17 | 2019-07-16 | 南昌航空大学 | A kind of image non-rigid registration method and system based on generation confrontation network |
2019-09-03: Application CN201910828807.7A filed in China (CN); granted as CN110599528B (status: Active)
Non-Patent Citations (4)
| Title |
|---|
| HONGMING LI 等: "Non-rigid image registration using self-supervised fully convolutional networks without training data", 《2018 IEEE 15TH INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING (ISBI 2018)》 * |
| XIAOHUAN CAO 等: "Deformable Image Registration Using a Cue-Aware Deep Regression Network", 《IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING》 * |
| 张文华 等: "拟合精度引导的扩散加权图像配准", 《计算机科学》 * |
| 纪慧中 等: "基于图像特征和光流场的非刚性图像配准", 《光学精密工程》 * |
Cited By (44)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111027508A (en) * | 2019-12-23 | 2020-04-17 | 电子科技大学 | A detection method of remote sensing image overlay change based on deep neural network |
| CN111027508B (en) * | 2019-12-23 | 2022-09-06 | 电子科技大学 | A detection method of remote sensing image overlay change based on deep neural network |
| CN111091575A (en) * | 2019-12-31 | 2020-05-01 | 电子科技大学 | Medical image segmentation method based on reinforcement learning method |
| CN111242877A (en) * | 2019-12-31 | 2020-06-05 | 北京深睿博联科技有限责任公司 | Mammary X-ray image registration method and device |
| CN111260705A (en) * | 2020-01-13 | 2020-06-09 | 武汉大学 | A multi-task registration method for prostate MR images based on deep convolutional neural network |
| CN111524170A (en) * | 2020-04-13 | 2020-08-11 | 中南大学 | Lung CT image registration method based on unsupervised deep learning |
| CN111524170B (en) * | 2020-04-13 | 2023-05-26 | 中南大学 | Pulmonary CT image registration method based on unsupervised deep learning |
| CN116368516A (en) * | 2020-05-23 | 2023-06-30 | 平安科技(深圳)有限公司 | Method and device for multimodal clinical image alignment using joint synthesis, segmentation and registration |
| CN111768379A (en) * | 2020-06-29 | 2020-10-13 | 深圳度影医疗科技有限公司 | Standard section detection method of three-dimensional uterine ultrasound image |
| CN112102373A (en) * | 2020-07-29 | 2020-12-18 | 浙江工业大学 | Carotid artery multi-mode image registration method based on strong constraint affine deformation feature learning |
| CN112150425B (en) * | 2020-09-16 | 2024-05-24 | 北京工业大学 | Unsupervised intravascular ultrasound image registration method based on neural network |
| CN112150425A (en) * | 2020-09-16 | 2020-12-29 | 北京工业大学 | An Unsupervised Intravascular Ultrasound Image Registration Method Based on Neural Network |
| CN112652002A (en) * | 2020-12-25 | 2021-04-13 | 江苏集萃复合材料装备研究所有限公司 | Medical image registration method based on IDC algorithm |
| CN112652002B (en) * | 2020-12-25 | 2024-05-03 | 江苏集萃复合材料装备研究所有限公司 | Medical image registration method based on IDC algorithm |
| WO2022178997A1 (en) * | 2021-02-25 | 2022-09-01 | 平安科技(深圳)有限公司 | Medical image registration method and apparatus, computer device, and storage medium |
| CN113034453A (en) * | 2021-03-16 | 2021-06-25 | 深圳先进技术研究院 | Mammary gland image registration method based on deep learning |
| CN112907439A (en) * | 2021-03-26 | 2021-06-04 | 中国科学院深圳先进技术研究院 | Supine position and prone position mammary gland image registration method based on deep learning |
| CN112907439B (en) * | 2021-03-26 | 2023-08-08 | 中国科学院深圳先进技术研究院 | Deep learning-based supine position and prone position breast image registration method |
| WO2022199135A1 (en) * | 2021-03-26 | 2022-09-29 | 中国科学院深圳先进技术研究院 | Supine position and prone position breast image registration method based on deep learning |
| CN114693897A (en) * | 2021-04-28 | 2022-07-01 | 上海联影智能医疗科技有限公司 | Unsupervised inter-layer super-resolution for medical images |
| CN113344876A (en) * | 2021-06-08 | 2021-09-03 | 安徽大学 | Deformable registration method between CT and CBCT |
| CN113450397B (en) * | 2021-06-25 | 2022-04-01 | 广州柏视医疗科技有限公司 | Image deformation registration method based on deep learning |
| CN113450397A (en) * | 2021-06-25 | 2021-09-28 | 广州柏视医疗科技有限公司 | Image deformation registration method based on deep learning |
| CN113409291A (en) * | 2021-06-29 | 2021-09-17 | 山东大学 | Lung 4D-CT medical image registration method and system |
| CN113763441A (en) * | 2021-08-25 | 2021-12-07 | 中国科学院苏州生物医学工程技术研究所 | Medical image registration method and system for unsupervised learning |
| CN113763441B (en) * | 2021-08-25 | 2024-01-26 | 中国科学院苏州生物医学工程技术研究所 | Medical image registration method and system without supervision learning |
| CN113724307B (en) * | 2021-09-02 | 2023-04-28 | 深圳大学 | Image registration method and device based on characteristic self-calibration network and related components |
| CN113724307A (en) * | 2021-09-02 | 2021-11-30 | 深圳大学 | Image registration method and device based on characteristic self-calibration network and related components |
| CN114170276A (en) * | 2021-10-15 | 2022-03-11 | 烟台大学 | A method for hippocampal registration of magnetic resonance brain images |
| CN114240825A (en) * | 2021-11-02 | 2022-03-25 | 北京深睿博联科技有限责任公司 | An image segmentation method and device based on multimodal images |
| CN114332018B (en) * | 2021-12-29 | 2024-11-19 | 大连理工大学 | A medical image registration method based on deep learning and contour features |
| CN114332018A (en) * | 2021-12-29 | 2022-04-12 | 大连理工大学 | Medical image registration method based on deep learning and contour features |
| CN114627167A (en) * | 2022-02-25 | 2022-06-14 | 广州瑞多思医疗科技有限公司 | Method and device for registering arbitrary modal images based on neural network |
| JP2023135836A (en) * | 2022-03-16 | 2023-09-29 | キヤノン株式会社 | Image processing device, image processing method, and program |
| CN115170622B (en) * | 2022-05-11 | 2025-05-30 | 复旦大学 | Medical image registration method and system based on transformer |
| CN115170622A (en) * | 2022-05-11 | 2022-10-11 | 复旦大学 | Transformer-based medical image registration method and system |
| CN117218166A (en) * | 2022-05-30 | 2023-12-12 | 中国医学科学院基础医学研究所 | Deformation registration system for MRI brain nerve images based on two-stream cross network |
| WO2024140435A1 (en) * | 2022-12-30 | 2024-07-04 | 上海术之道机器人有限公司 | Tissue structure model data fusion method and apparatus for interventional surgery |
| CN116228823A (en) * | 2023-01-05 | 2023-06-06 | 中国科学院精密测量科学与技术创新研究院 | An artificial intelligence-based method for unsupervised cascade registration of magnetic resonance images |
| CN116740399A (en) * | 2023-06-13 | 2023-09-12 | 中国电子科技集团公司第五十二研究所 | Training methods, matching methods and media for heterogeneous image matching models |
| CN116797483A (en) * | 2023-06-27 | 2023-09-22 | 北京筑梦园科技有限公司 | Image correction method, system, computer equipment and storage medium |
| CN117274330A (en) * | 2023-09-14 | 2023-12-22 | 吉林大学 | Deformable medical image registration model and registration method based on Swin transducer |
| CN117274330B (en) * | 2023-09-14 | 2025-09-02 | 吉林大学 | A system and registration method for deformable medical image registration model based on Swin Transformer |
| CN118314184A (en) * | 2024-06-11 | 2024-07-09 | 中国科学技术大学 | Medical image registration method, device and computer equipment based on improved U-Net network |
Also Published As
| Publication number | Publication date |
|---|---|
| CN110599528B (en) | 2022-05-27 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN110599528A (en) | Unsupervised three-dimensional medical image registration method and system based on neural network | |
| CN111091589B (en) | Ultrasound and MRI image registration method and device based on multi-scale supervised learning | |
| Li et al. | Automated measurement network for accurate segmentation and parameter modification in fetal head ultrasound images | |
| CN110570394B (en) | Medical image segmentation method, device, equipment and storage medium | |
| US8861891B2 (en) | Hierarchical atlas-based segmentation | |
| CN118172372A (en) | Cross-modal tumor automatic segmentation method and storage medium based on PET-CT medical images | |
| CN112785632A (en) | Cross-modal automatic registration method for DR (digital radiography) and DRR (digital radiography) images in image-guided radiotherapy based on EPID (extended medical imaging) | |
| CN117710681A (en) | Semi-supervised medical image segmentation method based on data enhancement strategy | |
| CN113781659B (en) | A three-dimensional reconstruction method, device, electronic device and readable storage medium | |
| CN116681894A (en) | Adjacent layer feature fusion Unet multi-organ segmentation method, system, equipment and medium combining large-kernel convolution | |
| CN118279361A (en) | Multi-mode medical image registration method based on unsupervised deep learning and mode conversion | |
| CN111127488B (en) | Method for automatically constructing patient anatomical structure model based on statistical shape model | |
| CN117372484A (en) | Brain nuclear magnetic resonance image registration method and device based on depth self-attention network | |
| Zhang et al. | A diffeomorphic unsupervised method for deformable soft tissue image registration | |
| CN119180954A (en) | Cervical vertebra segmentation and key point detection method based on diffusion model | |
| CN111340209A (en) | Network model training method, image segmentation method and focus positioning method | |
| CN113822323A (en) | Brain scanning image identification processing method, device, equipment and storage medium | |
| CN116797726A (en) | Organ three-dimensional reconstruction method, device, electronic equipment and storage medium | |
| CN119131044B (en) | 3D heart image segmentation system based on two visual angles and semi-supervised attention model | |
| CN118762241B (en) | Medical image lesion classification method and system | |
| CN111369662A (en) | Method and system for reconstructing 3D model of blood vessels in CT images | |
| CN113450394A (en) | Different-size image registration method based on Siamese network | |
| CN120219308A (en) | A brain image analysis method and system based on multimodal fusion | |
| CN120319454A (en) | A deep learning-based imaging assessment system for mild traumatic brain injury | |
| CN118840382A (en) | Organ and medical image segmentation result correction system based on DS-ASPP and CBEM |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | | |
| SE01 | Entry into force of request for substantive examination | | |
| GR01 | Patent grant | | |
| TR01 | Transfer of patent right | | |
Effective date of registration: 2025-10-28
Address after: Room 1907-B, Lianjie Financial Building, No. 109 Binhai Street, Fengze District, Quanzhou City, Fujian Province, 362000
Patentee after: Fujian Qidou Network Technology Co.,Ltd. (China)
Address before: No. 336 Nanxinzhuang Road, Jinan City, Shandong Province, 250022
Patentee before: University of Jinan (China)