
CN107203989A - End-to-end chest CT image segmentation method based on a fully convolutional neural network - Google Patents


Info

Publication number
CN107203989A
CN107203989A (Application CN201710211615.2A)
Authority
CN
China
Prior art keywords: chest, image, layer, model, output
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710211615.2A
Other languages
Chinese (zh)
Inventor
冒凯鹏
谢世朋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Post and Telecommunication University
Original Assignee
Nanjing Post and Telecommunication University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Post and Telecommunication University
Priority to CN201710211615.2A
Publication of CN107203989A
Legal status: Pending


Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30061Lung

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Quality & Reliability (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an end-to-end chest CT image segmentation method based on a fully convolutional neural network. First, k groups of chest CT images are acquired clinically, and each group is separated into individual slices to serve as training samples. Each training and test sample is then manually segmented by professional medical staff into four parts: lung, trachea, skin, and background. Next, an end-to-end fully convolutional neural network is constructed and trained on the labeled chest CT data to obtain a trained parameter model. Each slice of a scanned CT image is then separated and fed into the trained model one by one to obtain segmented outputs. Finally, the outputs are combined to produce the fully segmented chest CT image model. The proposed convolutional neural network model, which extracts features from image neighborhood content, can perform dense prediction on chest CT images and simplifies the image segmentation workflow.

Description

End-to-End Chest CT Image Segmentation Method Based on a Fully Convolutional Neural Network

Technical Field

The invention belongs to the field of medical image processing, and in particular relates to an end-to-end chest CT image segmentation method based on a fully convolutional neural network.

Background Art

With the continuous, rapid development of medical imaging and computer technology, computer analysis of clinical image data has improved the probability of successful disease prevention and treatment. Computed tomography (CT) is the most commonly used modality in the diagnosis and detection of thoracic diseases. Since CT provides high-resolution scan images of the organs and tissues of the chest, making full use of CT images is very important for detecting lung diseases such as lung cancer and pulmonary nodules. In computer-aided diagnosis systems, accurate segmentation of lung CT images is the basis and premise of subsequent chest function analysis and three-dimensional image reconstruction. Accurate segmentation not only increases the accuracy of disease diagnosis but also reduces the time spent on subsequent irrelevant computation.

Because the lungs are filled with air, their CT value is very low compared with the other tissues and organs of the chest; the trachea, whose density does not differ greatly from that of the surrounding tissue, has a higher CT value than the lungs. However, because the density of the trachea is similar to that of the chest skin, the contrast between the trachea and the skin in chest CT images is very low, and it is difficult to identify the trachea directly from the CT image. Under special physiological and pathological conditions, the morphological differences among the parts of a chest CT are even more pronounced, which makes the segmentation of chest CT images a real challenge.

Segmentation methods for chest CT images broadly include thresholding, region growing, and methods combined with other theories. Thresholding can easily segment the lungs and other organs based on their differing CT values, but it has real difficulty segmenting the trachea, whose CT value is similar to that of the skin. Region growing requires seed points to be set first; when a positional bifurcation of the lung or trachea makes the tracheal image discontinuous across CT slices, the segmentation struggles to achieve satisfactory results. Methods combined with other theories also have many problems, for example processing times too long to meet the real-time requirements of a computer-aided diagnosis system. Given the particular role of chest CT segmentation in computer-aided diagnosis, a segmentation system must correctly classify tissues such as pleura-adherent nodules and blood vessels, and must also achieve fast and accurate CT image segmentation; this is the key to chest CT image segmentation.

Many problems arise when traditional methods are used to segment chest CT images, for example pleura-adherent nodules at the lung boundary and damage to the edge of the lung parenchyma. Much related research has addressed these problems. Concave-convex point detection classifies all pixels in the image as convex, concave, or smooth points, and the convex points along the contour can be used to draw a straight line that corrects the lung boundary. In curvature analysis, locations of large curvature represent the trachea or large blood vessels. Snake-model segmentation combines the snake model with physiological anatomy to segment chest CT images. These methods, however, have certain shortcomings: concave-convex point detection is sensitive to the deformation model; curvature analysis incurs some error in the model parameters and initialization each time; and the snake model has no standardized segmentation workflow, so its results show a certain error.

Summary of the Invention

Addressing the problems in chest CT image segmentation and drawing on deep learning, the present invention proposes a chest CT image segmentation method with an end-to-end structure. The model interleaves convolutional layers with other layers, so it can automatically extract features from the input image and exploit the neighborhood information of the image content. Through the pooling layers, the receptive field of the convolution kernels is enlarged without increasing the amount of computation, so more information is obtained; this information is used to classify the pixels of the image and thereby segment the whole CT image.

To achieve the above purpose, the technical solution proposed by the present invention is an end-to-end chest CT image segmentation method based on a fully convolutional neural network, comprising the following steps:

Step 1: Clinically scan k groups of chest CT images, and separate each group into individual slices to serve as training samples;

Step 2: Have professional medical staff manually segment each training sample and test sample, dividing the image into four parts: lung, trachea, skin, and background;

Step 3: Construct an end-to-end fully convolutional neural network and train it on the labeled chest CT training data to obtain a trained parameter model;

Step 4: Separate each slice of a scanned CT image, input the slices into the trained model one by one, and obtain the segmented outputs;

Step 5: Combine the outputs to obtain the fully segmented chest CT image model.
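The five steps can be sketched as a slice-wise inference loop (a minimal illustrative sketch; `segment_slice` and `toy_model` are hypothetical stand-ins for the trained network, which the patent does not expose as a function):

```python
def segment_volume(volume, segment_slice):
    """Split a CT volume into 2D slices, segment each slice with the
    trained model, and recombine the per-slice outputs (steps 4-5)."""
    labeled_slices = []
    for ct_slice in volume:              # one slice at a time
        labeled_slices.append(segment_slice(ct_slice))
    return labeled_slices                # stacked back into a labeled volume

# Toy stand-in model: threshold each pixel into one of the 4 classes
# (0=background, 1=lung, 2=trachea, 3=skin) -- purely illustrative,
# not the patent's network.
def toy_model(ct_slice):
    return [[0 if v < -900 else 1 if v < -500 else 2 if v < 0 else 3
             for v in row] for row in ct_slice]

volume = [[[-1000, -600], [-100, 40]]]    # one tiny 2x2 "slice"
print(segment_volume(volume, toy_model))  # [[[0, 1], [2, 3]]]
```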

Further, the training in step 3 above specifically comprises the following steps:

A. Three convolutional layers are set at the front of the model; the relationship between the input and output of a convolutional layer is:

y_ij = f_ks({x_(s·i+δi, s·j+δj)}, 0 ≤ δi, δj ≤ k)

where x_ij is the data vector at position (i, j) in a given layer, y_ij is the data vector at the corresponding position in the next layer, k is the kernel size, and s is the stride or subsampling factor;
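A minimal sketch of this relation for a single-channel input, taking f_ks to be a plain sum of elementwise products with a kernel w (one assumed instantiation; the patent leaves f_ks generic):

```python
def conv2d_valid(x, w, s=1):
    """y_ij = f_ks({x_(s*i+di, s*j+dj)}) with f_ks chosen as a sum of
    elementwise products with kernel w; k = kernel size, s = stride."""
    k = len(w)
    h, wd = len(x), len(x[0])
    out = []
    for i in range(0, h - k + 1, s):
        row = []
        for j in range(0, wd - k + 1, s):
            row.append(sum(x[i + di][j + dj] * w[di][dj]
                           for di in range(k) for dj in range(k)))
        out.append(row)
    return out

x = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
print(conv2d_valid(x, [[1, 0], [0, 1]]))  # [[6, 8], [12, 14]]
```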

B. A layer of nonlinear activation units is set after each convolutional layer; the activation function chosen here is ReLU. This nonlinearity is computed in place, so it does not increase the storage space of the model. The input-output relation is:

y_ij = max(0, x_ij)

where x_ij here is the data at the same position as the output y_ij;
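Since ReLU depends only on the value at each position, it can be computed in place exactly as stated (a minimal sketch):

```python
def relu_inplace(feature_map):
    """y_ij = max(0, x_ij), computed in place so no extra storage
    is allocated for the activation output."""
    for row in feature_map:
        for j, v in enumerate(row):
            row[j] = max(0.0, v)
    return feature_map

fm = [[-1.5, 2.0], [0.0, -3.0]]
print(relu_inplace(fm))  # [[0.0, 2.0], [0.0, 0.0]]
```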

C. A pooling layer is set after each nonlinearly activated convolutional layer;

D. A convolutional layer is set after the third pooling layer; this layer preserves the spatial structure of the feature maps output by the previous convolutional layers;

E. A deconvolution layer is set in the network, mapping the feature maps in the network back to the size of the original image. Deconvolution is the reverse of the data flow through a convolutional layer; it can be viewed as an interpolation process, or as magnification by a sparse filter. The resulting output is:

f′_ij = Σ_{α,β=0}^{1} |1 − α − {i/f}| · |1 − β − {j/f}| · f_(⌊i/f⌋+α, ⌊j/f⌋+β)

where i and j are both zero-based, f is the upsampling factor, {·} denotes the fractional part, and f′_ij is the output feature map;
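Deconvolution viewed as interpolation can be sketched with bilinear upsampling by an integer factor f (one common realization; the patent does not fix the interpolation kernel):

```python
def bilinear_upsample(x, f):
    """Upsample a 2D map by factor f with bilinear interpolation,
    i.e. a fixed 'deconvolution' that maps a feature map back toward
    the original image size."""
    h, w = len(x), len(x[0])
    out = []
    for i in range(h * f):
        src_i = min(i / f, h - 1)
        i0 = int(src_i); i1 = min(i0 + 1, h - 1); ti = src_i - i0
        row = []
        for j in range(w * f):
            src_j = min(j / f, w - 1)
            j0 = int(src_j); j1 = min(j0 + 1, w - 1); tj = src_j - j0
            top = x[i0][j0] * (1 - tj) + x[i0][j1] * tj
            bot = x[i1][j0] * (1 - tj) + x[i1][j1] * tj
            row.append(top * (1 - ti) + bot * ti)
        out.append(row)
    return out

print(bilinear_upsample([[0.0, 2.0]], 2)[0])  # [0.0, 1.0, 2.0, 2.0]
```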

F. The loss function of the network is set and defined as the sum over all spatial dimensions, in the form:

l(x; Θ) = Σ_ij l′(x_ij; Θ)

where l(x; Θ) is the total loss, l′(x_ij; Θ) is the loss at each point, and Θ denotes the parameters of the whole network. The loss here is the softmax loss, and the output is the loss over the 4 categories:

l′(x_ij; Θ) = −log( exp(z_c) / Σ_{k=1}^{K} exp(z_k) )

where z_k is the score of category k at position (i, j) and c is the labeled category.

The value of K here is 4, i.e. the four parts lung, trachea, skin, and background; the loss function for each category is equivalent to:

l′(x_ij; Θ) = −Σ_{k=1}^{4} t_k · log p_k,  with  p_k = exp(z_k) / Σ_{m=1}^{4} exp(z_m)

where t_k is 1 for the labeled category and 0 otherwise.

This loss is the cross-entropy loss;
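The per-pixel softmax cross-entropy over the four classes, summed over spatial positions as described, can be sketched as follows (my own minimal realization; the score layout and names are illustrative):

```python
import math

def softmax(z):
    """p_k = exp(z_k) / sum_m exp(z_m), stabilized by subtracting the max."""
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def pixel_loss(scores, true_k):
    """Softmax cross-entropy at one pixel: -log p_(labeled class)."""
    return -math.log(softmax(scores)[true_k])

def total_loss(score_map, label_map):
    """l(x; Theta): sum of the per-pixel losses over all spatial positions."""
    return sum(pixel_loss(score_map[i][j], label_map[i][j])
               for i in range(len(score_map))
               for j in range(len(score_map[0])))

# 1x2 "image": 4 class scores per pixel (a lung/trachea/skin/background
# ordering is assumed); the labels pick class 0 and class 3.
scores = [[[2.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0]]]
labels = [[0, 3]]
print(round(total_loss(scores, labels), 4))
```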

G. The defined network is trained on a deep learning platform with the prepared data; after a period of training, the trained model is saved.

Preferably, the pooling chosen in step C above is max pooling with a stride of 2.
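The preferred stride-2 max pooling can be sketched as follows, assuming non-overlapping 2×2 windows (the patent states the stride but not the window size):

```python
def max_pool_2x2(x):
    """Stride-2 max pooling over 2x2 windows: halves each spatial
    dimension while keeping the strongest activation per window."""
    return [[max(x[i][j], x[i][j + 1], x[i + 1][j], x[i + 1][j + 1])
             for j in range(0, len(x[0]) - 1, 2)]
            for i in range(0, len(x) - 1, 2)]

x = [[1, 3, 2, 0],
     [4, 2, 1, 1],
     [0, 1, 5, 6],
     [2, 2, 7, 8]]
print(max_pool_2x2(x))  # [[4, 2], [2, 8]]
```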

The convolutional layers in step D above are two 1×1 convolutional layers.

Beneficial Effects

Compared with existing chest CT image segmentation techniques, the present invention has the following advantages:

(1) The present invention proposes a convolutional neural network model that extracts features from image neighborhood content; with this deep model, dense prediction can be performed on chest CT images;

(2) The nonlinear deep learning model, which learns directly from the image to the labeled image, has an end-to-end structure that simplifies the image segmentation workflow;

(3) Experimental results show that applying the model constructed by the present invention to chest CT image segmentation can effectively divide a chest CT image into the four parts lung, trachea, skin, and background. The method provides a new approach to the CT image segmentation required for the diagnosis of chest diseases in medical image processing.

Brief Description of the Drawings

Fig. 1 is the overall flowchart of the present invention.

Fig. 2 shows chest CT images and the corresponding segmented label images.

Detailed Description of the Embodiments

The present invention is now described in further detail with reference to the accompanying drawings. The drawings described here provide a further understanding of the present invention and constitute a part of this application; the illustrative embodiments of the present invention and their descriptions serve only to explain the present invention and do not unduly limit it.

As shown in Fig. 1, the present invention comprises the following steps:

Step 1: Clinically scan k groups of chest CT images, and separate each group into individual slices to serve as training samples;

In acquiring chest CT images, the CT scan is helical: while the X-ray source and detector rotate around the patient, the patient's bed moves slowly along the direction of the rotation axis. Modern CT uses two-dimensional multi-row detectors and collects cone-beam data; some systems are equipped with multiple X-ray sources and multiple two-dimensional detectors. The helical trajectory of the X-ray source is achieved by rotating the source around the patient while simultaneously moving the patient along the rotation axis. Chest CT imaging depends on the attenuation coefficient, which is related to the properties of the material and is usually denoted μ. It is defined as the logarithm of the ratio of the input to the output light intensity per unit length. Bone typically has a large attenuation coefficient, while soft tissue has a small one. The attenuation coefficient of a material also depends on the X-ray energy: the higher the X-ray energy, the smaller the attenuation coefficient. The CT images obtained are 512×512 in size, while the number of slices varies from scan to scan.
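The stated definition of μ corresponds to the Beer-Lambert relation I_out = I_in·exp(−μL); a small worked check with illustrative numbers:

```python
import math

def attenuation_coefficient(i_in, i_out, length):
    """mu = (1/L) * ln(I_in / I_out): the log of the input/output
    intensity ratio per unit path length."""
    return math.log(i_in / i_out) / length

# If a 2 cm path attenuates the beam intensity from 100 to 50 (halved),
# then mu = ln(2)/2 per cm.
mu = attenuation_coefficient(100.0, 50.0, 2.0)
print(round(mu, 4))  # 0.3466
```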

Step 2: Have professional medical staff manually segment each training sample and test sample, dividing the image into four parts: lung, trachea, skin, and background;

Step 3: Construct an end-to-end fully convolutional neural network and train it on the labeled chest CT training data to obtain a trained parameter model;

Step 4: Separate each slice of a scanned CT image, input the slices into the trained model one by one, and obtain the segmented outputs;

Step 5: Combine the outputs to obtain the fully segmented chest CT image model.

As shown in Fig. 2, the first row of images shows the chest images to be segmented, and the second row shows the corresponding segmented chest CT images.

The deep learning model constructed in step 3 above comprises the following parts:

(1) Three convolutional layers are set at the front of the model; the relationship between the input and output of a convolutional layer is:

y_ij = f_ks({x_(s·i+δi, s·j+δj)}, 0 ≤ δi, δj ≤ k)

where x_ij is the data vector at position (i, j) in a given layer, y_ij is the data vector at the corresponding position in the next layer, k is the kernel size, and s is the stride or subsampling factor;

(2) A layer of nonlinear activation units is set after each convolutional layer; the activation function chosen here is ReLU. This nonlinearity is computed in place, so it does not increase the storage space of the model. The input-output relation is:

y_ij = max(0, x_ij)

where x_ij here is the data at the same position as the output y_ij.

(3) A pooling layer is set after each nonlinearly activated convolutional layer; max pooling with a stride of 2 is chosen here.

(4) Two 1×1 convolutional layers are set after the third pooling layer; these layers preserve the spatial structure of the feature maps output by the previous layers.

(5) A deconvolution layer is set in the network, mapping the feature maps in the network back to the size of the original image. Deconvolution is the reverse of the data flow through a convolutional layer; it can be viewed as an interpolation process, or as magnification by a sparse filter. The resulting output is:

f′_ij = Σ_{α,β=0}^{1} |1 − α − {i/f}| · |1 − β − {j/f}| · f_(⌊i/f⌋+α, ⌊j/f⌋+β)

where i and j are both zero-based, f is the upsampling factor, {·} denotes the fractional part, and f′_ij is the output feature map.

(6) The loss function of the network is set and defined as the sum over all spatial dimensions, in the form:

l(x; Θ) = Σ_ij l′(x_ij; Θ)

where l(x; Θ) is the total loss, l′(x_ij; Θ) is the loss at each point, and Θ denotes the parameters of the whole network. The loss here is the softmax loss, and the output is the loss over the 4 categories:

l′(x_ij; Θ) = −log( exp(z_c) / Σ_{k=1}^{K} exp(z_k) )

where z_k is the score of category k at position (i, j) and c is the labeled category.

The value of K here is 4, i.e. the four parts lung, trachea, skin, and background; the loss function for each category is equivalent to:

l′(x_ij; Θ) = −Σ_{k=1}^{4} t_k · log p_k,  with  p_k = exp(z_k) / Σ_{m=1}^{4} exp(z_m)

where t_k is 1 for the labeled category and 0 otherwise.

This loss is the cross-entropy loss.

(7) The defined network is trained on a deep learning platform with the prepared data; after a period of training, the trained model is saved.

It should be noted that the present invention is not limited to the specific technical solutions described in the above embodiments; all technical solutions formed by equivalent replacement fall within the protection scope of the present invention.

Claims (4)

1. An end-to-end chest CT image segmentation method based on a fully convolutional neural network, characterized in that the method comprises the following steps:
Step 1: clinically scanning k groups of chest CT images, and separating each group into individual slices to serve as training samples;
Step 2: having professional medical staff manually segment each training sample and test sample, dividing the image into four parts: lung, trachea, skin, and background;
Step 3: constructing an end-to-end fully convolutional neural network and training it on the labeled chest CT training data to obtain a trained parameter model;
Step 4: separating each slice of a scanned CT image, inputting the slices into the trained model one by one, and obtaining the segmented outputs;
Step 5: combining the outputs to obtain the fully segmented chest CT image model.

2. The end-to-end chest CT image segmentation method based on a fully convolutional neural network according to claim 1, characterized in that the training in step 3 specifically comprises the following steps:
A. setting three convolutional layers at the front of the model, the relationship between the input and output of a convolutional layer being:
y_ij = f_ks({x_(s·i+δi, s·j+δj)}, 0 ≤ δi, δj ≤ k)
where x_ij is the data vector at position (i, j) in a given layer, y_ij is the data vector at the corresponding position in the next layer, k is the kernel size, and s is the stride or subsampling factor;
B. setting a layer of nonlinear activation units after each convolutional layer, the activation function chosen being ReLU, the nonlinearity being computed in place so as not to increase the storage space of the model, with the input-output relation:
y_ij = max(0, x_ij)
where x_ij is the data at the same position as the output y_ij;
C. setting a pooling layer after each nonlinearly activated convolutional layer;
D. setting a convolutional layer after the third pooling layer, this layer preserving the spatial structure of the feature maps output by the previous convolutional layers;
E. setting a deconvolution layer in the network, mapping the feature maps back to the size of the original image, deconvolution being the reverse of the data flow through a convolutional layer and viewable either as an interpolation process or as magnification by a sparse filter, producing an output f′_ij, where i and j are both zero-based and f′_ij is the output feature map;
F. setting the loss function of the network, defined as the sum over all spatial dimensions, where l(x; Θ) is the total loss, l′(x_ij; Θ) is the loss at each point, and Θ denotes the parameters of the whole network, the loss being the softmax loss over k = 4 categories, namely lung, trachea, skin, and background, the loss for each category being equivalent to the cross-entropy loss;
G. training the defined network on a deep learning platform with the prepared data and, after a period of training, saving the trained model.

3. The end-to-end chest CT image segmentation method based on a fully convolutional neural network according to claim 2, characterized in that max pooling with a stride of 2 is chosen in step C.

4. The end-to-end chest CT image segmentation method based on a fully convolutional neural network according to claim 2, characterized in that the convolutional layers in step D are two 1×1 convolutional layers.
CN201710211615.2A 2017-04-01 2017-04-01 End-to-end chest CT image segmentation method based on a fully convolutional neural network Pending CN107203989A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710211615.2A CN107203989A (en) 2017-04-01 End-to-end chest CT image segmentation method based on a fully convolutional neural network


Publications (1)

Publication Number Publication Date
CN107203989A true CN107203989A (en) 2017-09-26

Family

ID=59905646


Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107977969A (en) * 2017-12-11 2018-05-01 北京数字精准医疗科技有限公司 A kind of dividing method, device and the storage medium of endoscope fluorescence image
CN108806793A (en) * 2018-04-17 2018-11-13 平安科技(深圳)有限公司 Lesion monitoring method, device, computer equipment and storage medium

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106372390A (en) * 2016-08-25 2017-02-01 姹ゅ钩 Deep convolutional neural network-based self-service health cloud service system for lung cancer prevention

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
EVAN SHELHAMER et al.: "Fully Convolutional Networks for Semantic Segmentation", IEEE Transactions on Pattern Analysis and Machine Intelligence *
KAI-PENG MAO et al.: "Automatic Segmentation of Thorax CT Images with Fully Convolutional Networks", Current Trends in Computer Science and Mechanical Automation *
KRIZHEVSKY A et al.: "ImageNet Classification with Deep Convolutional Neural Networks", International Conference on Neural Information Processing Systems, Curran Associates Inc., 2012 *

Cited By (42)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109584252A (en) * 2017-11-03 2019-04-05 杭州依图医疗技术有限公司 Lung lobe segment segmentation method and device for CT images based on deep learning
CN109584252B (en) * 2017-11-03 2020-08-14 杭州依图医疗技术有限公司 Lung lobe segment segmentation method and device of CT image based on deep learning
CN109886992A (en) * 2017-12-06 2019-06-14 深圳博脑医疗科技有限公司 A fully convolutional network model training method for segmenting abnormal signal regions in MRI images
CN107977969B (en) * 2017-12-11 2020-07-21 北京数字精准医疗科技有限公司 Endoscope fluorescence image segmentation method, device and storage medium
CN107977969A (en) * 2017-12-11 2018-05-01 北京数字精准医疗科技有限公司 Segmentation method, device and storage medium for endoscopic fluorescence images
WO2019155306A1 (en) * 2018-02-07 2019-08-15 International Business Machines Corporation A system for segmentation of anatomical structures in cardiac cta using fully convolutional neural networks
CN111557020B (en) * 2018-02-07 2023-12-05 国际商业机器公司 Cardiac CTA anatomical structure segmentation system based on fully convolutional neural network
JP7102531B2 (en) 2018-02-07 2022-07-19 Methods, Computer Programs, Computer-Readable Storage Media, and Devices for the Segmentation of Anatomical Structures in Computed Tomography Angiography
JP2021513697A (en) * 2018-02-07 2021-05-27 International Business Machines Corporation A system for anatomical segmentation in cardiac CTA using a fully convolutional neural network
GB2583682B (en) * 2018-02-07 2021-03-10 Ibm A system for segmentation of anatomical structures in cardiac CTA using fully convolutional neural networks
US10896508B2 (en) 2018-02-07 2021-01-19 International Business Machines Corporation System for segmentation of anatomical structures in cardiac CTA using fully convolutional neural networks
GB2583682A (en) * 2018-02-07 2020-11-04 Ibm A system for segmentation of anatomical structures in cardiac CTA using fully convolutional neural networks
CN111557020A (en) * 2018-02-07 2020-08-18 国际商业机器公司 A fully convolutional neural network-based segmentation system for cardiac CTA anatomy
CN108806793A (en) * 2018-04-17 2018-11-13 平安科技(深圳)有限公司 Lesion monitoring method, device, computer equipment and storage medium
WO2019200753A1 (en) * 2018-04-17 2019-10-24 平安科技(深圳)有限公司 Lesion detection method, device, computer apparatus and storage medium
CN110517757A (en) * 2018-05-21 2019-11-29 美国西门子医疗系统股份有限公司 Tuned medical ultrasound imaging
CN110517757B (en) * 2018-05-21 2023-08-04 美国西门子医疗系统股份有限公司 Tuned Medical Ultrasound Imaging
CN109065165B (en) * 2018-07-25 2021-08-17 东北大学 A prediction method for chronic obstructive pulmonary disease based on reconstructed airway tree images
CN109065165A (en) * 2018-07-25 2018-12-21 东北大学 A chronic obstructive pulmonary disease prediction method based on reconstructed airway tree images
CN109191564B (en) * 2018-07-27 2020-09-04 中国科学院自动化研究所 3D reconstruction method of excited fluorescence tomography based on deep learning
CN109191564A (en) * 2018-07-27 2019-01-11 中国科学院自动化研究所 Three-dimensional reconstruction method for excited fluorescence tomography based on deep learning
CN109118501A (en) * 2018-08-03 2019-01-01 上海电气集团股份有限公司 Image processing method and system
CN109377500A (en) * 2018-09-18 2019-02-22 平安科技(深圳)有限公司 Neural network-based image segmentation method and terminal device
CN109377500B (en) * 2018-09-18 2023-07-25 平安科技(深圳)有限公司 Image segmentation method based on neural network and terminal equipment
US10945695B2 (en) 2018-12-21 2021-03-16 Canon Medical Systems Corporation Apparatus and method for dual-energy computed tomography (CT) image reconstruction using sparse kVp-switching and deep learning
US11436720B2 (en) 2018-12-28 2022-09-06 Shanghai United Imaging Intelligence Co., Ltd. Systems and methods for generating image metric
CN109598734A (en) * 2018-12-29 2019-04-09 上海联影智能医疗科技有限公司 Method and system for heart and lung lobe segmentation
CN109903229A (en) * 2019-03-04 2019-06-18 科新(杭州)能源环境科技有限公司 A μ-CT image reconstruction method based on convolutional neural networks
CN110210483A (en) * 2019-06-13 2019-09-06 上海鹰瞳医疗科技有限公司 Medical image lesion region segmentation method, model training method and device
CN110246126A (en) * 2019-06-14 2019-09-17 吉林大学第一医院 A method for extracting the terminal bronchial tree from lung CT images
CN112288638A (en) * 2019-07-27 2021-01-29 华为技术有限公司 Image enhancement device and system
CN110458852B (en) * 2019-08-13 2022-10-21 四川大学 Lung tissue segmentation method, device and equipment based on capsule network and storage medium
CN110458852A (en) * 2019-08-13 2019-11-15 四川大学 Lung tissue segmentation method, device, equipment and storage medium based on capsule network
CN110728178B (en) * 2019-09-02 2022-03-15 武汉大学 Event camera lane line extraction method based on deep learning
CN110728178A (en) * 2019-09-02 2020-01-24 武汉大学 A deep learning-based method for extracting lane lines from event cameras
CN110634144B (en) * 2019-09-23 2022-08-02 武汉联影医疗科技有限公司 Foramen ovale positioning method, device and storage medium
CN110634144A (en) * 2019-09-23 2019-12-31 武汉联影医疗科技有限公司 A foramen ovale positioning method, device and storage medium
CN110910371A (en) * 2019-11-22 2020-03-24 北京理工大学 Liver tumor automatic classification method and device based on physiological indexes and image fusion
CN110895815A (en) * 2019-12-02 2020-03-20 西南科技大学 A chest X-ray pneumothorax segmentation method based on deep learning
CN114730330A (en) * 2020-02-11 2022-07-08 三星电子株式会社 Electronic device for performing deconvolution calculation and control method thereof
CN114730330B (en) * 2020-02-11 2024-11-08 三星电子株式会社 Electronic device for performing deconvolution calculation and control method thereof
CN112232433A (en) * 2020-10-27 2021-01-15 河北工业大学 A dual-channel network-based classification method for benign and malignant pulmonary nodules

Similar Documents

Publication Publication Date Title
CN107203989A (en) End-to-end chest CT image dividing method based on full convolutional neural networks
Yun et al. Improvement of fully automated airway segmentation on volumetric computed tomographic images using a 2.5 dimensional convolutional neural net
CN113129309B (en) Medical image semi-supervised segmentation system based on object context consistency constraint
CN111932559B (en) New coronary pneumonia lung focus region segmentation system based on deep learning
CN111709953B (en) Output method and device in lung lobe segment segmentation of CT (computed tomography) image
CN108257134A (en) Automatic segmentation method and system for nasopharyngeal carcinoma lesions based on deep learning
WO2021115312A1 (en) Method for automatically delineating contour lines of normal organs in medical images
CN112006772B (en) Method and system for establishing complete human body external respiratory tract
CN109215033A (en) Method and system for image segmentation
CN112767407B (en) CT image kidney tumor segmentation method based on cascade gating 3DUnet model
EP3971830B1 (en) Pneumonia sign segmentation method and apparatus, medium and electronic device
WO2023040164A1 (en) Method and apparatus for training pet/ct-based lung adenocarcinoma and squamous carcinoma diagnosis model
CN108492300B (en) Pulmonary Vascular Tree Segmentation Method Combined with Tubular Structure Enhancement and Energy Function
CN110232691A (en) A segmentation method for multi-modal CT images
CN111986216B (en) An improved interactive segmentation algorithm for RSG liver CT images based on neural network
CN113327225A (en) Method for providing airway information
DE102020211945A1 (en) Method and arrangement for the automatic localization of organ segments in a three-dimensional image
CN110503626A (en) CT image modality alignment method based on spatial-semantic saliency constraints
CN110400297A (en) A deep learning-based prediction method for lung cancer staging
CN108537779A (en) Method for vertebra segmentation and centroid detection based on clustering
CN117197594B (en) Deep neural network-based heart shunt classification system
CN114119950A (en) Artificial intelligence-based oral cavity curved surface fault layer dental image segmentation method
CN116563533A (en) Medical image segmentation method and system based on prior information of target position
DE102005036412A1 (en) Improved GGN segmentation in lung images for accuracy and consistency
CN112686897A (en) Weak supervision-based gastrointestinal lymph node pixel labeling method assisted by long and short axes

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170926