
CN112116625B - Automatic cardiac CT image segmentation method, device and medium based on contradiction labeling method - Google Patents


Info

Publication number
CN112116625B
CN112116625B (application CN202010862313.3A)
Authority
CN
China
Prior art keywords
annotation
contradiction
labeling
contradictory
cardiac
Prior art date
Legal status
Active
Application number
CN202010862313.3A
Other languages
Chinese (zh)
Other versions
CN112116625A (en)
Inventor
陈泓昊
周昌昊
杜文亮
田小林
Current Assignee
Macau University of Science and Technology
Original Assignee
Macau University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Macau University of Science and Technology filed Critical Macau University of Science and Technology
Priority to CN202010862313.3A priority Critical patent/CN112116625B/en
Publication of CN112116625A publication Critical patent/CN112116625A/en
Application granted granted Critical
Publication of CN112116625B publication Critical patent/CN112116625B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G06T7/194 — Image analysis; Segmentation; Edge detection involving foreground-background segmentation
    • G06N3/045 — Neural networks; Architecture; Combinations of networks
    • G06N3/08 — Neural networks; Learning methods
    • G06T5/40 — Image enhancement or restoration using histogram techniques
    • G06T7/136 — Segmentation; Edge detection involving thresholding
    • G06T2207/10081 — Image acquisition modality; Computed x-ray tomography [CT]
    • G06T2207/20081 — Special algorithmic details; Training; Learning
    • G06T2207/20104 — Interactive definition of region of interest [ROI]
    • G06T2207/30048 — Subject of image; Heart; Cardiac
    • G06T2207/30204 — Subject of image; Marker

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention provides a method, device and medium for automatic segmentation of cardiac CT images based on a contradictory labeling method, comprising the following steps: perform low-precision contradictory annotation on cardiac CT images to obtain a first contradictory annotation set; take similar annotations from contradictory annotation maps of adjacent frames across a plurality of cardiac CT images as a second contradictory annotation set; warm up a U-Net-based deep neural network with the precisely annotated portion of the data, then mix the first and second contradictory annotation sets into a mixed contradictory annotation set; train a fully convolutional neural network on the mixed set to segment the foreground and background of the cardiac CT images until the network converges; finally, compute the grayscale histograms of the foreground and background segmented by the U-Net network, take the lowest valley between the two histogram peaks as the segmentation threshold, and set and segment the region of interest of the cardiac CT images. The beneficial effects of the invention are a faster network training process and a relatively low time cost for annotation, which reduces overall cost.

Description

Method, Device and Medium for Automatic Segmentation of Cardiac CT Images Based on a Contradictory Labeling Method

Technical Field

The present invention relates to the field of computing, and in particular to a method, device and medium for automatically segmenting cardiac CT images based on a contradictory labeling method.

Background Art

According to the World Health Organization, tens of millions of people die from cardiovascular disease (heart disease) each year, an estimated 31% of all deaths worldwide. Cardiac computed tomography (CT) images are widely used in radiotherapy planning, including for heart disease, and automatic segmentation of cardiac CT images has become a popular research topic in recent years. Complete cardiac segmentation of medical CT images refers to tissue segmentation over the entire sequence of cardiac CT images; the segmentation results are of great value in assisting doctors in diagnosing cardiovascular disease and in guiding surgery.

Because cardiac samples are complex and noise interference is severe, the boundaries of cardiac tissue structures in CT images are often blurred. The two main techniques currently known for automatic segmentation of cardiac CT images are active contour models, widely used to segment lung CT images, and machine learning methods such as convolutional neural networks (CNNs). Some studies have reported segmenting a single atrium or ventricle from a cardiac image, but segmentation results for entire cardiac CT sequences are rarely reported. At present, most full-sequence cardiac CT segmentation is completed semi-automatically through human-computer interaction under the guidance of specialist physicians, and annotating a mature, complete and convincing public dataset is very costly.

Because the diversity and complexity of cardiac samples make annotating the related medical images extremely difficult, the annotation and accurate segmentation of tissue structures in cardiac CT images remain a major challenge.

Summary of the Invention

The purpose of the present invention is to solve at least one of the technical problems in the prior art by providing a method, device and medium for automatic segmentation of cardiac CT images based on a contradictory labeling method, which is simple to implement and reduces cost while maintaining annotation accuracy.

The technical solution of the present invention includes a method for automatic segmentation of cardiac CT images based on a contradictory labeling method, characterized by: S100, performing low-precision contradictory annotation on a cardiac CT image to obtain contradictory annotation maps carrying two similar sets of annotation information for the same image, and taking the contradictory annotations within the same frame as a first contradictory annotation set; S200, taking the similar annotations from contradictory annotation maps of adjacent frames in a plurality of cardiac CT images as a second contradictory annotation set; S300, warming up a U-Net-based deep neural network with the precisely annotated portion of the data, and mixing the first and second contradictory annotation sets into a mixed contradictory annotation set; S400, training a fully convolutional neural network on the mixed contradictory annotation set to segment the foreground and background of the cardiac CT image until the network converges; S500, computing the grayscale histograms of the foreground and background of the cardiac CT series images segmented by the U-Net network, taking the lowest valley between the two histogram peaks as the segmentation threshold, and setting and segmenting the region of interest of the cardiac CT image.

In the above method, the contradictory annotations in the first contradictory annotation set amount to 5% of all annotation points in the same frame of the plurality of cardiac CT images.

In the above method, the similar annotation information in the second contradictory annotation set amounts to 30% of all annotation points of adjacent frames in the plurality of cardiac CT images.

In the above method, low-precision contradictory annotation is defined as contradictory annotation performed with an annotation set whose number of annotation points is lower than that of a standard annotation set, or with an annotation set that has not been fully verified by experts in the relevant field.

In the above method, pre-training includes high-generalization segmentation training with the mixed contradictory annotation set.

In the above method, the fully convolutional neural network model is a U-Net fully convolutional neural network model.

In the above method, S400 includes: training the U-Net fully convolutional neural network model on the mixed contradictory annotation set with a decaying learning rate, where each decay multiplies the learning rate by 10%.

In the above method, S500 includes: computing statistics of the foreground and background discrete data of the CT cardiac series images segmented by the U-Net fully convolutional neural network model, taking the grayscale histogram (hist) of the model output as the statistic to obtain two corresponding peaks, setting the interest threshold at the valley between the two peaks, and completing automatic segmentation of the region of interest of the cardiac CT image, where the valley is 10% of the first peak.

The technical solution of the present invention also includes a device for automatic segmentation of cardiac CT images based on the contradictory labeling method, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor implements any of the method steps described above when executing the computer program.

The technical solution of the present invention also includes a computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements any of the method steps described above.

The beneficial effects of the present invention are: segmentation of the target region of interest over full CT cardiac series images that meets the requirements; and a faster overall training process when the relevant neural network is trained with the contradictory annotation method. This is because the contradictory annotation method requires neither a large annotation set nor very high annotation precision, so the time cost of preparing the annotation dataset is relatively low; that is, cost is reduced while the segmentation is completed.

Brief Description of the Drawings

The present invention is further described below with reference to the accompanying drawings and embodiments.

FIG. 1 is an overall flow chart according to an embodiment of the present invention.

FIG. 2 is a schematic diagram of contradictory annotation sets according to an embodiment of the present invention.

FIG. 3 is a schematic diagram before and after contradictory annotation according to an embodiment of the present invention.

FIG. 4 shows example CT tomographic images according to an embodiment of the present invention.

FIG. 5 shows the results of region-of-interest segmentation according to an embodiment of the present invention.

FIG. 6 is a diagram of the device and medium according to an embodiment of the present invention.

Detailed Description

This section describes specific embodiments of the present invention in detail; preferred embodiments are shown in the accompanying drawings, whose purpose is to supplement the textual description graphically so that each technical feature and the overall technical solution can be understood intuitively, but they are not to be construed as limiting the scope of protection of the present invention.

In the description of the present invention, "several" means one or more and "a plurality" means two or more; "greater than", "less than", "exceeding", etc. are understood as excluding the stated number, while "above", "below", "within", etc. are understood as including it.

In the description of the present invention, the consecutive numbering of method steps is for ease of review and understanding; given the overall technical solution and the logical relationships between the steps, adjusting the order in which the steps are carried out does not affect the technical effect achieved.

FIG. 1 shows an overall flow chart according to an embodiment of the present invention. The process includes: S100, performing low-precision contradictory annotation on a cardiac CT image to obtain contradictory annotation maps carrying two similar sets of annotation information for the same image, and taking the contradictory annotations within the same frame as a first contradictory annotation set; S200, taking the similar annotations from contradictory annotation maps of adjacent frames in a plurality of cardiac CT images as a second contradictory annotation set; S300, warming up a U-Net-based deep neural network with the precisely annotated portion of the data, and mixing the first and second contradictory annotation sets into a mixed contradictory annotation set; S400, training a fully convolutional neural network on the mixed contradictory annotation set to segment the foreground and background of the cardiac CT image until the network converges; S500, computing the grayscale histograms of the foreground and background of the cardiac CT series images segmented by the U-Net network, taking the lowest valley between the two histogram peaks as the segmentation threshold, and setting and segmenting the region of interest of the cardiac CT image. Here, annotation of cardiac CT images generally refers to cardiac CT series images.
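The 5% and 30% contradiction levels referred to in S100 and S200 can be quantified with a simple disagreement measure between two binary annotation masks. The following NumPy helper is an illustrative sketch, not part of the patented method; the name `disagreement_fraction` and the use of flat boolean masks are assumptions for the example:

```python
import numpy as np

def disagreement_fraction(mask_a, mask_b):
    """Fraction of annotation points on which two binary annotation masks
    disagree -- a simple way to check that an annotation pair sits at the
    'contradiction' level the text describes (~5% within the same frame,
    ~30% across adjacent frames)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    if a.shape != b.shape:
        raise ValueError("annotation masks must have the same shape")
    return float(np.mean(a != b))

# Example: two annotations of the same frame differing on 5 of 100 points.
base = np.zeros(100, dtype=bool)
variant = base.copy()
variant[:5] = True
print(disagreement_fraction(base, variant))  # 0.05
```

A pair would be accepted into the first contradictory set when this value is near 0.05, and into the second set (across adjacent frames) when it is near 0.30.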

With respect to the technical solution of FIG. 1, the present invention further proposes the following implementation:

1. Prepare contradictory annotation maps of two similar sets of annotation information for the same image (a single frame) from the CT cardiac series, containing about 5% contradictory annotations, as contradictory annotation set 1.

2. Prepare similar annotation information from different images in the CT cardiac series, containing about 30% contradictory annotations (the annotation information of similar images differs considerably), as contradictory annotation set 2.

3. Mix the two contradictory annotation sets prepared above. First warm up with the precisely annotated images at a very small learning rate, then perform full training with the complete dataset containing contradictions.
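The mixing in step 3 amounts to pooling and shuffling the two sets before training. A minimal sketch, assuming each item is an (image, mask) pair and using a fixed shuffle seed for reproducibility (both assumptions for the example, not details from the patent):

```python
import random

def mixed_contradiction_set(set1, set2, seed=0):
    """Combine the intra-image (set1) and inter-frame (set2) contradictory
    annotation sets into one shuffled training pool, as step S300/step 3
    describes.  Items are treated as opaque (image, mask) pairs."""
    pool = list(set1) + list(set2)
    random.Random(seed).shuffle(pool)
    return pool

# Example: one pair from each set merges into a pool of two samples.
pool = mixed_contradiction_set([("frame_01", "mask_a")],
                               [("frame_02", "mask_b")])
print(len(pool))  # 2
```

The warm-up pass would then iterate only over the precisely annotated subset, and full training over this mixed pool.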

4. Train the U-Net network with the prepared contradictory annotation dataset to segment the foreground and background of the CT cardiac series images until the network converges (the loss is essentially unchanged). The entire U-Net training process takes about 600 epochs, with the learning rate decayed in three stages to 0.1, 0.01 and 0.001, using an SGD optimizer with momentum. Training terminates once the foreground/background segmentation accuracy on the CT cardiac series images exceeds 98%.
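The three-stage decay in step 4 can be expressed as a step schedule. The text says only that the rate is decayed three times to 0.1, 0.01 and 0.001 over about 600 epochs; splitting the run into three equal phases of 200 epochs is an assumption made for this illustrative sketch:

```python
def decayed_lr(epoch, base_lr=0.1, total_epochs=600, factor=0.1):
    """Step-decay learning-rate schedule: the run is split into three equal
    phases, and the rate is multiplied by `factor` at each phase boundary,
    giving 0.1 -> 0.01 -> 0.001 as described in the text."""
    phase = min(epoch * 3 // total_epochs, 2)  # phase index 0, 1 or 2
    return base_lr * factor ** phase

# Example: the rate at the start of each of the three phases.
print(decayed_lr(0), decayed_lr(200), decayed_lr(400))
```

In a PyTorch-style loop this value would be assigned to the SGD-with-momentum optimizer's learning rate at the start of each epoch.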

5. Since the contradictory labeling method trains on annotation data containing contradictions, it generalizes well, so a validation set is not required during training. Setting up a validation set does not conflict with the method, but if the validation set contains a large amount of contradictory data, it will hinder the evaluation of the true accuracy.

6. Compute statistics of the foreground and background separation of the CT cardiac series images segmented by the U-Net network. Experiments show that taking the histogram (hist) of the model output as the statistic yields two peaks, with the valley between them at about 10% of the first peak. Setting the threshold at this valley completes the automatic segmentation of the region of interest of the cardiac CT image.
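The valley-between-two-peaks thresholding of step 6 can be sketched generically with NumPy. This is an illustrative implementation of bimodal histogram thresholding, not the patented code; the peak-suppression window of ±10 bins is an assumption of the sketch:

```python
import numpy as np

def valley_threshold(pixels, bins=256, value_range=(0, 256)):
    """Estimate a foreground/background threshold as the lowest valley
    between the two dominant peaks of the grayscale histogram, as step 6
    (and S500) describe."""
    hist, edges = np.histogram(pixels, bins=bins, range=value_range)
    # First peak: the most populated bin.
    p1 = int(np.argmax(hist))
    # Second peak: the most populated bin outside the first peak's
    # neighbourhood (window width is an assumption of this sketch).
    masked = hist.copy()
    masked[max(0, p1 - 10):min(bins, p1 + 10)] = 0
    p2 = int(np.argmax(masked))
    # Valley: the lowest bin strictly between the two peaks.
    left, right = sorted((p1, p2))
    valley = left + int(np.argmin(hist[left:right + 1]))
    return edges[valley]

# Example: synthetic bimodal gray values with modes near 60 and 200.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(60, 10, 5000),
                       rng.normal(200, 10, 5000)]).clip(0, 255)
print(valley_threshold(data))  # a value between the two modes
```

Pixels above the returned threshold would be treated as foreground (the region of interest) and the rest as background.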

FIG. 2 is a schematic diagram of contradictory annotation sets according to an embodiment of the present invention; from left to right: 2a (cardiac CT image), 2b (annotation set 1), 2c (annotation set 2).

The contradictory annotation method aims to overcome the difficulty of annotating large numbers of high-precision cardiac CT sequence images. Few public datasets for cardiac CT segmentation exist on the Internet, so there is a shortage of annotated data for developing segmentation algorithms. Moreover, preparing annotated datasets for cardiac CT segmentation requires specialist knowledge of cardiac CT images to produce high-precision annotations, and the data to be annotated are not only voluminous but also diverse and complex, taking a long time to annotate accurately; annotating cardiac CT images is therefore very difficult. The contradictory annotation method is designed to tolerate lower annotation precision, to allow different annotation results to be inconsistent with each other, and to train the network with a small amount of data. This not only reduces annotation difficulty and cost but also avoids the overfitting that small training datasets usually cause. In other words, the design idea of the contradictory annotation method is to train the neural network on small datasets that deliberately contain contradictions. The contradictory data used fall into two main categories: contradictory data for the same image, and contradictory data for different images:

The same image is given two different, contradictory but similar annotation sets, as shown in FIG. 2. Images (2b) and (2c) in FIG. 2 are both segmentation annotation sets for the same image (2a); they resemble each other but differ slightly. Close inspection shows that the two annotation sets in (2b) and (2c) are basically consistent yet exhibit a few differences, such as the protrusion in the lower-left corner and the shape of the central region, so the two point sets are not identical. These differences are the contradictions between the two annotation sets; of course, the differences will not and should not be large. The purpose of this annotation scheme is to force the neural network to focus on the commonalities between images during learning and, through the network's automatic learning, to combine the two annotation results to find edge information better than either annotation alone (for example, smoother edges). Using two different, contradictory but similar annotation sets also prevents over-training and the overfitting caused by small datasets and limited annotation precision.

FIG. 3 is a schematic diagram before and after contradictory annotation according to an embodiment of the present invention; each row, from left to right, shows 3a (cardiac CT image) and 3b (annotated image). Different or adjacent images carry different segmentation annotation sets. Cardiac CT sequence images are highly similar between adjacent slices, and similar slices in different cardiac sequences also share a degree of similarity. Giving similar images different annotation sets further forces the neural network to learn the commonalities between images, and the network's automatic learning corrects segmentation errors and prevents overfitting. This of course causes some loss of segmentation precision, but the experimental results show that, given that the annotation sets are themselves of limited precision, this loss is acceptable; in some details it can even be used to correct the segmentation annotation sets. Another benefit of choosing similar rather than identical images is that it enlarges the usable dataset; annotating only identical images differently could lead to overfitting in actual training.

The two types of contradictory annotation data above reflect the essence of the contradictory annotation method. The design idea is that, by annotating only a relatively small amount of low-precision data, a model with good generalization ability can be trained quickly. To verify the effectiveness of the contradictory annotation method, we constructed the two types of contradictory annotation data described above and trained a model experimentally. To test the method in isolation, the training process included no additional data augmentation and did not use early stopping or other techniques; the original U-Net model was simply trained until it reached its convergence limit (the loss essentially unchanged). This verified the generalization ability of a model trained on the two types of contradictory data. The experimental results show that, with only a small number of low-precision annotated images, a neural network trained with the contradictory annotation method can automatically generate the foreground of a medical image (i.e., the main region of interest to the doctor).

Whether an annotation set is low-precision or high-precision is inherently relative; the precision of an annotation set has no unified definition, and its range varies with the objects being processed and the purpose of the segmentation. Any dataset "not fully reviewed by experts in the relevant field and not endorsed by a relevant institution or expert" should be considered low-precision. The low-precision annotation sets used in this automatic segmentation method for regions of interest in cardiac CT images based on the contradictory annotation method are sets that have not been fully reviewed by domain experts, contain relatively few annotation points, and in which two manual annotations of the same image, or annotations of different images, are visibly different to the naked eye. These two types of low-precision annotation sets are the contradictory annotation datasets used to train the network.

Based on the contradictory annotation method described above, a new cardiac CT image segmentation method is designed. It divides a cardiac CT image cleanly into foreground (the ROI) and background: the contradictory annotation method is used to build the contradictory annotation sets and complete network training, after which the region of interest is extracted by thresholding. The segmentation threshold is 10%, which, based on the statistics of the foreground and background separation, basically satisfies the segmentation requirements.

Analysis and statistics of the model output show that the trained cardiac CT segmentation model has the following characteristics. Because the accuracy of the original annotations is limited, training somewhat degrades fine segmentation detail. However, thanks to the enhanced generalization ability, the trained model separates image foreground from background very well, so it can produce a complete segmentation of the foreground/region of interest. A further key point is that contradictory training can serve as a pre-training strategy, improving generalization when the model is subsequently fitted to the limit on precisely annotated data.

Fig. 4 shows example CT tomographic images according to an embodiment of the present invention. From left to right the columns are 4a (the tomogram at slice 50), 4b (the tomogram at slice 140), and 4c (the tomogram at slice 270).

Referring to Fig. 4, some images of the cardiac CT dataset were annotated manually and used as the training set for the neural network; high segmentation accuracy was ultimately achieved.

Test dataset. This section briefly describes the cardiac CT data used in the experiments and the specific annotation procedure of the contradictory annotation method (the principle for selecting contradictions). The cardiac CT data comprise roughly 300 tomographic slices per heart, of which about 220 slices contain contrast agent. The images to be segmented are those containing contrast agent, as shown in Fig. 4.

The white regions are contrast-agent regions. In this experiment we ultimately obtained 420 valid annotated cardiac CT images. These 420 images include contradictory annotation maps formed from two similar annotations of the same image, in which about 5% of the annotations are contradictory. The second type consists of similar annotations from different images, in which about 30% of the annotations are contradictory (the annotations of similar images differ considerably).
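A rough sketch of how the two kinds of contradictory annotations described above could be merged into one training set. The data structures, tags, and function name are illustrative assumptions for this sketch, not the patent's actual data format:

```python
import random

def build_mixed_set(same_image_pairs, adjacent_image_pairs, seed=0):
    """Merge the two kinds of contradictory annotations into one shuffled
    training list.  `same_image_pairs` holds differing manual annotations
    of the same slice (~5% contradictory points in the experiment);
    `adjacent_image_pairs` holds annotations of neighbouring slices
    (~30% contradictory).  Each entry is an (image, mask) pair."""
    mixed = [(img, mask, "same") for img, mask in same_image_pairs]
    mixed += [(img, mask, "adjacent") for img, mask in adjacent_image_pairs]
    random.Random(seed).shuffle(mixed)  # fixed seed for reproducibility
    return mixed
```

The tag in each tuple is only there so the two sources can still be told apart after shuffling, e.g. for per-source statistics.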

Results and analysis. Fig. 5 shows the region-of-interest segmentation results according to an embodiment of the present invention; from left to right the columns are 5a (cardiac CT image), 5b (training result of the contradictory annotation method), and 5c (region-of-interest segmentation result).

The original U-Net model was trained on the 420 annotated images, and data outside the annotation set were used as a test set to verify the training results. Because the contradictory annotation method trains on data that contain contradictions, no validation set was used during training, to prevent a validation set from correcting the training results; in this way the generalization effect of the contradictory annotation method is exhibited as fully as possible.

The U-Net model was trained for 600 epochs in total, with the learning rate decayed three times by 10% each time. The final training segmentation accuracy was 98.06%. Verification on the test set showed that the region of interest was extracted completely, demonstrating a good segmentation effect. The extraction and segmentation results for the region of interest are shown in Fig. 5.
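The training schedule described above (600 epochs, with the learning rate decayed three times by 10% each) could be expressed as a step schedule like this. The patent does not say when the decays occur, so the even spacing here is our assumption:

```python
def stepped_lr(base_lr, epoch, total_epochs=600, decays=3, factor=0.10):
    """Learning rate with `decays` multiplicative 10% reductions spread
    evenly over training.  After the n-th decay the rate is
    base_lr * (1 - factor)**n."""
    step = total_epochs // (decays + 1)   # 150-epoch stages for 600 epochs
    n = min(epoch // step, decays)        # number of decays applied so far
    return base_lr * (1.0 - factor) ** n
```

With `base_lr = 1e-3` this gives 1e-3 for epochs 0-149, then 9e-4, 8.1e-4, and finally about 7.29e-4 for the last stage.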

The training dataset was annotated by ourselves. Because it is a low-precision dataset, a quantitative comparative analysis of the segmentation results is not yet possible, nor can the contradictory annotation method yet be applied within other existing segmentation methods. Since the dataset annotated with the contradictory annotation method contains some contradictory segmentation results, the upper bound on the final training accuracy cannot reach 100%. To satisfy the law of non-contradiction, different annotations of the same cardiac CT image cannot both be judged correct at the same time; the contradictory property is thus used to stimulate the uninterpretable automatic-learning characteristics of the deep neural network, yielding good segmentation results.

Fig. 6 shows the device and medium according to an embodiment of the present invention. The device comprises a memory 100 and a processor 200. The processor 200 stores a computer program that executes the following: perform low-precision contradictory annotation on cardiac CT images, taking the contradictory annotations contained in the contradictory annotation maps as a first contradictory annotation set; take the similar annotations contained in the contradictory annotation maps as a second contradictory annotation set; mix the first and second contradictory annotation sets to obtain a mixed contradictory annotation set and pre-train on it; train a fully convolutional neural network with the mixed contradictory annotation set to segment the foreground and background of the cardiac CT images until the network converges; compute the foreground and background intensity statistics of the cardiac CT series segmented by the U-Net network, set the region of interest of the cardiac CT image, and segment it. The memory 100 is used to store data.

The embodiments of the present invention have been described in detail above with reference to the accompanying drawings, but the present invention is not limited to these embodiments; various changes may be made within the knowledge of a person of ordinary skill in the art without departing from the spirit of the present invention.

Claims (7)

1. An automatic cardiac CT image segmentation method based on a contradictory labeling method is characterized by comprising the following steps of:
S100, performing low-precision contradictory annotation on cardiac CT images to obtain contradictory annotation maps carrying two similar sets of annotation information for the same cardiac CT image, and taking the contradictory annotations included in the contradictory annotation maps of the same frame as a first contradictory annotation set, wherein the contradictory annotations in the first contradictory annotation set amount to 5% of all annotation points of the same frame across the plurality of cardiac CT images;
S200, taking the similar annotations included in the contradictory annotation maps of different but adjacent frames of the plurality of cardiac CT images as a second contradictory annotation set, wherein the similar annotation information of the second contradictory annotation set amounts to 30% of all annotation points of the different but adjacent images in the plurality of cardiac CT scans;
S300, warming up a U-Net-based deep neural network on the precisely annotated part of the data, and mixing the first contradictory annotation set and the second contradictory annotation set to obtain a mixed contradictory annotation set;
S400, training a fully convolutional neural network with the mixed contradictory annotation set and segmenting the foreground and background of the cardiac CT images until the fully convolutional neural network converges, the training comprising: performing high-generalization segmentation training with the mixed contradictory annotation set, and combining, through the automatic learning of the fully convolutional neural network, the annotation results of the first contradictory annotation set with those of the second contradictory annotation set to find edge information better than either annotation result;
S500, computing grayscale histograms of the foreground and background of the cardiac CT series segmented by the U-Net network, taking the lowest valley between the two peaks of the histogram as the segmentation threshold, and setting and segmenting the region of interest of the cardiac CT image.
2. The automatic cardiac CT image segmentation method based on the contradictory labeling method according to claim 1, wherein the low-precision contradictory annotation is defined as: performing contradictory annotation with an annotation set whose number of annotation points is lower than that of a standard annotation set, or performing contradictory annotation with an annotation set that has not been fully verified by experts in the relevant field.
3. The automatic cardiac CT image segmentation method based on the contradictory labeling method according to claim 1, wherein the full convolutional neural network model is set as a U-Net full convolutional neural network model.
4. The automatic segmentation method for cardiac CT images based on contradictory labeling according to claim 3, wherein S400 comprises:
training the U-Net fully convolutional neural network model on the mixed contradictory annotation set with descending learning, the learning-rate decay of the descending learning being 10%.
5. The automatic segmentation method for cardiac CT images based on contradictory labeling according to claim 3, wherein S500 comprises:
counting the foreground and background discrete data of the cardiac CT series with the U-Net fully convolutional neural network model, computing a grayscale histogram hist of the model output to obtain two corresponding peaks, setting an interest threshold at the valley between the two peaks, the valley lying at 10% of the first peak, and thereby completing automatic segmentation of the cardiac CT region of interest.
6. An automatic segmentation device for cardiac CT images based on contradictory labeling, the device comprising a memory, a processor and a computer program stored in said memory and executable on said processor, characterized in that the processor implements the method steps of any one of claims 1-5 when executing said computer program.
7. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the method steps of any of claims 1-5.
CN202010862313.3A 2020-08-25 2020-08-25 Automatic cardiac CT image segmentation method, device and medium based on contradiction labeling method Active CN112116625B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010862313.3A CN112116625B (en) 2020-08-25 2020-08-25 Automatic cardiac CT image segmentation method, device and medium based on contradiction labeling method

Publications (2)

Publication Number Publication Date
CN112116625A CN112116625A (en) 2020-12-22
CN112116625B 2024-10-15

Family

ID=73805243

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010862313.3A Active CN112116625B (en) 2020-08-25 2020-08-25 Automatic cardiac CT image segmentation method, device and medium based on contradiction labeling method

Country Status (1)

Country Link
CN (1) CN112116625B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109993758A (en) * 2019-04-23 2019-07-09 北京华力兴科技发展有限责任公司 Dividing method, segmenting device, computer equipment and storage medium
CN110910404A (en) * 2019-11-18 2020-03-24 西南交通大学 A breast ultrasound nodule segmentation method with anti-noise data
CN111539956A (en) * 2020-07-07 2020-08-14 南京安科医疗科技有限公司 Cerebral hemorrhage automatic detection method based on brain auxiliary image and electronic medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8660355B2 (en) * 2010-03-19 2014-02-25 Digimarc Corporation Methods and systems for determining image processing operations relevant to particular imagery
CN106599051B (en) * 2016-11-15 2020-02-07 北京航空航天大学 Automatic image annotation method based on generated image annotation library
CN110599499B (en) * 2019-08-22 2022-04-19 四川大学 MRI image heart structure segmentation method based on multipath convolutional neural network
CN111166362B (en) * 2019-12-31 2021-12-03 推想医疗科技股份有限公司 Medical image display method and device, storage medium and electronic equipment
CN111242956A (en) * 2020-01-09 2020-06-05 西北工业大学 A joint segmentation method based on U-Net ultrasound fetal heart rate and fetal lung deep learning
CN111354002A (en) * 2020-02-07 2020-06-30 天津大学 Kidney and kidney tumor segmentation method based on deep neural network
CN111460766B (en) * 2020-03-31 2023-05-26 云知声智能科技股份有限公司 Contradictory language block boundary recognition method and device

Also Published As

Publication number Publication date
CN112116625A (en) 2020-12-22

Similar Documents

Publication Publication Date Title
US11580646B2 (en) Medical image segmentation method based on U-Net
WO2022199143A1 (en) Medical image segmentation method based on u-shaped network
CN110232383B (en) Focus image recognition method and focus image recognition system based on deep learning model
EP4002271A1 (en) Image segmentation method and apparatus, and storage medium
CN108776969A (en) Breast ultrasound image lesion segmentation approach based on full convolutional network
CN107578416A (en) A fully automatic segmentation method of cardiac left ventricle with cascaded deep network from coarse to fine
US11315254B2 (en) Method and device for stratified image segmentation
CN111754453A (en) Pulmonary tuberculosis detection method, system and storage medium based on chest X-ray images
CN109215035B (en) Brain MRI hippocampus three-dimensional segmentation method based on deep learning
CN114649092B (en) Auxiliary diagnosis method and device based on semi-supervised learning and multi-scale feature fusion
Wang et al. A DCNN system based on an iterative method for automatic landmark detection in cephalometric X-ray images
CN111127487B (en) A Real-time Multi-Tissue Medical Image Segmentation Method
CN114612656A (en) MRI image segmentation method and system based on improved ResU-Net neural network
CN113035334B (en) Automatic delineation method and device for radiotherapy target area of nasal cavity NKT cell lymphoma
CN111192252A (en) An image segmentation result optimization method, device, intelligent terminal and storage medium
Du et al. Segmentation and visualization of left atrium through a unified deep learning framework
CN118470043A (en) Automatic medical image segmentation method based on cognitive deep learning
CN114359308A (en) Aortic dissection method based on edge response and nonlinear loss
CN115439478B (en) Method, system, equipment and medium for evaluating lung lobe perfusion intensity based on lung perfusion
CN117495879A (en) A medical image segmentation method based on MAT-UNet
CN116934721A (en) Kidney tumor segmentation method based on multi-scale feature extraction
CN112116625B (en) Automatic cardiac CT image segmentation method, device and medium based on contradiction labeling method
CN114418955A (en) Coronary plaque stenosis detection system and method
CN115619810B (en) A prostate segmentation method, system and equipment
CN113313722B (en) An interactive annotation method for tooth root images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant