
CN112562819A - Report generation method of ultrasonic multi-section data for congenital heart disease - Google Patents


Info

Publication number
CN112562819A
CN112562819A (Application CN202011454009.1A)
Authority
CN
China
Prior art keywords
ultrasound
heart disease
congenital heart
report generation
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011454009.1A
Other languages
Chinese (zh)
Other versions
CN112562819B (en)
Inventor
高跃
陈自强
魏宇轩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University
Priority to CN202011454009.1A
Publication of CN112562819A
Application granted
Publication of CN112562819B
Status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H15/00 ICT specially adapted for medical reports, e.g. generation or transmission thereof
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33 Querying
    • G06F16/3331 Query processing
    • G06F16/334 Query execution
    • G06F16/3344 Query execution using natural language analysis
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/60 Editing figures and text; Combining figures or text
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Epidemiology (AREA)
  • Medical Informatics (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)

Abstract



The invention discloses a report generation method for multi-section ultrasound data for congenital heart disease, characterized by comprising the following steps. Step 1: prepare the training data and perform preprocessing. Step 2: extract ultrasound image features using an ultrasound image feature extractor; the extractor uses a residual structure to propagate shallow texture and color information and consists of 4 convolution modules, each containing 2 convolutional layers, 2 batch normalization layers, and 2 activation functions. Step 3: set up a pathology label graph. Step 4: extract information using a multi-frame ultrasound image attention mechanism. Step 5: build a multi-frame ultrasound image report generation model, structured as a multi-section report generation model for congenital heart disease ultrasound. The model is built on basic clinical requirements; because of the demands on network speed, a very complex network structure was not chosen, and the model built to this standard still reaches the accuracy required in clinical practice.


Description

Report generation method of ultrasonic multi-section data for congenital heart disease
Technical Field
The invention relates to a classification method that uses a multi-scale detection network, a multi-scale feature extraction module, and a lesion area detection module, and in particular to a report generation method for multi-section ultrasound data for congenital heart disease.
Background
Congenital heart disease is one of the most common diseases in newborns in China and many other countries. It occurs in 8-12 per thousand babies born in China, which means that 120,000-200,000 patients with congenital heart disease are born in China every year. Complicated congenital heart disease, for which existing treatments cannot achieve a good outcome or which tends to be fatal in the early postnatal period, accounts for about 20 percent of these cases and is one of the main causes of death among newborns and children.
Although congenital heart disease is quite common, the quality of cardiac ultrasonography for newborns and children currently varies widely, and the capacity for processing ultrasonograms urgently needs to be improved. Accordingly, experts and scholars in related fields have proposed using artificial intelligence to process the relevant ultrasound images. Perrin et al. proposed a method for classifying congenital heart disease images based on a convolutional neural network. Abdi et al. developed a deep convolutional neural network for quality assessment of apical four-chamber echo slices. Dezaki et al. designed a neural network that extracts the temporal correlation of echocardiograms.
This work in artificial intelligence lays a solid foundation for image recognition applications in congenital heart disease, but at present there is no system that performs artificial-intelligence image processing on echocardiograms, and no report generation method for multi-section ultrasound data of congenital heart disease.
Disclosure of Invention
The invention aims to provide a report generation method for multi-section ultrasound data for congenital heart disease that is built on basic clinical requirements and improves the efficiency of generating reports from ultrasound images.
The technical scheme of the invention provides a report generation method for multi-section ultrasound data for congenital heart disease, characterized by comprising the following steps:
Step 1: prepare the training data and perform preprocessing.
Step 2: complete ultrasound image feature extraction using an ultrasound image feature extractor.
In the feature extractor, a residual structure is adopted to propagate shallow texture and color information; 4 convolution modules are used, each containing 2 convolutional layers, 2 batch normalization layers, and 2 activation functions.
Step 3: set up a pathology label graph. A language parser automatically extracts subject-verb-object combinations from the reports; these are manually screened and summarized into 25 pathology labels, each containing a positive and a negative observation result that respectively indicate the pathology. After the pathology label graph is extracted, it is used as additional label data to guide the training of the feature extractor.
Step 4: extract information using a multi-frame ultrasound image attention mechanism.
Step 5: build a multi-frame ultrasound image report generation model. The reports in the dataset are divided by topic and pathology labels are extracted, yielding 5 topic sentences and 25 pathology labels; the multiple input multi-view ultrasound images are fused with an attention mechanism, and an initial input fully connected graph and a fully connected adjacency matrix are constructed over the 5 topic sentences.
Further, in step 2, the picture is first resized in an image preprocessing operation to 224 × 224, suitable for input to the network. It is then passed through a 7 × 7 convolutional layer, which changes its size to 112 × 112, and through a 3 × 3 max pooling layer with stride 2, which changes its size to 56 × 56. The picture then passes through 4 convolution modules, each containing 2 3 × 3 convolutional layers; after each pair of 3 × 3 convolutional layers, a batch normalization layer and a ReLU activation layer keep the features of each channel identically distributed.
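The spatial sizes quoted in this step follow from the standard convolution output-size formula. The sketch below is a non-authoritative illustration; the stride and padding of the 7 × 7 layer and of the 3 × 3 convolutions are assumptions (chosen to match the usual ResNet-18 stem, which the patent does not spell out):

```python
def conv_out(size, kernel, stride, padding):
    # standard formula: floor((size + 2*padding - kernel) / stride) + 1
    return (size + 2 * padding - kernel) // stride + 1

s = conv_out(224, kernel=7, stride=2, padding=3)  # 7x7 conv, stride 2 -> 112
s = conv_out(s, kernel=3, stride=2, padding=1)    # 3x3 max pool, stride 2 -> 56
# the 3x3 convolutions inside the 4 residual modules are assumed to use
# stride 1 and padding 1, so they preserve the 56x56 spatial size
s = conv_out(s, kernel=3, stride=1, padding=1)    # -> 56
```

With these assumed hyperparameters the 224 → 112 → 56 progression described in the text falls out directly.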
Further, in step 3, a 25-node pathology label graph structure is constructed to model the relationships between pathologies.
The beneficial effects of the invention are as follows: the recognition efficiency of ultrasound images is improved by means of artificial intelligence. The model structure is a multi-section report generation model for congenital heart disease ultrasound, built on basic clinical requirements; because of the demands on network speed, a very complex network structure was not selected, and the model built to this standard still reaches the accuracy required in clinical practice.
Drawings
FIG. 1 is a diagram of a multi-slice report generation model architecture.
Fig. 2 is a training structure diagram of an ultrasound image feature extractor.
FIG. 3 is a schematic diagram of a report generation model.
Fig. 4 is a pathology signature diagram.
Detailed Description
The technical scheme of the invention is explained in detail below with reference to figures 1-4.
To achieve the purpose of the invention, the classification method based on multiple ultrasound sections of congenital heart disease comprises the following aspects.
and step 1, finishing training data and preprocessing.
The model training data comprise 310 cases: 61 cases of section data from normal subjects, 104 cases from patients with congenital atrial septal defect, and 145 cases from patients with congenital ventricular septal defect. The data were provided by Wuhan Asia Heart Hospital and classified by professional physicians of its ultrasound department, guaranteeing the accuracy of the section data classification. The training data are stored in DICOM format in the order shown in Table 1; because the number of frames per section differs, the training data must be preprocessed.
Table 1. Classification names of echocardiographic sections.
Step 2: complete ultrasound image feature extraction using the ultrasound image feature extractor.
In the feature extractor for ultrasound images, this embodiment adopts a residual structure to propagate shallow information such as texture and color, while also avoiding the vanishing gradient problem.
This embodiment uses a design of 4 convolution modules, each with 2 convolutional layers, 2 batch normalization layers, and 2 activation functions. Overall, each ultrasound image only needs to pass through an 18-layer network, which is suitable for efficient ultrasound image feature extraction.
An ultrasound image feature extractor is designed based on the ResNet18 network; the preliminary model structure is shown in FIG. 2. In designing the model, this embodiment considers the shortcut connections of the residual structure, which also preserve shallow features in the image, so the convolution modules in the network adopt the residual design.
To limit the total number of layers, this embodiment uses only 4 convolution modules. Each picture of each section's data is fed into the network shown in fig. 3.
First, the picture is resized in an image preprocessing operation to 224 × 224, suitable for input to the network. It then passes through a 7 × 7 convolutional layer, changing its size to 112 × 112, and through a 3 × 3 max pooling layer with stride 2, changing its size to 56 × 56. It then passes through 4 convolution modules, each containing 2 3 × 3 convolutional layers; after each pair of 3 × 3 convolutional layers, a Batch Normalization (BN) layer and a ReLU activation layer keep the features of each channel identically distributed. Before the output of each convolution module, the input features are added to the convolved features, and the sum is output after a second ReLU activation layer, avoiding the vanishing gradient problem. This structure follows the work of He et al. [18]. After the input image passes through the 4 convolution modules, this embodiment classifies the resulting features with a softmax layer. The softmax function, also called the normalized exponential function, exponentiates a group of numbers and then normalizes them, as shown in formula (1):
$$\operatorname{softmax}(z)_j = \frac{e^{z_j}}{\sum_{k=1}^{K} e^{z_k}} \qquad (1)$$
that is, for each class, the weight of the class is calculated in an exponential manner, and the probability that the feature belongs to the jth class is obtained. Due to the characteristics of the exponential function, the classification with low probability can be inhibited during normalization, the classification with high probability is improved, and the method is widely applied to multi-classification problems. After the softmax function is used, a 1 × 10 vector can be obtained, wherein each position i represents the probability that the single-frame picture belongs to the ith classification, and the largest value in the vector is selected to be determined as the classification of the single-frame picture. For the classification of the pathology labels, the embodiment introduces an additional full-link layer output branch for the feature extraction network, predicts a 1 × 25 vector, where each position i corresponds to the output of the ith pathology label, and then the embodiment uses a sigmoid function, where each position i represents the probability that the picture contains the ith pathology label.
Step 3: set up the pathology label graph. A language parser automatically extracts subject-verb-object combinations from the reports; these are manually screened and summarized into 25 pathology labels, each containing a positive and a negative observation result that respectively indicate the pathology. After the pathology label graph is extracted, it is used as additional label data to guide the training of the feature extractor.
In training the ultrasound image feature extractor, this embodiment can train the images for classification according to their viewing angle and whether they contain obvious heart disease features. However, the angle and the presence of pathological features alone do not give the feature extractor enough guidance: images that share the same angle and both contain ASD or VSD features would end up with intra-class differences too small for versatile automatic report generation. Therefore, this embodiment needs an additional image prior to assist the feature extractor in learning.
This embodiment innovatively introduces a pathology label graph, on the view that medical reports must accurately describe the various pathological characteristics, and that the accuracy of the pathology description matters far more than the generation of pathology-independent words.
During training of the ultrasound image feature extractor, this embodiment requires the extractor to accurately predict the section of the ultrasound image; for images with an obvious lesion, it additionally requires the extractor to predict the type of congenital heart disease. With the assistance of the pathology label graph, the parameters are frozen after the feature extractor is trained, providing accurate prior information about the ultrasound images for the subsequent report generation stage.
Therefore, this embodiment creatively adds a pathology label graph structure to the network, as shown in fig. 4.
And 3.1, analyzing the sentences by using a language analyzer for the whole report data set, extracting the main and predicate object structures, and grouping the extracted main and predicate objects according to the description subject.
And 3.2, dividing the main predicate structure of each group into two types of description directions which respectively correspond to normal and abnormal conditions of pathology, and constructing a pathology label graph structure of 25 nodes by using a graph neural network at the end of the feature extractor to simulate the relation between pathologies.
Considering that the occurrence of the pathologies is not independent of each other, the embodiment needs to consider the correlation of the pathologies in the actual prediction, so the embodiment uses the graph neural network at the end of the feature extractor to construct the pathology label graph structure with 25 nodes for simulating the relationship between the pathologies.
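One propagation step over such a label graph can be sketched as a basic graph convolution, H' = ReLU(Â H W), with a row-normalized adjacency matrix. Everything below is an illustrative assumption rather than the patent's exact network: the random co-occurrence edges, the node feature width, and the single-layer form are all stand-ins:

```python
import numpy as np

n, d = 25, 16                                 # 25 pathology-label nodes, assumed feature width
rng = np.random.default_rng(1)

A = (rng.random((n, n)) < 0.2).astype(float)  # hypothetical pathology co-occurrence edges
A = np.maximum(A, A.T)                        # make the graph undirected
A = A + np.eye(n)                             # add self-loops so each node keeps its own state
deg = A.sum(axis=1)
A_hat = A / deg[:, None]                      # row-normalized adjacency (rows sum to 1)

H = rng.standard_normal((n, d))               # node features, e.g. from the feature extractor
W = rng.standard_normal((d, d)) * 0.1         # layer weights
H_next = np.maximum(A_hat @ H @ W, 0.0)       # one graph-convolution step with ReLU
```

Each node's new state mixes its neighbors' states, so correlated pathologies (e.g. co-occurring findings) reinforce each other's predictions.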
Step 4, extracting information by adopting a multi-frame ultrasonic image attention mechanism;
Because ultrasound data is a sequence of images, it contains great redundancy, and a key problem is how to extract the important information from the redundant information when generating a report. To extract important information and reduce redundancy, this embodiment designs a multi-frame ultrasound image attention mechanism.
For 20 ultrasound images, this embodiment first performs feature extraction with the pre-trained feature extractor, obtaining features of dimension B × 20 × D. The first fully connected layer squeezes dimension D, producing features of dimension B × 20 × D/r, where r is a set reduction factor, here 4. After ReLU activation, a second fully connected layer restores the features to dimension B × 20 × D. Finally, a sigmoid activation maps the weights into [0, 1]. The output features are multiplied element-wise with the original features to obtain the weighted features; to retain the original feature information, the weighted features and the original features are then added element-wise.
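The weighting described above resembles a squeeze-and-excitation block applied over the 20 frames. The sketch below is a hedged interpretation: the second fully connected layer is assumed to restore the channel dimension to D so the sigmoid weights can be multiplied element-wise with the original B × 20 × D features, and all dimensions and weights are illustrative:

```python
import numpy as np

B, T, D, r = 2, 20, 32, 4                     # batch, frames, feature dim, reduction factor r=4
rng = np.random.default_rng(2)

x = rng.standard_normal((B, T, D))            # features from the pre-trained extractor
W1 = rng.standard_normal((D, D // r)) * 0.1   # first FC: squeeze D -> D/r
W2 = rng.standard_normal((D // r, D)) * 0.1   # second FC: restore D/r -> D (assumed)

h = np.maximum(x @ W1, 0.0)                   # ReLU after the squeeze
w = 1.0 / (1.0 + np.exp(-(h @ W2)))           # sigmoid maps weights into (0, 1)
out = x * w + x                               # element-wise weighting, then residual add
```

The residual add at the end mirrors the text's requirement that the original feature information be retained alongside the attention-weighted features.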
Step 5: build the multi-frame ultrasound image report generation model. The reports in the dataset are divided by topic and pathology labels are extracted, yielding 5 topic sentences and 25 pathology labels; the multiple input multi-view ultrasound images are fused with an attention mechanism, and an initial input fully connected graph and a fully connected adjacency matrix are constructed over the 5 topic sentences.
Because medical reports vary in length and are flexibly formatted, this embodiment divides the reports in the dataset by topic and extracts pathology labels, obtaining 5 topic sentences and 25 pathology labels. For the 20 input multi-view ultrasound images, this embodiment first fuses them using the attention mechanism and then constructs an initial input fully connected graph over the 5 topic sentences together with a fully connected adjacency matrix. The overall model uses a graph convolutional network and an LSTM recurrent neural network, as shown in fig. 3. The report is generated gradually as the network iterates over time: each iteration step generates one word, then a graph convolution models the relationships among the 5 topic-sentence nodes, and the next iteration begins. Through continuous iteration, the 5 topic reports are generated and combined into the final report for the input.
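The iteration described above can be sketched as alternating word steps and graph steps over the 5 topic-sentence nodes. Everything here is a simplified stand-in for the patent's actual network: the state width, toy vocabulary, step count, and the plain recurrent update standing in for the LSTM are all assumptions made for illustration:

```python
import numpy as np

topics, d, vocab, steps = 5, 16, 50, 8
rng = np.random.default_rng(3)

A = np.ones((topics, topics)) / topics         # fully connected adjacency, row-normalized
H = rng.standard_normal((topics, d))           # initial node states from the fused image features
W_rec = rng.standard_normal((d, d)) * 0.1      # recurrent update (stand-in for the LSTM cell)
W_gcn = rng.standard_normal((d, d)) * 0.1      # graph-convolution weights
W_out = rng.standard_normal((d, vocab)) * 0.1  # projection of node states onto the vocabulary

reports = [[] for _ in range(topics)]
for _ in range(steps):
    H = np.tanh(H @ W_rec)                     # word step: update each topic node's state
    words = (H @ W_out).argmax(axis=1)         # greedy word choice, one word per topic sentence
    for i, wd in enumerate(words):
        reports[i].append(int(wd))
    H = np.maximum(A @ H @ W_gcn, 0.0)         # graph step: relate the 5 topic nodes

# each of the 5 lists is one topic sentence; concatenated, they form the final report
```

The point of the structure is that the graph step lets each topic sentence condition on what the other four are saying before the next word is emitted.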
This embodiment designs the structure of a multi-section report generation model based on cardiac ultrasound. The model is built on basic clinical requirements; because of the demands on network speed, a very complex network structure was not selected, and the model built to this standard still reaches the accuracy required in clinical practice.
In the ultrasound image feature extraction model, this embodiment adopts a residual structure to propagate shallow information such as texture and color while avoiding the vanishing gradient problem. Because of the speed constraint, the embodiment uses 4 convolution modules, each with 2 convolutional layers, 2 batch normalization layers, and 2 activation functions, rather than a deeper, more complex network. Overall, each ultrasound image only needs to pass through an 18-layer network, which is suitable for extracting clinical section features where speed requirements are high.
This embodiment innovatively introduces a pathology label graph, on the view that medical reports must accurately describe the various pathological characteristics, and that the accuracy of the pathology description matters far more than generating pathology-independent words. Therefore, a language parser automatically extracts subject-verb-object combinations from the reports; these are manually screened and summarized into 25 pathology labels, each containing a positive and a negative observation result that respectively indicate the pathology. After extracting the pathology label graph, this embodiment uses it as additional label data to guide the training of the feature extractor.

Claims (3)

1. A report generation method for multi-section ultrasound data for congenital heart disease, characterized by comprising the following steps: Step 1, prepare the training data and perform preprocessing. Step 2, extract ultrasound image features using an ultrasound image feature extractor; in the feature extractor, a residual structure is adopted to propagate shallow texture and color information, and 4 convolution modules are used, each with 2 convolutional layers, 2 batch normalization layers, and 2 activation functions. Step 3, set up a pathology label graph: a language parser automatically extracts subject-verb-object combinations from the reports, which are manually screened and summarized into 25 pathology labels, each containing a positive and a negative observation result that respectively indicate the pathology; after the pathology label graph is extracted, it is used as additional label data to guide the training of the feature extractor. Step 4, extract information using a multi-frame ultrasound image attention mechanism. Step 5, build a multi-frame ultrasound image report generation model: divide the reports in the dataset by topic and extract pathology labels, obtaining 5 topic sentences and 25 pathology labels; fuse the multiple input multi-view ultrasound images with an attention mechanism, and construct an initial input fully connected graph and a fully connected adjacency matrix over the 5 topic sentences.
2. The report generation method based on multi-section congenital heart disease ultrasound according to claim 1, characterized in that, in step 2, the picture is first resized in an image preprocessing operation to 224 × 224, suitable for input to the network; it is then passed through a 7 × 7 convolutional layer, changing its size to 112 × 112, and through a 3 × 3 max pooling layer with stride 2, changing its size to 56 × 56; it then passes through 4 convolution modules, each containing 2 3 × 3 convolutional layers, and after each pair of 3 × 3 convolutional layers a batch normalization layer and a ReLU activation layer keep the features of each channel identically distributed.
3. The report generation method based on multi-section congenital heart disease ultrasound according to claim 1, characterized in that, in step 3, a 25-node pathology label graph structure is constructed to model the relationships between pathologies.
CN202011454009.1A 2020-12-10 2020-12-10 Report generation method of ultrasonic multi-section data for congenital heart disease Active CN112562819B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011454009.1A CN112562819B (en) 2020-12-10 2020-12-10 Report generation method of ultrasonic multi-section data for congenital heart disease

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011454009.1A CN112562819B (en) 2020-12-10 2020-12-10 Report generation method of ultrasonic multi-section data for congenital heart disease

Publications (2)

Publication Number Publication Date
CN112562819A true CN112562819A (en) 2021-03-26
CN112562819B CN112562819B (en) 2022-06-17

Family

ID=75061745

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011454009.1A Active CN112562819B (en) 2020-12-10 2020-12-10 Report generation method of ultrasonic multi-section data for congenital heart disease

Country Status (1)

Country Link
CN (1) CN112562819B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113139956A (en) * 2021-05-12 2021-07-20 深圳大学 Generation method and identification method of section identification model based on language knowledge guidance

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190311223A1 (en) * 2017-03-13 2019-10-10 Beijing Sensetime Technology Development Co., Ltd. Image processing methods and apparatus, and electronic devices
WO2020140422A1 (en) * 2019-01-02 2020-07-09 Boe Technology Group Co., Ltd. Neural network for automatically tagging input image, computer-implemented method for automatically tagging input image, apparatus for automatically tagging input image, and computer-program product
CN111968064A (en) * 2020-10-22 2020-11-20 成都睿沿科技有限公司 Image processing method and device, electronic equipment and storage medium


Also Published As

Publication number Publication date
CN112562819B (en) 2022-06-17

Similar Documents

Publication Publication Date Title
Li et al. CANet: cross-disease attention network for joint diabetic retinopathy and diabetic macular edema grading
CN113040715B (en) Human brain function network classification method based on convolutional neural network
Kaya Feature fusion-based ensemble CNN learning optimization for automated detection of pediatric pneumonia
CN117077786A (en) A data-knowledge dual-driven intelligent medical dialogue system and method based on knowledge graph
Hou et al. Self-explainable ai for medical image analysis: A survey and new outlooks
CN107909095A (en) A kind of image-recognizing method based on deep learning
CN112419313B (en) A Multi-Section Classification Method Based on Ultrasonography of Congenital Heart Disease
CN114863185B (en) A lightweight echocardiographic standard section recognition method, device and medium
CN111738302A (en) A system for classifying and diagnosing Alzheimer's disease based on multimodal data
CN111080643A (en) Method and device for classifying diabetes and related diseases based on fundus images
CN113662664B (en) Instrument tracking-based objective and automatic evaluation method for surgical operation quality
CN117316369B (en) Automatic generation method of chest imaging diagnostic report with balanced cross-modality information
CN111047590A (en) Fundus image-based hypertension classification method and equipment
CN116721289A (en) Cervical OCT image classification method and system based on self-supervised clustering contrastive learning
CN116704305A (en) Multi-modal and multi-section classification method for echocardiography based on deep learning algorithm
Shu et al. MSMA: A multi-stage and multi-attention algorithm for the classification of multimodal skin lesions
Marulkar et al. Nail disease prediction using a deep learning integrated framework
CN114119538A (en) A deep learning segmentation system for hepatic vein and hepatic portal vein
CN116403706A (en) A Diabetes Prediction Method Fused with Knowledge Expansion and Convolutional Neural Networks
CN112562819B (en) Report generation method of ultrasonic multi-section data for congenital heart disease
Shaik et al. Gated contextual transformer network for multi-modal retinal image clinical description generation
Yang et al. Alzheimer’s disease classification based on brain region-to-sample graph convolutional network
CN111340807B (en) Method, system, electronic device and storage medium for extracting core data of lesion location
CN113255718B (en) Cervical cell auxiliary diagnosis method based on deep learning cascade network method
CN115578360A (en) A Multi-Object Semantic Segmentation Method for Echocardiographic Images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant