CN111986150B - Interactive labeling refinement method for digital pathology images - Google Patents
Interactive labeling refinement method for digital pathology images
- Publication number
- CN111986150B CN202010690711.1A
- Authority
- CN
- China
- Prior art keywords
- patch
- digital
- image
- sample
- digital pathology
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/232—Non-hierarchical techniques
- G06F18/2321—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
- G06F18/23213—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/40—ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20092—Interactive image processing based on input by user
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Health & Medical Sciences (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Radiology & Medical Imaging (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Medical Informatics (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- General Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- Quality & Reliability (AREA)
- Probability & Statistics with Applications (AREA)
- Public Health (AREA)
- Primary Health Care (AREA)
- Epidemiology (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention provides an interactive labeling refinement method for digital pathology images, characterized by comprising the following steps: constructing and training a Resnet weakly supervised classification model; obtaining a digital pathology image input in real time, preprocessing it, and obtaining Patch slice data cropped to size over the tissue region of the digital pathology image and stain-normalized; inputting the Patch slice data into the trained Resnet weakly supervised classification model to obtain the benign and malignant classification of each Patch slice, and generating an XML vector annotation on the original digital pathology image according to the benign and malignant classifications of the Patch slices to obtain a pre-labeled lesion region; and obtaining the refined labeling contour on the pre-labeled lesion region. The invention provides an interactive labeling refinement method for digital pathology images that improves the labeling efficiency of doctors through automatic pre-labeling and contour refinement.
Description
Technical Field
The invention relates to the field of digital pathology image processing, in particular to an interactive labeling refinement method of a digital pathology image.
Background
A digital pathology image is obtained from a tissue slice taken from the lesion site of a patient. Images acquired with whole-slide imaging (WSI) technology are very large, and because digital pathology images directly reflect the pathological changes in the tissue, a clinician needs to search the tissue region and outline the specific lesion tissue region at different magnifications as an important basis for disease diagnosis. For example, in cancer diagnosis, living tissue must be extracted from the lesion to make a pathological section, and the digitized pathology image is observed to determine its pathological characteristics.
There is a serious shortage of specialized pathologists, who cannot meet daily demand, and labeling digital pathology images further increases their burden. Some related technologies attempt to address this problem, but their operation still requires a great deal of preset-value work by doctors and observation of the whole digital pathology image, so the workload is high and the lesion tissue region is not focused on directly. For example, Chinese patent CN105608319B provides a method for labeling digital pathological sections that requires the user to select labeling points and labeling pattern type information, and the observed object is the whole digital pathology image; Chinese patent CN105404896B mainly uses similarity as the result of automatic detection labeling, and is therefore not suitable for labeling digital pathology images.
Disclosure of Invention
The invention aims to solve the following technical problem: labeling of digital pathology images greatly increases the burden on the doctor.
In order to solve the above technical problem, the technical scheme of the invention provides an interactive labeling refinement method for digital pathology images, characterized by comprising the following steps:
step 1, obtaining sample digital pathology images, preprocessing the sample digital pathology images, and obtaining Patch slice data cropped to size over the tissue region of each sample digital pathology image and stain-normalized, wherein the Patch slice data only carries the benign and malignant category labels of the sample digital pathology images and has no label of the lesion positions;
step 2, assigning classification labels to the Patch slice data obtained after preprocessing according to the category of the sample digital pathology images, and combining several Patch slice data of the same category into Patch packages, wherein the category label of each Patch package is the label of the single type of Patch slice data contained in it;
step 3, inputting all Patch slice data in the Patch package into a Resnet weakly supervised classification model, in which a Resnet network first performs feature extraction, a fully connected network then classifies the Patch slices, and a Sigmoid activation function yields the probability that each Patch slice is malignant; the Resnet weakly supervised classification model selects the Patch slice with the highest probability in the Patch slice set, and if the probability that this Patch slice is malignant is greater than 0.5 the category of the whole Patch package is malignant, otherwise the category of the whole Patch package is benign, and the loss is calculated against the label;
step 4, obtaining a digital pathology image input in real time, preprocessing the digital pathology image, and obtaining Patch slice data cropped to size over the tissue region of the digital pathology image and stain-normalized;
step 5, inputting the Patch slice data obtained in step 4 into the Resnet weakly supervised classification model trained in step 3 to obtain the benign and malignant classification of each Patch slice, and generating an XML vector annotation on the original digital pathology image according to the benign and malignant classifications of the Patch slices to obtain a pre-labeled lesion region;
and step 6, on the pre-labeled lesion region, the doctor performs manual labeling; then N-pixel regions on both sides of the edge line of the manually labeled lesion are set as edge convergence candidate regions, the gradient values between the pixels in the edge convergence candidate regions are calculated, and finally the position with the largest gradient is selected as the corrected lesion edge to obtain the refined labeling contour.
Preferably, in step 1, preprocessing the sample digital pathology images comprises the following steps:
step 101, segmenting the tissue region of the input sample digital pathology image on a thumbnail using the Otsu method, and recording the coordinate position of the tissue region part corresponding to the original sample digital pathology image;
step 102, cropping the sample digital pathology image at a certain size at the coordinate positions recorded in step 101 to obtain Patch slice data;
step 103, selecting a number of digital pathology images stained at the same hospital as the data source of a standard staining space, converting all the digital pathology images into the LAB color space, computing the mean and variance, using K-means clustering with the means and variances of the L, A and B channels as the feature vector, and selecting the cluster center of the largest cluster as the standard staining space;
step 104, using the Reinhard algorithm to normalize the staining of the Patch slice data to the standard staining space;
step 105, performing oversampling data balancing on the Patch slice data and performing data enhancement by means of random rotation, flipping and added noise.
Preferably, in step 3, an alpha factor and a gamma factor are added into the loss function of the Resnet weakly supervised classification model, wherein the alpha factor is used to balance the amounts of positive samples with benign category labels and negative samples with malignant category labels, and the gamma factor is used to adjust the loss weight of simple samples; the loss function L of the Resnet weakly supervised classification model is then: L = -α·(1 - y')^γ·y·log(y') - (1 - α)·(y')^γ·(1 - y)·log(1 - y'), where y represents the true label of the sample and y' represents the predicted probability of the sample.
Preferably, in step 4, preprocessing the digital pathology image comprises the following steps:
step 401, segmenting the tissue region on a thumbnail of the input digital pathology image using the Otsu method, and recording the coordinate position of the tissue region part corresponding to the original digital pathology image;
step 402, cropping the digital pathology image at a certain size at the coordinate positions recorded in step 401 to obtain Patch slice data;
step 403, selecting a number of digital pathology images stained at the same hospital as the data source of a standard staining space, converting all the digital pathology images into the LAB color space, computing the mean and variance, using K-means clustering with the means and variances of the L, A and B channels as the feature vector, and selecting the cluster center of the largest cluster as the standard staining space;
step 404, using the Reinhard algorithm to normalize the staining of the Patch slice data to the standard staining space.
Preferably, in step 6, edge detection is performed on the edge convergence candidate regions using the Laplacian operator: the second-order partial derivatives of the image are obtained in the x direction and the y direction respectively, ∂²f/∂x² and ∂²f/∂y²,
and after combination: ∇²f(x, y) = ∂²f/∂x² + ∂²f/∂y²,
where f(x, y) represents the pixel value at image coordinate (x, y); a gradient map of the edge convergence candidate region is obtained by convolving the Laplacian operator with the edge convergence candidate region, whereby the gradient values between the pixels in the edge convergence candidate region are calculated, and finally the position with the largest gradient is selected as the corrected lesion edge, obtaining the refined labeling contour.
The invention provides an interactive labeling refinement method for digital pathology images, which improves the labeling efficiency of doctors through automatic pre-labeling and contour refinement.
Drawings
FIG. 1 is a schematic diagram of an interactive labeling refinement method for digital pathology images according to the present invention;
FIG. 2 is a flow chart of a data preprocessing module according to the present invention;
FIG. 3 is a flow chart of training the MIL network model according to the present invention;
FIG. 4 is a flowchart of the labeling refinement processing module of the present invention;
FIG. 5 is a schematic diagram of an edge detection operator used in the present invention.
Detailed Description
The invention will be further illustrated with reference to specific examples. It is to be understood that these examples are illustrative of the present invention and are not intended to limit the scope of the present invention. Further, it is understood that various changes and modifications may be made by those skilled in the art after reading the teachings of the present invention, and such equivalents are intended to fall within the scope of the claims appended hereto.
As shown in FIG. 1, the interactive labeling refinement method of the digital pathological image provided by the invention comprises the following steps:
step S1, a data preprocessing step
Preprocessing the input pathological image to realize standardization and normalization of the pathological image;
step S2, automatic pre-labeling of weak supervision model
Training a neural network by using an MIL training method to obtain a pre-marked area;
after the automatic pre-labeling step of the weakly supervised model, manual labeling is performed: the doctor manually confirms the pre-labeled region and roughly sketches the lesion contour to obtain a roughly sketched lesion contour region;
step S3, labeling result refinement processing
The roughly sketched lesion contour region is input into the refinement processing module to obtain the refined labeling result.
The technical scheme comprises three parts: the first part is the data preprocessing process, the second part is the automatic pre-labeling process of the weakly supervised model, and the third part is the refinement process of the doctor's coarse labeling based on the pre-labeling result. The three parts are described in detail below:
as shown in fig. 2, the data preprocessing process in step S1 includes classification labeling of whole slices, automatic extraction of pathological section tissue parts, and staining normalization based on Reinhard and cluster statistics. The method specifically comprises the following steps:
step S101: and dividing the input pathological image into tissue areas on the thumbnail by using an Ojin method, and recording the coordinate positions of the tissue areas corresponding to the original pathological image.
Step S102: patch sections were obtained from the original pathology image by clipping at the tissue region locations recorded in the previous step in a 224X 224 size.
Step S103: in order to ensure that the standard staining space is statistically enough and sufficiently average for staining of various tissues, a total of more than 3000 digital pathology images stained in the same hospital are selected as data sources of the standard staining space, all the more than 3000 digital pathology images are converted into an LAB color space, and the average value variance is counted. Because more than 3000 digital pathology images do not necessarily meet the dyeing standard, and possibly have nonstandard dyeing or special digital pathology images formed by tissues, the method does not directly average the mean variance of the LAB space to obtain the standard dyeing space, but uses K-means clustering by taking the mean variance of the L, A, B channels as a characteristic vector, and selects the clustering center of the largest class as the standard dyeing space.
Step S104: the staining of the input image was normalized to a standard staining space using the Reinhard algorithm.
When training the Resnet weakly supervised model, the method further comprises the following step:
Step S105: the Patch slice data are balanced by oversampling and enhanced by random rotation, flipping and added noise. Step S105 is only used when training the Resnet weakly supervised model; it is not performed during interactive labeling.
As shown in fig. 3, the automatic pre-labeling process of the weakly supervised model uses a Resnet weakly supervised classification model trained in a multiple-instance learning (MIL) manner on a dataset in which experts label only the category of the whole digital pathology image. When the Resnet weakly supervised classification model is trained with multiple-instance learning, the Patch slice dataset of the training digital pathology images carries only the benign and malignant category of the whole image, and the lesion positions are not labeled.
Training a Resnet weakly supervised classification model using a multi-instance learning approach includes the steps of:
the first step: and giving labels of the Patch slices obtained after pretreatment according to the category of the whole digital pathological image. Several identically categorized Patch slices are combined into a Patch package, with the class label of each Patch package being that of a single type of Patch slice contained therein.
The second step: all the Patch slices in the Patch package are input into the Resnet weakly supervised classification model; the Resnet network first extracts features, a fully connected network then classifies the Patch slices, and a Sigmoid activation function yields the probability that each Patch slice is malignant. The Resnet weakly supervised classification model selects the Patch slice with the highest probability in the Patch slice set; if the probability that this Patch slice is malignant is greater than 0.5, the category of the whole Patch package is determined as malignant, otherwise it is benign, and the loss is calculated against the label.
Because the Patch packages of digital pathology images are characterized by unbalanced amounts of positive and negative data, a weighted loss is calculated according to the data proportions and back-propagated. Specifically, the basic loss function L of the Resnet weakly supervised classification model is the cross-entropy L = -y·log(y') - (1 - y)·log(1 - y'), where y represents the true label of the sample and y' represents the predicted probability of the sample. In order to balance the amounts of positive and negative sample data and address the problem of easy and hard samples, an alpha factor and a gamma factor are added to the loss function of the Resnet weakly supervised classification model, where the alpha factor balances the amounts of positive and negative sample data and the gamma factor adjusts the magnitude of the loss weight of simple samples; the improved loss function L is: L = -α·(1 - y')^γ·y·log(y') - (1 - α)·(y')^γ·(1 - y)·log(1 - y').
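A sketch of the bag-level forward pass and the improved loss, written in PyTorch purely for illustration: the Resnet depth (resnet18) and the alpha and gamma values are assumptions, and max-pooling over the per-patch probabilities stands in for "selecting the Patch slice with the highest probability".

```python
import torch
import torch.nn as nn
import torchvision

class PatchBagClassifier(nn.Module):
    """MIL sketch: Resnet features -> fully connected head -> Sigmoid per patch,
    then the most malignant patch decides the score of the whole Patch package."""
    def __init__(self):
        super().__init__()
        backbone = torchvision.models.resnet18(weights=None)   # depth is an assumption
        backbone.fc = nn.Linear(backbone.fc.in_features, 1)    # fully connected classifier
        self.net = backbone

    def forward(self, bag):                                    # bag: (n_patches, 3, 224, 224)
        probs = torch.sigmoid(self.net(bag)).squeeze(1)        # malignancy probability per patch
        return probs.max()                                     # bag score = highest-probability patch

def focal_loss(y_pred, y_true, alpha=0.75, gamma=2.0):
    """Improved loss with alpha/gamma factors as described above (values assumed)."""
    eps = 1e-7
    y_pred = y_pred.clamp(eps, 1 - eps)
    pos = -alpha * (1 - y_pred) ** gamma * y_true * torch.log(y_pred)
    neg = -(1 - alpha) * y_pred ** gamma * (1 - y_true) * torch.log(1 - y_pred)
    return pos + neg
```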
The preprocessed Patch slices are pre-labeled using the Resnet weakly supervised classification model trained in the above steps to obtain the benign and malignant classification of each Patch slice, and XML vector annotations are generated on the original digital pathology image according to the benign and malignant classifications of the Patch slices as a reference for the doctor's labeling.
As shown in fig. 4, the labeling refinement process is as follows: on the lesion region pre-labeled by the Resnet weakly supervised classification model, the doctor performs manual labeling; then N-pixel regions on both sides of the edge line of the manually labeled lesion are set as edge convergence candidate regions, the gradient values between the pixels in the candidate regions are calculated, and finally the position with the largest gradient is selected as the corrected lesion edge to obtain the refined labeling contour. The process specifically comprises the following steps:
step S301: candidate region selection
N pixels are taken on each side of the lesion edge line manually labeled by the doctor to form the candidate region.
Step S302: edge detection using Laplacian, and second order bias derivatives are obtained for images in x-direction and y-direction respectively
After combining:
where f (x, y) represents the pixel value at the image coordinate (x, y). Therefore, as shown in fig. 5, the laplace operator for edge detection uses the laplace operator and the candidate region to perform convolution operation to obtain a gradient map of the candidate region, and selects the position with the largest gradient as the optimized boundary.
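One possible implementation sketch of steps S301-S302: the Laplacian response is computed with OpenCV and each point of the doctor's contour is snapped to the maximum-response position within an N-pixel band on either side of the drawn edge line. Searching along the local contour normal and the default N=10 are assumptions made for illustration.

```python
import cv2
import numpy as np

def refine_contour(gray, contour, n=10):
    """Steps S301-S302 sketch: snap each manually drawn contour point to the
    position of maximum Laplacian response within an N-pixel candidate band."""
    lap = np.abs(cv2.Laplacian(gray.astype(np.float32), cv2.CV_32F, ksize=3))
    pts = contour.reshape(-1, 2).astype(np.float32)
    refined = []
    for i, (x, y) in enumerate(pts):
        # normal direction ~ perpendicular to the local tangent of the drawn contour
        tangent = pts[(i + 1) % len(pts)] - pts[i - 1]
        normal = np.array([-tangent[1], tangent[0]])
        normal /= (np.linalg.norm(normal) + 1e-6)
        # candidate positions within n pixels on both sides of the edge line
        cands = []
        for t in range(-n, n + 1):
            cx, cy = np.rint(np.array([x, y]) + t * normal).astype(int)
            if 0 <= cx < gray.shape[1] and 0 <= cy < gray.shape[0]:
                cands.append((cx, cy))
        best = max(cands, key=lambda p: lap[p[1], p[0]])   # largest response wins
        refined.append(best)
    return np.array(refined, dtype=np.int32).reshape(-1, 1, 2)
```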
Claims (3)
1. An interactive labeling refinement method for digital pathology images, characterized by comprising the following steps:
step 1, obtaining sample digital pathology images, preprocessing the sample digital pathology images, and obtaining Patch slice data cropped to size over the tissue region of each sample digital pathology image and stain-normalized, wherein the Patch slice data only carries the benign and malignant category labels of the sample digital pathology images and has no label of the lesion positions, and the preprocessing of the sample digital pathology images comprises the following steps:
step 101, segmenting the tissue region of the input sample digital pathology image on a thumbnail using the Otsu method, and recording the coordinate position of the tissue region part corresponding to the original sample digital pathology image;
step 102, cropping the sample digital pathology image at a certain size at the coordinate positions recorded in step 101 to obtain Patch slice data;
step 103, selecting a number of digital pathology images stained at the same hospital as the data source of a standard staining space, converting all the digital pathology images into the LAB color space, computing the mean and variance, using K-means clustering with the means and variances of the L, A and B channels as the feature vector, and selecting the cluster center of the largest cluster as the standard staining space;
step 104, using a Reinhard algorithm to normalize the staining of the Patch slice data to a standard staining space;
step 105, performing oversampling data balancing on the Patch slice data and performing data enhancement by means of random rotation, flipping and added noise;
step 2, assigning classification labels to the Patch slice data obtained after preprocessing according to the category of the sample digital pathology images, and combining several Patch slice data of the same category into Patch packages, wherein the category label of each Patch package is the label of the single type of Patch slice data contained in it;
step 3, inputting all Patch slice data in the Patch package into a Resnet weakly supervised classification model, in which a Resnet network first performs feature extraction, a fully connected network then classifies the Patch slices, and a Sigmoid activation function yields the probability that each Patch slice is malignant; the Resnet weakly supervised classification model selects the Patch slice with the highest probability in the Patch slice set, and if the probability that this Patch slice is malignant is greater than 0.5 the category of the whole Patch package is malignant, otherwise the category of the whole Patch package is benign, and the loss is calculated against the label; an alpha factor and a gamma factor are added to the loss function of the Resnet weakly supervised classification model, wherein the alpha factor is used to balance the amounts of positive samples with benign category labels and negative samples with malignant category labels, and the gamma factor is used to adjust the magnitude of the loss weight of simple samples, so that the loss function L of the Resnet weakly supervised classification model is: L = -α·(1 - y')^γ·y*·log(y') - (1 - α)·(y')^γ·(1 - y*)·log(1 - y'), wherein y* represents the true label of the sample and y' represents the predicted probability of the sample;
step 4, obtaining a digital pathology image input in real time, preprocessing the digital pathology image, and obtaining Patch slice data cropped to size over the tissue region of the digital pathology image and stain-normalized;
step 5, inputting the Patch slice data obtained in step 4 into the Resnet weakly supervised classification model trained in step 3 to obtain the benign and malignant classification of each Patch slice, and generating an XML vector annotation on the original digital pathology image according to the benign and malignant classifications of the Patch slices to obtain a pre-labeled lesion region;
and step 6, on the pre-labeled lesion region, the doctor performs manual labeling; then N-pixel regions on both sides of the edge line of the manually labeled lesion are set as edge convergence candidate regions, the gradient values between the pixels in the edge convergence candidate regions are calculated, and finally the position with the largest gradient is selected as the corrected lesion edge to obtain the refined labeling contour.
2. The method for interactive labeling refinement of digital pathology images according to claim 1, characterized in that in step 4, the preprocessing of the digital pathology images comprises the following steps:
step 401, segmenting the tissue region on a thumbnail of the input digital pathology image using the Otsu method, and recording the coordinate position of the tissue region part corresponding to the original digital pathology image;
step 402, cropping the digital pathology image at a certain size at the coordinate positions recorded in step 401 to obtain Patch slice data;
step 403, selecting a number of digital pathology images stained at the same hospital as the data source of a standard staining space, converting all the digital pathology images into the LAB color space, computing the mean and variance, using K-means clustering with the means and variances of the L, A and B channels as the feature vector, and selecting the cluster center of the largest cluster as the standard staining space;
step 404, using Reinhard algorithm to normalize the staining of the Patch section data to a standard staining space.
3. The method for interactive labeling refinement of digital pathology images according to claim 1, wherein in step 6, edge detection is performed on the edge convergence candidate regions using the Laplacian operator: the second-order partial derivatives of the image are obtained in the x direction and the y direction respectively, ∂²f/∂x² and ∂²f/∂y²,
after combination: ∇²f(x, y) = ∂²f/∂x² + ∂²f/∂y²,
wherein f(x, y) represents the pixel value at image coordinate (x, y); a gradient map of the edge convergence candidate region is obtained by convolving the Laplacian operator with the edge convergence candidate region, whereby the gradient values between the pixels in the edge convergence candidate region are calculated, and finally the position with the largest gradient is selected as the corrected lesion edge, obtaining the refined labeling contour.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010690711.1A CN111986150B (en) | 2020-07-17 | 2020-07-17 | Interactive labeling refinement method for digital pathology images |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010690711.1A CN111986150B (en) | 2020-07-17 | 2020-07-17 | Interactive labeling refinement method for digital pathology images |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111986150A CN111986150A (en) | 2020-11-24 |
CN111986150B true CN111986150B (en) | 2024-02-09 |
Family
ID=73437888
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010690711.1A Active CN111986150B (en) | 2020-07-17 | 2020-07-17 | Interactive labeling refinement method for digital pathology images |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111986150B (en) |
Families Citing this family (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112331314A (en) * | 2020-11-25 | 2021-02-05 | 中山大学附属第六医院 | Image annotation method and device, storage medium and electronic equipment |
CN112488234B (en) * | 2020-12-10 | 2022-04-29 | 武汉大学 | End-to-end histopathology image classification method based on attention pooling |
CN112446881A (en) * | 2021-02-01 | 2021-03-05 | 北京小白世纪网络科技有限公司 | Pathological image segmentation system and method |
CN112884724B (en) * | 2021-02-02 | 2022-06-03 | 广州智睿医疗科技有限公司 | Intelligent judgment method and system for lung cancer histopathological typing |
CN113299372B (en) * | 2021-05-14 | 2023-02-03 | 深圳大学 | A processing method, storage medium and terminal equipment for photoacoustic pathological images |
CN113469972B (en) * | 2021-06-30 | 2024-04-23 | 沈阳东软智能医疗科技研究院有限公司 | Method and device for labeling medical slice image, storage medium and electronic equipment |
CN113674288B (en) * | 2021-07-05 | 2024-02-02 | 华南理工大学 | Automatic segmentation method for digital pathological image tissue of non-small cell lung cancer |
CN113628199B (en) * | 2021-08-18 | 2022-08-16 | 四川大学华西第二医院 | Pathological picture stained tissue area detection method, pathological picture stained tissue area detection system and prognosis state analysis system |
CN113870277A (en) * | 2021-08-19 | 2021-12-31 | 杭州迪英加科技有限公司 | Auxiliary labeling method and system for digital pathological section and readable storage medium |
CN114037720B (en) * | 2021-10-18 | 2025-07-04 | 北京理工大学 | Method and device for pathological image segmentation and classification based on semi-supervised learning |
CN114202719A (en) * | 2021-11-12 | 2022-03-18 | 中原动力智能机器人有限公司 | Video sample labeling method, device, computer equipment and storage medium |
CN114187281A (en) * | 2021-12-14 | 2022-03-15 | 数坤(北京)网络科技股份有限公司 | Image processing method and device, electronic equipment and storage medium |
CN114596298B (en) * | 2022-03-16 | 2022-11-15 | 华东师范大学 | Hyperspectral imaging-based automatic generation method of fine-labeled digital pathological data set |
CN117576127B (en) * | 2024-01-17 | 2024-04-19 | 神州医疗科技股份有限公司 | Liver cancer area automatic sketching method based on pathological image |
CN118743532B (en) * | 2024-06-17 | 2025-04-29 | 山东大学齐鲁医院 | An endoscopic submucosal dissection auxiliary system based on deep learning |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102314479A (en) * | 2011-07-05 | 2012-01-11 | 万达信息股份有限公司 | Method for preventing repeated marking of slice map |
US9176987B1 (en) * | 2014-08-26 | 2015-11-03 | TCL Research America Inc. | Automatic face annotation method and system |
CN105608319A (en) * | 2015-12-21 | 2016-05-25 | 江苏康克移软软件有限公司 | Digital pathological section labeling method and device |
CN108062574A (en) * | 2017-12-31 | 2018-05-22 | 厦门大学 | A kind of Weakly supervised object detection method based on particular category space constraint |
CN109378052A (en) * | 2018-08-31 | 2019-02-22 | 透彻影像(北京)科技有限公司 | The preprocess method and system of image labeling |
CN109544561A (en) * | 2018-11-07 | 2019-03-29 | 杭州迪英加科技有限公司 | Cell mask method, system and device |
CN109670489A (en) * | 2019-02-18 | 2019-04-23 | 广州视源电子科技股份有限公司 | Weak supervision type early age related macular degeneration classification method based on multi-instance learning |
CN109766830A (en) * | 2019-01-09 | 2019-05-17 | 深圳市芯鹏智能信息有限公司 | A kind of ship seakeeping system and method based on artificial intelligence image procossing |
CN109872803A (en) * | 2019-01-28 | 2019-06-11 | 透彻影像(北京)科技有限公司 | A kind of artificial intelligence pathology labeling system |
CN110009679A (en) * | 2019-02-28 | 2019-07-12 | 江南大学 | A kind of object localization method based on Analysis On Multi-scale Features convolutional neural networks |
CN110378885A (en) * | 2019-07-19 | 2019-10-25 | 王晓骁 | A kind of focal area WSI automatic marking method and system based on machine learning |
WO2020064323A1 (en) * | 2018-09-26 | 2020-04-02 | Safran | Method and system for the non-destructive testing of an aerospace part by contour readjustment |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170330059A1 (en) * | 2016-05-11 | 2017-11-16 | Xerox Corporation | Joint object and object part detection using web supervision |
-
2020
- 2020-07-17 CN CN202010690711.1A patent/CN111986150B/en active Active
Patent Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102314479A (en) * | 2011-07-05 | 2012-01-11 | 万达信息股份有限公司 | Method for preventing repeated marking of slice map |
US9176987B1 (en) * | 2014-08-26 | 2015-11-03 | TCL Research America Inc. | Automatic face annotation method and system |
CN105608319A (en) * | 2015-12-21 | 2016-05-25 | 江苏康克移软软件有限公司 | Digital pathological section labeling method and device |
CN108062574A (en) * | 2017-12-31 | 2018-05-22 | 厦门大学 | A kind of Weakly supervised object detection method based on particular category space constraint |
CN109378052A (en) * | 2018-08-31 | 2019-02-22 | 透彻影像(北京)科技有限公司 | The preprocess method and system of image labeling |
WO2020064323A1 (en) * | 2018-09-26 | 2020-04-02 | Safran | Method and system for the non-destructive testing of an aerospace part by contour readjustment |
CN109544561A (en) * | 2018-11-07 | 2019-03-29 | 杭州迪英加科技有限公司 | Cell mask method, system and device |
CN109766830A (en) * | 2019-01-09 | 2019-05-17 | 深圳市芯鹏智能信息有限公司 | A kind of ship seakeeping system and method based on artificial intelligence image procossing |
CN109872803A (en) * | 2019-01-28 | 2019-06-11 | 透彻影像(北京)科技有限公司 | A kind of artificial intelligence pathology labeling system |
CN109670489A (en) * | 2019-02-18 | 2019-04-23 | 广州视源电子科技股份有限公司 | Weak supervision type early age related macular degeneration classification method based on multi-instance learning |
CN110009679A (en) * | 2019-02-28 | 2019-07-12 | 江南大学 | A kind of object localization method based on Analysis On Multi-scale Features convolutional neural networks |
CN110378885A (en) * | 2019-07-19 | 2019-10-25 | 王晓骁 | A kind of focal area WSI automatic marking method and system based on machine learning |
Non-Patent Citations (4)
Title |
---|
Weakly Supervised Object Localization with Multi-Fold Multiple Instance Learning;Ramazan Gokberk Cinbis等;IEEE Transactions on Pattern Analysis and Machine Intelligence;第39卷(第1期);第189-203页 * |
Research on automatic image region annotation algorithms based on weak supervision; Xu Xiaocheng; China Masters' Theses Full-text Database, Information Science and Technology (No. 1); pp. I138-567 *
Expert opinion on solid tumor pathology dataset construction and data annotation quality control (2019); Yu Guanzhen et al.; Academic Journal of Second Military Medical University (No. 5); pp. 6-11 *
Research on refined analysis algorithms for pathological images; Cui Lei; China Doctoral Dissertations Full-text Database, Medicine & Health Sciences (No. 1); pp. E059-38 *
Also Published As
Publication number | Publication date |
---|---|
CN111986150A (en) | 2020-11-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111986150B (en) | Interactive labeling refinement method for digital pathology images | |
Priego-Torres et al. | Automatic segmentation of whole-slide H&E stained breast histopathology images using a deep convolutional neural network architecture | |
Mi et al. | Deep learning-based multi-class classification of breast digital pathology images | |
Dundar et al. | Computerized classification of intraductal breast lesions using histopathological images | |
Zhang et al. | Automated semantic segmentation of red blood cells for sickle cell disease | |
George et al. | Remote computer-aided breast cancer detection and diagnosis system based on cytological images | |
Song et al. | A deep learning based framework for accurate segmentation of cervical cytoplasm and nuclei | |
Veta et al. | Detecting mitotic figures in breast cancer histopathology images | |
Dov et al. | Thyroid cancer malignancy prediction from whole slide cytopathology images | |
CN112508850A (en) | Deep learning-based method for detecting malignant area of thyroid cell pathological section | |
Wang et al. | Assisted diagnosis of cervical intraepithelial neoplasia (CIN) | |
CN113870194B (en) | Breast tumor ultrasonic image processing device with fusion of deep layer characteristics and shallow layer LBP characteristics | |
CN112990214A (en) | Medical image feature recognition prediction model | |
CN112767355A (en) | Method and device for constructing thyroid nodule Tirads grading automatic identification model | |
Sreelekshmi et al. | SwinCNN: an integrated Swin transformer and CNN for improved breast Cancer grade classification | |
CN115170518A (en) | Cell detection method and system based on deep learning and machine vision | |
CN115775226B (en) | Medical image classification method based on transducer | |
Giuste et al. | Explainable synthetic image generation to improve risk assessment of rare pediatric heart transplant rejection | |
Alam et al. | A novel automated system to detect breast cancer from ultrasound images using deep fused features with super resolution | |
CN111062909A (en) | Method and equipment for judging benign and malignant breast tumor | |
Ning et al. | Multiscale context-cascaded ensemble framework (MsC 2 EF): application to breast histopathological image | |
Rosales-Pérez | A review on machine learning techniques for acute leukemia classification | |
Fernandez et al. | Artificial intelligence methods for predictive image-based grading of human cancers | |
Anari et al. | Computer-aided detection of proliferative cells and mitosis index in immunohistichemically images of meningioma | |
Prasath et al. | Segmentation of breast cancer tissue microarrays for computer-aided diagnosis in pathology |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||