TWI895233B - Deep learning method and identification device for identifying prostate cancer - Google Patents
Abstract
A deep learning method and an identification device for identifying prostate cancer. The deep learning method comprises the following steps: (a) obtaining a plurality of image data for generating images, each image data including grade information related to the degree of suspicion of a prostate cancer lesion; (b) extracting a feature signal set from each image data; (c) performing an artificial-intelligence learning computation based on the grade information and feature signal set of each image data and an optimization parameter set to establish a recognition model. By using pre-graded image data, processing speed is accelerated, and by constraining the optimization parameter set to specific values, the recognition model's accuracy in identifying prostate cancer is raised to over 90%.
Description
The present invention relates to an identification device, and more particularly to a deep learning method and an identification device for identifying prostate cancer.

Republic of China (Taiwan) Patent Publication No. TW202117742A discloses a known deep learning system for identifying prostate cancer bone metastases from whole-body bone scan images. It includes a pre-processing module that receives and processes whole-body bone scan images, and a neural network module that detects whether a scan shows prostate cancer bone metastasis. The neural network module uses a Faster R-CNN (Faster Region-based Convolutional Neural Network) to segment training images of the thoracic region from the input whole-body bone scan and to classify bone-metastasis lesions.

However, that model overfits the training data and loses generalization ability; in practical applications it cannot adapt to new data, and its accuracy falls short of expectations.

Furthermore, the whole-body bone scan images must first be segmented into thoracic-region training images, which is both costly and time-consuming.

In addition, training large models requires substantial GPU/TPU resources, creating a barrier for small and medium-sized enterprises and individual developers.
Therefore, an object of the present invention is to provide a deep learning method and an identification device for identifying prostate cancer that accelerate processing and effectively improve recognition accuracy.

Accordingly, the deep learning method for identifying prostate cancer of the present invention comprises the following steps:

(a) Obtaining a plurality of image data for generating images, each image data including grade information related to the degree of suspicion of a prostate cancer lesion.

(b) Extracting a feature signal set from each image data.

(c) Performing an artificial-intelligence learning computation based on the grade information and feature signal set of each image data and an optimization parameter set to establish a recognition model. The optimization parameter set includes an initial learning rate, a learning rate drop period, a learning rate drop factor, and an L2 regularization, wherein the initial learning rate is configured as 0.001, the learning rate drop period as 10, the learning rate drop factor as 0.1, and the L2 regularization as 0.0001.

(d) Outputting risk assessment information based on the recognition model and the feature signal set of each image data.

An identification device that performs the aforesaid deep learning method for identifying prostate cancer comprises a communication module and a processing module.

The communication module is used to obtain the image data and output the recognition model.

The processing module is connected to the communication module and is used to extract the feature signal set of each image data, and to perform the artificial-intelligence learning computation based on the grade information and feature signal set of each image data and the optimization parameter set to establish the recognition model.

An identification device for identifying prostate cancer comprises a communication module and a processing module.

The communication module is used to load the aforesaid recognition model, obtain an image to be recognized, and output further risk assessment information.

The processing module is connected to the communication module, and uses the recognition model to recognize the image to be recognized and generate the further risk assessment information.

The effect of the present invention is that pre-graded image data accelerates processing, and constraining the optimization parameter set to specific values raises the recognition model's accuracy in identifying prostate cancer to over 90%.
Referring to FIG. 1, an embodiment of the identification device for identifying prostate cancer according to the present invention comprises a communication module 1, a processing module 2, and a display module 3.

The identification device typically uses a network as its medium and has substantial computing power and storage space. It is not limited to a specific number of machines; rather, it refers to equipment capable of completing a large amount of work in a short time and serving many users. In this embodiment, it may be a server, a laptop, a desktop computer, or another device capable of inputting or outputting data and information.

The communication module 1 can communicate with other devices via wireless, mobile, or wired communication technologies. In this embodiment, the wireless communication technology is configured as Wi-Fi, Bluetooth, NFC, or a combination thereof; the mobile communication technology as 4G, 5G, or a combination thereof; and the wired communication technology as USB, USB Type-C, or a combination thereof.

The processing module 2 is connected to the communication module 1 and executes various instructions and data-processing tasks, such as graphics rendering, image processing, and deep learning training. In this embodiment, the processing module 2 may be a CPU module, a GPU module, a TPU module, or a combination thereof.

The display module 3 is connected to the processing module 2 and displays text, images, video, or a combination thereof.
Referring to FIGS. 1, 2, and 3, the deep learning method for identifying prostate cancer of the present invention is performed by the processing module 2 of the identification device through the following steps:

Step S01: Obtain, through the communication module 1, a plurality of image data D for generating images. Each image data D includes grade information related to the degree of suspicion of a prostate cancer lesion.

In this embodiment, the image data D acquired in step S01 are used to generate MRI (magnetic resonance imaging) images, CT (computed tomography) images, US (ultrasound) images, or a combination thereof. Sources of the image data D include the PROSTATEx database, the Prostate-MRI-US-Biopsy database, and other publicly available medical imaging databases.

The grade information of each image data D includes a grade number ranging from 1 to n, where a larger number indicates higher risk. In this embodiment, the grade information of each image data D is classified into grades 1 to 5 according to clinical diagnostic standards using the Gleason grading system.
Step S02: Extract a feature signal set from each image data D.

In this embodiment, a convolutional neural network (CNN) with the MobileNet-V2 architecture automatically extracts the feature signal set.

Step S03: Based on the grade information and feature signal set of each image data D and an optimization parameter set, perform an artificial-intelligence learning computation using the MobileNet-V2 CNN together with an optimizer to establish a recognition model M.

In this embodiment, the optimizer is configured as an ADAM, RMSprop, or SGDM optimizer.
The optimization parameter set includes an initial learning rate, a learning rate drop period, a learning rate drop factor, an L2 regularization, and a squared gradient decay factor.

The initial learning rate is configured as 0.001 and controls the magnitude of the recognition model M's weight updates early in training. Too large a learning rate may make convergence unstable; too small a rate may make convergence too slow.

The learning rate drop period works with a piecewise (step) learning rate schedule to lower the learning rate periodically, allowing the recognition model M to approach its best recognition rate more steadily. In this embodiment, the learning rate drop period is configured as 10 and the learning rate drop factor as 0.1, so the learning rate shrinks from its initial value over time. For example, after the first drop period the learning rate becomes 0.001 × 0.1 = 0.0001, and after the second it becomes 0.0001 × 0.1 = 0.00001.
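The step-decay schedule described above can be sketched as a small function (an illustrative reconstruction using the values given in this embodiment, not code from the patent):

```python
def learning_rate_at(epoch: int,
                     initial_lr: float = 0.001,
                     drop_period: int = 10,
                     drop_factor: float = 0.1) -> float:
    """Piecewise-constant step decay: multiply the learning rate by
    drop_factor once every drop_period epochs."""
    return initial_lr * drop_factor ** (epoch // drop_period)

# Epochs 0-9 train at 0.001, epochs 10-19 at 0.0001, epochs 20-29 at 0.00001.
```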
The L2 regularization is configured as 0.0001; it limits the growth of the weights, prevents overfitting, and improves the model's generalization ability.

The squared gradient decay factor is configured to be less than 1, preferably 0.9. It adjusts the learning rate by maintaining a moving average of the squared gradients, yielding an adaptive learning rate. This keeps the accumulated squared gradients from growing too large, which would otherwise make the effective learning rate too small and stall training.
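To illustrate how a squared gradient decay factor of 0.9 yields an adaptive learning rate, a minimal RMSprop-style update can be written as follows (a generic sketch of this optimizer family, not the patent's actual implementation):

```python
def rmsprop_step(w, grad, sq_avg, lr=0.001, decay=0.9, eps=1e-8):
    """One RMSprop-style parameter update.

    sq_avg is an exponential moving average of squared gradients; the
    decay factor (0.9 here) keeps it from growing without bound, which
    would otherwise shrink the effective step size and stall training.
    """
    sq_avg = decay * sq_avg + (1 - decay) * grad ** 2
    w = w - lr * grad / (sq_avg ** 0.5 + eps)
    return w, sq_avg
```

Dividing by the root of the moving average normalizes the step size across parameters with very different gradient magnitudes, which is the adaptive behavior the passage describes.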
Step S04: Output risk assessment information S based on the recognition model M and the feature signal set of each image data D.

In this embodiment, the risk assessment information S includes a risk level. The risk level has five grades, presented as the text labels very low, low, moderate, high, and very high, where:
A very low risk level means no obvious abnormality or lesion is observed and there is no cancer risk.

A low risk level means some abnormalities are observed, but they are benign.

A moderate risk level means some abnormalities are observed and the likelihood of cancer is low.

A high risk level means some abnormalities are observed and the likelihood of cancer is higher.

A very high risk level means obvious abnormalities are observed and the likelihood of cancer is very high.
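The five-level mapping above can be expressed as a simple lookup (an illustrative sketch; the function and dictionary names are assumptions, with the labels taken from the text of this embodiment):

```python
RISK_LABELS = {
    1: "very low",   # no obvious abnormality; no cancer risk
    2: "low",        # some abnormalities, but benign
    3: "moderate",   # some abnormalities; low likelihood of cancer
    4: "high",       # some abnormalities; higher likelihood of cancer
    5: "very high",  # obvious abnormalities; very high likelihood of cancer
}

def risk_label(level: int) -> str:
    """Map a 1-5 risk level to its text label."""
    if level not in RISK_LABELS:
        raise ValueError(f"risk level must be 1-5, got {level}")
    return RISK_LABELS[level]
```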
Table 1:

When training and testing the recognition model M, the processing module 2 divides the image data D into a training set DS1 (80% of the image data), a test set DS2 (10%), and a validation set DS3 (10%). As Table 1 shows, a CNN with the MobileNet-V2 architecture achieves over 90% accuracy with each of the different optimizers; with the RMSprop optimizer it reaches up to 95.07% accuracy.
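The 80/10/10 split described above can be sketched as a small helper (an illustrative sketch assuming a simple shuffled split; the patent does not specify the sampling procedure, and the function name and seed are assumptions):

```python
import random

def split_80_10_10(items, seed=42):
    """Shuffle and split a dataset into training (80%), test (10%),
    and validation (10%) subsets, mirroring DS1/DS2/DS3."""
    items = list(items)
    random.Random(seed).shuffle(items)  # deterministic shuffle for reproducibility
    n = len(items)
    n_train, n_test = int(n * 0.8), int(n * 0.1)
    train = items[:n_train]
    test = items[n_train:n_train + n_test]
    val = items[n_train + n_test:]
    return train, test, val
```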
With this arrangement, another identification device installed at a medical institution need only load the recognition model M and obtain an image P to be recognized through its communication module 1. The processing module 2 of that device can then use the recognition model M to recognize the image P, generate further risk assessment information S, and display it through the display module 3, helping physicians assess prostate cancer lesions and provide more accurate tumor monitoring and treatment recommendations.

Notably, the image P to be recognized can also serve as a training image, enlarging the image data D of the training set DS1 and allowing the algorithm and the recognition model M to be improved continuously for a better learning effect and recognition rate.

It should be noted that the identification device installed at the medical institution and the identification device that establishes the recognition model M need not be two different devices; in other variations of this embodiment, they may be the same device.
From the above description, the advantages of the foregoing embodiment can be summarized as follows:

1. By constraining the optimization parameter set to specific values, the present invention raises the recognition model M's accuracy in identifying prostate cancer to over 90%. When a MobileNet-V2 CNN is paired with an RMSprop optimizer, accuracy reaches up to 95.07%. The present invention thus substantially improves recognition accuracy over the prior art.

2. Moreover, by using pre-graded image data D, the present invention not only accelerates processing but also improves the learning effect when establishing the recognition model M.

3. The MobileNet-V2 CNN is a lightweight model suitable for resource-constrained environments.

The above is merely an embodiment of the present invention and shall not limit the scope of its implementation. All simple equivalent changes and modifications made according to the claims and the specification of the present invention remain within the scope covered by this patent.
1: Communication module 2: Processing module 3: Display module D: Image data DS1: Training set DS2: Test set DS3: Validation set M: Recognition model P: Image to be recognized S: Risk assessment information
Other features and effects of the present invention will be clearly presented in the embodiments described with reference to the drawings, in which: FIG. 1 is a block diagram illustrating an embodiment of the identification device for identifying prostate cancer according to the present invention; FIG. 2 is a schematic diagram illustrating the display module of the embodiment displaying risk assessment information; and FIG. 3 is a flow chart of the deep learning method for identifying prostate cancer according to the present invention.
Claims (10)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| TW114115567A TWI895233B (en) | 2025-04-24 | 2025-04-24 | Deep learning method and identification device for identifying prostate cancer |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| TWI895233B true TWI895233B (en) | 2025-08-21 |
Family
ID=97524427
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109256212A (en) * | 2018-08-17 | 2019-01-22 | 上海米因医疗器械科技有限公司 | Bone health assessment model construction method, device, equipment, medium and assessment method |
| CN112669254A (en) * | 2019-10-16 | 2021-04-16 | 中国医药大学附设医院 | Deep learning prostate cancer bone metastasis identification system based on whole-body bone scanning image |
| WO2024037922A1 (en) * | 2022-08-15 | 2024-02-22 | Bayer Aktiengesellschaft | Prostate cancer local staging |
| EP3707510B1 (en) * | 2017-11-06 | 2024-06-26 | F. Hoffmann-La Roche AG | Diagnostic and therapeutic methods for cancer |