
TWI877875B - Brain tomography computer-aided detection system, its method and its computer program product thereof


Info

Publication number
TWI877875B
TWI877875B
Authority
TW
Taiwan
Prior art keywords
image
images
module
detection system
trodat
Application number
TW112141805A
Other languages
Chinese (zh)
Other versions
TW202519162A (en)
Inventor
高嘉鴻
謝德鈞
張宇捷
柏棋 陳
陳奕瑾
葉依純
Original Assignee
中國醫藥大學附設醫院
Application filed by 中國醫藥大學附設醫院
Priority to TW112141805A
Application granted
Publication of TWI877875B
Publication of TW202519162A


Landscapes

  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

A brain tomography computer-aided detection system is provided. The system includes an image selection module, an image preprocessing module, and a visual scale prediction module. The image selection module selects at least one suitable original image from a set of dopamine tomography (TRODAT) original images of a person being tested. The image preprocessing module converts the at least one suitable original image into at least one processed image. The visual scale prediction module generates a visual scale (VS) prediction grade for the person being tested according to the at least one processed image.

Description

Brain tomography computer-aided detection system, method, and computer program product

The present invention belongs to the field of computer-aided recognition technology, and in particular to the field of computer-aided recognition technology for brain tomography.

Parkinson's disease (PD) is a progressive movement disorder that is usually associated with a loss of dopamine neurons. Dopamine tomography (TRODAT) is often used to detect whether dopamine neurons in a subject's brain have been lost. However, reviewing TRODAT images and making a judgment currently still relies on human effort; physicians are busy and understaffed, which often degrades the quality of the judgment or consumes a great deal of time.

Therefore, a computer-aided recognition system, method, and computer program product for brain tomography are needed to solve the above problems.

The present invention provides a computer-aided recognition technique for brain tomography that is based on a deep neural network. TRODAT images of subjects (for example, patients) are used to train a training model of the deep neural network; once training is completed, the deep neural network can assist in predicting a subject's visual scale (Visual Scale, VS) prediction grade.

According to one aspect of the present invention, a brain tomography computer-aided detection system is provided. The system includes an image selection module, an image preprocessing module, and a visual scale prediction module. The image selection module executes an image selection procedure to select at least one suitable original image from a set of TRODAT original images of a subject. The image preprocessing module executes an image preprocessing procedure to convert the at least one suitable original image into at least one processed image. The visual scale prediction module generates a Visual Scale prediction grade of the subject according to the at least one processed image.

According to another aspect of the present invention, a brain tomography computer-aided detection method is provided. The method is performed by a brain tomography computer-aided detection system that includes an image selection module, an image preprocessing module, and a visual scale prediction module. The method includes the steps of: executing, by the image selection module, an image selection procedure to select at least one suitable original image from a set of TRODAT original images of a subject; executing, by the image preprocessing module, an image preprocessing procedure to convert the at least one suitable original image into at least one processed image; and generating, by the visual scale prediction module, a Visual Scale prediction grade of the subject according to the at least one processed image.

According to yet another aspect of the present invention, a computer program product is provided. The computer program product is stored in a non-transitory computer-readable medium and is used to operate a brain tomography computer-aided detection system that includes an image selection module, an image preprocessing module, and a visual scale prediction module. The computer program product includes: an instruction that causes the image selection module to execute an image selection procedure to select at least one suitable original image from a set of TRODAT original images of a subject; an instruction that causes the image preprocessing module to execute an image preprocessing procedure to convert the at least one suitable original image into at least one processed image; and an instruction that causes the visual scale prediction module to generate a Visual Scale prediction grade of the subject according to the at least one processed image.

1: Brain tomography computer-aided detection system

10: Data acquisition interface

20: Image selection module

30: Image preprocessing module

40: Visual scale prediction module

41: First neural network

50: Microprocessor

51: Computer program product

60: Second prediction module

61: Second neural network

S21~S25: Steps

S41~S45a: Steps

S51~S57: Steps

S61~S62: Steps

S71~S73: Steps

S81~S83: Steps

FIG. 1 is a schematic diagram of a brain tomography computer-aided detection system according to an embodiment of the present invention; FIG. 2 is a flow chart of the steps of a brain tomography computer-aided detection method according to an embodiment of the present invention; FIG. 3 is a schematic diagram of a brain tomography computer-aided detection system according to another embodiment of the present invention; FIG. 4 is a flow chart of the steps of a brain tomography computer-aided detection method according to another embodiment of the present invention; FIG. 5 is a flow chart of the steps of an image selection procedure according to an embodiment of the present invention; FIG. 6 is a flow chart of the steps of an image preprocessing procedure according to an embodiment of the present invention; FIG. 7 is a flow chart of the training process of the first neural network according to an embodiment of the present invention; and FIG. 8 is a flow chart of the training process of the second neural network according to an embodiment of the present invention.

When read in conjunction with the accompanying drawings, the following embodiments clearly demonstrate the above and other technical contents, features, and/or effects of the present invention. Through the description of specific embodiments, the technical means employed by the present invention and the effects achieved will be further understood. Moreover, since the content disclosed herein should be easy to understand and can be implemented by a person skilled in the art, all equivalent substitutions or modifications that do not depart from the concept of the present invention shall be covered by the claims.

It should be noted that, unless otherwise specified, "a" or "an" element herein is not limited to a single such element and may refer to one or more of that element.

In addition, ordinal terms such as "first" or "second" in the specification and claims merely describe the claimed elements; they do not imply any order among the claimed elements or among the steps of a manufacturing method. Such ordinals are used only to distinguish one claimed element having a particular name from another claimed element having the same name.

In addition, descriptions such as "when..." in the present invention indicate "at, before, or after" a given moment and are not limited to events that occur simultaneously. Descriptions such as "disposed on" indicate the relative positions of two elements and do not limit whether the two elements are in contact, unless otherwise specified. Furthermore, when multiple effects are recited and the word "or" is used between them, each effect may exist independently, but the simultaneous existence of multiple effects is not excluded.

In addition, the terms "connected" or "coupled" in the specification and claims refer not only to a direct connection with another element but also to an indirect connection or an electrical connection with another element. Electrical connection includes direct connection, indirect connection, or communication between two elements via wireless signals.

In addition, in the specification and claims, the terms "about", "approximately", "substantially", and "roughly" generally mean that a value is within 10%, 5%, 3%, 2%, 1%, or 0.5% of a given value. A quantity given herein is an approximate quantity; that is, even without an explicit "about", "approximately", "substantially", or "roughly", such a meaning may still be implied. Furthermore, the expressions "a range from a first value to a second value" and "a range between a first value and a second value" mean that the range includes the first value, the second value, and the values between them.

In addition, in this document, terms such as "system", "apparatus", "device", "module", or "unit" refer to a digital circuit, an analog circuit, or another more general circuit that includes one electronic component or is composed of multiple electronic components; unless otherwise specified, these terms do not necessarily imply a hierarchical relationship. The above configurations depend on the actual application.

In addition, as long as it is reasonable, the technical features of the different embodiments disclosed herein may be combined to form further embodiments.

FIG. 1 is a schematic diagram of a brain tomography computer-aided detection system 1 (hereinafter referred to as system 1) according to an embodiment of the present invention. System 1 can be used to generate a Visual Scale prediction grade for a subject (for example, a patient). As shown in FIG. 1, system 1 may include a data acquisition interface 10, an image selection module 20, an image preprocessing module 30, and a visual scale prediction module 40.

In one embodiment, the data acquisition interface 10 can be used to obtain data from outside the system; that is, a user (for example, a physician) can input image data into system 1 through the data acquisition interface 10, where the image data may be, for example, a set of TRODAT original images of the subject.

The image selection module 20 can obtain the subject's set of TRODAT original images from the data acquisition interface 10 and execute an image selection procedure to select at least one suitable original image from that set.

The image preprocessing module 30 can obtain the at least one suitable original image from the image selection module 20 and execute an image preprocessing procedure to convert it into at least one processed image.

The visual scale prediction module 40 may include a first neural network 41 that has been trained by deep learning. The visual scale prediction module 40 can obtain the at least one processed image from the image preprocessing module 30, perform feature analysis on it, and output the subject's Visual Scale prediction grade. In one embodiment, the first neural network 41 may include a trained encoder and a linear classifier, where the trained encoder finds various features in the subject's at least one processed image and the linear classifier derives the subject's Visual Scale prediction grade from the features found by the encoder, although the invention is not limited thereto. In one embodiment, the visual scale prediction module 40 can output the subject's Visual Scale prediction grade; here, "output" may mean, for example, displaying the prediction grade on the screen of a computer or other similar electronic device, or outputting it as a chart, graph, image, sound, text, document, and/or file, although the invention is not limited thereto.

The above components are now described in more detail.

In one embodiment, system 1 may be a data processing device implemented by any electronic device having a microprocessor, such as a desktop computer, a laptop computer, a smart mobile device, a server, or a cloud host, or system 1 may be implemented by a chip in an electronic device. In one embodiment, system 1 may have network communication capability, over a wired or wireless network, so that system 1 can transmit and obtain data through the network. In one embodiment, the functions of system 1 may be realized by a microprocessor 50 executing a computer program product 51, where the computer program product 51 may include a plurality of instructions that cause the microprocessor 50 to perform specific operations and thereby implement the functions of, for example, the image selection module 20, the image preprocessing module 30, and the visual scale prediction module 40, although the invention is not limited thereto. In one embodiment, the computer program product 51 may be stored in a non-transitory computer-readable medium (for example, a memory), but is not limited thereto. In one embodiment, the computer program product 51 may also be stored in advance on a network server for users to download. In one embodiment, the computer program product 51 may in practice include multiple subroutines.

In addition, in one embodiment, the data acquisition interface 10 may be a physical port through which system 1 obtains external data; for example, when system 1 is a computer, the data acquisition interface 10 may be a USB port, any of various transmission-cable connectors, and so on, but is not limited thereto. The data acquisition interface 10 may also be integrated with a wireless communication chip so that data can be received wirelessly. In one embodiment, the data acquisition interface 10 may be electrically connected to a register or a memory of system 1 to store the acquired data.

In one embodiment, the subject's set of TRODAT original images is, for example, a set of "Tc-99m brain TRODAT images" used in nuclear medicine departments, but is not limited thereto. In one embodiment, the TRODAT original images are, for example, a set of images produced by a scanner performing tomographic imaging of the subject's brain a period of time (for example, about four hours) after the subject is injected with the TRODAT tracer, the scan covering the head, but are not limited thereto. In one embodiment, at least 2,000 image sets are used to train the first neural network 41 and at least 150 image sets are used to test it, but the invention is not limited thereto.

When system 1 is actually used (for example, when the first neural network 41 has been trained), system 1 can execute a brain tomography computer-aided detection method. FIG. 2 is a flow chart of the steps of the brain tomography computer-aided detection method according to an embodiment of the present invention; please also refer to FIG. 1.

As shown in FIG. 2, step S21 is first executed: the data acquisition interface 10 obtains a set of TRODAT original images of the subject. Step S22 is then executed: the image selection module 20 executes the image selection procedure and selects at least one suitable original image from the set of TRODAT original images as the image to be input to the first neural network 41 for analysis. Step S23 is then executed: the image preprocessing module 30 executes the image preprocessing procedure and converts the at least one suitable original image into at least one processed image. Step S24 is then executed: the first neural network 41 of the visual scale prediction module 40 finds a plurality of features of the at least one processed image. Step S25 is then executed: the first neural network 41 derives the subject's Visual Scale prediction grade from the features. The visual scale prediction module 40 can then output the subject's Visual Scale prediction grade.

The order of the above steps can be adjusted, and steps can be added or removed as required. In this way, system 1 can derive a subject's Visual Scale prediction grade from the subject's TRODAT images, assisting physicians in their judgment, reducing physician fatigue, and improving accuracy.

In addition, system 1 of the present invention may also have variations. FIG. 3 is a schematic diagram of a brain tomography computer-aided detection system 1 according to another embodiment of the present invention. The description of the embodiment of FIG. 1 generally applies to the embodiment of FIG. 3, so the following mainly describes the differences.

As shown in FIG. 3, system 1 further includes a second prediction module 60. The second prediction module 60 may include a second neural network 61 that has been trained by deep learning. The second prediction module 60 can obtain the at least one processed image from the image preprocessing module 30, perform feature analysis on it, and output the subject's Hoehn-Yahr Scale prediction grade. In one embodiment, the second prediction module 60 may include a trained encoder and a linear classifier, where the trained encoder finds various features in the subject's at least one processed image and the linear classifier derives the subject's Hoehn-Yahr Scale prediction grade from those features, although the invention is not limited thereto. In one embodiment, the second prediction module 60 can output the subject's Hoehn-Yahr Scale prediction grade; here, "output" may mean, for example, displaying the prediction grade on the screen of a computer or other similar electronic device, or outputting it as a chart, graph, image, sound, text, document, and/or file, although the invention is not limited thereto. In one embodiment, at least 250 image sets are used to train the second neural network 61 and at least 30 image sets are used to test it, but the invention is not limited thereto.

FIG. 4 is a flow chart of the steps of a brain tomography computer-aided detection method according to another embodiment of the present invention; please also refer to FIG. 3.

As shown in FIG. 4, step S41 is first executed: the data acquisition interface 10 obtains a set of TRODAT original images of the subject. Step S42 is then executed: the image selection module 20 executes the image selection procedure and selects at least one suitable original image from the set of TRODAT original images as the image to be input to the first neural network 41 and the second neural network 61 for analysis. Step S43 is then executed: the image preprocessing module 30 executes the image preprocessing procedure and converts the at least one suitable original image into at least one processed image. Step S44 is then executed: the first neural network 41 of the visual scale prediction module 40 finds a plurality of features of the at least one processed image. In addition, step S44a may be executed: the second neural network 61 of the second prediction module 60 finds a plurality of features of the at least one processed image. Step S45 is then executed: the first neural network 41 derives the subject's Visual Scale prediction grade from the features. In addition, step S45a may be executed: the second neural network 61 derives the subject's Hoehn-Yahr Scale prediction grade from the features. The visual scale prediction module 40 can then output the subject's Visual Scale prediction grade, and the second prediction module 60 can output the subject's Hoehn-Yahr Scale prediction grade.

The Hoehn-Yahr stage is an important indicator for assessing the severity of a patient's Parkinson's disease. System 1 can therefore use the subject's TRODAT images to derive both the subject's Visual Scale prediction grade and Hoehn-Yahr Scale prediction grade, which helps physicians assess the patient's condition more accurately and improves efficiency.

It can thus be seen that system 1 of the present invention includes the visual scale prediction module 40 and may optionally include the second prediction module 60. For ease of explanation, the following paragraphs describe the case in which system 1 includes both the visual scale prediction module 40 and the second prediction module 60.

Furthermore, one of the features of the present invention is that the TRODAT original images can undergo the image selection procedure and the image preprocessing procedure before being input to the visual scale prediction module 40 and the second prediction module 60, thereby improving the prediction performance of both modules.

The image selection procedure is described first. After the subject's brain has undergone tomographic imaging, the scanner can produce many TRODAT original images, each of which shows, for example, a cross-sectional image at some position in the brain. However, the scan usually proceeds from the top of the subject's head, while the basal ganglia region, which actually reflects how much dopamine is present, lies in the middle of the brain. Some of the original images may therefore contain more artifacts, for example images largely occupied by the skull, and these artifacts affect the accuracy of the analysis performed by the first neural network 41 and the second neural network 61. The image selection module 20 therefore performs the image selection procedure to automatically select images suitable for analysis by the first neural network 41 and the second neural network 61.

To achieve the above purpose, the image selection procedure may include a plurality of steps. FIG. 5 is a flow chart of the steps of the image selection procedure according to an embodiment of the present invention; please also refer to FIGS. 1 to 4.

First, step S51 is executed: the image selection module 20 determines whether the number of images in the subject's set of TRODAT original images is greater than a first threshold value. When the number of images in the set is greater than the first threshold value, step S51a is executed: starting from the first image in the set, the image selection module 20 finds an image that meets a preset condition and takes out that image together with a specific number of images following it as a set of preliminarily selected images. When the number of images in the set is less than or equal to the first threshold value, the image selection module 20 directly sets the set of TRODAT original images as the set of preliminarily selected images. After the set of preliminarily selected images has been obtained, step S52 is executed: a reduction optimization process is performed on the set of preliminarily selected images to obtain at least a portion of the images from that set. Step S53 is then executed: the images obtained after the reduction optimization process are center-aligned with a mask, where the mask contains a basal ganglia region. Step S54 is then executed: for each aligned image, a maximum pixel value within the basal ganglia region of that image is obtained. Step S55 is then executed: the maximum pixel values of the aligned images within the basal ganglia region are compared. Step S56 is then executed: the image whose maximum pixel value is higher than that of the other images is set as the at least one suitable original image. Step S57 may then be executed: the images immediately before and after the image with the highest maximum pixel value are also set as the at least one suitable original image. The image selection procedure is thus completed, and the image selection module 20 automatically selects at least one suitable original image for analysis by the first neural network 41 and the second neural network 61. Note that the above steps are merely an example; as long as the result is reasonably achievable, the order of the steps may be changed, and steps may be added or removed as required.

Regarding step S51: in one embodiment, the first threshold value may be, for example, between 60 and 70 images, such as 64 images, but is not limited thereto. Specifically, because each image corresponds to a scan position along a scan path that starts at the top of the subject's head, a number of images exceeding the first threshold value may mean that regions that do not need to be observed (for example, bone) have also been scanned many times, so the set of TRODAT original images may contain many images with more artifacts that are unfavorable for analysis. In this case, step S51a is needed to preliminarily select images from the set. Conversely, when the number of images does not exceed the first threshold value, the set of TRODAT original images contains fewer artifact images, so the set can be used directly for subsequent processing.

Step S51a provides the details of the preliminary selection. This step takes the first image of the set of TRODAT original images (for example, corresponding to the top of the head) as a starting point and selects images in the set that meet a preset condition as the preliminarily selected images. In one embodiment, the preset condition may include at least: (1) the image contains a plurality of pixels whose pixel values are greater than a second threshold value; and (2) the number of pixels whose pixel values are greater than the second threshold value is greater than a third threshold value. In one embodiment, the second threshold value may be, for example but not limited to, between 3 and 10, between 4 and 8, or between 4 and 6. In one embodiment, the third threshold value may be between 300 and 700, between 400 and 600, or between 450 and 550, but is not limited thereto. For example, if the second threshold value is 5 and the third threshold value is 500, then in step S51a the image selection module 20 starts from the first image in the set of TRODAT original images and searches for an image that has more than 500 pixels with pixel values greater than 5. The image first found by the image selection module 20, together with a number of images that follow it (for example, at most 64 subsequent images, i.e. 65 images including the found image, but not limited thereto), can be set as the set of preliminarily selected images.
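
A minimal Python sketch of the logic of steps S51/S51a, assuming the example values above and that `volume` is a NumPy array of raw TRODAT slices with shape (number of slices, height, width); the function name and defaults are illustrative only:

```python
import numpy as np

def preliminary_selection(volume, first_threshold=64, pixel_threshold=5,
                          count_threshold=500, follow_count=64):
    """Steps S51/S51a: keep small studies as-is; otherwise take the first slice
    with enough bright pixels plus the slices that follow it."""
    if len(volume) <= first_threshold:
        return volume                                  # step S51: small set used directly
    for i, img in enumerate(volume):
        # preset condition: more than `count_threshold` pixels brighter than `pixel_threshold`
        if np.count_nonzero(img > pixel_threshold) > count_threshold:
            return volume[i:i + follow_count + 1]      # found slice plus up to 64 followers
    return volume                                      # fallback if no slice qualifies
```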

Specifically, after the basal ganglia absorb the TRODAT tracer, the pixel values of the basal ganglia region are higher than those of other regions because the basal ganglia secrete dopamine. An image that has more than 500 pixels with pixel values greater than 5 is therefore more likely to correspond to the basal ganglia region. Accordingly, when an image that meets the preset condition is found, the scan position is already close to the basal ganglia, so that image and the subsequent images can be selected as the preliminarily selected images. Steps S51 and S51a can thus be understood.

Regarding step S52, this step further reduces and optimizes the preliminarily selected images so that the remaining images better correspond to the basal ganglia region. In one embodiment, the image selection module 20 may take the middle index of the total number of preliminarily selected images and then take a number of images before and after that middle index, for example 1/4, 1/5, 1/6, 1/7, or 1/8 of the total number, but is not limited thereto. For example, if the total number is 60 images, the middle image may be the 30th (or 31st) image, and 1/6 of the total is 10, so the image selection module 20 selects the 20th to 40th images of the preliminarily selected images as the reduced and optimized images. Step S52 can thus be understood.
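
A short sketch of the reduction of step S52 under the same assumptions, using the 1/6 example from the text; the exact indexing convention is an assumption:

```python
def reduce_to_middle(images, fraction=6):
    """Step S52: keep a window of slices centred on the middle of the
    preliminarily selected set, extending len(images)//fraction slices each way."""
    mid = len(images) // 2
    half_window = len(images) // fraction
    return images[max(mid - half_window, 0): mid + half_window + 1]
```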

Regarding step S53, this step uses a mask to obtain the basal ganglia region in the reduced and optimized images; for example, the mask may mark the extent of the basal ganglia in the image, or it may cover the area outside the basal ganglia, but is not limited thereto. The subsequent steps can thereby process the part of each image that relates to the basal ganglia.

Regarding steps S54 to S56, the image selection module 20 can obtain, for each image, the maximum pixel value among the pixels within the basal ganglia region of that image, and then compare the maximum pixel values of the images to find the image whose maximum pixel value is higher than those of the other images. Specifically, the basal ganglia exhibit higher pixel values when they secrete dopamine, so a higher pixel value usually indicates closer proximity to the region of interest (ROI); the image whose maximum pixel value is higher than those of the other images can therefore be regarded as the image best suited for analysis. The image selection module 20 can thus select at least one suitable original image for analysis by the first neural network 41 and the second neural network 61. Steps S54 to S56 can thus be understood.

Regarding step S57, the basal ganglia regions of the images immediately before and after the image with the highest maximum pixel value usually also have relatively high pixel values, so they are also suitable for analysis by the first neural network 41 and the second neural network 61 and increase the amount of input data for both networks. Step S57 therefore also uses the images immediately before and after the image with the highest maximum pixel value as suitable original images, which improves the analysis quality. Step S57 can thus be understood.
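
A sketch of steps S53 to S57 under the same assumptions, where `mask` is taken to be a boolean array of the same size as each slice that is True inside the basal ganglia region and already centred on the image centre (so the centre-point alignment of step S53 reduces to applying the mask directly):

```python
import numpy as np

def select_by_basal_ganglia_peak(images, mask):
    """Steps S53-S57: pick the slice whose maximum pixel value inside the
    basal ganglia mask is highest, together with its neighbouring slices."""
    peak_values = [img[mask].max() for img in images]   # step S54: per-slice peak in the mask
    best = int(np.argmax(peak_values))                  # steps S55-S56: highest peak wins
    lo, hi = max(best - 1, 0), min(best + 1, len(images) - 1)
    return [images[i] for i in range(lo, hi + 1)]       # step S57: add the neighbours
```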

By executing the image selection procedure, the image selection module 20 can automatically select images suitable for analysis, thereby improving the quality and effect of the analysis. The image selection procedure can thus be understood.

The image preprocessing procedure is described next. FIG. 6 is a flow chart of the steps of the image preprocessing procedure according to an embodiment of the present invention; please also refer to FIGS. 1 to 5.

First, step S61 is executed: all pixel values of the at least one suitable original image are scaled according to a maximum pixel value of the at least one suitable original image and a preset maximum pixel value. Step S62 is then executed: the scaled at least one suitable original image is normalized to produce the at least one processed image, where the normalization includes binarization.

Regarding step S61, different imaging centers may use different scan times for TRODAT tomography, so the pixel value ranges of the resulting images may also differ; for example, a shorter scan time yields a lower maximum pixel value, while a longer scan time usually yields a higher maximum pixel value. If data from different centers were fed directly into the first neural network 41 and the second neural network 61, the inconsistent maximum pixel values would reduce the accuracy of the analysis, so step S61 is needed to solve this problem. In one embodiment, system 1 can be preset with a preset maximum pixel value, and the image preprocessing module 30 can scale the pixel values of all pixels in the at least one suitable original image according to the ratio between the maximum pixel value of the at least one suitable original image and the preset maximum pixel value, so that the pixel value ranges of all images input to the first neural network 41 and the second neural network 61 are consistent. Step S61 can thus be understood.

Regarding step S62, in one embodiment, the image preprocessing module 30 can binarize the scaled at least one suitable original image; for example, pixel values greater than (or greater than or equal to) a specific value are all converted to 1, while pixel values less than or equal to (or less than) that specific value are all converted to 0, which makes the uptake in the basal ganglia stand out more clearly in the image. Note that the above steps are merely an example; as long as the result is reasonably achievable, the order of the steps may be changed, and steps may be added or removed as required.
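
A minimal sketch of steps S61 and S62, assuming a preset maximum pixel value of 255 and a binarization threshold of 32; both numbers are placeholders rather than values given in the text:

```python
import numpy as np

def preprocess(images, preset_max=255.0, binary_threshold=32.0):
    """Step S61: rescale each slice so its maximum matches the preset maximum.
    Step S62: binarise the rescaled slice to highlight basal ganglia uptake."""
    processed = []
    for img in images:
        scale = preset_max / max(float(img.max()), 1.0)                    # avoid division by zero
        scaled = img.astype(np.float32) * scale                            # step S61
        processed.append((scaled > binary_threshold).astype(np.float32))   # step S62
    return processed
```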

The image preprocessing procedure can thus be understood.

The details of the visual scale prediction module 40 are described next.

In one embodiment, the first neural network 41 of the visual scale prediction module 40 is an artificial intelligence model that uses a deep convolutional neural network (CNN) to analyze image features. In one embodiment, the first neural network 41 is formed by training a first training model (for example, a deep convolutional neural network for training) through deep learning. In one embodiment, the first training model is trained with a large number of training images using supervised contrastive learning. In one embodiment, feature paths are generated when the training of the first training model is completed; a feature path can be regarded as a neuron conduction path in the artificial intelligence model, where each neuron can represent an image feature detection point and each image feature detection point may have a different weight value. Once the first training model has been trained, the first neural network 41 is formed.

In one embodiment, the supervised contrastive learning may, for example, use self-supervised contrastive learning techniques, but is not limited thereto. The benefit of supervised contrastive learning for the present invention is that it allows the artificial intelligence model to make more effective use of label information, pulling clusters of the same class together in the embedding space while pushing apart sample clusters from different classes, so that the specificity of the features corresponding to different Visual Scale grades can be distinguished more clearly. The present invention can thereby achieve good accuracy and robustness, including robustness to image corruptions and hyperparameter changes. In one embodiment, the hardware of system 1 or of the visual scale prediction module 40 may, for example, use an Nvidia V100 graphics processor, but is not limited thereto.
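
As an illustration of the "pull same-class embeddings together, push different classes apart" behaviour described here, the following is a minimal NumPy sketch of a supervised contrastive (SupCon-style) loss; it is a generic reference implementation rather than the exact loss used in the invention, and the temperature value is an assumption:

```python
import numpy as np

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """Anchors are attracted to samples with the same label (e.g. the same
    Visual Scale grade) and repelled from all other samples in the batch."""
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)  # L2-normalise
    sim = z @ z.T / temperature                        # pairwise cosine similarities
    n = len(labels)
    self_mask = np.eye(n, dtype=bool)
    sim = np.where(self_mask, -np.inf, sim)            # never contrast an anchor with itself
    sim = sim - sim.max(axis=1, keepdims=True)         # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))     # log-softmax per row
    labels = np.asarray(labels)
    positives = (labels[:, None] == labels[None, :]) & ~self_mask
    pos_counts = positives.sum(axis=1)
    anchor_loss = -np.where(positives, log_prob, 0.0).sum(axis=1) / np.maximum(pos_counts, 1)
    return anchor_loss[pos_counts > 0].mean()           # skip anchors with no positive
```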

In one embodiment, the architecture of the first neural network 41 may include, for example, 1 to 3 input layers, 4 to 8 convolutional layers, 1 to 3 flatten layers, 1 to 3 fully connected layers, and 1 to 3 output layers, but is not limited thereto. In one embodiment, the convolutional layers are used to find and integrate a plurality of image features from the training data. The flatten layers perform a dimensional transformation on the features found by the convolutional layers. The fully connected layers establish the associations between those image features and the labels of the training data (for example, different Visual Scale grades), for example by establishing feature paths. In addition, the fully connected layers may include a loss function layer, which may be implemented, for example, with a cross-entropy loss such as categorical cross-entropy, but is not limited thereto. The above architecture can also be regarded as the architecture of the first training model from which the first neural network 41 is formed, except that the feature paths and neurons of the first training model have not yet been fully trained.

In one embodiment, the first neural network 41 may contain more than one million neurons, but is not limited thereto. In one embodiment, the hyperparameters of the first neural network 41 are set as follows: the batch size is set between 200 and 600, and the number of training epochs is set between 300 and 500, but the invention is not limited thereto.
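
A hedged Keras sketch of a classifier in the spirit of the architecture and hyperparameter ranges above; the input shape (three selected slices stacked as channels), filter counts, dense width, and five output classes are assumptions, not values stated in the text:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_visual_scale_classifier(input_shape=(128, 128, 3), num_classes=5):
    model = models.Sequential([
        layers.Input(shape=input_shape),                          # input layer
        layers.Conv2D(32, 3, activation="relu", padding="same"),  # convolutional layers
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.Flatten(),                                         # flatten layer
        layers.Dense(256, activation="relu"),                     # fully connected layer
        layers.Dense(num_classes, activation="softmax"),          # output layer
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",                # the cross-entropy loss named above
                  metrics=["accuracy"])
    return model

# Example fit call within the stated hyperparameter ranges (labels one-hot encoded):
# model.fit(x_train, y_train, batch_size=400, epochs=400)
```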

The training process of the first neural network 41 (the first training model) is described next. FIG. 7 is a flow chart of the training process of the first neural network 41 according to an embodiment of the present invention; please also refer to FIGS. 1 to 6.

As shown in FIG. 7, step S71 is first executed: a plurality of training images are input to the first training model, where each training image carries a label of the corresponding Visual Scale grade. In one embodiment, the training images may be TRODAT original images of different subjects, used directly in the original Digital Imaging and Communications in Medicine (DICOM) image format without format conversion; the pixel values of the DICOM images can be read with the pydicom package of the software tool Python 3.7.0, but are not limited thereto. In addition, the training images input to the first training model may first undergo the image selection procedure and the image preprocessing procedure described in the preceding paragraphs, so the training images can be regarded as processed images. Moreover, the training images of each subject may be, for example, a set of training images, such as a set of three training images, like the three images selected by the image selection procedure of FIG. 5.
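
A minimal sketch of reading the pixel values of a DICOM study with pydicom, as mentioned above; the file name is a placeholder:

```python
import numpy as np
import pydicom

ds = pydicom.dcmread("trodat_study.dcm")        # hypothetical DICOM file
volume = ds.pixel_array.astype(np.float32)      # e.g. (number of slices, height, width)
print(volume.shape, volume.min(), volume.max())
```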

Step S72 is then executed: the first training model performs supervised contrastive learning on the training images to find the image features in those images.

Step S73 is then executed: the first training model establishes feature paths according to the image features it has found, completing the training of the first training model. In one embodiment, the first training model may, for example, be trained with a cross-entropy loss to establish the image feature paths, but is not limited thereto.

In one embodiment, the first training model must go through at least one "training phase" to be trained and to establish a feature path, and at least one "testing phase" to test the accuracy of that feature path; only when the accuracy meets the requirements can the model serve as the first neural network 41 that is subsequently used in practice. In one embodiment, the first training model undergoes multiple training runs, each of which produces a different feature path, and the feature path with the highest accuracy is set as the actual feature path of the first neural network 41, but the invention is not limited thereto. In addition, the actual feature path of the first neural network 41 can be adjusted at any time.
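
A sketch of the "keep the most accurate run" idea described here, assuming the build_visual_scale_classifier sketch above, in-memory training and test arrays with one-hot labels, and an arbitrary number of runs:

```python
def train_best_model(x_train, y_train, x_test, y_test, runs=5):
    """Train several candidate models and keep the weights of the run whose
    test accuracy is highest (the 'testing phase' described above)."""
    best_accuracy, best_weights = 0.0, None
    for _ in range(runs):
        model = build_visual_scale_classifier()                             # sketch defined earlier
        model.fit(x_train, y_train, batch_size=400, epochs=400, verbose=0)  # training phase
        _, accuracy = model.evaluate(x_test, y_test, verbose=0)             # testing phase
        if accuracy > best_accuracy:
            best_accuracy, best_weights = accuracy, model.get_weights()
    model.set_weights(best_weights)                                         # keep the best run
    return model, best_accuracy
```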

The process of building the first neural network 41 can thus be understood. Accordingly, the visual scale prediction module 40 of the present invention can predict a subject's Visual Scale grade, which can assist physicians in their judgment and greatly improve medical efficiency.

The detailed structure of the second prediction module 60 is described next.

In one embodiment, the second neural network 61 of the second prediction module 60 is an artificial intelligence model that uses a deep convolutional neural network (CNN) to analyze image features. In one embodiment, the second neural network 61 is formed by training a second training model (for example, a deep convolutional neural network for training) through deep learning. In one embodiment, the second neural network 61 of the second prediction module 60 may adopt the same architecture as the first neural network 41, so the details are not repeated. In other words, the second training model may have the same architecture as the first training model but acquires a different prediction capability through different training data.

The training process of the second neural network 61 (the second training model) is described next. FIG. 8 is a flow chart of the training process of the second neural network 61 according to an embodiment of the present invention; please also refer to FIGS. 1 to 7.

As shown in FIG. 8, step S81 is first executed: a plurality of training images are input to the second training model, where each training image carries a label of the corresponding Hoehn-Yahr Scale grade. In one embodiment, the training images may be TRODAT original images of different subjects, and the image selection procedure and the image preprocessing procedure may be performed before the training images are input to the second training model, so the training images can be regarded as processed images.

Step S82 is then executed: the second training model performs supervised contrastive learning on the training images to find the image features in those images.

Step S83 is then executed: the second training model establishes feature paths according to the image features it has found, completing the training of the second training model. In one embodiment, the second training model may, for example, be trained with a cross-entropy loss to establish the image feature paths, but is not limited thereto.

Similar to the training of the first training model, the second training model must also go through at least one "training phase" and at least one "testing phase". In addition, the actual feature path of the trained second neural network 61 can be adjusted at any time.

The process of building the second neural network 61 can thus be understood. Accordingly, the second prediction module 60 of the present invention can predict a subject's Hoehn-Yahr Scale grade, further strengthening the assistance provided to physicians.

Furthermore, in one embodiment, after the visual scale prediction module 40 or the second prediction module 60 produces a prediction result, system 1 can output the prediction result. In one embodiment, system 1 can be linked to the user interface (UI) of a specific application (app) so that the prediction result of the visual scale prediction module 40 or the second prediction module 60 is displayed on the user interface, but is not limited thereto. In another embodiment, system 1 can automatically generate a related report according to the prediction result of the visual scale prediction module 40 or the second prediction module 60, but is not limited thereto.

藉此,透過本發明,只要將受測者的一組TRODAT原始影像輸入至系統1中,系統1即可自動挑選出適合分析的影像,並且根據影像預測出該受測者的visual scale預測等級或Hoehn-Yahr Scale預測等級,進而可輔助醫師進行醫學判斷。藉由深度學習訓練,本發明的系統1可精準地提供預測等級,可輔助受測者尋求最佳的醫療照護方式。 Thus, through the present invention, as long as a set of TRODAT original images of the subject is input into the system 1, the system 1 can automatically select the images suitable for analysis, and predict the visual scale prediction level or Hoehn-Yahr Scale prediction level of the subject based on the images, thereby assisting doctors in making medical judgments. Through deep learning training, the system 1 of the present invention can accurately provide prediction levels, which can assist the subject in seeking the best medical care method.
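Putting the pieces together, an end-to-end inference flow consistent with this description might look like the sketch below. The helper names select_images() and preprocess() are hypothetical stand-ins for the image selection and preprocessing procedures, and the averaging of per-slice predictions is an assumption.

```python
import torch

def predict_stages(trodat_series, visual_scale_net, hoehn_yahr_net=None):
    # Image selection and preprocessing steps (hypothetical helper names).
    suitable = select_images(trodat_series)
    processed = torch.stack([preprocess(img) for img in suitable])  # (N, 1, H, W) assumed
    with torch.no_grad():
        # Average the per-slice class scores, then take the most likely level.
        vs_stage = visual_scale_net(processed).mean(dim=0).argmax().item()
        hy_stage = (hoehn_yahr_net(processed).mean(dim=0).argmax().item()
                    if hoehn_yahr_net is not None else None)
    return vs_stage, hy_stage
```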

儘管本發明已透過上述實施例來說明,可理解的是,根據本發明的精神及本發明所主張的申請專利範圍,許多修飾及變化都是可能的。 Although the present invention has been described through the above embodiments, it is understandable that many modifications and variations are possible according to the spirit of the present invention and the scope of the patent application claimed by the present invention.

S41~S45a:步驟 S41~S45a: Steps

Claims (10)

1. A brain tomography computer-aided detection system, comprising: an image selection module (20) that executes an image selection procedure to select at least one suitable original image from a set of dopamine tomography (TRODAT) original images of a subject; an image preprocessing module (30) that executes an image preprocessing procedure to convert the at least one suitable original image into at least one processed image; and a visual scale prediction module (40) that generates a visual scale (Visual Scale) prediction level of the subject according to the at least one processed image.
2. The brain tomography computer-aided detection system as claimed in claim 1, wherein the image selection procedure comprises the steps of: the image selection module (20) determining whether the number of images in the set of TRODAT original images is greater than a first threshold value; when the number of images in the set of TRODAT original images is greater than the first threshold value, finding, starting from the first image of the set of TRODAT original images, an image that meets a preset condition, and taking out the image that meets the preset condition together with a specific number of images following that image as a set of preliminarily selected images, wherein the preset condition comprises: the image having a plurality of pixels whose pixel values are greater than a second threshold value, and the number of pixels whose pixel values are greater than the second threshold value being greater than a third threshold value; and when the number of images in the set of TRODAT original images is less than or equal to the first threshold value, using the set of TRODAT original images as the set of preliminarily selected images.
3. The brain tomography computer-aided detection system as claimed in claim 2, wherein the image selection procedure comprises the steps of: performing center-point alignment of at least a portion of the images in the set of preliminarily selected images with a mask, wherein the mask contains a basal ganglia region; obtaining, for each aligned image, a maximum pixel value within the basal ganglia region; comparing the maximum pixel values within the basal ganglia region of the aligned images; and setting the image having the largest maximum pixel value as the at least one suitable original image.
4. The brain tomography computer-aided detection system as claimed in claim 3, wherein the at least one suitable original image further comprises the image immediately preceding and the image immediately following the image having the largest maximum pixel value.
5. The brain tomography computer-aided detection system as claimed in claim 3, wherein the image selection procedure comprises the step of: performing a reduction optimization process on the set of preliminarily selected images, and performing center-point alignment of the reduced and optimized set of preliminarily selected images with the mask.
6. The brain tomography computer-aided detection system as claimed in claim 1, wherein the image preprocessing procedure comprises the step of: performing a normalization process on the at least one suitable original image to form the at least one processed image, wherein the normalization process comprises a binarization process.
7. The brain tomography computer-aided detection system as claimed in claim 6, wherein the normalization process comprises the step of: scaling all pixel values of the at least one suitable original image according to a maximum pixel value corresponding to the at least one suitable original image and a preset maximum pixel value.
8. The brain tomography computer-aided detection system as claimed in claim 1, further comprising a second prediction module that generates a Hoehn-Yahr Scale prediction level of the subject according to the at least one processed image.
9. A brain tomography computer-aided detection method, executed by a brain tomography computer-aided detection system, wherein the brain tomography computer-aided detection system comprises an image selection module (20), an image preprocessing module (30) and a visual scale prediction module (40), and wherein the method comprises the steps of: executing, by the image selection module (20), an image selection procedure to select at least one suitable original image from a set of TRODAT original images of a subject; executing, by the image preprocessing module (30), an image preprocessing procedure to convert the at least one suitable original image into at least one processed image; and generating, by the visual scale prediction module (40), a Visual Scale prediction level of the subject according to the at least one processed image.
10. A computer program product, stored in a non-transitory computer-readable medium, for operating a brain tomography computer-aided detection system, wherein the brain tomography computer-aided detection system comprises an image selection module (20), an image preprocessing module (30) and a visual scale prediction module (40), and wherein the computer program product comprises: an instruction causing the image selection module (20) to execute an image selection procedure to select at least one suitable original image from a set of TRODAT original images of a subject; an instruction causing the image preprocessing module (30) to execute an image preprocessing procedure to convert the at least one suitable original image into at least one processed image; and an instruction causing the visual scale prediction module (40) to generate a Visual Scale prediction level of the subject according to the at least one processed image.
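The selection steps recited in claims 2 to 5 can be illustrated with the following NumPy sketch. It is not part of the claims: every numeric value (thresholds, number of follow-on slices), the mask itself, and the omission of the explicit reduction optimization and center-point alignment steps are placeholders or simplifications, since the claims deliberately leave those open.

```python
import numpy as np

def preliminary_selection(series, count_threshold=40, pixel_threshold=100,
                          point_threshold=50, follow_on=10):
    # If the series is small enough, keep it all (claim 2, second branch).
    if len(series) <= count_threshold:
        return list(series)
    # Otherwise, find the first slice with enough bright pixels and keep it
    # together with a fixed number of following slices (claim 2, first branch).
    for i, img in enumerate(series):
        if np.count_nonzero(img > pixel_threshold) > point_threshold:
            return list(series[i:i + follow_on + 1])
    return list(series)

def pick_suitable(preliminary, basal_ganglia_mask):
    # Compare the maximum pixel value inside the basal-ganglia region of each
    # (assumed already centre-aligned) slice and keep the brightest one (claim 3).
    maxima = [img[basal_ganglia_mask].max() for img in preliminary]
    best = int(np.argmax(maxima))
    # Claim 4: optionally also return the neighbouring slices.
    return preliminary[max(best - 1, 0):best + 2]
```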
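Likewise, the normalization recited in claims 6 and 7 can be illustrated as below; the preset maximum value and the binarization cut-off are assumptions.

```python
import numpy as np

def normalize_slice(img, preset_max=255.0, binarize_fraction=0.5):
    peak = float(img.max())
    if peak == 0.0:
        # Empty slice: nothing to rescale.
        return np.zeros_like(img, dtype=np.float32), np.zeros_like(img, dtype=np.float32)
    # Claim 7 style rescaling: slice maximum mapped to the preset maximum.
    scaled = img.astype(np.float32) * (preset_max / peak)
    # Simple binarization step (claim 6); the cut-off fraction is an assumption.
    binary = (scaled >= binarize_fraction * preset_max).astype(np.float32)
    return scaled, binary
```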

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW112141805A TWI877875B (en) 2023-10-31 2023-10-31 Brain tomography computer-aided detection system, its method and its computer program product thereof

Publications (2)

Publication Number Publication Date
TWI877875B true TWI877875B (en) 2025-03-21
TW202519162A TW202519162A (en) 2025-05-16

Family

ID=95830360

Family Applications (1)

Application Number Title Priority Date Filing Date
TW112141805A TWI877875B (en) 2023-10-31 2023-10-31 Brain tomography computer-aided detection system, its method and its computer program product thereof

Country Status (1)

Country Link
TW (1) TWI877875B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200222010A1 (en) * 2016-04-22 2020-07-16 Newton Howard System and method for deep mind analysis
CN113554663A (en) * 2021-06-08 2021-10-26 浙江大学 A system for automatic analysis of dopamine transporter PET images based on CT structural images
TW202216070A (en) * 2020-10-15 2022-05-01 臺北醫學大學 Dopamine transporter check system and operation method thereof
WO2022104288A1 (en) * 2020-11-16 2022-05-19 Terran Biosciences, Inc. Neuromelanin-sensitive mri and methods of use thereof
