TWI881698B - Lesion identification learning device and method
- Publication number: TWI881698B (application TW113104217A)
- Authority: TW (Taiwan)
- Prior art keywords: lesion, type, image, images, processor
- Prior art date: 2024-02-02
Landscapes
- Electrically Operated Instructional Devices (AREA)
- Measuring And Recording Apparatus For Diagnosis (AREA)
Description
This disclosure relates to a technology for medical applications, and in particular to a lesion identification learning device and method.
In clinical diagnosis, instruments such as ultrasound, computed tomography (CT), and magnetic resonance imaging (MRI) scanners are commonly used to detect changes in lesions. In general, solid lesions are evaluated by having a person observe or compare lesion images. Faced with such subtle lesion identification in the human body, aspiring medical professionals usually have to invest considerable time and effort before reaching even a basic level of competence. The entry barrier is therefore very high. Some people are deterred by it, while others lose their enthusiasm during the learning process and give up halfway. How to help medical professionals learn lesion identification more quickly and efficiently, and thereby avoid the loss of potential medical talent, is thus an important issue in this field.
The main purpose of this disclosure is to provide a lesion identification learning device and method that enable medical professionals to learn lesion identification more quickly and efficiently, so as to avoid the loss of potential medical talent.
To achieve the above purpose, the lesion identification learning device of this disclosure includes: a memory configured to store a plurality of instructions and a plurality of candidate lesion images respectively corresponding to each of a plurality of candidate lesion types; and a processor connected to the memory and configured to access the instructions to execute the following steps: (a) selecting one of the candidate lesion types as a first lesion type, and selecting one of the candidate lesion images corresponding to the first lesion type as a question image; (b) receiving an answer type, and determining whether the answer type is the same as the first lesion type; (c) when the answer type is different from the first lesion type, selecting a plurality of reference images similar to the question image from the candidate lesion images corresponding to the first lesion type; (d) selecting the candidate lesion type that is the same as the answer type as a second lesion type, and selecting a plurality of similar images similar to the question image from the candidate lesion images corresponding to the second lesion type; and (e) generating, on an output interface, the reference images corresponding to the first lesion type and the similar images corresponding to the second lesion type for a user to learn lesion identification.
To achieve the above purpose, the lesion identification learning method of this disclosure is applicable to identification learning of a plurality of candidate lesion images respectively corresponding to each of a plurality of candidate lesion types, and includes: (a) selecting one of the candidate lesion types as a first lesion type, and selecting one of the candidate lesion images corresponding to the first lesion type as a question image; (b) receiving an answer type, and determining whether the answer type is the same as the first lesion type; (c) when the answer type is different from the first lesion type, selecting a plurality of reference images similar to the question image from the candidate lesion images corresponding to the first lesion type; (d) selecting the candidate lesion type that is the same as the answer type as a second lesion type, and selecting, from the candidate lesion images corresponding to the second lesion type, a plurality of similar images respectively similar to the reference images; and (e) displaying, on an output interface, the reference images corresponding to the first lesion type and the similar images corresponding to the second lesion type.
To achieve the above purpose, the lesion identification learning method of this disclosure is applicable to identification learning of a plurality of candidate lesion images respectively corresponding to each of a plurality of candidate lesion types, and includes: (a) selecting, by a processor, one of the candidate lesion types as a first lesion type, and selecting one of the candidate lesion images corresponding to the first lesion type as a question image; (b) receiving, by the processor, an answer type, and determining whether the answer type is the same as the first lesion type; (c) when the answer type is different from the first lesion type, selecting, by the processor, a plurality of reference images similar to the question image from the candidate lesion images corresponding to the first lesion type; (d) selecting, by the processor, the candidate lesion type that is the same as the answer type as a second lesion type, and selecting a plurality of similar images similar to the question image from the candidate lesion images corresponding to the second lesion type; and (e) displaying, by the processor, on an output interface, the reference images corresponding to the first lesion type and the similar images corresponding to the second lesion type.
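The three summaries above describe the same (a) to (e) flow from the device, method, and processor perspectives. As an illustrative aid only (not part of the original disclosure), the following minimal Python sketch shows that control flow; the image bank, similarity search, and interface callbacks (`find_similar`, `render_output`, `ask_user`) are assumed placeholder interfaces rather than elements defined in the disclosure.

```python
import random

def run_question_round(image_bank, find_similar, render_output, ask_user, top_k=3):
    """One (a)-(e) round of the lesion identification learning flow.

    image_bank   -- dict mapping lesion type -> list of candidate lesion images
    find_similar -- callable(query_image, candidates, k) -> k most similar images
    render_output, ask_user -- placeholder I/O callbacks (assumed interfaces)
    """
    # (a) pick a first lesion type and one of its images as the question image
    first_type = random.choice(list(image_bank))
    question_image = random.choice(image_bank[first_type])

    # (b) receive the answer type from the user and compare it with the first type
    answer_type = ask_user(question_image, options=list(image_bank))
    if answer_type == first_type:
        return True  # correct answer; the flow then moves on to a harder question

    # (c) reference images: images of the *correct* type that resemble the question
    reference_images = find_similar(question_image, image_bank[first_type], top_k)

    # (d) similar images: images of the *answered* (wrong) type that resemble the question
    second_type = answer_type
    similar_images = find_similar(question_image, image_bank[second_type], top_k)

    # (e) show both groups side by side so the learner can compare the two types
    render_output(question_image, reference_images, similar_images)
    return False
```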
Compared with the related art, when the user answers with the wrong lesion type, this disclosure displays images of the correct lesion type that are similar to the question image together with images of the wrong lesion type that are similar to the question image. The user can therefore learn the features of the correct lesion type and of the wrong lesion type at the same time, which improves lesion identification ability. As a result, medical professionals can learn lesion identification more quickly and efficiently, and the problem of potential loss of medical talent can be alleviated.
100: Lesion identification learning device
110: Memory
120: Processor
130: Display
140: Input circuit
tp1~tpn: Candidate lesion types
img11~imgnz: Candidate lesion images
S210~S250: Steps
300: Graphical interface
310, 610: Question fields
320~340: Input fields
350: Drop-down menu
360: Answer field
370~380: Option icons
600: Output interface
620~640: Reference fields
650~670: Similar fields
FR: Feature region
FIG. 1 is a block diagram of a lesion identification learning device in some embodiments.
FIG. 2 is a flowchart of a lesion identification learning method in some embodiments.
FIG. 3 is a schematic diagram of a graphical interface in some embodiments.
FIG. 4 is a schematic diagram of a drop-down menu of the graphical interface in some embodiments.
FIG. 5 is a schematic diagram of an answer field of the graphical interface in some embodiments.
FIG. 6 is a schematic diagram of an output interface in some embodiments.
FIG. 7 is a schematic diagram of a plurality of key feature points in some embodiments.
Referring to FIG. 1, FIG. 1 is a block diagram of a lesion identification learning device 100 in some embodiments. The lesion identification learning device 100 may be implemented by any electronic device or server (for example, a terminal processing device such as a mobile phone, desktop computer, or tablet computer, a cloud device, a server, or a cloud server). As shown in FIG. 1, the lesion identification learning device 100 includes a memory 110 and a processor 120. The processor 120 is connected to the memory 110.
In this embodiment, the memory 110 stores a plurality of instructions and a plurality of candidate lesion images respectively corresponding to each of a plurality of candidate lesion types tp1~tpn. Specifically, the candidate lesion type tp1 corresponds to candidate lesion images img11~img1m, and the candidate lesion type tp2 corresponds to candidate lesion images img21~img2o. Likewise, each of the other candidate lesion types tp3~tpn corresponds to its own plurality of candidate lesion images. Note that m~z are arbitrary positive integers and are not particularly limited. In some embodiments, the memory 110 further stores a classification model (not shown). In some embodiments, the processor 120 uses the classification model to classify all candidate lesion images into the candidate lesion types tp1~tpn. In some embodiments, the classification model may be a model of any deep learning algorithm (for example, a convolutional neural network model, a transformer model, or a YOLO (you only look once) model).
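As an illustration of how such a classification model could be used to pre-sort the candidate lesion images into the types tp1~tpn, the following sketch assumes a generic `model(image) -> lesion type` callable; the actual framework (CNN, transformer, or YOLO) is not fixed by the disclosure, and the names here are placeholders.

```python
from collections import defaultdict

def build_image_bank(images, model):
    """Group candidate lesion images by the lesion type predicted by a classifier.

    images -- iterable of (image_id, image_array) pairs
    model  -- any callable returning a lesion-type label, e.g. a CNN,
              transformer, or YOLO-style classifier as mentioned above
    """
    bank = defaultdict(list)           # lesion type -> list of images (tp1..tpn)
    for image_id, image in images:
        predicted_type = model(image)  # e.g. "diabetic macular edema"
        bank[predicted_type].append((image_id, image))
    return dict(bank)
```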
In some embodiments, the candidate lesion types tp1~tpn may be lesion types of any body tissue (for example, multiple drusen or diabetic macular edema of the eye). In some embodiments, a candidate lesion image may be any image of body tissue containing a lesion (for example, an ultrasound image of an eye with papilledema, a computed tomography (CT) image of an eye with multiple drusen, or a magnetic resonance imaging (MRI) image of an eye with diabetic macular edema). In some embodiments, the memory 110 may be implemented by a memory unit, a flash memory, a read-only memory, a hard disk, or any equivalent storage component, but is not limited thereto. In some embodiments, the above instructions may be corresponding software or firmware instruction programs.
In this embodiment, the processor 120 executes, based on these instructions, the detailed steps of the lesion identification learning method described in the following paragraphs. In some embodiments, the processor 120 may be implemented by a central processing unit (CPU), a micro control unit (MCU), a programmable logic controller (PLC), a system on chip (SoC), or a field programmable gate array (FPGA), but is not limited thereto.
In some embodiments, the lesion identification learning device 100 further includes a display 130. The display 130 displays the output interface described in the following paragraphs for the user to view and learn lesion identification. In some embodiments, the display 130 may be implemented by any type of display (for example, a liquid crystal display, a touch display, or a head-mounted display). In some embodiments, the output interface may be any visual graphical interface. In some embodiments, the lesion identification learning device 100 further includes an input circuit 140, through which the user enters the answer type described in the following paragraphs on a graphical interface. In some embodiments, the graphical interface may also be displayed on the display 130, and may likewise be any visual graphical interface. In some embodiments, the input circuit 140 may be implemented by any input circuit (for example, a keyboard, a mouse, or a touch panel). In some embodiments, the input circuit 140 may be implemented by the display 130 (for example, a touch display).
The lesion identification learning method of this disclosure is further described below. Referring to FIG. 2, FIG. 2 is a flowchart of a lesion identification learning method in some embodiments, and the method is applicable to the lesion identification learning device 100 shown in FIG. 1.
As shown in FIG. 2, the lesion identification learning method includes steps S210~S250. First, in step S210, the processor 120 selects one of the candidate lesion types tp1~tpn as a first lesion type, and selects one of the candidate lesion images corresponding to the first lesion type as a question image.
In some embodiments, the input circuit 140 receives an image type and an organ type and transmits them to the processor 120. The processor 120 then randomly selects one of the candidate lesion types related to the organ type as the first lesion type, and randomly selects, from the candidate lesion images corresponding to the first lesion type, one of the candidate lesion images corresponding to the image type as the question image. In some embodiments, the image type may be the type of image produced by any imaging modality (for example, an ultrasound image, a CT image, or an MRI image). In some embodiments, the organ type may be any type of human organ (for example, the eye, ear, stomach, or kidney). In some embodiments, the graphical interface receives the image type and the organ type entered by the user through the input circuit 140 and displays them in an imaging-type input field and an organ input field, respectively. In some embodiments, the graphical interface further has a disease input field for the user to enter the answer type described in the following paragraphs. In some embodiments, the graphical interface further has a question field that displays the question image for the user to view.
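A minimal sketch of this selection step (step S210) is given below for illustration. It assumes each candidate image carries simple organ and imaging-modality tags; the disclosure only requires that the selection be restricted to the requested organ and image type, so the data layout here is an assumption.

```python
import random

def pick_question(image_bank, organ_of_type, organ, modality):
    """Step S210: pick a first lesion type for the requested organ, then a
    question image of the requested imaging modality.

    image_bank    -- lesion type -> list of {"modality": ..., "image": ...} dicts
    organ_of_type -- lesion type -> organ it belongs to (assumed metadata)
    """
    candidate_types = [t for t, o in organ_of_type.items() if o == organ]
    first_type = random.choice(candidate_types)      # kept hidden from the user
    pool = [c for c in image_bank[first_type] if c["modality"] == modality]
    question = random.choice(pool)                   # assumes the pool is non-empty
    return first_type, question
```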
The graphical interface is illustrated below with a concrete example. Referring to FIG. 3, FIG. 3 is a schematic diagram of a graphical interface 300 in some embodiments. As shown in FIG. 3, the graphical interface 300 has input fields 320~340. The input field 320 receives the image type entered by the user (here, optical coherence tomography), and the input field 330 receives the organ type entered by the user (here, the eye). The processor 120 randomly selects one of the candidate lesion types corresponding to the eye (here, diabetic macular edema) as the first lesion type, and randomly selects, from the candidate lesion images corresponding to diabetic macular edema, one of the candidate lesion images corresponding to optical coherence tomography as the question image. Notably, the processor 120 does not display the randomly selected first lesion type, so the user cannot know what it is. The graphical interface 300 further has a question field 310 that displays the question image selected by the processor 120 based on the input fields 320 and 330 (that is, one of the optical coherence tomography images of diabetic macular edema). After viewing the question image displayed in the question field 310, the user can enter the answer type described below in the input field 340 (that is, the user views the lesion image in the question and identifies which lesion is present in the image). This trains the user's lesion identification ability.
Returning to FIG. 2, in step S220, the processor 120 receives an answer type and determines whether the answer type is the same as the first lesion type. In some embodiments, the answer type is one of the candidate lesion types related to the entered organ type. In some embodiments, the processor 120 receives, from the graphical interface, one of the candidate lesion types related to the entered organ type. In some embodiments, when the processor 120 receives from the input circuit 140 a click on the disease-related input field of the graphical interface, it generates a drop-down menu; when the processor 120 then receives from the input circuit 140 a click on one of the candidate lesion types related to the organ type in the drop-down menu, it takes the clicked candidate lesion type as the answer type. In other embodiments, the processor 120 may also display the entered answer type on the graphical interface when it receives the answer type directly from the input circuit 140.
The graphical interface is further illustrated below. Referring to FIG. 4, FIG. 4 is a schematic diagram of a drop-down menu 350 of the graphical interface 300 in some embodiments. As shown in FIG. 3 and FIG. 4, continuing the example of FIG. 3, when the user clicks the input field 340 on the graphical interface 300 through the input circuit 140, the processor 120 generates a drop-down menu 350 on the graphical interface 300. The drop-down menu 350 includes the candidate lesion types related to the eye (for example, multiple drusen, diabetic macular edema, or papilledema). The user can then view the question image displayed in the question field 310 and use the input circuit 140 to click one of multiple drusen, diabetic macular edema, or papilledema as the answer type (that is, the user selects which candidate lesion type the question image should belong to).
Returning to FIG. 2, in step S230, when the answer type is different from the first lesion type, the processor 120 selects, from the candidate lesion images corresponding to the first lesion type, a plurality of reference images similar to the question image. In some embodiments, the processor 120 uses a similarity algorithm to select, from the candidate lesion images corresponding to the first lesion type, candidate lesion images similar to the question image as the reference images. In some embodiments, the processor 120 uses the similarity algorithm to identify, from the candidate lesion images corresponding to the first lesion type, a specific number of candidate lesion images with the highest similarity to the question image as the reference images. In some embodiments, the similarity algorithm may be any algorithm for computing similarity (for example, a Euclidean distance algorithm, a Chebyshev distance algorithm, or a principal component analysis algorithm). In some embodiments, the specific number may be any number preset by the user. In some embodiments, when the answer type is different from the first lesion type, the processor 120 generates an answer field on the graphical interface and displays the first lesion type in the answer field.
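For illustration, the following is a minimal similarity search over flattened pixel vectors using the Euclidean or Chebyshev distance named above (PCA-based features or any other similarity measure could be substituted). It is a sketch of one possible implementation, not the disclosed algorithm itself.

```python
import numpy as np

def top_k_similar(question_image, candidates, k=3, metric="euclidean"):
    """Return the k candidate images most similar to the question image.

    question_image -- 2-D numpy array (a grayscale lesion image)
    candidates     -- list of 2-D numpy arrays of the same shape
    The images are compared as flattened vectors; a smaller distance
    means a higher similarity.
    """
    q = question_image.astype(float).ravel()
    distances = []
    for img in candidates:
        v = img.astype(float).ravel()
        if metric == "euclidean":
            d = np.linalg.norm(q - v)
        elif metric == "chebyshev":
            d = np.max(np.abs(q - v))
        else:
            raise ValueError("unsupported metric")
        distances.append(d)
    order = np.argsort(distances)[:k]   # indices of the k smallest distances
    return [candidates[i] for i in order]
```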
The answer field is illustrated below with a concrete example. Referring to FIG. 5, FIG. 5 is a schematic diagram of an answer field 360 of the graphical interface 300 in some embodiments. As shown in FIG. 5, when the user enters an answer type (here, multiple drusen) on the graphical interface 300 through the input circuit 140, the processor 120 receives the answer type from the graphical interface 300 and compares it with the first lesion type (here, diabetic macular edema). In this case, the processor 120 determines that the entered answer type is not diabetic macular edema. The processor 120 then generates an answer field 360 on the graphical interface 300 and displays diabetic macular edema in the answer field 360 (that is, it displays the correct answer). In addition, the graphical interface 300 further has option icons 370~380. The option icon 370 can be selected by the user to generate the output interface described in the following paragraphs, which is detailed later and therefore not elaborated here. The option icon 380 can be selected by the user to generate the graphical interface for the next question, which is similar to the graphical interface described above and therefore also not elaborated here.
In some embodiments, when the answer type is the same as the first lesion type, the processor 120 uses the similarity algorithm to select, from the candidate lesion images respectively corresponding to each of the candidate lesion types other than the first lesion type, the candidate lesion image most similar to the question image as a new question image. The processor 120 then takes the candidate lesion type corresponding to the new question image as the new first lesion type, and executes step S220 again.
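A sketch of this correct-answer branch follows, again using a plain Euclidean distance for the similarity comparison (one of the options mentioned above); the names and data layout are illustrative assumptions.

```python
import numpy as np

def next_harder_question(question_image, image_bank, first_type):
    """Correct-answer branch: among the images of every lesion type other than
    the first lesion type, pick the single image most similar to the current
    question image; it becomes the new question image and its type becomes the
    new first lesion type (step S220 is then executed again).
    """
    q = question_image.astype(float).ravel()
    best = (None, None, float("inf"))          # (type, image, distance)
    for lesion_type, images in image_bank.items():
        if lesion_type == first_type:
            continue                           # only consider the other types
        for img in images:
            d = np.linalg.norm(q - img.astype(float).ravel())
            if d < best[2]:
                best = (lesion_type, img, d)
    new_first_type, new_question_image, _ = best
    return new_first_type, new_question_image
```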
Returning to FIG. 2, in step S240, the processor 120 selects the candidate lesion type that is the same as the answer type as a second lesion type, and selects, from the candidate lesion images corresponding to the second lesion type, a plurality of similar images similar to the question image. In some embodiments, the processor 120 uses the similarity algorithm to identify, from the candidate lesion images corresponding to the second lesion type, a specific number of candidate lesion images with the highest similarity to the question image as the similar images. In other words, the processor 120 uses the similarity algorithm to find the candidate lesion images of the second lesion type that are most similar to the question image (that is, the candidate lesion images ranked highest in similarity to the question image) as the similar images. In some embodiments, this specific number may also be any number preset by the user. In some embodiments, the number of similar images is equal to the number of reference images.
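Steps S230 and S240 differ only in which lesion type the candidate images are drawn from, so they can share one retrieval routine. The short usage sketch below carries over the hypothetical `top_k_similar` helper from the sketch given after step S230.

```python
# Hypothetical usage, reusing the top_k_similar sketch shown after step S230.
k = 3  # the user-preset "specific number" of images
reference_images = top_k_similar(question_image, image_bank[first_type], k=k)
similar_images = top_k_similar(question_image, image_bank[second_type], k=k)
# The two groups are kept the same size so they can be shown side by side.
```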
In step S250, the processor 120 generates, on the output interface, the reference images corresponding to the first lesion type and the similar images corresponding to the second lesion type for the user to learn lesion identification. In some embodiments, the processor 120 uses the classification model to identify a plurality of key feature points related to the lesion in the question image (for example, by extracting the coordinates of the key feature points corresponding to the lesion from several convolutional layers of a convolutional neural network model). In some embodiments, the processor 120 further generates the question image on the output interface and renders corresponding colors at the positions of the lesion-related key feature points in the question image (for example, positions of key feature points with higher lesion tissue density are rendered in colors closer to red, while positions with lower lesion tissue density are rendered in colors closer to green). In some embodiments, the processor 120 displays the output interface on the display 130 for the user to learn lesion identification.
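A minimal sketch of the red-to-green rendering of the key feature points is given below for illustration. It assumes the point coordinates and a per-point lesion tissue density score in [0, 1] have already been obtained from the classification model (for example, from its convolutional layers); the extraction itself is not shown, and the function name is a placeholder.

```python
import numpy as np

def overlay_feature_points(gray_image, points, densities, radius=2):
    """Colorize key feature points on a grayscale question image (step S250).

    gray_image -- 2-D numpy array scaled to [0, 1]
    points     -- list of (row, col) coordinates of lesion-related feature points
    densities  -- per-point lesion tissue density scores in [0, 1];
                  1.0 is drawn red, 0.0 is drawn green, values in between blend
    Returns an RGB image of shape (H, W, 3).
    """
    h, w = gray_image.shape
    rgb = np.stack([gray_image] * 3, axis=-1)
    for (r, c), d in zip(points, densities):
        color = np.array([d, 1.0 - d, 0.0])             # red <-> green blend
        r0, r1 = max(r - radius, 0), min(r + radius + 1, h)
        c0, c1 = max(c - radius, 0), min(c + radius + 1, w)
        rgb[r0:r1, c0:c1] = 0.5 * rgb[r0:r1, c0:c1] + 0.5 * color  # blend overlay
    return rgb
```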
The output interface is illustrated below with a concrete example. Referring to FIG. 6, FIG. 6 is a schematic diagram of an output interface 600 in some embodiments. As shown in FIG. 6, the output interface 600 has a question field 610. The question field 610 displays the question image described above (that is, one of the candidate lesion images of diabetic macular edema) and generates a feature region FR at the positions of the lesion-related key feature points on the question image. The feature region FR displays colors corresponding to the key feature points: within the feature region FR, key feature points with higher lesion tissue density are shown in colors closer to red, and key feature points with lower lesion tissue density are shown in colors closer to green. Referring also to FIG. 7, FIG. 7 is a schematic diagram of the key feature points in some embodiments. In FIG. 7, key feature points with higher lesion tissue density are shown in colors closer to a first color (for example, red), and key feature points with lower lesion tissue density are shown in colors closer to a second color (for example, green). In other words, the question field 610 displays a feature-enhanced, colorized version of the question image shown in the question field 310 of the graphical interface 300 of FIG. 5. As shown in FIG. 6, the output interface 600 further has reference fields 620~640, which display the three reference images most similar to the question image (that is, the three diabetic macular edema images with the highest similarity to the question image). The reference image displayed in the reference field 620 has the highest similarity to the question image; in other words, the diabetic macular edema image shown in the reference field 620 is the one most similar to the question image.
The output interface 600 also has similar fields 650~670, which display the similar images most similar to the question image shown in the question field 310 of the graphical interface 300 of FIG. 5 (that is, the three multiple-drusen images with the highest similarity to the diabetic macular edema question image). The similar image displayed in the similar field 650 has the highest similarity to the question image shown in the question field 310 of FIG. 5; in other words, the multiple-drusen image shown in the similar field 650 is the one most similar to that question image. The multiple-drusen images displayed in the similar fields 660 and 670 have the second and third highest similarity to the question image shown in the question field 310 of FIG. 5, respectively.
Through the above steps, the user can see, on the output interface shown on the display 130, where the lesion features are located in the question image, and can also see reference images that are similar to the question image (the same lesion type) together with similar images that are hard to distinguish from the question image (a different lesion type). The user can directly observe the differences between the reference images and the most similar images of the other lesion type. In this way, the user learns not only the image features of lesions of the first lesion type but also those of the second lesion type, and can therefore identify the lesion and its type in an image more accurately and more quickly. This approach overcomes the long-standing difficulty of identifying lesions when there are too many image types.
In summary, the lesion identification learning device and method proposed in this disclosure first set the lesion type of a question and a question image corresponding to that lesion type, and display the image on a display for the user to answer. When a wrong answer is received, the device and method find images that are similar to the question image and that also correspond to the question's lesion type, and further find images of other lesion types that are similar to the question image. The user can then observe the differences and characteristic features of the two lesion types from the paired similar images on the display. In addition, the device and method display the key feature points in the question image so that the user can observe the various features of the question's lesion type. In this way, the lesion identification learning device and method proposed in this disclosure enable medical professionals to learn lesion identification more quickly and efficiently, and alleviate the problem of potential loss of medical talent.
Although this disclosure has been described above by way of embodiments, they are not intended to limit this disclosure. Any person having ordinary skill in the art may make some changes and modifications without departing from the spirit and scope of this disclosure, and the scope of protection of this disclosure shall therefore be defined by the appended claims.
Claims (12)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| TW113104217A | 2024-02-02 | 2024-02-02 | Lesion identification learning device and method |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| TWI881698B (en) | 2025-04-21 |
| TW202533252A (en) | 2025-08-16 |
Family ID: 96141926
Country Status (1)
| Country | Link |
|---|---|
| TW (1) | TWI881698B (en) |
Patent Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20200085382A1 (en) * | 2017-05-30 | 2020-03-19 | Arterys Inc. | Automated lesion detection, segmentation, and longitudinal identification |
| TWI769370B (en) * | 2019-03-08 | 2022-07-01 | 太豪生醫股份有限公司 | Focus detection apparatus and method thereof |
| CN111048170B (en) * | 2019-12-23 | 2021-05-28 | 山东大学齐鲁医院 | Method and system for generating structured diagnostic report of digestive endoscopy based on image recognition |
| TW202221568A (en) * | 2020-09-24 | 2022-06-01 | 大陸商上海商湯智能科技有限公司 | Image recognition method, electronic device and computer readable storage medium |
Also Published As
| Publication number | Publication date |
|---|---|
| TW202533252A (en) | 2025-08-16 |