
TWI855947B - Device and method for analyzing cardiac ultrasound images - Google Patents


Info

Publication number
TWI855947B
Authority
TW
Taiwan
Prior art keywords
machine learning
learning model
cutting
results
ultrasound images
Prior art date
Application number
TW112150683A
Other languages
Chinese (zh)
Other versions
TW202525241A (en)
Inventor
黃睦翔
蔡惟全
劉秉彥
陳育德
林韋辰
李亦庭
蕭婷安
李玟瑤
Original Assignee
國立成功大學
國立成功大學醫學院附設醫院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 國立成功大學, 國立成功大學醫學院附設醫院 filed Critical 國立成功大學
Priority to TW112150683A priority Critical patent/TWI855947B/en
Application granted granted Critical
Publication of TWI855947B publication Critical patent/TWI855947B/en
Publication of TW202525241A publication Critical patent/TW202525241A/en

Links

Landscapes

  • Ultrasonic Diagnosis Equipment (AREA)

Abstract

The invention proposes a method for analyzing cardiac ultrasound images, executed by an analysis device. The method includes: acquiring multiple cardiac ultrasound images; inputting the ultrasound images into a first machine learning model to generate multiple cycle detection results, which indicate whether each cardiac ultrasound image belongs to a systolic phase or a diastolic phase; inputting the ultrasound images into a second machine learning model to generate multiple segmentation results, which indicate the atria, valves, and regurgitation in the ultrasound images; and, based on the cycle detection results and the segmentation results, executing a third machine learning model to generate an identification result that indicates whether atrioventricular valve regurgitation occurs.

Description

Device and method for analyzing cardiac ultrasound images

The present disclosure relates to an apparatus and method for automatically analyzing ultrasound images, and in particular to detecting atrioventricular valve regurgitation.

According to clinical epidemiological studies in Japan and Taiwan, mitral and tricuspid regurgitation are, respectively, the first and third most common valvular heart diseases. However, compared with other developed countries, the proportion of these patients who receive treatment is relatively low, which increases the public healthcare burden. Several observational and interventional clinical studies have confirmed that timely medical intervention, including appropriate drug therapy or valve replacement, can effectively improve patients' clinical prognosis and heart-failure-related symptoms. Echocardiography is an important tool for diagnosing valvular regurgitation, but the conventional approach requires diagnosis by experienced physicians, which is time- and labor-intensive.

An embodiment of the present disclosure provides a cardiac ultrasound image analysis device, including a memory and a processor. The memory stores multiple instructions, and the processor, communicatively connected to the memory, executes these instructions to perform the following steps: acquiring multiple cardiac ultrasound images; inputting the ultrasound images into a first machine learning model to generate multiple cycle detection results, which indicate whether each cardiac ultrasound image belongs to systole or diastole; inputting the ultrasound images into a second machine learning model to generate multiple corresponding segmentation results, which indicate the atria, valves, and regurgitation in the ultrasound images; and executing a third machine learning model based on the cycle detection results and the segmentation results to generate an identification result that indicates whether atrioventricular valve regurgitation occurs.

In some embodiments, the first machine learning model includes a recurrent neural network, and the cycle detection results include multiple binarized images. Each binarized image contains multiple values that are identical to one another and indicate whether the corresponding ultrasound image belongs to systole or diastole.

In some embodiments, the step of executing the third machine learning model based on the cycle detection results and the segmentation results includes: for one of the cardiac ultrasound images, performing an element-wise multiplication of the corresponding cycle detection result, the atrium segmentation result, and the regurgitation segmentation result to obtain an intersection segmentation result.

In some embodiments, the step of executing the third machine learning model based on the cycle detection results and the segmentation results further includes: concatenating the intersection segmentation result, the cardiac ultrasound image, and the valve segmentation result, and inputting them into the third machine learning model.

In some embodiments, the third machine learning model is a capsule network, and the intersection segmentation result, the cardiac ultrasound images, and the valve segmentation result belong to one of multiple views. The third machine learning model includes multiple input branches corresponding to these views; each input branch generates a per-view identification result, and the third machine learning model combines the per-view identification results to generate the final identification result.

From another perspective, an embodiment of the present disclosure provides a cardiac ultrasound image analysis method executed by an analysis device. The analysis method includes: acquiring multiple cardiac ultrasound images; inputting the ultrasound images into a first machine learning model to generate multiple cycle detection results, which indicate whether each cardiac ultrasound image belongs to systole or diastole; inputting the ultrasound images into a second machine learning model to generate multiple corresponding segmentation results, which indicate the atria, valves, and regurgitation in the ultrasound images; and executing a third machine learning model based on the cycle detection results and the segmentation results to generate an identification result that indicates whether atrioventricular valve regurgitation occurs.

The terms "first," "second," and so on used herein do not denote any particular order or sequence; they merely distinguish elements or operations described with the same technical term.

FIG. 1 illustrates an analysis device for cardiac ultrasound images according to an embodiment. Referring to FIG. 1, the analysis device 100 may be a personal computer, a laptop, a server, a distributed computer, a cloud server, an industrial computer, a smartphone, a tablet computer, or any of various electronic devices with computing capability; the present invention is not limited thereto. The analysis device 100 includes a processor 110 and a memory 120. The processor 110 is communicatively connected to the memory 120; this connection may be achieved through any wired or wireless means, or through the Internet. The processor 110 may be a central processing unit, a microprocessor, a microcontroller, an image processing chip, an application-specific integrated circuit, and so on. The memory 120 may be a random access memory, a read-only memory, a flash memory, a floppy disk, a hard disk, an optical disc, a flash drive, a magnetic tape, or a database accessible via the Internet, and stores a plurality of instructions. The processor 110 executes these instructions to carry out a cardiac ultrasound image analysis method, which is described in detail below.

First, the annotation of the cardiac ultrasound images is explained; these annotations are provided by professional physicians. First, each annotation records which view the cardiac ultrasound image belongs to; the views may include parasternal long axis (PLAX), parasternal short axis (PSAX), apical four-chamber (A4C), apical three-chamber (A3C), apical two-chamber (A2C), or subcostal (SC), among others; the present disclosure is not limited thereto. Second, the annotations also cover each leaflet of the mitral valve (anterior and posterior leaflets) and the tricuspid valve (septal, anterior, and posterior leaflets), with four nodes per leaflet marked evenly from the annulus to the tip. Third, the ventricles and atria, regurgitation, and inflow are labeled in 2D and color modes. Finally, for mitral and tricuspid regurgitation grading, the original structured reports authorized by experienced cardiologists were extracted.
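As a rough sketch of how such an annotation could be organized in code (every field name and coordinate below is an illustrative assumption, not the patent's actual data format):

```python
# Hypothetical annotation record for one frame. All field names and
# coordinates are illustrative assumptions, not the patent's format.
annotation = {
    "view": "A4C",  # one of PLAX, PSAX, A4C, A3C, A2C, SC
    "mitral_valve": {
        # four nodes per leaflet, marked evenly from the annulus to the tip
        "anterior": [(120, 80), (125, 90), (130, 100), (135, 110)],
        "posterior": [(160, 80), (155, 90), (150, 100), (145, 110)],
    },
    "regions": ["left_atrium", "left_ventricle", "regurgitation", "inflow"],
    "regurgitation_grade": "moderate",  # taken from the structured report
}
```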

FIG. 2 is a schematic flowchart of a cardiac ultrasound image analysis method according to an embodiment. Referring to FIG. 2, multiple cardiac ultrasound images to be analyzed are first acquired; these may be color or grayscale images, and the present disclosure is not limited thereto. Which view a cardiac ultrasound image belongs to can be identified by a machine learning model (not shown). Cardiac ultrasound images 201~205, which belong to the same view, are used here as an example. In this embodiment, each view has multiple cardiac ultrasound images (for example, 20) that form a video; the present disclosure does not limit the number of cardiac ultrasound images 201~205.

Next, the cardiac ultrasound images 201~205 of the same view are input into the machine learning model 210 to generate multiple cycle detection results. These cycle detection results correspond respectively to the cardiac ultrasound images 201~205 and indicate whether each corresponding image belongs to diastole 211 or systole 212; for example, images 201~202 belong to diastole 211, images 203~204 belong to systole 212, and so on. The machine learning model 210 is also called an event detector. FIG. 3 is a network architecture diagram of the cycle detector according to an embodiment. Referring to FIG. 3, the machine learning model 210 includes a three-dimensional convolutional neural network 310, recurrent neural networks 320 and 330, and a one-dimensional convolutional layer 340. For example, the cardiac ultrasound images 201~205 may be concatenated into a three-dimensional feature map and input into the three-dimensional convolutional neural network 310, after which the recurrent neural networks 320 and 330 process the temporal correlation. In this embodiment, the recurrent neural networks 320 and 330 are Long Short-Term Memory (LSTM) networks, but in other embodiments they may be other suitable networks such as Gated Recurrent Units (GRU); the present disclosure is not limited thereto. The output of the recurrent neural network 330 passes through the one-dimensional convolutional layer 340, which outputs multiple values 341~343, each corresponding to one cardiac ultrasound image; here the value "1" denotes systole and the value "0" denotes diastole. For ease of reading, the label "3D CNN" is added to the three-dimensional convolutional neural network 310, "LSTM" to the recurrent neural networks 320 and 330, and "1D Conv" to the one-dimensional convolutional layer 340 in FIG. 3.
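The final step of the detector, mapping each frame's 1D-conv output to a phase label, can be sketched as follows (the 0.5 threshold is an assumption; the patent does not specify the final activation or cut-off):

```python
def frame_phase_labels(conv_outputs, threshold=0.5):
    """Map per-frame detector outputs to phase labels:
    1 = systole, 0 = diastole (threshold is an assumed choice)."""
    return [1 if v > threshold else 0 for v in conv_outputs]

labels = frame_phase_labels([0.1, 0.2, 0.9, 0.8, 0.4])
# -> [0, 0, 1, 1, 0]: frames 3 and 4 classified as systole
```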

Next, the value 341 is converted into a binarized image 351, the value 342 into a binarized image 352, and so on. All pixels of each binarized image 351~353 carry the same value, indicating whether the corresponding cardiac ultrasound image belongs to systole or diastole; the binarized images 351~353 constitute the multiple cycle detection results described above. Binarized images (rather than single values) are generated so that they can be used by subsequent convolutional layers, as explained in detail below.
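Expanding a scalar label into a constant-valued image is a simple broadcast; a minimal sketch (the image size is chosen arbitrarily here):

```python
def label_to_mask(label, height, width):
    # Fill an image of the segmentation maps' spatial size with the
    # scalar phase label so later convolutional layers can consume it.
    return [[label] * width for _ in range(height)]

mask = label_to_mask(1, 4, 4)  # a 4x4 "systole" image of all ones
```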

Returning to FIG. 2, the cardiac ultrasound images 201~205 are also input into the machine learning model 220 to generate multiple segmentation results 221~223, where segmentation result 221 indicates the regurgitation in the cardiac ultrasound image and segmentation result 222 indicates the ventricles and atria. Segmentation result 223 indicates the position of the valve; for example, several points may represent the valve leaflets. FIG. 4 is a network architecture diagram of the machine learning model 220 according to an embodiment. As shown in FIG. 4, in this embodiment the machine learning model 220 adopts a stacked hourglass (SHG) model, but in other embodiments a U-Net, a fully convolutional network, or other suitable networks may also be used; the present disclosure is not limited thereto.

Returning to FIG. 2, the machine learning model 240 is then executed according to the cycle detection results output by the machine learning model 210 and the segmentation results output by the machine learning model 220 to generate an identification result 241, which indicates whether atrioventricular valve regurgitation occurs. For example, the cycle detection results and all the segmentation results may be concatenated into a larger feature map and passed to the machine learning model 240 for identification. In this embodiment, medical knowledge is also incorporated into the design of the identification flow. Specifically, to detect regurgitation caused by the mitral and tricuspid valves, since such regurgitation occurs only inside the atria and only during systole, for a given cardiac ultrasound image the corresponding cycle detection result (the binarized image of FIG. 3), the atrium segmentation result 222, and the regurgitation segmentation result 221 can be combined through an element-wise multiplication 230. In other words, the binarized image, segmentation result 222, and segmentation result 221 all have the same height and width. For example, the value "1" in the cycle detection result denotes systole, the value "1" in the atrium segmentation result 222 denotes the atrium, and the value "1" in the regurgitation segmentation result 221 denotes regurgitation; the element-wise multiplication 230 is therefore equivalent to computing the intersection of systole, the atrium, and the regurgitation. This operation avoids certain errors and improves identification accuracy. The element-wise multiplication 230 produces an intersection segmentation result 231. The intersection segmentation result 231, the cardiac ultrasound images 201~205, and the valve segmentation result 223 are concatenated and input into the machine learning model 240.
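The intersection step can be sketched with NumPy (toy 4x4 masks; the real masks share the height and width of the ultrasound frames):

```python
import numpy as np

cycle_mask = np.ones((4, 4), dtype=np.uint8)    # 1 = this frame is systolic
atrium_mask = np.zeros((4, 4), dtype=np.uint8)
atrium_mask[2:, :] = 1                          # atrium in the lower half
regurg_mask = np.zeros((4, 4), dtype=np.uint8)
regurg_mask[1:3, 1:3] = 1                       # candidate regurgitation jet

# Element-wise product keeps only pixels that are simultaneously in a
# systolic frame, inside the atrium, and flagged as regurgitation.
intersection = cycle_mask * atrium_mask * regurg_mask
# Only the part of the jet that lies inside the atrium survives.
```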

The above operations target the cardiac ultrasound images 201~205 of one particular view. In some embodiments, the machine learning model 240 may output the identification result 241 based on the cardiac ultrasound images of a single view; in other embodiments it may also consider multiple views and combine the images of those views to output the identification result 241. Cardiac ultrasound images of other views can be processed in the same way as images 201~205. Therefore, if there are n views, a total of n machine learning models 210 and n machine learning models 220 need to be trained, where n is a positive integer.

In some embodiments, the machine learning model 240 adopts a capsule network. FIG. 5 is a schematic architecture diagram of a capsule network according to an embodiment. Referring to FIG. 5, the machine learning model 240 includes multiple input branches 501~502, each corresponding to one view. For input branch 501, the intersection segmentation result 231, the cardiac ultrasound images 201~205, and the valve segmentation result 223 generated above are concatenated to form a feature map 511. The feature map 511 is input into a feature extractor 521 to produce a feature vector 531, which then passes through a classifier (not shown) to produce a per-view identification result 541. For example, the per-view identification result 541 may contain two values indicating whether atrioventricular valve regurgitation under this view is significant or not significant. In other embodiments, the per-view identification result 541 may instead contain a score indicating the severity of the regurgitation. Similarly, input branch 502 produces a corresponding feature map 512, and the feature extractor 522 produces the corresponding feature vector 532 and per-view identification result 542. The per-view identification results 541~542 of the different views are combined into a vector 551, and an identifier (not shown) then outputs the identification result 561, which, for example, also contains two values indicating whether the atrioventricular valve regurgitation is significant or not significant.
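The per-view combination can be illustrated with a much simpler stand-in for the capsule network: concatenate the two-value per-view results into one vector and average them (the averaging is purely illustrative; the patent's model learns this combination):

```python
import numpy as np

def combine_views(view_results):
    """Concatenate per-view (significant, not-significant) pairs into one
    vector and reduce them to a final verdict. The real model is a capsule
    network; the averaging here only illustrates the data flow."""
    vec = np.concatenate(view_results)        # corresponds to vector 551
    scores = vec.reshape(-1, 2).mean(axis=0)  # average over the views
    return {"significant": float(scores[0]),
            "not_significant": float(scores[1])}

result = combine_views([np.array([0.8, 0.2]), np.array([0.6, 0.4])])
```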

The above atrioventricular valve regurgitation may represent mitral regurgitation and/or tricuspid regurgitation; in other embodiments, regurgitation of other valves may also be detected. A person of ordinary skill in the art can modify the above network architecture for different detection targets. For example, if a certain valve's regurgitation occurs only during diastole, the value "1" in the cycle detection result can be set to denote diastole; if a certain valve's regurgitation occurs only in the ventricle, the ventricle segmentation result can be used in the element-wise multiplication described above.

FIG. 6 is a schematic diagram of experimental results according to an embodiment. FIG. 6 shows the segmentation result 601 of the mitral valve, the segmentation result 602 of the tricuspid valve, the segmentation result 603 of the ventricles and atria, the segmentation result 604 of the regurgitation, and the segmentation result 605 of the inflow. In FIG. 6 the segmentation results of the left and right ventricles and atria are shown in the same figure, but the left and right chambers may carry different labels.

FIG. 7 illustrates experimental results after applying the element-wise multiplication constraint according to an embodiment. Referring to FIG. 7, a regurgitation region 711 is indicated in segmentation result 710, but it extends into the ventricle. After the element-wise multiplication described above, the segmentation result is constrained to the atrium, yielding segmentation result 720, which indicates the regurgitation region 721 more accurately.

FIG. 8 is a flowchart of a cardiac ultrasound image analysis method according to an embodiment. In step 801, multiple cardiac ultrasound images are acquired. In step 802, the ultrasound images are input into a first machine learning model to generate multiple cycle detection results, which indicate whether each cardiac ultrasound image belongs to systole or diastole. In step 803, the ultrasound images are input into a second machine learning model to generate multiple corresponding segmentation results, which indicate the atria, valves, and regurgitation in the ultrasound images. In step 804, a third machine learning model is executed based on the cycle detection results and the segmentation results to generate an identification result, which indicates whether atrioventricular valve regurgitation occurs. The steps of FIG. 8 have been described in detail above and are not repeated here. Notably, the steps of FIG. 8 may be implemented as multiple pieces of program code or as circuits; the present invention is not limited thereto. In addition, the method of FIG. 8 may be used in combination with the above embodiments or on its own; in other words, other steps may be inserted between the steps of FIG. 8.
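Steps 801-804 can be wired together as follows (the three models are stub callables here; only the data flow follows the text, and the stub behaviors are assumptions for illustration):

```python
import numpy as np

def analyze(frames, event_detector, segmenter, classifier):
    """Steps 801-804 with the three machine learning models passed in
    as callables; their internals are stubbed for illustration."""
    cycle = event_detector(frames)             # step 802: phase masks
    atrium, valve, regurg = segmenter(frames)  # step 803: segmentation
    intersection = cycle * atrium * regurg     # per-pixel intersection
    features = np.concatenate([intersection, frames, valve], axis=0)
    return classifier(features)                # step 804: identification

# Stub models operating on a toy 2-frame, 4x4-pixel "video":
frames = np.ones((2, 4, 4))
detect = lambda f: np.ones_like(f)             # pretend: all systole
segment = lambda f: (np.ones_like(f), np.ones_like(f), np.zeros_like(f))
classify = lambda x: "significant" if x[:2].sum() > 0 else "not significant"

verdict = analyze(frames, detect, segment, classify)
# No regurgitation pixels survive the intersection -> "not significant"
```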

From another perspective, the present invention also provides a computer program product, which may be written in any programming language and/or for any platform. When the computer program product is loaded into a computer system and executed, it performs the analysis method described above.

Beyond the above examples, the machine learning models mentioned in this disclosure may also be decision trees, random forests, multilayer neural networks, support vector machines, and so on; the present invention is not limited thereto. When convolutional neural networks are used, in addition to the specific examples provided above, architectures such as LeNet, AlexNet, VGG, GoogLeNet, ResNet, DenseNet, or YOLO (You Only Look Once) may be adopted. The loss function may be mean square error (MSE), mean absolute error (MAE), cross-entropy, the Huber loss, the Log-Cosh loss, and so on. The parameter update procedure may adopt gradient descent, backpropagation, and so on; the present disclosure is not limited thereto.

Although the present invention has been disclosed above through the embodiments, they are not intended to limit the invention. Any person of ordinary skill in the art may make minor changes and refinements without departing from the spirit and scope of the invention; the scope of protection of the invention is therefore defined by the appended claims.

100: Analysis device

110: Processor

120: Memory

201~205: Cardiac ultrasound images

210, 220, 240: Machine learning models

211: Diastole

212: Systole

221~223: Segmentation results

230: Element-wise multiplication

231: Intersection segmentation result

241: Identification result

310: Three-dimensional convolutional neural network

320, 330: Recurrent neural networks

340: One-dimensional convolutional layer

341~343: Values

351~353: Binarized images

501, 502: Input branches

511, 512: Feature maps

521, 522: Feature extractors

531, 532: Feature vectors

541, 542: Per-view identification results

551: Vector

561: Identification result

601~605, 710, 720: Segmentation results

711, 721: Regurgitation regions

801~804: Steps

To make the above features and advantages of the present invention more comprehensible, embodiments are described in detail below with reference to the accompanying drawings. FIG. 1 illustrates an analysis device for cardiac ultrasound images according to an embodiment. FIG. 2 is a schematic flowchart of a cardiac ultrasound image analysis method according to an embodiment. FIG. 3 is a network architecture diagram of the cycle detector according to an embodiment. FIG. 4 is a network architecture diagram of the stacked hourglass model according to an embodiment. FIG. 5 is a schematic architecture diagram of the capsule network according to an embodiment. FIG. 6 is a schematic diagram of experimental results according to an embodiment. FIG. 7 illustrates experimental results after applying the element-wise multiplication constraint according to an embodiment. FIG. 8 is a flowchart of the cardiac ultrasound image analysis method according to an embodiment.


Claims (10)

1. A cardiac ultrasound image analysis device, comprising: a memory, storing a plurality of instructions; and a processor, communicatively connected to the memory, executing the instructions to perform the following steps: acquiring a plurality of cardiac ultrasound images; inputting the ultrasound images into a first machine learning model to generate a plurality of cycle detection results, the cycle detection results indicating whether each of the cardiac ultrasound images belongs to a systole or a diastole; inputting the ultrasound images into a second machine learning model to generate a plurality of corresponding segmentation results, the segmentation results indicating a plurality of atria, a plurality of valves, and a regurgitation in each of the ultrasound images; and executing a third machine learning model based on the cycle detection results and the segmentation results to generate an identification result, the identification result indicating whether an atrioventricular valve regurgitation occurs.

2. The analysis device of claim 1, wherein the first machine learning model comprises a recurrent neural network, the cycle detection results comprise a plurality of binarized images, and one of the binarized images comprises a plurality of values that are identical to one another and indicate whether the corresponding ultrasound image belongs to the systole or the diastole.
3. The analysis device of claim 2, wherein the step of executing the third machine learning model according to the cycle detection results and the segmentation results comprises: for one of the cardiac ultrasound images, performing an element-wise multiplication on the corresponding cycle detection result, the segmentation result of the atria, and the segmentation result of the regurgitation to obtain an intersection segmentation result.

4. The analysis device of claim 3, wherein the step of executing the third machine learning model according to the cycle detection results and the segmentation results further comprises: concatenating the intersection segmentation result, the cardiac ultrasound images, and the segmentation results of the valves, and inputting them into the third machine learning model.

5. The analysis device of claim 4, wherein the third machine learning model is a capsule network; the intersection segmentation result, the cardiac ultrasound images, and the segmentation results of the valves belong to one of a plurality of views; the third machine learning model comprises a plurality of input branches corresponding to the views, each of the input branches being configured to generate a per-view identification result; and the third machine learning model combines the per-view identification results to generate the identification result.
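The element-wise multiplication and concatenation steps of claims 3 and 4 can be sketched with NumPy as below. The mask contents, image sizes, and channel ordering are illustrative assumptions; real masks would come from the first and second machine learning models:

```python
import numpy as np

H, W = 4, 4
# Hypothetical per-frame inputs.
cycle_map = np.ones((H, W), dtype=np.uint8)   # frame tagged as systole
atrium_mask = np.zeros((H, W), dtype=np.uint8)
atrium_mask[1:3, 1:4] = 1                     # atrium region
regurg_mask = np.zeros((H, W), dtype=np.uint8)
regurg_mask[2:4, 2:4] = 1                     # regurgitation jet

# Element-wise multiplication keeps only the pixels that are set in
# all three maps: regurgitation inside an atrium during the tagged phase.
intersection = cycle_map * atrium_mask * regurg_mask

# Concatenate the intersection with the raw frame and the valve mask
# along a channel axis before feeding the third model (order assumed).
frame = np.random.rand(H, W).astype(np.float32)
valve_mask = np.zeros((H, W), dtype=np.float32)
stacked = np.stack([intersection.astype(np.float32), frame, valve_mask], axis=0)
```

Because all three maps are binary, the product equals their logical intersection, which is why the claims call its output an intersection segmentation result.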
6. A cardiac ultrasound image analysis method, performed by an analysis device, the analysis method comprising: acquiring a plurality of cardiac ultrasound images; inputting the ultrasound images into a first machine learning model to generate a plurality of cycle detection results, the cycle detection results indicating whether each of the cardiac ultrasound images belongs to a systolic phase or a diastolic phase; inputting the ultrasound images into a second machine learning model to generate a plurality of corresponding segmentation results, the segmentation results indicating a plurality of atria, a plurality of valves, and a regurgitation in each of the ultrasound images; and executing a third machine learning model according to the cycle detection results and the segmentation results to generate an identification result, the identification result indicating whether an atrioventricular valve regurgitation has occurred.

7. The analysis method of claim 6, wherein the first machine learning model comprises a recurrent neural network, the cycle detection results comprise a plurality of binarized images, and one of the binarized images comprises a plurality of values that are identical to one another and indicate whether the corresponding ultrasound image belongs to the systolic phase or the diastolic phase.
8. The analysis method of claim 7, wherein the step of executing the third machine learning model according to the cycle detection results and the segmentation results comprises: for one of the cardiac ultrasound images, performing an element-wise multiplication on the corresponding cycle detection result, the segmentation result of the atria, and the segmentation result of the regurgitation to obtain an intersection segmentation result.

9. The analysis method of claim 8, wherein the step of executing the third machine learning model according to the cycle detection results and the segmentation results further comprises: concatenating the intersection segmentation result, the cardiac ultrasound images, and the segmentation results of the valves, and inputting them into the third machine learning model.

10. The analysis method of claim 9, wherein the third machine learning model is a capsule network; the intersection segmentation result, the cardiac ultrasound images, and the segmentation results of the valves belong to one of a plurality of views; the third machine learning model comprises a plurality of input branches corresponding to the views, each of the input branches being configured to generate a per-view identification result; and the third machine learning model combines the per-view identification results to generate the identification result.
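The final claims describe a multi-view model whose input branches each produce a per-view identification result that is then combined into one decision. The claims do not specify the combination rule; a minimal sketch assuming each branch emits a regurgitation probability and the branches are combined by averaging:

```python
import numpy as np

def combine_view_results(view_probs: list) -> int:
    """Combine per-view identification results into one binary decision.

    Each input branch of the (hypothetical) multi-view model yields a
    probability that atrioventricular valve regurgitation is present;
    averaging the branch outputs and thresholding at 0.5 is one simple
    way to combine them (the combination rule is an assumption here).
    """
    mean_prob = float(np.mean(view_probs))
    return int(mean_prob >= 0.5)

# Three hypothetical view branches: two suggest regurgitation, one does not.
views = [0.8, 0.6, 0.3]
decision = combine_view_results(views)
```

Averaging lets a confident majority of views outvote a single ambiguous one, which is the usual motivation for fusing multiple echocardiographic views rather than classifying each in isolation.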
TW112150683A 2023-12-26 2023-12-26 Device and method for analyzing cardiac ultrasound images TWI855947B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW112150683A TWI855947B (en) 2023-12-26 2023-12-26 Device and method for analyzing cardiac ultrasound images


Publications (2)

Publication Number Publication Date
TWI855947B true TWI855947B (en) 2024-09-11
TW202525241A TW202525241A (en) 2025-07-01

Family

ID=93649215

Family Applications (1)

Application Number Title Priority Date Filing Date
TW112150683A TWI855947B (en) 2023-12-26 2023-12-26 Device and method for analyzing cardiac ultrasound images

Country Status (1)

Country Link
TW (1) TWI855947B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI768774B (en) * 2021-03-17 2022-06-21 宏碁股份有限公司 Method for evaluating movement state of heart
TW202236296A (en) * 2021-03-10 2022-09-16 宏碁股份有限公司 Image processing apparatus for cardiac image evaluation and ventricle status identification method


Also Published As

Publication number Publication date
TW202525241A (en) 2025-07-01

Similar Documents

Publication Publication Date Title
US11957507B2 (en) Systems and methods for a deep neural network to enhance prediction of patient endpoints using videos of the heart
US10702247B2 (en) Automatic clinical workflow that recognizes and analyzes 2D and doppler modality echocardiogram images for automated cardiac measurements and the diagnosis, prediction and prognosis of heart disease
US11446009B2 (en) Clinical workflow to diagnose heart disease based on cardiac biomarker measurements and AI recognition of 2D and doppler modality echocardiogram images
Nurmaini et al. Accurate detection of septal defects with fetal ultrasonography images using deep learning-based multiclass instance segmentation
Sfakianakis et al. GUDU: Geometrically-constrained Ultrasound Data augmentation in U-Net for echocardiography semantic segmentation
Yang et al. Deep RetinaNet for dynamic left ventricle detection in multiview echocardiography classification
WO2025161189A1 (en) Subject analysis method and apparatus, computer device and nonvolatile storage medium
JP7369437B2 (en) Evaluation system, evaluation method, learning method, trained model, program
TWI855947B (en) Device and method for analyzing cardiac ultrasound images
US12399932B1 (en) Apparatus and methods for visualization within a three-dimensional model using neural networks
US20250209697A1 (en) Apparatus and method for generating a three-dimensional (3d) model with an overlay
US12308113B1 (en) Apparatus and methods for synthetizing medical images
TWI849997B (en) Method and electrical device for analyzing cardiac ultrasound image
Cerna Large scale electronic health record data and echocardiography video analysis for mortality risk prediction
CN119763747B (en) A self-learning electronic medical record structuring method and device based on prefix fine-tuning and reinforcement learning
Firdous et al. Prevention of autopsy by establishing a cause-effect relationship between pulmonary embolism and heart-failure using machine learning
Benjamins et al. Hybrid cardiac imaging: The role of machine learning and artificial intelligence
Ezhilan et al. Ensemble learning approaches for cardiovascular diseases prediction: a comparative evaluation
US20260038175A1 (en) Apparatus and method for generating a three-dimensional (3d) model with an overlay
Dash et al. Rheumatic Carditis Screening Unveiled on YOLO Algorithms Insights
US20250295349A1 (en) Apparatus and methods for automatic suggestion of atrial fibrillation cases based on a presence of abnormal pulmonary vein anatomy
Lane A computer vision pipeline for fully automated echocardiogram interpretation
Avila et al. The EchoCardiography open data base EchoUNAL: a benchmarking study for automatic classification of six echocardiographic views
CN117979907A (en) Methods and systems for generating a likelihood of heart failure with preserved ejection fraction (HFpEF)
Petersen et al. Current and Future Role of Artificial Intelligence in Cardiac Imaging, Volume II