
TWI868019B - Operation training system for ultrasound and operation training method for ultrasound

Info

Publication number: TWI868019B
Authority: TW (Taiwan)
Prior art keywords: ultrasound, virtual, display, hand, operation training
Application number: TW113117469A
Other languages: Chinese (zh)
Other versions: TW202544753A (en)
Inventors: 謝凱生, 楊東華, 許玉龍, 王韋竣
Original assignee: 中國醫藥大學
Application filed by 中國醫藥大學
Priority to TW113117469A
Priority to US18/888,567 (US20250095512A1)
Application granted
Publication of TWI868019B
Publication of TW202544753A

Landscapes

  • Rehabilitation Tools (AREA)

Abstract

An operation training system for ultrasound and an operation training method for ultrasound are proposed. The operation training system includes a sensing device, a data processing device, and a display device. The sensing device is configured to sense a hand movement signal and a voice signal. The data processing device analyzes the hand movement signal to perform virtual ultrasound examination, identifies one of a plurality of ultrasound data to be played, and generates a view judgment result and a disease judgment result. The data processing device also analyzes the voice signal to extract at least one keyword and generates answer content corresponding to the voice signal based on the at least one keyword. In this way, the effectiveness of simulated ultrasound operation training is improved.

Description

Ultrasound operation training system and ultrasound operation training method

The present disclosure relates to an operation training system and an operation training method, and in particular to an ultrasound operation training system and an ultrasound operation training method.

Professional ultrasound examiners must undergo rigorous training to know which organ or region is to be examined and to move or rotate the ultrasound probe appropriately to obtain ultrasound images at specific positions or angles, which helps them correctly judge the condition of the organ or region. Furthermore, they must be able to judge from the ultrasound images whether the organ or region is normal or abnormal.

Conventional ultrasound operation training uses a dummy or a real person for ultrasound examination practice. However, in pediatric ultrasound examination, for example, it is difficult to have a child lie still on a bed or chair to cooperate with the training, so there is still room for improvement.

In view of this, developing an ultrasound operation training system and an ultrasound operation training method has become a goal pursued by those in academia and industry.

To solve the above problems, the present disclosure provides an ultrasound operation training system and an ultrasound operation training method that, through the configuration of the system, can effectively perform simulation training.

According to an embodiment of the present disclosure, an ultrasound operation training system is provided, which includes a sensing device, a data processing device, and a display device. The sensing device is configured to sense a hand of a user to generate a hand motion signal and to capture the user's speech to generate a voice signal. The data processing device is signal-connected to the sensing device and includes a display generation module, an analysis module, a recognition module, and an interaction module. The display generation module is configured to generate a virtual scene, which includes a virtual human body, a virtual hand, a virtual probe, and a virtual ultrasound display. The analysis module is signal-connected to the display generation module and is configured to analyze the hand motion signal and, according to the hand motion signal, control the virtual hand to move the virtual probe within the virtual scene and perform virtual ultrasound examination, so that the display generation module selects one of a plurality of ultrasound data to be played and, according to the scanning angle of the virtual probe, switches to another of the ultrasound data to be played. The recognition module is signal-connected to the display generation module and the analysis module and is configured to recognize the ultrasound data being played to generate a view judgment result and a disease judgment result, and to store the view judgment result and the disease judgment result in a question-answering model. The interaction module is signal-connected to the display generation module and the analysis module and is configured to analyze the voice signal to extract at least one keyword and to generate answer content corresponding to the voice signal according to the question-answering model. The display device is signal-connected to the display generation module and is configured to display the virtual scene and play the answer content.

Other examples of the aforementioned embodiment are as follows: the ultrasound operation training system further includes a database containing a plurality of ultrasound view samples and a plurality of ultrasound disease samples corresponding to different organs; the recognition module extracts at least one feature of the ultrasound data being played, compares it with the ultrasound view samples to generate the view judgment result, and compares it with the ultrasound disease samples to generate the disease judgment result.

Other examples of the aforementioned embodiment are as follows: the database further includes the question-answering model, which receives the at least one keyword and computes the answer content according to the at least one keyword.

Other examples of the aforementioned embodiment are as follows: the ultrasound operation training system further includes a database containing at least one ultrasound file, the at least one ultrasound file being divided into the plurality of ultrasound data, each of which is an ultrasound dynamic clip or an ultrasound static frame; the data processing device further includes an ultrasound data receiving module for receiving the ultrasound data.

Other examples of the aforementioned embodiment are as follows: the display device further includes a wearable portion for connecting and supporting a wearable virtual reality display and for being worn on a head of the user; the sensing device includes a head position sensor located at the wearable portion to sense the user's head to generate a head motion signal and the voice signal.

Other examples of the aforementioned embodiment are as follows: the virtual scene further includes at least one button, which is configured to be triggered to change a state of the virtual human body in the virtual scene, freeze the image of the virtual ultrasound display, switch a scanning area prompt type on the virtual human body, or change the scanning subject.

According to another embodiment of the present disclosure, an ultrasound operation training method is provided, which includes a virtual scene display step, a sensing step, an ultrasound data display step, a recognition step, and a question-answer feedback step. In the virtual scene display step, a display generation module of a data processing device of an ultrasound operation training system generates a virtual scene, which includes a virtual human body, a virtual hand, a virtual probe, and a virtual ultrasound display, and the virtual scene is displayed on a display device of the ultrasound operation training system. In the sensing step, a sensing device senses a hand of a user to generate a hand motion signal and captures the user's speech to generate a voice signal. In the ultrasound data display step, an analysis module of the data processing device analyzes the hand motion signal and, according to the hand motion signal, controls the virtual hand to move the virtual probe within the virtual scene and perform virtual ultrasound examination, so that the display generation module selects one of a plurality of ultrasound data to be played and, according to the scanning angle of the virtual probe, switches to another of the ultrasound data to be played. In the recognition step, a recognition module of the data processing device recognizes the ultrasound data being played to generate a view judgment result and a disease judgment result, and stores the view judgment result and the disease judgment result in a question-answering model. In the question-answer feedback step, an interaction module of the data processing device analyzes the voice signal to extract at least one keyword, generates answer content corresponding to the voice signal according to the question-answering model, and plays the answer content on the display device.

Other examples of the aforementioned embodiment are as follows: the ultrasound operation training method further includes an adjustment switching step. In the adjustment switching step, the virtual scene further displays at least one button; the user moves the other hand, the sensing device senses the other hand to generate another hand motion signal, and the analysis module determines, according to the other hand motion signal, whether another virtual hand in the virtual scene touches the at least one button, so that the display generation module changes a state of the virtual human body in the virtual scene, freezes the image of the virtual ultrasound display, switches a scanning area prompt type on the virtual human body, or changes the scanning subject.

Other examples of the aforementioned embodiment are as follows: in the recognition step, the recognition module extracts at least one feature of the ultrasound data being played, compares it with a plurality of ultrasound view samples in a database to generate the view judgment result, and compares it with a plurality of ultrasound disease samples in the database to generate the disease judgment result, wherein the ultrasound view samples and the ultrasound disease samples in the database correspond to different organs.

Other examples of the aforementioned embodiment are as follows: in the question-answer feedback step, the question-answering model receives the at least one keyword and computes the answer content according to the at least one keyword.

Embodiments of the present disclosure will be described below with reference to the drawings. For clarity, many practical details are explained in the following description. However, the reader should understand that these practical details should not be used to limit the present disclosure; that is, in some embodiments of the present disclosure, these practical details are unnecessary. In addition, to simplify the drawings, some conventional structures and elements are shown in a simple schematic manner, and repeated elements may be denoted by the same or similar reference numerals.

In addition, the terms first, second, third, and so on herein are only used to distinguish different elements or components and impose no limitation on the elements or components themselves; therefore, a first element or component may also be referred to as a second element or component. Moreover, the combinations of elements, components, mechanisms, and modules herein are not generally known, conventional, or customary combinations in this field; whether an individual element, component, mechanism, or module is known by itself cannot be used to determine whether its combination could easily be accomplished by a person of ordinary skill in the art.

Please refer to FIG. 1, FIG. 2, and FIG. 3, in which FIG. 1 is a schematic diagram of an ultrasound operation training system 100 and a user U1 according to a first embodiment of the present disclosure, FIG. 2 is a block diagram of the ultrasound operation training system 100 of FIG. 1, and FIG. 3 shows a virtual scene P of the ultrasound operation training system 100 of FIG. 1. The ultrasound operation training system 100 includes a sensing device 110, a data processing device 120, and a display device 130.

The sensing device 110 is configured to sense a hand U11 of the user U1 to generate a hand motion signal. The data processing device 120 is signal-connected to the sensing device 110 and includes an analysis module 123 and a display generation module 121. The analysis module 123 is configured to analyze the hand motion signal; the display generation module 121 is signal-connected to the analysis module 123 and is configured to generate the virtual scene P, which includes a virtual human body P2, a virtual hand P11, a virtual probe P5, and a virtual ultrasound display P3. The display device 130 is signal-connected to the display generation module 121 and is configured to display the virtual scene P. The virtual hand P11 moves the virtual probe P5 within the virtual scene P according to the hand motion signal and performs virtual ultrasound examination. The analysis module 123 selects, according to the hand motion signal, one of a plurality of ultrasound data to be played and causes the display generation module 121 to display that ultrasound data on the virtual ultrasound display P3. When the analysis module 123 determines from the hand motion signal that the hand U11 of the user U1 has moved or rotated, another ultrasound data to be played is selected according to the scanning angle of the virtual probe P5, and the display generation module 121 plays the other ultrasound data on the virtual ultrasound display P3 to switch the image of the virtual ultrasound display P3 accordingly.

In this way, the virtual scene P generated by the data processing device 120 can simulate an actual examination scene, and the sensing device 110 senses the hand U11 of the user U1 so that the user U1 can move within the virtual scene P through the virtual hand P11; moreover, the ultrasound data to be played can be adjusted according to the movement of the hand U11, increasing the realism of the simulation training. The details of the ultrasound operation training system 100 are described below.

The ultrasound operation training system 100 may further include a database 140 containing at least one ultrasound file. The at least one ultrasound file is divided into the aforementioned plurality of ultrasound data, each of which is an ultrasound dynamic clip or an ultrasound static frame. The data processing device 120 may further include an ultrasound data receiving module 122 for receiving the plurality of ultrasound data.

Specifically, the data processing device 120 may be an electronic device such as a smart device, a virtual reality processor, or a mixed reality processor. The electronic device may include, for example, a central processing unit for computation, a random access memory for temporary information generated during computation, and a storage unit such as a hard disk. The data processing device 120 may be programmed to form the display generation module 121, the ultrasound data receiving module 122, and the analysis module 123 to execute instructions and the corresponding functions; in this embodiment, the data processing device 120 is illustrated as a virtual reality processor. The database 140 may reside on a server or a computer and may be connected to the data processing device 120 wirelessly or by wire to transmit the ultrasound data to the data processing device 120.

There may be a plurality of ultrasound files, which are dynamic images recorded during actual ultrasound examinations. The ultrasound files may first be classified as required, for example by organ, or as normal or abnormal, and may further be classified by disease. In addition, the ultrasound files may be cut as required, for example by dividing each complete dynamic image into a plurality of ultrasound dynamic clips and/or ultrasound static frames according to probe position and/or angle, so that each ultrasound file can be divided into a plurality of ultrasound dynamic clips and/or ultrasound static frames. In this embodiment, the ultrasound files may be segmented on the server before being transmitted to the ultrasound data receiving module 122; in other embodiments, the server may transmit the ultrasound files to the data processing device, where a segmentation module performs the segmentation before passing the data to the ultrasound data receiving module, and the disclosure is not limited in this respect.
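As a minimal sketch of this segmentation scheme, the clips cut from a recording could be indexed by the classification and probe pose at which they were captured, so that the playback logic can later look up the clip matching the virtual probe. The data layout below is an illustrative assumption, not the patent's specified format:

```python
from dataclasses import dataclass, field

@dataclass
class UltrasoundClip:
    """One ultrasound dynamic clip or static frame cut from a recorded exam."""
    organ: str                 # classification, e.g. "heart" or "abdomen"
    position_mm: tuple         # nominal probe position (x, y) on the body surface
    angle_deg: float           # nominal probe angle when the clip was recorded
    frames: list = field(default_factory=list)  # a single frame = a static frame

def build_clip_index(clips: list) -> dict:
    """Group clips by organ so playback searches only the selected subject."""
    index: dict = {}
    for clip in clips:
        index.setdefault(clip.organ, []).append(clip)
    return index

# Example: two clips from one cardiac recording, split by probe angle.
heart_clips = [
    UltrasoundClip("heart", (120.0, 85.0), 0.0),
    UltrasoundClip("heart", (120.0, 85.0), 30.0),
]
clip_index = build_clip_index(heart_clips)
```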

The display device 130 may include at least one of a wearable virtual reality display 131, a wearable mixed reality display 132, and a projection display 133. Specifically, as shown in FIG. 1, the wearable virtual reality display 131 may have a head-mounted display (HMD) structure, so the display device 130 may further include a wearable portion 134 for connecting and supporting the wearable virtual reality display 131 and for being worn on a head U13 of the user U1; the data processing device 120 may also be disposed on the wearable portion 134. The wearable mixed reality display 132 may have a transparent, semi-transparent, or see-through near-eye display structure and may be connected to another wearable portion. In one embodiment, the display device may include only a wearable virtual reality display and a projection display, so that the virtual scene can be displayed on both at the same time, making it convenient for other personnel to observe the user's training status.

The sensing device 110 may include a hand position sensor 111, and the hand position sensor 111 has a handle structure for the hand U11 to hold. The sensing device 110 may further include a head position sensor 112 located at the wearable portion 134 to sense the head U13 of the user U1 and generate a head motion signal. In addition, the sensing device 110 may further include a reference point position sensor 113.

Specifically, the hand position sensor 111 may include an inertial sensing element and a data transmission element. The inertial sensing element senses motions of the hand U11 (and of the hand U12 mentioned later), such as displacement or rotation, where displacement refers to linear displacement such as longitudinal or lateral displacement; the data transmission element forms a hand motion signal from the sensing result and transmits it back to the data processing device 120, and after the analysis module 123 analyzes the signal, the display generation module 121 changes the state of the virtual hand P11 in the virtual scene P according to the hand motion signal. The hand position sensor 111 may further include a haptic feedback module capable of feeding information back to the user U1 by vibration or the like. In other embodiments, the hand position sensor may have a glove structure to be worn on the hand, and is not limited to the above. The head position sensor 112 has a structure similar to that of the hand position sensor 111 and may also include an inertial sensing element and a data transmission element to sense the position or rotation of the head U13 of the user U1, thereby controlling changes in the viewing angle or position of the virtual scene P.

The reference point position sensor 113 may, for example, be mounted on a stand and signal-connected to the data processing device 120; furthermore, it may also be signal-connected to the hand position sensor 111 and the head position sensor 112 to confirm the specific positions of the user U1's body parts in real space, facilitating virtual-real integration, though the disclosure is not limited thereto. In other embodiments, the sensing device may include a camera and a plurality of patches, which may, for example, carry two-dimensional codes and be attached to the parts of the user to be tracked, so that the positions of those parts can be confirmed through image recognition. Alternatively, the sensing device may include an infrared detector, an infrared emitter, and reflective patches attached to the parts of the user to be tracked, with the infrared detector and emitter confirming the positions; the present disclosure is not limited to these arrangements.

In the embodiments of FIG. 1 to FIG. 3, the sensing device 110 can also sense the other hand U12 of the user U1, for example with another hand position sensor 111, to generate another hand motion signal. The virtual scene P generated by the display generation module 121 may thus further include another virtual hand P12, whose movement in the virtual scene P is controlled according to the motion of the hand U12.

As shown in FIG. 3, the virtual scene P includes the virtual human body P2, the virtual hands P11 and P12, the virtual ultrasound display P3, and the virtual probe P5. The virtual hands P11 and P12 translate or rotate in accordance with the movements of the hands U11 and U12. The virtual scene P may also include an ultrasound machine and other ward equipment or instruments, though it is not limited thereto. The virtual scene P may further include at least one button P4, which is triggered to change a state of the virtual human body P2 in the virtual scene P, freeze the image of the virtual ultrasound display P3, switch a scanning area prompt type on the virtual human body P2, or change the scanning subject. In detail, there may be a plurality of buttons P4, with different buttons P4 labeled with text corresponding to the above functions, such as "change human body display mode", "freeze scan image", "switch scan area display", "heart scan", and "abdominal scan". During training, the virtual hand P12 can be controlled to trigger a button P4 to select a heart scan or an abdominal scan, and in the default state of the initial scan there is no prompt. If necessary, the user U1 can control the virtual hand P12 to press the "switch scan area display" button P4 to show the scan area prompt on the human body; the prompt can be set to no prompt, longitudinal scan display, or transverse scan display, all switchable by controlling the virtual hand P12 to press the button P4. In addition, a scanning orientation D1, comprising one long axis and three short axes, may also be displayed on the virtual human body P2; the scanning orientation D1 is located on the window corresponding to the region to be scanned and serves as a reference for the user U1. It should be particularly noted that the ultrasound scanning of this disclosure is applicable to scanning any organ of the human body and is not limited to the organs mentioned above.

The user U1 can control the virtual hand P11 to move the virtual probe P5. In the initial state, there is no data on the image of the virtual ultrasound display P3; only when the user U1 operates the virtual probe P5 to a correct angle or position does the virtual ultrasound display P3 show the corresponding ultrasound data. When the user U1 keeps moving or rotating the hand U11, for example moving beyond a movement threshold, the hand U11 is judged to have moved, and the ultrasound data shown on the virtual ultrasound display P3 switches accordingly, that is, from the original ultrasound dynamic clip and/or ultrasound static frame to another ultrasound dynamic clip and/or ultrasound static frame. Similarly, when the hand U11 of the user U1 rotates beyond a rotation threshold, the hand U11 is judged to have rotated, and the ultrasound data shown on the virtual ultrasound display P3 switches accordingly. The movement threshold may be, for example, 15 mm; the rotation threshold may be, for example, 10 degrees, and may be between 8 and 17 degrees or between 10 and 20 degrees. When the user U1 uses the virtual probe P5 to perform an ultrasound scan, the user U1 can operate the virtual hand P12 with the other hand U12, for example pressing a button P4 to freeze the scan image, though the disclosure is not limited thereto.
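A hedged sketch of this switching rule follows. The 15 mm movement threshold and 10 degree rotation threshold are the example values given above; the pose representation and the way two poses are compared are illustrative assumptions:

```python
import math

MOVE_THRESHOLD_MM = 15.0     # example movement threshold from the description
ROTATE_THRESHOLD_DEG = 10.0  # example rotation threshold (8 to 20 degrees per the text)

def should_switch_clip(prev_pose: tuple, cur_pose: tuple) -> bool:
    """Return True when the probe hand has moved or rotated enough to switch clips.

    A pose is (x_mm, y_mm, angle_deg) for the hand holding the virtual probe.
    """
    moved = math.hypot(cur_pose[0] - prev_pose[0],
                       cur_pose[1] - prev_pose[1]) > MOVE_THRESHOLD_MM
    rotated = abs(cur_pose[2] - prev_pose[2]) > ROTATE_THRESHOLD_DEG
    return moved or rotated

# E.g. a 20 mm slide with no rotation triggers a switch of the displayed clip.
assert should_switch_clip((0.0, 0.0, 0.0), (20.0, 0.0, 0.0))
```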

Please refer to FIG. 4, which is a block flow chart of an ultrasound operation training method S200 according to a second embodiment of the present disclosure. The ultrasound operation training method S200 includes a virtual scene display step S210, a sensing step S220, and an ultrasound data display step S230. The details of the ultrasound operation training method S200 are described below in conjunction with the ultrasound operation training system 100 of FIG. 1 to FIG. 3.

In the virtual scene display step S210, the display generation module 121 of the data processing device 120 of the ultrasound operation training system 100 generates the virtual scene P, which includes the virtual human body P2, the virtual hand P11, the virtual probe P5, and the virtual ultrasound display P3; the virtual scene P is displayed on the display device 130 of the ultrasound operation training system 100.

In the sensing step S220, the sensing device 110 senses the hand U11 of the user U1 to generate a hand motion signal, and the analysis module 123 of the data processing device 120 analyzes the hand motion signal and, according to it, controls the virtual hand P11 to move the virtual probe P5 within the virtual scene P and perform virtual ultrasound examination.

In the ultrasound data display step S230, the display generation module 121 selects, according to the hand motion signal, one of a plurality of ultrasound data to be played and displays it on the virtual ultrasound display P3. When the display generation module 121 determines from the hand motion signal that the hand U11 of the user U1 has moved or rotated, another ultrasound data to be played is selected from the plurality of ultrasound data according to the scanning angle of the virtual probe P5, and the display generation module 121 plays the other ultrasound data on the virtual ultrasound display P3 to switch the image of the virtual ultrasound display P3 accordingly.

Specifically, in the virtual scene display step S210, the display generation module 121 generates the virtual scene P, and this virtual scene P may, for example, be displayed by the wearable virtual reality display 131 of the display device 130, allowing the user U1 to perform virtual reality ultrasound examination training.

In the sensing step S220, the hand position sensor 111 senses the motion state of the hand U11, such as movement or rotation, so that the virtual hand P11 makes the corresponding motion within the virtual scene P to manipulate the virtual probe P5.

The ultrasound data display step S230 simulates a real ultrasound examination: when the virtual probe P5 moves to the corresponding position, the virtual ultrasound display P3 in the virtual scene P shows the corresponding ultrasound data, and when the virtual probe P5 moves or rotates, the image switches. Accordingly, the analysis module 123 determines from the hand motion signal whether the hand U11 of the user U1 has moved or rotated, selects the corresponding ultrasound data to be played, and enables the display generation module 121 to switch the image.

In addition, in the ultrasound data display step S230, the sensing device 110 includes a hand position sensor 111 having a handle structure for the hand U11 to hold. When the data processing device 120 determines from the hand motion signal that the position and angle of the hand U11 are correct, the data processing device 120 causes the hand position sensor 111 to vibrate. That is, initially there may be no image on the virtual ultrasound display P3; once the user U1 controls the hand U11 so that the virtual hand P11 maneuvers the virtual probe P5 to the correct or specified position and angle, an image appears on the virtual ultrasound display P3, and the data processing device 120 may include a feedback module that issues an instruction to activate the haptic feedback module of the hand position sensor 111, producing a vibration that lets the user U1 know the operation is correct.
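The pose check and haptic acknowledgment might be sketched as below; the tolerance values and the `vibrate` callback are assumptions used only to make the flow concrete:

```python
import math

def probe_pose_correct(cur_pose: tuple, target_pose: tuple,
                       pos_tol_mm: float = 10.0, ang_tol_deg: float = 10.0) -> bool:
    """True when the virtual probe is within tolerance of the preset target pose."""
    close = math.hypot(cur_pose[0] - target_pose[0],
                       cur_pose[1] - target_pose[1]) <= pos_tol_mm
    aligned = abs(cur_pose[2] - target_pose[2]) <= ang_tol_deg
    return close and aligned

def on_pose_update(cur_pose: tuple, target_pose: tuple, vibrate) -> bool:
    """Vibrate the handle on a correct pose; return whether to show an image."""
    if probe_pose_correct(cur_pose, target_pose):
        vibrate()  # feedback module signals the user that the operation is correct
        return True
    return False
```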

As shown in FIG. 4, the ultrasound operation training method S200 may further include an adjustment switching step S240. The virtual scene P further displays at least one button P4; the user U1 moves the other hand U12, the sensing device 110 senses the other hand U12 to generate another hand motion signal, and the analysis module 123 determines from the other hand motion signal whether the other virtual hand P12 in the virtual scene P touches the at least one button P4, so that the display generation module 121 changes the state of the virtual human body P2 in the virtual scene P, freezes the image of the virtual ultrasound display P3, switches the scanning area prompt type on the virtual human body P2, or changes the scanning subject.

In detail, there is a plurality of buttons P4. When the virtual hand P12 in the virtual scene P touches the button P4 labeled "change human body display mode", a state of the virtual human body P2 in the virtual scene P is changed, such as switching to the body surface, a see-through abdomen, or a see-through heart. When the virtual hand P12 in the virtual scene P touches the "heart scan" or "abdominal scan" button P4 to select a heart scan or an abdominal scan, the display switches to the selected scanning subject.

Please refer to FIG. 5, which is a step flow chart of the ultrasound operation training method S200 of the embodiment of FIG. 4. Initially, the user U1 may put on the wearable portion 134 so that the wearable mixed reality display 132 is placed before the eyes, and hold the hand position sensor 111, which has, for example, a handle structure. In step S01, the wearable mixed reality display 132 of the display device 130 displays the virtual scene P, allowing the user U1 to see the virtual human body P2, the virtual hands P11 and P12, the virtual probe P5, and the virtual ultrasound display P3 in the virtual scene P.

In step S02, the hands U11 and U12 of the user U1 are detected, and the virtual hands P11 and P12 move within the virtual scene P according to the motions of the hands U11 and U12. In step S03, it is first confirmed whether the user U1 moves the hand U12 so that the virtual hand P12 presses the button P4 to change the organ and tissue display mode; if so, the method proceeds to step S04 to switch the human body display mode to the body surface, a see-through abdomen, a see-through heart, or the like. If not, the method proceeds to step S05 to confirm the scanning subject selected or switched by the user U1.

Next, in step S06, since a correct probe placement position and angle can be preset according to the scanning subject selected or switched by the user U1, it can be confirmed whether the angle and position of the virtual probe P5 manipulated by the user U1 through the virtual hand P11 are correct. If they are incorrect, the method proceeds to step S07 and the display generation module 121 displays no image. Conversely, if they are correct, the method proceeds to step S08, in which the display generation module 121 displays the corresponding image and the hand position sensor 111 is made to vibrate through the feedback module to indicate that the operation is correct. The virtual ultrasound display P3 may repeatedly play an ultrasound dynamic clip of about 2 seconds, or show a single ultrasound static frame, until the image is switched.

Afterwards, in steps S09 and S12, the hand U11 of the user U1 is continuously detected to confirm whether the hand U11 has moved beyond the movement threshold or rotated beyond the rotation threshold. If not, the method proceeds to step S11 and the display generation module 121 maintains the same image on the virtual ultrasound display P3; otherwise, the method proceeds to step S10 to change the displayed image.

It should be particularly noted that the above step flow is only an example; steps S03 and S05 may be judged or performed at any time during operation and are not limited to the above order. In addition, when performing the ultrasound operation training method S200, the ultrasound data to be displayed may be set in advance; for example, a set of ultrasound data corresponding to the heart and a set corresponding to the abdomen may be selected in the database 140 beforehand, the set corresponding to the heart or the abdomen is then played according to the selection of the user U1, and which ultrasound dynamic clip and/or ultrasound static frame in that set is played is determined by the hand U11 of the user U1. In other embodiments, a plurality of sets of ultrasound data corresponding to the heart and a plurality of sets corresponding to the abdomen may also be selected in advance and played randomly, without limitation. In one embodiment, the data processing device requests the database to transmit the corresponding ultrasound data according to whether the user selects "heart scan" or "abdominal scan"; in another embodiment, the data processing device may have stored in advance at least one set of ultrasound data corresponding to the heart and at least one set corresponding to the abdomen, without limitation.

Furthermore, the user U1 can learn the positions of the organs by moving the virtual probe P5 in the virtual scene P, and by switching the human body display mode to a see-through abdomen, a see-through heart, and so on, the skin and muscle tissue become transparent, making the internal organs easier to identify. Moreover, switching the scan area prompts on the human body assists in learning the scanning positions.

In addition, the ultrasound operation training method S200 can also be used for examinations: a specific case in the database 140 can be selected as an examination subject, that is, the aforementioned ultrasound data is selected from a specific case in the database 140 and used as the examination subject. For example, the examination subject may be selected by an examiner or by artificial intelligence on an electronic device and may concern, for example, organ position, angle, or symptom confirmation; the examiner can observe on the projection display 133 whether the user U1 operates or judges correctly. In other embodiments, if the data processing device is a smart device, the virtual scene may also be displayed on a computer screen, without limitation.

Please refer to FIG. 1, FIG. 2, and FIG. 6; FIG. 6 is a block diagram of an ultrasound operation training system 300 according to a third embodiment of the present disclosure. The ultrasound operation training system 300 includes a sensing device 310, a data processing device 320, a display device 330, and a database 340. The sensing device 310 includes a hand position sensor 311, a head position sensor 312, and a reference point position sensor 313. The data processing device 320 includes a display generation module 321, an ultrasound data receiving module 322, and an analysis module 323. The display device 330 includes a wearable virtual reality display 331, a wearable mixed reality display 332, a projection display 333, and a wearable portion (not shown). In the third embodiment, the hand position sensor 311 and the reference point position sensor 313 of the sensing device 310, the display generation module 321, the ultrasound data receiving module 322, and the analysis module 323 of the data processing device 320, and the display device 330 are respectively the same as the hand position sensor 111 and the reference point position sensor 113 of the first embodiment, the display generation module 121, the ultrasound data receiving module 122, and the analysis module 123 of the data processing device 120, and the display device 130, and are not described again here.

The third embodiment differs from the first embodiment in that the head position sensor 312 of the sensing device 310 further captures a speech (not shown) of the user U1 to generate a voice signal. The data processing device 320 further includes a recognition module 324 and an interaction module 325. In addition, the database 340 further includes a question-answering model (not shown) and a plurality of ultrasound view samples and a plurality of ultrasound disease samples corresponding to different organs.

The recognition module 324 and the interaction module 325 are signal-connected to the display generation module 321 and the analysis module 323. The recognition module 324 is configured to recognize the ultrasound data being played to generate a view judgment result and a disease judgment result, and to store the view judgment result and the disease judgment result in the question-answering model. The recognition module 324 extracts at least one feature of the ultrasound data being played, compares it with the ultrasound view samples of the database 340 to generate the view judgment result, and compares it with the ultrasound disease samples of the database 340 to generate the disease judgment result.

In detail, referring to FIG. 1, FIG. 3, and FIG. 6, when the user U1 scans the organs of the virtual human body P2, the recognition module 324 can automatically perform image recognition through a convolutional neural network (CNN) on the ultrasound data being played on the virtual ultrasound display P3, confirming the organ and scanned view corresponding to the ultrasound data being played, and at the same time confirming whether the organ has a disease.
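The description does not fix a network architecture, so the following is only a minimal two-head CNN sketch in PyTorch (an assumed framework) showing how a single displayed frame could yield both a view label and a disease label:

```python
import torch
import torch.nn as nn

class ViewDiseaseNet(nn.Module):
    """Tiny two-head CNN: one head for the view label, one for the disease label."""
    def __init__(self, n_views: int, n_diseases: int):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.view_head = nn.Linear(32, n_views)
        self.disease_head = nn.Linear(32, n_diseases)

    def forward(self, x):
        feat = self.backbone(x)
        return self.view_head(feat), self.disease_head(feat)

# Inference over one grayscale ultrasound frame (1 x 1 x H x W tensor).
model = ViewDiseaseNet(n_views=4, n_diseases=2).eval()
frame = torch.rand(1, 1, 224, 224)  # placeholder for the displayed frame
with torch.no_grad():
    view_logits, disease_logits = model(frame)
view_id = int(view_logits.argmax(dim=1))       # maps to e.g. "Parasternal Long Axis View"
disease_id = int(disease_logits.argmax(dim=1))  # maps to e.g. "no abnormality"
```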

The interaction module 325 is configured to analyze the voice signal to extract at least one keyword and to compute answer content corresponding to the voice signal according to the question-answering model. The question-answering model receives the at least one keyword and computes the answer content according to it. In detail, the user U1 asks the interaction module 325 questions about the ultrasound operation by speech; the interaction module 325 extracts keywords from the voice signal of the speech, the question-answering model searches a question-answer set through artificial intelligence to generate the answer content, and the display device 330 automatically plays the answer content as speech in reply to the user U1. In this way, in the role of a virtual teacher, the system interacts with the user U1 during simulated operation and serves to guide or remind. In addition, in other possible embodiments, the user may correct the angle of the virtual probe or control the buttons in the virtual scene by voice through the display generation module; the present disclosure is not limited thereto.

For example, when the user U1 scans the heart of the virtual human body P2, the recognition module 324 first recognizes the ultrasound data being played and generates a view judgment result of "Parasternal Long Axis View" and a disease judgment result of "no abnormality", which are stored in the question-answering model. When the user U1 asks by speech, "Which view does the current heart scan correspond to?", the interaction module 325 extracts the corresponding keywords "heart" and "view", the question-answering model generates the answer content "The current heart scan corresponds to the parasternal long axis view" from the keywords and the aforementioned view judgment result, and the display device 330 automatically replies to the user U1 with the answer content by speech.
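A toy sketch of this keyword-driven question answering is shown below; the keyword list, answer templates, and stored state are invented for illustration and stand in for the patent's question-answer set and artificial-intelligence search:

```python
# Stored by the recognition module after identifying the displayed clip.
qa_state = {
    "organ": "heart",
    "view": "Parasternal Long Axis View",
    "disease": "no abnormality",
}

KEYWORD_ANSWERS = {
    ("heart", "view"): "The current cardiac scan corresponds to the {view}.",
    ("heart", "disease"): "The current cardiac scan shows {disease}.",
}

def answer(question: str) -> str:
    """Extract keywords from the question and look up an answer template."""
    keywords = tuple(k for k in ("heart", "view", "disease") if k in question.lower())
    template = KEYWORD_ANSWERS.get(keywords)
    if template is None:
        return "Sorry, I cannot answer that yet."
    return template.format(**qa_state)

print(answer("Which view does the current heart scan correspond to?"))
# -> The current cardiac scan corresponds to the Parasternal Long Axis View.
```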

Please refer to FIG. 1, FIG. 6, and FIG. 7, in which FIG. 7 is a block flow chart of an ultrasound operation training method S400 according to a fourth embodiment of the present disclosure. The ultrasound operation training system 300 is configured to implement the ultrasound operation training method S400, though it must be noted that the ultrasound operation training method S400 of the present disclosure is not limited to being implemented by the ultrasound operation training system 300 provided in the third embodiment. In the fourth embodiment, the ultrasound operation training method S400 includes a virtual scene display step S410, a sensing step S420, an ultrasound data display step S430, an adjustment switching step S440, a recognition step S450, and a question-answer feedback step S460. The virtual scene display step S410, the ultrasound data display step S430, and the adjustment switching step S440 are respectively the same as the virtual scene display step S210, the ultrasound data display step S230, and the adjustment switching step S240 of the second embodiment, and are not described again here.

In the sensing step S420, the sensing device 310 senses the hand U11 of the user U1 to generate a hand motion signal and captures the speech of the user U1 (not separately labeled) to generate a voice signal. In the recognition step S450, the recognition module 324 recognizes the ultrasound data being played to generate a view judgment result and a disease judgment result, and stores them in the question-answering model. The recognition module 324 extracts at least one feature of the ultrasound data being played, compares it with the plurality of ultrasound view samples in the database 340 to generate the view judgment result, and compares it with the plurality of ultrasound disease samples in the database 340 to generate the disease judgment result.

In the question-answer feedback step S460, the interaction module 325 analyzes the voice signal to extract at least one keyword; the question-answering model receives the at least one keyword, and the interaction module 325 thereby generates the answer content corresponding to the voice signal. The display device 330 plays the answer content.

As can be seen from the above embodiments, the present disclosure has the following advantages. First, the virtual scene generated by the data processing device can simulate an actual examination scene; the sensing device senses the user's hand so that the user can move within the virtual scene through the virtual hand, and the ultrasound data to be played can be adjusted according to the movement of the hand, increasing the realism of the simulation training. Second, through the interaction module, the system interacts with the user in the role of a virtual teacher during simulated operation and serves to guide or remind, which can improve the effectiveness of the simulation training.

Although the present disclosure has been disclosed above by way of embodiments, they are not intended to limit the present disclosure. Anyone skilled in the art may make various changes and modifications without departing from the spirit and scope of the present disclosure; therefore, the scope of protection of the present disclosure shall be defined by the appended claims.

100, 300: ultrasound operation training system
110, 310: sensing device
111, 311: hand position sensor
112, 312: head position sensor
113, 313: reference point position sensor
120, 320: data processing device
121, 321: display generation module
122, 322: ultrasound data receiving module
123, 323: analysis module
130, 330: display device
131, 331: wearable virtual reality display
132, 332: wearable mixed reality display
133, 333: projection display
134: wearable portion
140, 340: database
324: recognition module
325: interaction module
D1: scanning orientation
P: virtual scene
P11, P12: virtual hands
P2: virtual human body
P3: virtual ultrasound display
P4: button
P5: virtual probe
S01, S02, S03, S04, S05, S06, S07, S08, S09, S10, S11, S12: steps
S200, S400: ultrasound operation training method
S210, S410: virtual scene display step
S220, S420: sensing step
S230, S430: ultrasound data display step
S240, S440: adjustment switching step
S450: recognition step
S460: question-answer feedback step
U1: user
U11, U12: hands
U13: head

FIG. 1 is a schematic diagram of the ultrasound operation training system according to the first embodiment of the present disclosure together with a user;
FIG. 2 is a block diagram of the ultrasound operation training system of FIG. 1;
FIG. 3 shows a virtual scene of the ultrasound operation training system of FIG. 1;
FIG. 4 is a block flow chart of the ultrasound operation training method according to the second embodiment of the present disclosure;
FIG. 5 is a flow chart of the steps of the ultrasound operation training method of the embodiment of FIG. 4;
FIG. 6 is a block diagram of the ultrasound operation training system according to the third embodiment of the present disclosure; and
FIG. 7 is a block flow chart of the ultrasound operation training method according to the fourth embodiment of the present disclosure.

Claims (10)

1. An ultrasound operation training system, comprising:
a sensing device for sensing a hand of a user to generate a hand motion signal, and for capturing a speech of the user to generate a voice signal;
a data processing device, signal-connected to the sensing device and comprising:
a display generation module for generating a virtual scene, the virtual scene comprising a virtual human body, a virtual hand, a virtual probe and a virtual ultrasound display;
an analysis module, signal-connected to the display generation module, for analyzing the hand motion signal and, according to the hand motion signal, controlling the virtual hand to move the virtual probe within the virtual scene and perform virtual ultrasound detection, so that the display generation module selects one of a plurality of ultrasound data to be played and, according to a scanning angle of the virtual probe, switches to another of the ultrasound data to be played;
a recognition module, signal-connected to the display generation module and the analysis module, for recognizing the ultrasound data to be played so as to generate a section judgment result and a disease judgment result, and for storing the section judgment result and the disease judgment result in a question-answering model; and
an interaction module, signal-connected to the display generation module and the analysis module, for analyzing the voice signal to extract at least one keyword and for generating, according to the question-answering model, answer content corresponding to the voice signal; and
a display device, signal-connected to the display generation module, for displaying the virtual scene and playing the answer content.

2. The ultrasound operation training system of claim 1, further comprising a database comprising a plurality of ultrasound section samples and a plurality of ultrasound disease samples corresponding to different organs, wherein the recognition module extracts at least one feature of the ultrasound data to be played, compares the at least one feature with the ultrasound section samples to generate the section judgment result, and compares it with the ultrasound disease samples to generate the disease judgment result.

3. The ultrasound operation training system of claim 2, wherein the database further comprises the question-answering model, and the question-answering model receives the at least one keyword and computes the answer content according to the at least one keyword.

4. The ultrasound operation training system of claim 1, further comprising a database comprising at least one ultrasound file, wherein the at least one ultrasound file is divided into the ultrasound data, each of the ultrasound data being an ultrasound dynamic clip or an ultrasound static frame, and the data processing device further comprises an ultrasound data receiving module for receiving the ultrasound data.

5. The ultrasound operation training system of claim 1, wherein the display device further comprises a wearable portion for connecting and supporting a wearable virtual reality display and for being worn on a head of the user, and the sensing device comprises a head position sensor located on the wearable portion for sensing the head of the user to generate a head motion signal and the voice signal.

6. The ultrasound operation training system of claim 1, wherein the virtual scene further comprises at least one button configured to be triggered to change a state of the virtual human body in the virtual scene, freeze a picture of the virtual ultrasound display, switch a scan area prompt type on the virtual human body, or change a scan subject.

7. An ultrasound operation training method, comprising:
a virtual scene display step, in which a display generation module of a data processing device of an ultrasound operation training system generates a virtual scene comprising a virtual human body, a virtual hand, a virtual probe and a virtual ultrasound display, the virtual scene being displayed on a display device of the ultrasound operation training system;
a sensing step, in which a sensing device senses a hand of a user to generate a hand motion signal and captures a speech of the user to generate a voice signal;
an ultrasound data display step, in which an analysis module of the data processing device analyzes the hand motion signal and, according to the hand motion signal, controls the virtual hand to move the virtual probe within the virtual scene and perform virtual ultrasound detection, so that the display generation module selects one of a plurality of ultrasound data to be played and, according to a scanning angle of the virtual probe, switches to another of the ultrasound data to be played;
a recognition step, in which a recognition module of the data processing device recognizes the ultrasound data to be played so as to generate a section judgment result and a disease judgment result, and stores the section judgment result and the disease judgment result in a question-answering model; and
a question-and-answer feedback step, in which an interaction module of the data processing device analyzes the voice signal to extract at least one keyword, generates, according to the question-answering model, answer content corresponding to the voice signal, and plays the answer content on the display device.
8. The ultrasound operation training method of claim 7, further comprising an adjustment and switching step, in which the virtual scene further displays at least one button, the user moves another hand, the sensing device senses the other hand to generate another hand motion signal, and the analysis module determines, according to the other hand motion signal, whether another virtual hand in the virtual scene touches the at least one button, so that the display generation module changes a state of the virtual human body in the virtual scene, freezes a picture of the virtual ultrasound display, switches a scan area prompt type on the virtual human body, or changes a scan subject.

9. The ultrasound operation training method of claim 7, wherein, in the recognition step, the recognition module extracts at least one feature of the ultrasound data to be played, compares the at least one feature with a plurality of ultrasound section samples in a database to generate the section judgment result, and compares it with a plurality of ultrasound disease samples in the database to generate the disease judgment result, the ultrasound section samples and the ultrasound disease samples of the database corresponding to different organs.

10. The ultrasound operation training method of claim 7, wherein, in the question-and-answer feedback step, the question-answering model receives the at least one keyword and computes the answer content according to the at least one keyword.
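The selection-and-switching behavior recited in claims 1 and 7 (pick one pre-split ultrasound clip, then switch to another as the virtual probe's scanning angle changes) can be sketched briefly. The angle bands, the clip names, and the idea of binning by angle are assumptions for illustration only; the claims require merely that the played data follow the probe's scanning angle.

```python
# Hypothetical angle-to-clip mapping for the analysis and display generation
# modules: the ultrasound file is assumed pre-split into clips, each covering
# a band of probe scanning angles.
CLIP_BANDS = [
    (0.0, 30.0, "clip_subcostal.mp4"),
    (30.0, 60.0, "clip_apical.mp4"),
    (60.0, 90.0, "clip_parasternal.mp4"),
]

def select_clip(probe_angle_deg: float) -> str:
    """Pick the clip whose angle band contains the probe's scanning angle."""
    for lo, hi, clip in CLIP_BANDS:
        if lo <= probe_angle_deg < hi:
            return clip
    return CLIP_BANDS[-1][2]  # clamp out-of-range angles to the last band

current = None
for angle in (10.0, 45.0, 45.5, 80.0):  # angles derived from the hand motion signal
    clip = select_clip(angle)
    if clip != current:  # switch playback only when the band changes
        current = clip
        print(f"angle {angle:5.1f} deg -> play {clip}")
```

Because the clip only changes when the angle crosses a band boundary, small hand tremors do not cause playback to flicker between clips.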
TW113117469A 2023-09-19 2024-05-10 Operation training system for ultrasound and operation training method for ultrasound TWI868019B (en)

Priority Applications (2)

Application Number | Priority Date | Filing Date | Title
TW113117469A | 2024-05-10 | 2024-05-10 | Operation training system for ultrasound and operation training method for ultrasound
US 18/888,567 | 2023-09-19 | 2024-09-18 | Operation training system for ultrasound and operation training method for ultrasound

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
TW113117469A | 2024-05-10 | 2024-05-10 | Operation training system for ultrasound and operation training method for ultrasound

Publications (2)

Publication Number | Publication Date
TWI868019B | 2024-12-21
TW202544753A | 2025-11-16


Citations (3)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
TW202115400A * | 2019-08-28 | 2021-04-16 | 比利時商比利時意志有限公司 | Method for the detection of cancer
WO2022003428A2 * | 2020-06-30 | 2022-01-06 | 貴志 山本 | Information processing device and trained model
TWI837015B * | 2023-06-06 | 2024-03-21 | 國立臺中科技大學 | Process for rendering real-time ultrasound images used in virtual reality
