
TWI909215B - Simulation training system for microscope and simulation training method for microscope

Simulation training system for microscope and simulation training method for microscope

Info

Publication number
TWI909215B
TWI909215B
Authority
TW
Taiwan
Prior art keywords
virtual
microscope
image
hand
simulation training
Prior art date
Application number
TW112135950A
Other languages
Chinese (zh)
Other versions
TW202514563A (en)
Inventor
謝凱生
楊東華
張聰賢
林文智
許凱程
許玉龍
Original Assignee
中國醫藥大學
Filing date
Publication date
Application filed by 中國醫藥大學 filed Critical 中國醫藥大學
Priority to TW112135950A priority Critical patent/TWI909215B/en
Publication of TW202514563A publication Critical patent/TW202514563A/en
Application granted granted Critical
Publication of TWI909215B publication Critical patent/TWI909215B/en


Abstract

A simulation training system for a microscope is proposed. The simulation training system includes a sensor, a database, a data processing device, and a display device. The sensor detects a gesture of a user to generate gesture sensing information. The data processing device includes an analyzing module and an image outputting module. The analyzing module analyzes a rotation amount and relative position information corresponding to the gesture and transforms them into an image magnification. The image outputting module outputs a virtual scene, and the display device displays the virtual scene. The analyzing module moves a virtual hand in the virtual scene, and the virtual hand drives a virtual nosepiece or a virtual adjustment knob to rotate. The analyzing module then scales an image of a glass slide according to the image magnification to generate a zoomed image, which is displayed in the virtual scene. The simulation training system of the present disclosure thus helps the user practice and simulate the operating process of a microscope.

Description

Microscope Simulation Training System and Microscope Simulation Training Method

The present invention relates to a simulation training system and a simulation training method, and more particularly to a microscope simulation training system and a microscope simulation training method.

Interpreting blood smears, bone marrow smears, and bacterial stained smears is an important means of accurate diagnosis in clinical medicine. Blood or other samples must be collected by hematology specialists, stained, and placed on a glass slide, and the blood cells, bone marrow cells, or bacteria on the slide must then be observed under a microscope before a patient's condition can be correctly interpreted.

However, smear data corresponding to the various disease cases are difficult to accumulate, microscope equipment is a limited resource, and personnel with insufficient experience in preparing stained smears are easily infected through exposure during handling. As a result, it is extremely time-consuming and inefficient for such personnel to learn to operate a microscope proficiently and to locate the regions of a smear to be interpreted.

In view of this, developing a microscope simulation training system and a microscope simulation training method that are not limited by the availability of microscope equipment or by the number of smear samples has become a goal pursued by academia and industry.

Therefore, an object of the present invention is to provide a microscope simulation training system and a microscope simulation training method that simulate microscope operation and smear interpretation through a virtual reality system so as to carry out microscope simulation training.

According to one embodiment of the present invention, a microscope simulation training system is provided, which includes a sensor, a database, an information processing device, and a display device. The sensor senses a hand posture of a user and converts it into gesture sensing information. The database stores a plurality of slide images. The information processing device is signally connected to the sensor and the database, and includes an analysis module and an image output module. The analysis module receives the gesture sensing information, analyzes a rotation amount and relative position information corresponding to the hand posture according to the gesture sensing information, and converts them into an image magnification. The image output module is signally connected to the analysis module and outputs a virtual scene. The virtual scene includes a virtual hand, a virtual microscope, and one of the slide images, and the virtual hand corresponds to the hand posture of the user. The display device is signally connected to the image output module and displays the virtual scene received from the image output module. The analysis module moves the virtual hand within the virtual scene according to the rotation amount and the relative position information. The virtual hand rotates one of a virtual nosepiece and a virtual adjustment knob of the virtual microscope within the virtual scene according to the gesture sensing information, and the analysis module scales the one of the slide images according to the image magnification to generate a magnified virtual slide image, which is displayed in the virtual scene.

Thereby, the microscope simulation training system of the present invention generates a virtual scene through the information processing device for the user to simulate microscope operation, and provides a large amount of smear data for the user to practice interpretation.

Other examples of the foregoing embodiment are as follows. The microscope simulation training system may further include a head sensor. The head sensor is signally connected to the information processing device, is disposed on the head of the user, and senses distance information between the head and the virtual microscope. The analysis module drives the image output module to switch its output between a first screen and a second screen according to whether the distance information is greater than a preset distance. In the first screen, the one of the slide images is displayed full-screen on the display device; in the second screen, the virtual microscope and the virtual hand are displayed together on the display device.

Other examples of the foregoing embodiment are as follows. The slide images may be selected from a specific case in the database to serve as a test subject.

Other examples of the foregoing embodiment are as follows. The microscope simulation training system may further include another display device. The other display device is signally connected to the image output module, and the display device and the other display device display the virtual scene synchronously. The display device includes a mixed reality processing module, which captures a real scene image and processes the real scene image together with the virtual scene to display a mixed reality image.

Other examples of the foregoing embodiment are as follows. The virtual hand may further replace a virtual slide of the virtual microscope within the virtual scene according to the gesture sensing information, and the analysis module retrieves another one of the slide images from the database and displays it in the virtual scene.

According to another embodiment of the present invention, a microscope simulation training method is provided, which includes an image acquisition step, a virtual scene display step, a sensing step, an analysis step, and a display step. The image acquisition step includes driving an information processing device to acquire one of a plurality of slide images from a database. The virtual scene display step includes driving an image output module of the information processing device to output a virtual scene, wherein the virtual scene includes a virtual hand, a virtual microscope, and the one of the slide images. The sensing step includes driving a sensor to sense a hand posture of a user and convert it into gesture sensing information, wherein the hand posture of the user corresponds to the virtual hand. The analysis step includes driving an analysis module of the information processing device to analyze a rotation amount and relative position information corresponding to the hand posture according to the gesture sensing information and convert them into an image magnification. The analysis module moves the virtual hand within the virtual scene according to the rotation amount and the relative position information. The virtual hand rotates one of a virtual nosepiece and a virtual adjustment knob of the virtual microscope within the virtual scene according to the gesture sensing information, and the analysis module scales the one of the slide images according to the image magnification to generate a magnified virtual slide image, which is displayed in the virtual scene. The display step includes driving a display device to receive and display the virtual scene. The information processing device is signally connected to the sensor, the database, and the display device.

Thereby, the microscope simulation training method of the present invention generates a virtual scene through the information processing device for the user to simulate microscope operation, and provides a large amount of smear data for the user to practice interpretation.

Other examples of the foregoing embodiment are as follows. The microscope simulation training method may further include a mode switching step. The mode switching step includes driving a head sensor to sense distance information between a head and the virtual microscope, and driving the image output module to switch its output between a first screen and a second screen according to whether the distance information is greater than a preset distance. In the first screen, the one of the slide images is displayed full-screen on the display device; in the second screen, the virtual microscope and the virtual hand are displayed together on the display device. The head sensor is disposed on the head of the user and is signally connected to the information processing device.

Other examples of the foregoing embodiment are as follows. The microscope simulation training method may further include a mixed reality processing step and a synchronous display step. The mixed reality processing step includes driving a mixed reality processing module of the display device to capture a real scene image and to process the real scene image together with the virtual scene to display a mixed reality image. The synchronous display step includes driving the display device and another display device to display the virtual scene synchronously. The other display device is signally connected to the image output module.

Other examples of the foregoing embodiment are as follows. The slide images may be selected from a specific case in the database to serve as a test subject.

Other examples of the foregoing embodiment are as follows. The analysis step may further include driving the analysis module to retrieve another one of the slide images from the database, wherein the virtual hand further replaces a virtual slide of the virtual microscope within the virtual scene according to the gesture sensing information, and the another one of the slide images is displayed in the virtual scene.

Several embodiments of the present invention will be described below with reference to the drawings. For the sake of clarity, many practical details are described together in the following description. However, it should be understood that these practical details should not be used to limit the present invention; that is, in some embodiments of the present invention these practical details are not essential. In addition, to simplify the drawings, some conventional structures and components are shown in a schematic manner, and repeated components may be denoted by the same reference numerals.

Furthermore, when an element (or a unit, a module, etc.) is described herein as being "connected" to another element, it may mean that the element is directly connected to the other element, or that the element is indirectly connected to the other element, i.e., that other elements are interposed between the two. Only when an element is explicitly described as being "directly connected" to another element does it mean that no other element is interposed between them. The terms first, second, third, and so on are used only to describe different elements and impose no limitation on the elements themselves; accordingly, a first element may also be referred to as a second element. Moreover, the combinations of elements/units/circuits herein are not combinations that are generally known, conventional, or customary in the art, and whether a combination can easily be accomplished by a person of ordinary skill in the art cannot be judged merely from whether the elements/units/circuits themselves are conventional.

Please refer to FIG. 1 through FIG. 4. FIG. 1 is a block diagram of the microscope simulation training system 100 according to the first embodiment of the present invention; FIG. 2 is a schematic diagram of an application scenario of the microscope simulation training system 100 of FIG. 1; FIG. 3 is a schematic diagram of the virtual scene 20 of the microscope simulation training system 100 of FIG. 1; and FIG. 4 is another schematic diagram of the virtual scene 20 of the microscope simulation training system 100 of FIG. 1. The microscope simulation training system 100 includes a sensor 110, a database 120, an information processing device 130, and a display device 140. The sensor 110 senses a hand posture of a user 10 and converts it into gesture sensing information D1. The database 120 stores a plurality of slide images Im. The information processing device 130 is signally connected to the sensor 110 and the database 120, and includes an analysis module 132 and an image output module 134. The analysis module 132 receives the gesture sensing information D1, analyzes a rotation amount and relative position information corresponding to the hand posture according to the gesture sensing information D1, and converts them into an image magnification. The image output module 134 is signally connected to the analysis module 132 and outputs a virtual scene 20. The virtual scene 20 includes a virtual hand 21, a virtual microscope 22, and one of the slide images Im, and the virtual hand 21 corresponds to the hand posture of the user 10. The display device 140 is signally connected to the image output module 134 and displays the virtual scene 20 received from the image output module 134. The analysis module 132 moves the virtual hand 21 within the virtual scene 20 according to the rotation amount and the relative position information. The virtual hand 21 rotates one of a virtual nosepiece 221 and a virtual adjustment knob 222 of the virtual microscope 22 within the virtual scene 20 according to the gesture sensing information D1, and the analysis module 132 scales the one of the slide images Im according to the image magnification to generate a magnified virtual slide image Iv (see FIG. 8), which is displayed in the virtual scene 20.
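The data flow just described (sensor, then analysis module, then image output module, then display device) can be pictured with a short sketch. The Python snippet below is only an illustration of that flow under stated assumptions: the class names, the base magnification, and the rule mapping the hand rotation to a magnification are not taken from the patent, which only states that a rotation amount and relative position information are converted into an image magnification.

```python
# Minimal sketch of the described data flow; all names and the rotation-to-
# magnification rule are illustrative assumptions, not the patented implementation.
from dataclasses import dataclass
from typing import Tuple

@dataclass
class GestureInfo:                              # gesture sensing information D1
    rotation_deg: float                         # rotation amount of the hand
    position: Tuple[float, float, float]        # relative position of the hand

class AnalysisModule:
    """Converts the gesture rotation/position into an image magnification."""
    def __init__(self, base_magnification: float = 10.0):
        self.magnification = base_magnification

    def update(self, gesture: GestureInfo) -> float:
        # Assumed mapping: one full turn of the virtual knob doubles the zoom.
        self.magnification *= 2.0 ** (gesture.rotation_deg / 360.0)
        return self.magnification

class ImageOutputModule:
    """Renders a textual stand-in for the virtual scene (hand, microscope, slide)."""
    def render(self, magnification: float, slide_name: str) -> str:
        return f"virtual scene: slide '{slide_name}' shown at {magnification:.1f}x"

analysis = AnalysisModule()
output = ImageOutputModule()
d1 = GestureInfo(rotation_deg=90.0, position=(0.1, 0.0, 0.2))
print(output.render(analysis.update(d1), "blood_smear_001"))
```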

Thereby, the microscope simulation training system 100 of the present invention generates the virtual scene 20 through the information processing device 130 for the user 10 to simulate microscope operation, and provides a large amount of smear data for the user 10 to practice interpretation.

Specifically, the sensor 110 may be an image sensing device or another non-wearable motion sensing device; the database 120 may include a random access memory (RAM) or another type of dynamic storage device capable of storing information and instructions for execution by the information processing device 130; the information processing device 130 may include any type of processor, microprocessor, or smart device equipped with a virtual reality processor or a mixed reality processor; the display device 140 may include a wearable virtual reality display, a wearable mixed reality display, or a projection display; and the slide images Im may be images of bacterial stained smears, blood smears, or bone marrow smears corresponding to patients with different diseases, captured by high-magnification continuous microscopic digital photography, but the present invention is not limited thereto. In addition, the database 120 may further include diagnostic reports and analytical test reports corresponding to the slide images Im. Thereby, the microscope simulation training system 100 of the present invention senses the hand posture of the user 10 through the non-wearable sensor 110, so that the operation of a microscope can be simulated without holding a grip-type sensor, which increases the realism of the simulation. In one embodiment, the slide image Im and the magnified virtual slide image Iv may be displayed on a virtual display or in other areas of the virtual scene 20, and the present invention is not limited thereto.

詳細地說,當使用者10使用顯微鏡模擬訓練系統100進行訓練時,可配戴具有顯示裝置140的穿戴式虛擬頭盔於頭部。資訊處理裝置130之影像輸出模組134輸出如第3圖所示之虛擬場景20至顯示裝置140,並由顯示裝置140顯示虛擬場景20。虛擬場景20中的虛擬顯微鏡22包含虛擬物鏡轉換器221及虛擬調節旋鈕222。虛擬物鏡轉換器221可用以更換觀測載玻片影像Im的物鏡的倍率;虛擬調節旋鈕222可用以放大載玻片影像Im。虛擬手部21可對應使用者10的手部動作,當使用者10移動手部時,感測器110感測使用者10的手部動作而產生手勢感測資訊D1,並將手勢感測資訊D1傳送至資訊處理裝置130的分析模組132。分析模組132依據使用者10的手部姿勢對應調整虛擬場景20中的虛擬手部21,使虛擬手部21的動作、位移及轉動角度與使用者10的手部姿勢對應。因此,使用者10可透過改變手部姿勢對應移動虛擬場景20中的虛擬手部21至虛擬物鏡轉換器221或虛擬調節旋鈕222,並做出轉動手部之動作,帶動虛擬手部21轉動虛擬物鏡轉換器221或虛擬調節旋鈕222,並於虛擬場景20中顯示放大後虛擬載玻片影像Iv。Specifically, when user 10 trains using the microscope simulation training system 100, they can wear a wearable virtual helmet with a display device 140 on their head. The image output module 134 of the information processing device 130 outputs the virtual scene 20, as shown in Figure 3, to the display device 140, and the display device 140 displays the virtual scene 20. The virtual microscope 22 in the virtual scene 20 includes a virtual objective lens converter 221 and a virtual adjustment knob 222. The virtual objective lens converter 221 can be used to change the magnification of the objective lens for observing the slide image Im; the virtual adjustment knob 222 can be used to magnify the slide image Im. The virtual hand 21 can correspond to the hand movements of the user 10. When the user 10 moves their hand, the sensor 110 senses the hand movement of the user 10 and generates gesture sensing information D1, and transmits the gesture sensing information D1 to the analysis module 132 of the information processing device 130. The analysis module 132 adjusts the virtual hand 21 in the virtual scene 20 according to the hand posture of the user 10, so that the movement, displacement and rotation angle of the virtual hand 21 correspond to the hand posture of the user 10. Therefore, the user 10 can change the hand posture to correspond to the virtual hand 21 in the moving virtual scene 20 to the virtual objective lens converter 221 or the virtual adjustment knob 222, and make a hand rotation action to drive the virtual hand 21 to rotate the virtual objective lens converter 221 or the virtual adjustment knob 222, and display the magnified virtual slide image Iv in the virtual scene 20.
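As a rough illustration of how the relative position information could be used to decide which control the virtual hand is operating, consider the sketch below. The coordinates and the grab radius are purely hypothetical assumptions; the patent does not specify how proximity to a control is determined.

```python
# Hypothetical proximity test: which virtual control is the hand close enough to turn?
import math

NOSEPIECE_POS = (0.0, 1.2, 0.3)   # assumed position of the virtual nosepiece 221
KNOB_POS = (0.2, 0.8, 0.3)        # assumed position of the virtual adjustment knob 222
GRAB_RADIUS = 0.1                 # assumed reach of the virtual hand

def control_under_hand(hand_pos):
    """Return the control the virtual hand can rotate, if any."""
    if math.dist(hand_pos, NOSEPIECE_POS) < GRAB_RADIUS:
        return "nosepiece"
    if math.dist(hand_pos, KNOB_POS) < GRAB_RADIUS:
        return "adjustment_knob"
    return None

print(control_under_hand((0.02, 1.18, 0.33)))  # -> nosepiece
print(control_under_hand((1.0, 1.0, 1.0)))     # -> None
```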

Thereby, the microscope simulation training system 100 of the present invention uses the virtual scene 20 displayed on the display device 140 to simulate the actions of changing objectives, focusing, turning the adjustment knob, and searching the slide image Im for the cell regions to be interpreted, which increases the realism of the simulation training.

Please refer to FIG. 1 through FIG. 5. FIG. 5 is a flowchart of the microscope simulation training method 300 according to the second embodiment of the present invention. The microscope simulation training method 300 includes an image acquisition step S01, a virtual scene display step S02, a sensing step S03, an analysis step S04, and a display step S05. The details of the microscope simulation training method 300 are described below in conjunction with the microscope simulation training system 100 of FIG. 1 through FIG. 4.

The image acquisition step S01 includes driving the information processing device 130 to acquire one of the plurality of slide images Im from the database 120. The virtual scene display step S02 includes driving the image output module 134 of the information processing device 130 to output the virtual scene 20. The sensing step S03 includes driving the sensor 110 to sense the hand posture of the user 10 and convert it into the gesture sensing information D1. The analysis step S04 includes driving the analysis module 132 of the information processing device 130 to analyze the rotation amount and the relative position information corresponding to the hand posture according to the gesture sensing information D1 and convert them into the image magnification. The analysis module 132 moves the virtual hand 21 within the virtual scene 20 according to the rotation amount and the relative position information. The virtual hand 21 rotates one of the virtual nosepiece 221 and the virtual adjustment knob 222 of the virtual microscope 22 within the virtual scene 20 according to the gesture sensing information D1, and the analysis module 132 scales the one of the slide images Im according to the image magnification to generate the magnified virtual slide image Iv, which is displayed in the virtual scene 20. The display step S05 includes driving the display device 140 to receive and display the virtual scene 20.
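Steps S01 through S05 can be summarized as a single pass of a training loop, as in the sketch below. The database, sensor, and display are stand-in stubs, and the mapping from the rotation amount to the magnification is an assumption; only the ordering of the steps follows the method described above.

```python
# One pass of steps S01-S05 with stand-in stubs; only the step ordering is taken
# from the method above, everything else is an illustrative assumption.
def run_once(database, sensor, display, magnification=10.0):
    slide = database["case_001"]                        # S01: image acquisition step
    display(f"scene[{slide} @ {magnification:.1f}x]")   # S02: virtual scene display step
    rotation_deg, _position = sensor()                  # S03: sensing step (D1)
    magnification *= 2.0 ** (rotation_deg / 360.0)      # S04: analysis step
    display(f"scene[{slide} @ {magnification:.1f}x]")   # S05: display step
    return magnification

run_once({"case_001": "blood_smear"}, lambda: (180.0, (0.0, 0.0, 0.0)), print)
```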

Please refer to FIG. 1 through FIG. 4 and FIG. 6. FIG. 6 is a block diagram of the microscope simulation training system 100a according to the third embodiment of the present invention. The microscope simulation training system 100a includes a sensor 110, a database 120, an information processing device 130, and a display device 140. In the third embodiment, the sensor 110, the database 120, and the information processing device 130 operate in the same manner as the sensor 110, the database 120, and the information processing device 130 of the first embodiment, respectively, and are not described again. In particular, the microscope simulation training system 100a may further include a head sensor 150 and another display device 160, and the display device 140 may further include a mixed reality processing module 141.

Please refer to FIG. 2, FIG. 3, and FIG. 6 through FIG. 7. FIG. 7 is a flowchart of the microscope simulation training method 300a according to the fourth embodiment of the present invention. The microscope simulation training method 300a includes an image acquisition step S11, a virtual scene display step S12, a sensing step S13, an analysis step S14, a mixed reality processing step S15, a display step S16, a synchronous display step S17, and a mode switching step S18. In the fourth embodiment, the image acquisition step S11, the virtual scene display step S12, the sensing step S13, the analysis step S14, and the display step S16 of the microscope simulation training method 300a operate in the same manner as the image acquisition step S01, the virtual scene display step S02, the sensing step S03, the analysis step S04, and the display step S05 of the microscope simulation training method 300 of the second embodiment, respectively, and are not described again. In particular, the microscope simulation training method 300a may further include the mixed reality processing step S15, the synchronous display step S17, and the mode switching step S18.

Specifically, the head sensor 150 is signally connected to the information processing device 130 and is disposed on the head of the user 10, and the other display device 160 is signally connected to the image output module 134.

Please refer to FIG. 6 through FIG. 8. FIG. 8 is a schematic diagram of the first screen of the virtual scene 20 of the microscope simulation training method 300a of FIG. 7. The mode switching step S18 includes driving the head sensor 150 to sense distance information D2 between the head and the virtual microscope 22, and driving the image output module 134 to switch its output between a first screen and a second screen according to whether the distance information D2 is greater than a preset distance. In the first screen, the one of the slide images Im is displayed full-screen on the display device 140; in the second screen, the virtual microscope 22 and the virtual hand 21 are displayed together on the display device 140. In FIG. 8, the first screen may further include the virtual hand 21 manipulating a cursor 23 that moves within the first screen; when the cursor 23 points to a specific cell in the magnified virtual slide image Iv, information about that cell and instructions for adjusting the magnification are displayed in the first screen.
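The switching rule of the mode switching step S18 reduces to a single comparison against the preset distance, as in the sketch below; the threshold value is an assumption chosen only for illustration.

```python
# Mode switching rule: distance D2 below the preset distance -> first screen
# (full-screen slide image), otherwise -> second screen (microscope and hand).
PRESET_DISTANCE = 0.4  # assumed threshold, e.g. in metres

def select_screen(d2: float) -> str:
    if d2 < PRESET_DISTANCE:
        return "first screen: full-screen slide image"
    return "second screen: virtual microscope and virtual hand"

print(select_screen(0.25))  # user leans toward the virtual eyepiece
print(select_screen(0.80))  # user steps back from the virtual microscope
```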

The mixed reality processing step S15 includes driving the mixed reality processing module 141 of the display device 140 to capture a real scene image, and processing the real scene image together with the virtual scene 20 to display a mixed reality image. Specifically, the real scene image is a spatial image of the actual space in which the user 10 is located, and through the mixed reality processing module 141 the display device 140 presents the objects of the virtual scene 20 other than the background (for example, the virtual hand 21 and the virtual microscope 22) in the real scene in which the user 10 is located.
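A minimal way to picture this compositing is to overlay the non-background pixels of the rendered virtual scene onto the captured real scene image. The NumPy sketch below assumes a plain black background as the mask criterion, which the patent does not specify; it is an illustration, not the module's actual algorithm.

```python
# Overlay non-background virtual pixels (e.g. virtual hand, virtual microscope)
# onto the captured real-scene frame; the black-background mask is an assumption.
import numpy as np

def composite(real_frame: np.ndarray, virtual_frame: np.ndarray) -> np.ndarray:
    mask = np.any(virtual_frame != 0, axis=-1)   # pixels that belong to virtual objects
    out = real_frame.copy()
    out[mask] = virtual_frame[mask]
    return out

real = np.full((4, 4, 3), 128, dtype=np.uint8)    # stand-in camera image
virtual = np.zeros((4, 4, 3), dtype=np.uint8)     # stand-in rendered virtual scene
virtual[1:3, 1:3] = 255                           # stand-in virtual-hand pixels
print(composite(real, virtual)[1, 1])             # -> [255 255 255]
print(composite(real, virtual)[0, 0])             # -> [128 128 128]
```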

The synchronous display step S17 includes driving the display device 140 and the display device 160 to display the virtual scene 20 synchronously. Thereby, the virtual scene 20 can be displayed on a wearable virtual reality display and on a projection display at the same time, which makes it easy for other personnel to observe the training status of the user 10.

Please refer to FIG. 2, FIG. 3, and FIG. 6 through FIG. 9. FIG. 9 is a flowchart of the microscope simulation training method 300b according to the fifth embodiment of the present invention. In FIG. 9, the microscope simulation training method 300b includes the image acquisition step S11, the virtual scene display step S12, and steps S212, S213, S214, S215, S216, S217, S218, S219, and S220. In the fifth embodiment, the image acquisition step S11 and the virtual scene display step S12 operate in the same manner as the image acquisition step S11 and the virtual scene display step S12 of the fourth embodiment and are not described again. After the virtual scene 20 is output through the image output module 134 in the image acquisition step S11 and the virtual scene display step S12, step S212 drives the sensor 110 to sense the hand of the user 10. Step S213 determines whether the virtual nosepiece 221 needs to be adjusted: when the user 10 moves the hand so that the virtual hand 21 in the virtual scene 20 correspondingly approaches and rotates the virtual nosepiece 221, step S214 is executed; when the user 10 moves the hand but the virtual hand 21 in the virtual scene 20 does not correspondingly approach the virtual nosepiece 221, step S215 is executed.

Step S214 adjusts the image magnification to the magnification of the objective currently positioned under the virtual nosepiece 221.
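Step S214 amounts to looking up the magnification of whichever objective the nosepiece has rotated into place, as sketched below. The magnification values (4x/10x/40x/100x) are typical of teaching microscopes and are an assumption, not values stated in the text.

```python
# Assumed objective magnifications indexed by the nosepiece position.
OBJECTIVE_MAGNIFICATIONS = [4, 10, 40, 100]

def nosepiece_magnification(position_index: int) -> int:
    """Magnification of the objective currently under the virtual nosepiece."""
    return OBJECTIVE_MAGNIFICATIONS[position_index % len(OBJECTIVE_MAGNIFICATIONS)]

print(nosepiece_magnification(2))  # third position -> 40x
```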

Steps S215, S216, S217, and S218 correspond to the mode switching step S18 of FIG. 7. Step S215 drives the head sensor 150 to detect the distance information D2 between the head and the virtual microscope 22, and in steps S216, S217, and S218 the analysis module 132 drives the image output module 134 to switch its output between the first screen and the second screen according to whether the distance information D2 is greater than the preset distance. Step S216 determines whether the distance information D2 is less than the preset distance: when the distance information D2 is less than the preset distance, step S217 is executed; when the distance information D2 is greater than or equal to the preset distance, step S218 is executed. Step S217 drives the image output module 134 to output the first screen and display it through the display device 140, and step S218 drives the image output module 134 to output the second screen and display it through the display device 140.

In other words, when the user 10 approaches the virtual microscope 22 in the virtual scene 20 to within the preset distance, the display device 140 automatically displays the slide image Im full-screen. In other embodiments of the present invention, the user may also switch between the first screen and the second screen manually without head-sensor sensing, but the present invention is not limited thereto.

Step S219 drives the sensor 110 to sense that the hand of the user 10 moves and approaches the virtual adjustment knob 222. In step S220, when the virtual hand 21 approaches and rotates the virtual adjustment knob 222, the image magnification is adjusted, and the magnified virtual slide image Iv, magnified according to the image magnification, is output in the virtual scene 20.
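Step S220 scales the slide image Im by the current magnification to produce the magnified virtual slide image Iv. The sketch below implements the zoom as a centre crop whose size shrinks as the magnification grows; the actual resampling used by the system is not specified, so this is only an assumption for illustration.

```python
# Centre-crop "zoom": the visible region of the slide image Im shrinks as the
# magnification grows, standing in for the magnified virtual slide image Iv.
import numpy as np

def zoom_slide(slide: np.ndarray, magnification: float) -> np.ndarray:
    h, w = slide.shape[:2]
    ch = max(1, int(h / magnification))
    cw = max(1, int(w / magnification))
    top, left = (h - ch) // 2, (w - cw) // 2
    return slide[top:top + ch, left:left + cw]

slide_im = np.arange(100).reshape(10, 10)   # stand-in slide image Im
print(zoom_slide(slide_im, 2.0).shape)      # -> (5, 5): the centre region at 2x
```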

Please refer to FIG. 10 and FIG. 11. FIG. 10 is a schematic diagram of the virtual scene 20 of the microscope simulation training method 300b, and FIG. 11 is another schematic diagram of the virtual scene 20 of the microscope simulation training method 300b. The microscope simulation training method 300b may further include driving the analysis module 132 to retrieve another one of the slide images Im from the database 120; the virtual hand 21 further replaces a virtual slide 25 of the virtual microscope 22 within the virtual scene 20 according to the gesture sensing information D1, and the other one of the slide images Im, corresponding to another virtual slide 25a, is displayed in the virtual scene 20. As shown in FIG. 10, when the user 10 wants to replace the virtual slide 25 on the virtual microscope 22 in the virtual scene 20, the virtual hand 21 can be moved close to the virtual slide 25a in a virtual drawer 24 so that the virtual hand 21 picks up the virtual slide 25a. In FIG. 11, when the virtual hand 21 holding the virtual slide 25a approaches the virtual microscope 22, the virtual slide 25 on the virtual microscope 22 is replaced with the virtual slide 25a.
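The slide replacement interaction of FIG. 10 and FIG. 11 can be sketched as a proximity-triggered swap: when the hand holding a new virtual slide comes within range of the virtual microscope, the mounted slide is exchanged and the corresponding slide image is fetched from the database. The names and the swap radius below are illustrative assumptions only.

```python
# Proximity-triggered slide swap; names and the swap radius are assumptions.
SWAP_RADIUS = 0.15

class VirtualMicroscope:
    def __init__(self, mounted_slide: str):
        self.mounted_slide = mounted_slide

    def maybe_swap(self, hand_distance: float, held_slide: str, database: dict):
        """Swap the mounted slide and return its image when the hand is close enough."""
        if held_slide and hand_distance < SWAP_RADIUS:
            self.mounted_slide = held_slide
            return database[held_slide]      # new slide image Im to show in the scene
        return None

db = {"slide_25a": "bone_marrow_smear_image"}
scope = VirtualMicroscope(mounted_slide="slide_25")
print(scope.maybe_swap(0.1, "slide_25a", db))  # -> bone_marrow_smear_image
print(scope.mounted_slide)                     # -> slide_25a
```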

In other embodiments of the present invention, the user may also issue voice commands to the sensor to replace the virtual slide placed on the virtual microscope in the virtual scene; furthermore, the user may specify, through voice commands, the disease category and cell type corresponding to the virtual slide to be loaded, but the present invention is not limited thereto.

In other embodiments of the present invention, the microscope simulation training method can also be used for testing. Accordingly, a specific case in the database can be selected as a test subject; that is, the aforementioned slide image is selected from a specific case in the database and used as the subject of the test. For example, an examiner may select the test subject on an electronic device; the test subject may be, for example, microscope operation, symptom interpretation, or disease identification, and the examiner may observe from the projection display whether the user's operation or judgment is correct. If the information processing device is a smartphone, the virtual scene may also be displayed on a computer screen, and the present invention is not limited thereto.

In other embodiments of the present invention, the user may also hold a grip, and the sensor may sense the user's hand posture through the grip and convert it into the gesture sensing information, but the present invention is not limited thereto.

As can be seen from the above embodiments, the present invention has the following advantages. First, the microscope simulation training system of the present invention generates a virtual scene through the information processing device for the user to simulate microscope operation, and provides a large amount of smear data for interpretation training. Second, the microscope simulation training system of the present invention senses the user's hand posture through a non-wearable sensor, so that the operation of a microscope can be simulated without holding a grip-type sensor, which increases the realism of the simulation. Third, the microscope simulation training system of the present invention uses the virtual scene displayed on the display device to simulate the actions of changing objectives, focusing, turning the adjustment knob, and searching the slide image for the cell regions to be interpreted, which increases the realism of the simulation training. Fourth, the virtual scene can be displayed on a wearable virtual reality display and on a projection display at the same time, which makes it easy for other personnel to observe the user's training status.

Although the present invention has been disclosed above by way of embodiments, they are not intended to limit the present invention. Anyone skilled in the art may make various changes and modifications without departing from the spirit and scope of the present invention; therefore, the scope of protection of the present invention shall be defined by the appended claims.

10: user
100, 100a: microscope simulation training system
110: sensor
120: database
130: information processing device
132: analysis module
134: image output module
140, 160: display device
141: mixed reality processing module
150: head sensor
20: virtual scene
21: virtual hand
22: virtual microscope
221: virtual nosepiece
222: virtual adjustment knob
23: cursor
24: virtual drawer
25, 25a: virtual slide
300, 300a, 300b: microscope simulation training method
S01, S11: image acquisition step
S02, S12: virtual scene display step
S03, S13: sensing step
S04, S14: analysis step
S05, S16: display step
S15: mixed reality processing step
S17: synchronous display step
S18: mode switching step
S212, S213, S214, S215, S216, S217, S218, S219, S220: steps
D1: gesture sensing information
D2: distance information
Im: slide image
Iv: magnified virtual slide image

FIG. 1 is a block diagram of the microscope simulation training system according to the first embodiment of the present invention;
FIG. 2 is a schematic diagram of an application scenario of the microscope simulation training system of FIG. 1;
FIG. 3 is a schematic diagram of the virtual scene of the microscope simulation training system of FIG. 1;
FIG. 4 is another schematic diagram of the virtual scene of the microscope simulation training system of FIG. 1;
FIG. 5 is a flowchart of the microscope simulation training method according to the second embodiment of the present invention;
FIG. 6 is a block diagram of the microscope simulation training system according to the third embodiment of the present invention;
FIG. 7 is a flowchart of the microscope simulation training method according to the fourth embodiment of the present invention;
FIG. 8 is a schematic diagram of the first screen of the virtual scene of the microscope simulation training method of FIG. 7;
FIG. 9 is a flowchart of the microscope simulation training method according to the fifth embodiment of the present invention;
FIG. 10 is a schematic diagram of the virtual scene of the microscope simulation training method of FIG. 7; and
FIG. 11 is another schematic diagram of the virtual scene of the microscope simulation training method of FIG. 7.


Claims (10)

1. A microscope simulation training system, comprising: a sensor for sensing a hand posture of a user and converting it into gesture sensing information; a database storing a plurality of slide images; an information processing device signally connected to the sensor and the database and comprising: an analysis module receiving the gesture sensing information, the analysis module analyzing a rotation amount and relative position information corresponding to the hand posture according to the gesture sensing information and converting them into an image magnification; and an image output module signally connected to the analysis module and configured to output a virtual scene, the virtual scene comprising a virtual hand, a virtual microscope, and one of the slide images, the virtual hand corresponding to the hand posture of the user; and a display device signally connected to the image output module and configured to display the virtual scene from the image output module; wherein the analysis module moves the virtual hand within the virtual scene according to the rotation amount and the relative position information, the virtual hand rotates one of a virtual nosepiece and a virtual adjustment knob of the virtual microscope within the virtual scene according to the gesture sensing information, and the analysis module scales the one of the slide images according to the image magnification to generate a magnified virtual slide image and displays the magnified virtual slide image in the virtual scene.

2. The microscope simulation training system of claim 1, further comprising: a head sensor signally connected to the information processing device, the head sensor being disposed on a head of the user and configured to sense distance information between the head and the virtual microscope; wherein the analysis module drives the image output module to switch its output between a first screen and a second screen according to whether the distance information is greater than a preset distance; wherein in the first screen the one of the slide images is displayed full-screen on the display device, and in the second screen the virtual microscope and the virtual hand are displayed together on the display device.

3. The microscope simulation training system of claim 1, wherein the slide images are selected from a specific case in the database to serve as a test subject.
4. The microscope simulation training system of claim 1, further comprising: another display device signally connected to the image output module; wherein the display device and the other display device display the virtual scene synchronously; wherein the display device comprises: a mixed reality processing module configured to capture a real scene image and to process the real scene image and the virtual scene to display a mixed reality image.

5. The microscope simulation training system of claim 1, wherein the virtual hand further replaces a virtual slide of the virtual microscope within the virtual scene according to the gesture sensing information, and the analysis module retrieves another one of the slide images from the database for display in the virtual scene.

6. A microscope simulation training method, comprising: an image acquisition step, comprising driving an information processing device to acquire one of a plurality of slide images from a database; a virtual scene display step, comprising driving an image output module of the information processing device to output a virtual scene, the virtual scene comprising a virtual hand, a virtual microscope, and the one of the slide images; a sensing step, comprising driving a sensor to sense a hand posture of a user and convert it into gesture sensing information, wherein the hand posture of the user corresponds to the virtual hand; an analysis step, comprising driving an analysis module of the information processing device to analyze a rotation amount and relative position information corresponding to the hand posture according to the gesture sensing information and convert them into an image magnification, wherein the analysis module moves the virtual hand within the virtual scene according to the rotation amount and the relative position information, the virtual hand rotates one of a virtual nosepiece and a virtual adjustment knob of the virtual microscope within the virtual scene according to the gesture sensing information, and the analysis module scales the one of the slide images according to the image magnification to generate a magnified virtual slide image and displays the magnified virtual slide image in the virtual scene; and a display step, comprising driving a display device to receive and display the virtual scene; wherein the information processing device is signally connected to the sensor, the database, and the display device.
7. The microscope simulation training method of claim 6, further comprising: a mode switching step, comprising driving a head sensor to sense distance information between a head and the virtual microscope, and driving the image output module to switch its output between a first screen and a second screen according to whether the distance information is greater than a preset distance; wherein in the first screen the one of the slide images is displayed full-screen on the display device, and in the second screen the virtual microscope and the virtual hand are displayed together on the display device; wherein the head sensor is disposed on the head of the user and is signally connected to the information processing device.

8. The microscope simulation training method of claim 6, further comprising: a mixed reality processing step, comprising driving a mixed reality processing module of the display device to capture a real scene image and to process the real scene image and the virtual scene to display a mixed reality image; and a synchronous display step, comprising driving the display device and another display device to display the virtual scene synchronously; wherein the other display device is signally connected to the image output module.

9. The microscope simulation training method of claim 6, wherein the slide images are selected from a specific case in the database to serve as a test subject.

10. The microscope simulation training method of claim 6, wherein the analysis step further comprises: driving the analysis module to retrieve another one of the slide images from the database, wherein the virtual hand further replaces a virtual slide of the virtual microscope within the virtual scene according to the gesture sensing information, and the another one of the slide images is displayed in the virtual scene.
TW112135950A 2023-09-20 Simulation training system for microscope and simulation training method for microscope TWI909215B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW112135950A TWI909215B (en) 2023-09-20 Simulation training system for microscope and simulation training method for microscope


Publications (2)

Publication Number Publication Date
TW202514563A (en) 2025-04-01
TWI909215B (en) 2025-12-21

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100248200A1 (en) 2008-09-26 2010-09-30 Ladak Hanif M System, Method and Computer Program for Virtual Reality Simulation for Medical Procedure Skills Training
