
TWI899841B - Wearable apparatus and integrating method of virtual and real images - Google Patents

Wearable apparatus and integrating method of virtual and real images

Info

Publication number
TWI899841B
Authority
TW
Taiwan
Prior art keywords
application
display area
virtual
processor
virtual display
Prior art date
Application number
TW113104883A
Other languages
Chinese (zh)
Other versions
TW202439091A (en)
Inventor
曾戈平
詹惠文
姜禮誠
Original Assignee
仁寶電腦工業股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 仁寶電腦工業股份有限公司
Publication of TW202439091A
Application granted
Publication of TWI899841B

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

A wearable apparatus and a method for integrating virtual and real images are provided. The wearable apparatus includes an image capture device, a projection module, and a processor. The processor is configured to: identify a physical display area in the captured image, where the physical display area corresponds to the size of the display screen of a display included in an external device; define one or more virtual display areas based on the position of the physical display area; and project the interface of one or more applications of the external device onto the one or more virtual display areas through the projection module according to parameters related to those applications. The virtual image projected by the projection module corresponds to the virtual display areas. Accordingly, the user experience can be improved.

Description

穿戴式裝置及虛實畫面整合方法 Wearable device and method for integrating virtual and real images

本發明是有關於一種影像處理技術,且特別是有關於一種穿戴式裝置及虛實畫面整合方法。 The present invention relates to an image processing technology, and more particularly to a wearable device and a method for integrating virtual and real images.

若有同時觀看與編輯多個檔案內容的需求，現有裝置的工作模式能提供單一螢幕或多個螢幕的內容呈現。雖然可透過連接多台螢幕來增加觀看內容的數量，但這些螢幕佔據許多實體空間，且在擴充性上也有一定的成本與限制。 For users who need to view and edit the contents of multiple files at the same time, existing devices can present content on a single screen or on multiple screens. Although connecting additional screens increases the amount of content that can be viewed, those screens occupy considerable physical space, and such expansion also carries certain costs and limitations.

本發明提供一種穿戴式裝置及虛實畫面整合方法，透過實體螢幕與虛擬視窗的整合，讓工作中的資料切換能夠更為順暢。 The present invention provides a wearable device and a method for integrating virtual and real images. By integrating a physical screen with virtual windows, switching between data during work becomes smoother.

本發明實施例的穿戴式裝置包括(但不僅限於)影像擷取裝置、投影模組及處理器。影像擷取裝置用以取得擷取影像。投影模組用以投影虛擬畫面。處理器耦接影像擷取裝置及投影模組。處理器經配置用以:辨識擷取影像中的實體顯示區域,實體顯示區域 對應於顯示器的顯示畫面的尺寸,且外部裝置包括顯示器;依據實體顯示區域的位置定義一或多個虛擬顯示區域的位置;以及依據外部裝置的一或多個應用程式的相關參數,透過投影模組投影一或多個應用程式的介面至一或多個虛擬顯示區域。虛擬畫面對應於虛擬顯示區域。 The wearable device of an embodiment of the present invention includes (but is not limited to) an image capture device, a projection module, and a processor. The image capture device is used to capture images. The projection module is used to project virtual images. The processor is coupled to the image capture device and the projection module. The processor is configured to: identify a physical display area in the captured image, where the physical display area corresponds to the size of a display screen of a display, and the external device includes a display; define the positions of one or more virtual display areas based on the position of the physical display area; and project one or more application interfaces onto the one or more virtual display areas via the projection module based on parameters related to one or more applications in the external device. The virtual images correspond to the virtual display areas.

本發明實施例的虛實畫面整合方法適用於穿戴式裝置,該穿戴式裝置包括影像擷取裝置、投影模組、及處理器。虛實畫面整合方法包括(但不僅限於)下列步驟:透過處理器辨識擷取影像中的實體顯示區域,實體顯示區域對應於顯示器的顯示畫面的尺寸,且外部裝置包括這顯示器;透過處理器依據實體顯示區域的位置定義一或多個虛擬顯示區域的位置;以及透過處理器依據外部裝置的一或多個應用程式的相關參數,經由投影模組投影一或多個應用程式的介面至一或多個虛擬顯示區域。投影模組投影的虛擬畫面對應於虛擬顯示區域。 The virtual-reality integration method of an embodiment of the present invention is applicable to a wearable device comprising an image capture device, a projection module, and a processor. The virtual-reality integration method includes (but is not limited to) the following steps: identifying a physical display area in a captured image by a processor, where the physical display area corresponds to the size of a display screen of a display, and the external device includes such a display; defining the positions of one or more virtual display areas by the processor based on the position of the physical display area; and projecting one or more application interfaces onto the one or more virtual display areas via the projection module based on parameters related to one or more applications in the external device. The virtual images projected by the projection module correspond to the virtual display areas.

基於上述，本發明實施例的穿戴式裝置及虛實畫面整合方法，可依據擷取影像中的實體顯示區域定義虛擬顯示區域，並依據應用程式的參數投影應用程式的介面至虛擬顯示區域。藉此，可依據應用程式的屬性及使用者對於應用程式的使用習慣，自動投影合適的內容並以合適的尺寸比例到虛擬顯示區域。 Based on the above, the wearable device and the method for integrating virtual and real images of the embodiments of the present invention can define a virtual display area based on the physical display area in a captured image, and project an application's interface onto the virtual display area based on the application's parameters. In this way, appropriate content can be automatically projected onto the virtual display area at a suitable size and scale, based on the application's properties and the user's habits of using the application.

為讓本發明的上述特徵和優點能更明顯易懂,下文特舉實施例,並配合所附圖式作詳細說明如下。 To make the above features and advantages of the present invention more clearly understood, the following examples are given and described in detail with reference to the accompanying drawings.

1:整合系統 1: Integrated system

10:穿戴式裝置 10: Wearable device

11:影像擷取裝置 11: Image capture device

12:投影模組 12: Projection Module

13、33:通訊收發器 13, 33: Communication transceiver

14、34:儲存器 14, 34: Storage

15、35:處理器 15, 35: Processor

30:外部裝置 30: External device

121:導光板 121: Light guide plate

E:眼睛 E: Eyes

M1、M2:標記點 M1, M2: Marker points

S410~S430、S810~S830、S910~S920、S1110~S1120:步驟 S410~S430, S810~S830, S910~S920, S1110~S1120: Steps

DA:顯示區域 DA: Display Area

C1:擷取影像 C1: Capture image

VDA1、VDA1’、VDA2、VDA3:虛擬顯示區域 VDA1, VDA1’, VDA2, VDA3: Virtual Display Area

VS:虛擬畫面 VS: Virtual Screen

PDA:實體顯示區域 PDA: Physical Display Area

MA:主應用程式 MA: Main Application

SA:次應用程式 SA: Secondary application

SA1~SA3、SA1-1~SA1-6、SA2-1~SA2-6、SA3-1~SA3-6:介面 SA1~SA3, SA1-1~SA1-6, SA2-1~SA2-6, SA3-1~SA3-6: Interfaces

圖1是依據本發明一實施例的整合系統的元件方塊圖。 Figure 1 is a block diagram of components of an integrated system according to one embodiment of the present invention.

圖2A是依據本發明一實施例的穿戴式裝置的示意圖。 Figure 2A is a schematic diagram of a wearable device according to an embodiment of the present invention.

圖2B是依據本發明一實施例說明投影成像的示意圖。 Figure 2B is a schematic diagram illustrating projection imaging according to an embodiment of the present invention.

圖3A是依據本發明一實施例的整合系統的示意圖。 Figure 3A is a schematic diagram of an integrated system according to one embodiment of the present invention.

圖3B是依據本發明一實施例的在應用情境下的整合系統的示意圖。 Figure 3B is a schematic diagram of an integrated system in an application scenario according to an embodiment of the present invention.

圖4是依據本發明一實施例的虛實整合方法的流程圖。 Figure 4 is a flow chart of a virtual-real integration method according to an embodiment of the present invention.

圖5是依據本發明一實施例說明實體顯示區域的辨識的示意圖。 Figure 5 is a schematic diagram illustrating the identification of a physical display area according to an embodiment of the present invention.

圖6A是依據本發明一實施例說明實體顯示區域與虛擬顯示區域的示意圖。 Figure 6A is a schematic diagram illustrating a physical display area and a virtual display area according to an embodiment of the present invention.

圖6B是依據本發明另一實施例說明實體顯示區域與虛擬顯示區域的示意圖。 Figure 6B is a schematic diagram illustrating a physical display area and a virtual display area according to another embodiment of the present invention.

圖7A是依據本發明一實施例說明視窗收合的示意圖。 Figure 7A is a schematic diagram illustrating window folding according to an embodiment of the present invention.

圖7B是依據本發明一實施例說明視窗展開的示意圖。 Figure 7B is a schematic diagram illustrating window expansion according to an embodiment of the present invention.

圖8是依據本發明一實施例說明排序的流程圖。 Figure 8 is a flowchart illustrating sorting according to an embodiment of the present invention.

圖9是依據本發明一實施例的排序決定的流程圖。 Figure 9 is a flowchart of the sorting decision according to one embodiment of the present invention.

圖10是依據本發明一實施例說明依據使用歷程的視窗分配的示意圖。 Figure 10 is a schematic diagram illustrating window allocation based on usage history according to an embodiment of the present invention.

圖11是依據本發明一實施例的位置分配的流程圖。 Figure 11 is a flowchart of location allocation according to an embodiment of the present invention.

圖12是依據本發明一實施例說明依據應用屬性的視窗分配的示意圖。 Figure 12 is a schematic diagram illustrating window allocation based on application attributes according to an embodiment of the present invention.

圖1是依據本發明一實施例的整合系統1的元件方塊圖。請參照圖1，整合系統1包括(但不僅限於)穿戴式裝置10及外部裝置30。 FIG. 1 is a block diagram of the components of an integrated system 1 according to an embodiment of the present invention. Referring to FIG. 1, the integrated system 1 includes (but is not limited to) a wearable device 10 and an external device 30.

穿戴式裝置10可以是頭戴顯示器、智慧型眼鏡、擴增實境(Augmented Reality,AR)或混合實境(Mixed Reality)穿戴裝置、抬頭顯示器、智慧型手機、平板電腦或其他裝置。 The wearable device 10 can be a head-mounted display, smart glasses, an augmented reality (AR) or mixed reality (MR) wearable device, a head-up display, a smartphone, a tablet computer, or other devices.

穿戴式裝置10包括(但不僅限於)影像擷取裝置11、投影模組12、通訊收發器13、儲存器14及處理器15。 The wearable device 10 includes (but is not limited to) an image capture device 11, a projection module 12, a communication transceiver 13, a memory 14, and a processor 15.

影像擷取裝置11可以是相機、攝影機、監視器或具備影像擷取功能的電路,並據以擷取指定視野(Field of View,FOV)內的影像。例如,圖2A是依據本發明一實施例的穿戴式裝置10的示意圖。請參照圖2A,穿戴式裝置10的外型為眼鏡形式(但不以此為限)。此外,影像擷取裝置11設於穿戴式裝置10的主體前側。當使用者戴上穿戴式裝置10時,影像擷取裝置11用以朝使用者前方拍攝。然而,影像擷取裝置11的設置位置及拍攝視野不以圖2A所示為限。 The image capture device 11 can be a camera, a camcorder, a monitor, or a circuit with image capture functionality, and is used to capture images within a specified field of view (FOV). For example, Figure 2A is a schematic diagram of a wearable device 10 according to one embodiment of the present invention. Referring to Figure 2A , the wearable device 10 is in the form of glasses (but is not limited to this). Furthermore, the image capture device 11 is located on the front side of the wearable device 10. When a user wears the wearable device 10, the image capture device 11 is used to capture images facing forward from the user. However, the placement and field of view of the image capture device 11 are not limited to those shown in Figure 2A .

投影模組12可以是數位光處理(Digital Light Processing,DLP)、液晶顯示(Liquid-Crystal Display,LCD)、發光二極體(Light Emitting Diode,LED)或其他投影顯像技術的視訊播放設備。在一實施例中,投影模組12用以投影一或多張影像。例如,虛擬畫面、擴增實境影像、照片或影片。 The projection module 12 can be a video playback device using digital light processing (DLP), liquid crystal display (LCD), light emitting diode (LED), or other projection display technologies. In one embodiment, the projection module 12 is used to project one or more images, such as virtual images, augmented reality images, photos, or videos.

圖2B是依據本發明一實施例說明投影成像的示意圖。請參照圖2A及圖2B，投影模組12設於穿戴式裝置10的主體前側。導光板121例如是自由曲面光學(Freeform Optic)鏡片。投影模組12發射的光束透過導光板121反射至眼睛E，並據以成像於眼睛E。此外，眼睛E仍可看到真實世界。然而，投影模組12的成像方式不限於圖2B所示的範例，且還可能有其他投影/成像方式。 Figure 2B is a schematic diagram illustrating projection imaging according to an embodiment of the present invention. Referring to Figures 2A and 2B, the projection module 12 is disposed on the front side of the wearable device 10. The light guide plate 121 is, for example, a freeform optic lens. The light beam emitted by the projection module 12 is reflected by the light guide plate 121 toward the eye E, where it forms an image. Furthermore, the eye E can still see the real world. However, the imaging method of the projection module 12 is not limited to the example shown in Figure 2B, and other projection/imaging methods are possible.

請參照圖1，通訊收發器13可以是支援諸如藍芽、Wi-Fi、行動網路、光纖網路、通用序列匯流排(Universal Serial Bus,USB)、Thunderbolt或其他通訊技術的通訊收發電路/傳輸介面。在一實施例中，通訊收發器13用以接收來自外部裝置30的訊號或傳送訊號至外部裝置30。在一些實施例中，通訊收發器13用以與外部裝置30連線，並據以傳送或接收資料。 Referring to Figure 1, the communication transceiver 13 may be a communication transceiver circuit/transmission interface supporting Bluetooth, Wi-Fi, mobile networks, fiber-optic networks, Universal Serial Bus (USB), Thunderbolt, or other communication technologies. In one embodiment, the communication transceiver 13 is used to receive signals from or transmit signals to the external device 30. In some embodiments, the communication transceiver 13 is used to connect to the external device 30 and transmit or receive data accordingly.

儲存器14可以是任何型態的固定或可移動隨機存取記憶體(Random Access Memory,RAM)、唯讀記憶體(Read Only Memory,ROM)、快閃記憶體(flash memory)、傳統硬碟(Hard Disk Drive,HDD)、固態硬碟(Solid-State Drive,SSD)或類似元件。在一實施例中，儲存器14用以儲存程式碼、軟體模組、組態配置、資料(例如，影像、使用歷程、或應用屬性)或檔案。 The memory 14 can be any type of fixed or removable random access memory (RAM), read-only memory (ROM), flash memory, a traditional hard disk drive (HDD), a solid-state drive (SSD), or a similar device. In one embodiment, the memory 14 is used to store program code, software modules, configurations, data (e.g., images, usage history, or application properties), or files.

處理器15耦接影像擷取裝置11、投影模組12、通訊收發器13及儲存器14。處理器15可以是中央處理單元(Central Processing Unit,CPU)、圖形處理單元(Graphic Processing Unit,GPU)、數據處理單元(Data Processing Unit,DPU)、視覺處理單元(Visual Processing Unit,VPU)、張量處理單元(Tensor Processing Unit,TPU)或神經網路處理單元(Neural-network Processing Unit,NPU)，或是其他可程式化之一般用途或特殊用途的微處理器(Microprocessor)、數位信號處理器(Digital Signal Processor,DSP)、可程式化控制器、現場可程式化邏輯閘陣列(Field Programmable Gate Array,FPGA)、特殊應用積體電路(Application-Specific Integrated Circuit,ASIC)或其他類似元件或上述元件的組合。在一實施例中，處理器15用以執行穿戴式裝置10的所有或部份作業，且可載入並執行儲存器14所儲存的一個或多個軟體模組、檔案及/或資料。 The processor 15 is coupled to the image capture device 11, the projection module 12, the communication transceiver 13, and the memory 14. The processor 15 may be a central processing unit (CPU), a graphics processing unit (GPU), a data processing unit (DPU), a visual processing unit (VPU), a tensor processing unit (TPU), or a neural-network processing unit (NPU), or another programmable general-purpose or special-purpose microprocessor, digital signal processor (DSP), programmable controller, field programmable gate array (FPGA), application-specific integrated circuit (ASIC), or other similar component, or a combination of the above components. In one embodiment, the processor 15 is used to execute all or part of the operations of the wearable device 10 and can load and execute one or more software modules, files, and/or data stored in the memory 14.

外部裝置30可以是智慧型手機、平板電腦、穿戴式裝置、筆記型電腦、桌上型電腦、伺服器、智能家電裝置、智能助理裝置、車載系統、會議電話、家用遊戲機、個人電腦、人工智慧個人電腦(AI PC)或其他電子裝置。 The external device 30 may be a smartphone, tablet computer, wearable device, laptop computer, desktop computer, server, smart home appliance, smart assistant device, in-vehicle system, conference phone, home game console, personal computer, artificial intelligence personal computer (AI PC), or other electronic device.

外部裝置30包括(但不僅限於)顯示器32、通訊收發器33、儲存器34及處理器35。 The external device 30 includes (but is not limited to) a display 32, a communication transceiver 33, a memory 34, and a processor 35.

顯示器32可以是液晶顯示器(Liquid-Crystal Display,LCD)、發光二極體(Light-Emitting Diode,LED)顯示器、有機發光二極體(Organic Light-Emitting Diode,OLED)顯示器、Mini LED顯示器或其他顯示器。在一實施例中，顯示器32用以透過其面板/螢幕顯示一張或更多張影像。 The display 32 may be a liquid crystal display (LCD), a light-emitting diode (LED) display, an organic light-emitting diode (OLED) display, a mini-LED display, or another display. In one embodiment, the display 32 is configured to display one or more images on its panel/screen.

通訊收發器33、儲存器34及處理器35的實施態樣及功能可分別參照前述針對通訊收發器13、儲存器14及處理器15的說明,於此不再贅述。 The implementation and functions of the communication transceiver 33, memory 34, and processor 35 can be found in the aforementioned descriptions of the communication transceiver 13, memory 14, and processor 15, respectively, and will not be further elaborated here.

處理器35耦接顯示器32、通訊收發器33及儲存器34。在一實施例中,處理器35用以執行外部裝置30的所有或部份作業,且可載入並執行儲存器34所儲存的一個或多個軟體模組、檔案及/或資料。 The processor 35 is coupled to the display 32, the communication transceiver 33, and the memory 34. In one embodiment, the processor 35 is used to execute all or part of the operations of the external device 30 and can load and execute one or more software modules, files, and/or data stored in the memory 34.

圖3A是依據本發明一實施例的整合系統1的示意圖。請參照圖3A,穿戴式裝置10與外部裝置30透過傳輸線連接。例如,USB傳輸介面。此外,穿戴式裝置10也能以無線傳輸的方式與外部裝置30連接。 FIG3A is a schematic diagram of an integrated system 1 according to an embodiment of the present invention. Referring to FIG3A , the wearable device 10 is connected to the external device 30 via a transmission cable, such as a USB transmission interface. Alternatively, the wearable device 10 can be connected to the external device 30 via wireless transmission.

圖3B是依據本發明一實施例的在應用情境下的整合系統的示意圖。請參照圖3B,使用者戴上穿戴式裝置10,且外部裝置30架設於桌面。在這應用情境下,使用者戴著穿戴式裝置10,且一邊使用外部裝置30。然而,圖3B所示應用情境僅是作為範例說明,且還可能有其他應用情境。 FIG3B is a schematic diagram of an integrated system according to an embodiment of the present invention in an application scenario. Referring to FIG3B , a user wears a wearable device 10, and an external device 30 is mounted on a table. In this application scenario, the user wears the wearable device 10 while using the external device 30. However, the application scenario shown in FIG3B is merely an example, and other application scenarios are possible.

圖4是依據本發明一實施例的虛實整合方法的流程圖。請參照圖4，處理器15辨識擷取影像中的實體顯示區域(步驟S410)。具體而言，影像擷取裝置11拍攝，並據以取得擷取影像。以圖3B為例，穿戴式裝置10朝前方拍攝。圖5是依據本發明一實施例說明實體顯示區域的辨識的示意圖。請參照圖5，擷取影像C1拍攝到顯示器32。 Figure 4 is a flowchart of a virtual-real integration method according to an embodiment of the present invention. Referring to Figure 4, the processor 15 identifies a physical display area in a captured image (step S410). Specifically, the image capture device 11 takes a shot and thereby obtains the captured image. Taking Figure 3B as an example, the wearable device 10 shoots toward the front. Figure 5 is a schematic diagram illustrating the identification of a physical display area according to an embodiment of the present invention. Referring to Figure 5, the display 32 is captured in the captured image C1.

實體顯示區域對應於顯示器32的顯示畫面的區域(下文稱顯示區域DA)。實體顯示區域對應於外部裝置30的顯示器32的顯示畫面的尺寸。顯示器32的面板發光並據以顯示影像。因此，顯示器32的顯示畫面的尺寸大致為或等同於面板的尺寸。如圖5所示，實體顯示區域的位置對應於顯示器32的顯示區域DA。 The physical display area corresponds to the area of the display screen of the display 32 (hereinafter referred to as the display area DA). The physical display area corresponds to the size of the display screen of the display 32 of the external device 30. The panel of the display 32 emits light and thereby displays images. Therefore, the size of the display screen of the display 32 is approximately or exactly the same as the size of the panel. As shown in Figure 5, the position of the physical display area corresponds to the display area DA of the display 32.

在一實施例中,處理器15可辨識擷取影像中的兩個或更多個標記點。標記點可以是印上、嵌入或透過其他人工方式加上一個或更多個文字、符號、圖案、形狀和/或顏色所形成的實體物(例如,紙、塑膠板或貼紙)。或者,標記可以是實體物原有的文字、符號、圖案、形狀和/或顏色所組成。 In one embodiment, the processor 15 may identify two or more markers in the captured image. The markers may be physical objects (e.g., paper, plastic, or stickers) with one or more text, symbols, patterns, shapes, and/or colors printed, embedded, or otherwise artificially added. Alternatively, the markers may consist of text, symbols, patterns, shapes, and/or colors already present on the physical object.

以圖3B為例,假設標記點M1、M2為正方形貼紙,且標記點M1、M2貼在外部裝置30上。請參照圖5,在擷取影像C1中,標記點M1、M2位於顯示器32的左上角及右下角。 Taking Figure 3B as an example, assume that the markers M1 and M2 are square stickers attached to the external device 30. Referring to Figure 5 , in the captured image C1, the markers M1 and M2 are located at the upper left and lower right corners of the display 32.

標記點的辨識可基於物件偵測技術。例如，外部裝置30可應用基於神經網路的演算法(例如，YOLO(You only look once)、基於區域的卷積神經網路(Region Based Convolutional Neural Networks,R-CNN)、或快速R-CNN(Fast R-CNN))或是基於特徵匹配的演算法(例如，方向梯度直方圖(Histogram of Oriented Gradient,HOG)、尺度不變特徵轉換(Scale-Invariant Feature Transform,SIFT)、Haar、或加速穩健特徵(Speeded Up Robust Features,SURF)的特徵比對)實現物件偵測。處理器15可判斷擷取影像中的物件是否為預設的標記點。 Marker point recognition can be based on object detection technology. For example, the external device 30 can apply a neural network-based algorithm (e.g., YOLO (You Only Look Once), Region-Based Convolutional Neural Networks (R-CNN), or Fast R-CNN) or a feature-matching-based algorithm (e.g., feature comparison based on Histogram of Oriented Gradients (HOG), Scale-Invariant Feature Transform (SIFT), Haar, or Speeded Up Robust Features (SURF)) to achieve object detection. The processor 15 can determine whether an object in the captured image is a preset marker point.
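As a purely illustrative reduction of the detection step above (not part of the patent; the function name and the toy single-pixel-value "markers" are assumptions), the following Python sketch scans an image for predefined marker values and reports where each first appears:

```python
def find_markers(image, marker_values):
    """Scan a 2D image (a list of pixel rows) for predefined marker
    values and return {value: (x, y)} of each value's first occurrence.
    A deliberately simple stand-in for the neural-network or
    feature-matching detectors named above."""
    found = {}
    for y, row in enumerate(image):
        for x, pixel in enumerate(row):
            if pixel in marker_values and pixel not in found:
                found[pixel] = (x, y)
    return found
```

A real implementation would match the printed patterns of markers such as M1 and M2 with a trained detector or feature matcher, rather than comparing single pixel values.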

處理器15可依據兩個或更多個標記點的位置決定實體顯示區域。這些標記點的假想連線可構成實體顯示區域的形狀/輪廓。以圖5為例，標記點M1的水平假想線與標記點M2的垂直假想線交會，且標記點M2的水平假想線與標記點M1的垂直假想線交會。上述標記點M1、M2的水平假想線及垂直假想線形成顯示區域DA的形狀/輪廓。顯示區域DA的形狀/輪廓可作為實體顯示區域的形狀/輪廓。 The processor 15 can determine the physical display area based on the positions of two or more marker points. The imaginary lines connecting these marker points can form the shape/outline of the physical display area. Taking Figure 5 as an example, the horizontal imaginary line through marker point M1 intersects the vertical imaginary line through marker point M2, and the horizontal imaginary line through marker point M2 intersects the vertical imaginary line through marker point M1. These horizontal and vertical imaginary lines through marker points M1 and M2 form the shape/outline of the display area DA, which can serve as the shape/outline of the physical display area.
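The intersection of the horizontal and vertical imaginary lines described above amounts to building a rectangle from two diagonal corner points. A minimal Python sketch (not from the patent; the function name and the (left, top, width, height) convention are assumptions):

```python
def region_from_markers(m1, m2):
    """Derive the rectangular display area enclosed by the horizontal
    and vertical imaginary lines through two diagonal marker points.
    m1 and m2 are (x, y) pixel coordinates of markers at opposite
    corners (e.g., top-left and bottom-right); the result is
    (left, top, width, height) regardless of which marker is which."""
    left, right = min(m1[0], m2[0]), max(m1[0], m2[0])
    top, bottom = min(m1[1], m2[1]), max(m1[1], m2[1])
    return (left, top, right - left, bottom - top)
```

For example, markers detected at (100, 50) and (740, 450) yield a 640*400 region whose outline can stand in for the display area DA.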

在另一實施例中,還可能設置更多標記點。這些標記點位於實體顯示區域的輪廓。 In another embodiment, more marker points may be set. These marker points are located at the outline of the physical display area.

在一實施例中,處理器15可基於前述物件偵測技術辨識顯示器32的面板,並依據面板的形狀/輪廓決定實體顯示區域。也就是說,將面板的形狀/輪廓作為實體顯示區域的形狀/輪廓。 In one embodiment, the processor 15 can identify the panel of the display 32 based on the aforementioned object detection technology and determine the physical display area based on the shape/outline of the panel. In other words, the shape/outline of the panel is used as the shape/outline of the physical display area.

在一實施例中,處理器15可透過輸入裝置(圖未示,例如手持控制器、滑鼠或鍵盤)接收用戶操作。這用戶操作可定義實體顯示區域的形狀/輪廓。 In one embodiment, the processor 15 may receive user input via an input device (not shown), such as a handheld controller, a mouse, or a keyboard. This user input may define the shape/outline of the physical display area.

請參照圖4，處理器15依據實體顯示區域的位置定義虛擬顯示區域的位置(步驟S420)。具體而言，虛擬顯示區域是投影模組12的成像區域。也就是說，當使用者戴上穿戴式裝置10時，投影模組12投影的影像(例如，虛擬畫面)在虛擬顯示區域上呈現。而虛擬顯示區域以外的區域可禁止/停止/不成像，或透過改變透明度、呈現色塊等方式成像。 Referring to Figure 4, the processor 15 defines the position of the virtual display area based on the position of the physical display area (step S420). Specifically, the virtual display area is the imaging area of the projection module 12. That is, when the user wears the wearable device 10, the image (e.g., a virtual screen) projected by the projection module 12 appears in the virtual display area. Areas outside the virtual display area may be disabled/stopped/left unimaged, or imaged by changing transparency, presenting color blocks, or other means.

在一實施例中，處理器15可定義一或多個虛擬顯示區域位於實體顯示區域的右側、左側、底側及頂側中的至少一者。也就是說，一或多個虛擬顯示區域的位置位於實體顯示區域右方、左方、下方或上方。 In one embodiment, the processor 15 may define one or more virtual display areas located on at least one of the right side, left side, bottom side, and top side of the physical display area. In other words, the one or more virtual display areas are located to the right of, to the left of, below, or above the physical display area.

例如,圖6A是依據本發明一實施例說明實體顯示區域PDA與虛擬顯示區域VDA1~VDA3的示意圖。請參照圖6A,實體顯示區域PDA涵蓋顯示器32的顯示區域DA。虛擬顯示區域VDA1~VDA3的位置分別在實體顯示區域PDA的右側、頂側及左側。虛擬顯示區域VDA1~VDA3都是橫躺的長方形。即,其水平方向的長度大於垂直方向的長度。虛擬顯示區域VDA1~VDA3未與實體顯示區域PDA重疊,以作為輔助顯示用途。然而,在其他實施例中,虛擬顯示區域可能與實體顯示區域部分重疊。 For example, Figure 6A is a schematic diagram illustrating a physical display area PDA and virtual display areas VDA1-VDA3 according to an embodiment of the present invention. Referring to Figure 6A , the physical display area PDA covers the display area DA of the display 32. Virtual display areas VDA1-VDA3 are located on the right, top, and left sides of the physical display area PDA, respectively. Virtual display areas VDA1-VDA3 are all horizontally shaped rectangles. That is, their horizontal length is greater than their vertical length. Virtual display areas VDA1-VDA3 do not overlap with the physical display area PDA, serving as auxiliary displays. However, in other embodiments, the virtual display area may partially overlap with the physical display area.
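The layout of Fig. 6A can be sketched as simple rectangle arithmetic around the physical display area. The Python below is an illustration only (not from the patent); the coordinate convention (y grows downward, so "above" means a smaller top value), the 20-pixel gap, and the choice to reuse the PDA's size for each virtual area are all assumptions:

```python
def define_virtual_areas(pda, gap=20):
    """Lay out three non-overlapping virtual display areas around the
    physical display area (PDA), mirroring Fig. 6A: one to the right,
    one above, and one to the left. Rectangles are
    (left, top, width, height) in projection coordinates."""
    left, top, w, h = pda
    return {
        "right": (left + w + gap, top, w, h),
        "top": (left, top - gap - h, w, h),
        "left": (left - gap - w, top, w, h),
    }
```

With a physical area at (700, 500) sized 640*400, this places the right-hand virtual area at x = 1360, one above at y = 80, and one to the left at x = 40, none overlapping the PDA.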

又例如,圖6B是依據本發明另一實施例說明實體顯示區域PDA與虛擬顯示區域VDA1’~VDA3的示意圖。請參照圖6B,虛擬顯示區域VDA1’~VDA3的位置分別在實體顯示區域PDA的右側、頂側及左側。虛擬顯示區域VDA1’是直立的長方形。即,其水平方向的長度小於垂直方向的長度。 For example, Figure 6B is a schematic diagram illustrating a physical display area PDA and virtual display areas VDA1'-VDA3 according to another embodiment of the present invention. Referring to Figure 6B , virtual display areas VDA1'-VDA3 are located on the right, top, and left sides of the physical display area PDA, respectively. Virtual display area VDA1' is an upright rectangle. That is, its horizontal length is shorter than its vertical length.

然而,圖6A及圖6B所示的虛擬顯示區域的形狀及大小僅是作為範例說明,且使用者可依據實際需求調整。 However, the shape and size of the virtual display area shown in Figures 6A and 6B are merely examples and can be adjusted by the user according to actual needs.

請參照圖4，處理器15依據外部裝置30的應用程式的相關參數，透過投影模組12投影應用程式的介面至虛擬顯示區域(步驟S430)。具體而言，外部裝置30執行一或多個應用程式。應用程式可以是文件編輯程式、通訊或會議程式、設計程式、娛樂程式、瀏覽器程式或其他類型的程式。作業系統可提供視窗、全螢幕或小工具(widget)形式呈現應用程式的(使用者)介面。除了透過外部裝置30的顯示器32顯示應用程式的介面，本發明實施例還能透過投影模組12投影包含應用程式的介面的虛擬畫面。當使用者戴上穿戴式裝置10且一邊使用外部裝置30時，虛擬顯示區域可作為顯示器的顯示區域的延伸畫面/螢幕。 Referring to Figure 4, the processor 15 projects the application interface onto the virtual display area via the projection module 12 based on the relevant parameters of the applications of the external device 30 (step S430). Specifically, the external device 30 executes one or more applications. The applications may be document editors, communication or conferencing programs, design programs, entertainment programs, browsers, or other types of programs. The operating system may present the application's (user) interface in a windowed, full-screen, or widget form. In addition to displaying the application interface on the display 32 of the external device 30, embodiments of the present invention can also project a virtual screen containing the application interface via the projection module 12. When a user wears the wearable device 10 while using the external device 30, the virtual display area can serve as an extended screen of the display area of the display.

例如，圖7A是依據本發明一實施例說明視窗收合的示意圖。請參照圖7A，顯示器32顯示多個視窗化的應用程式。這些應用程式包括主應用程式MA及次應用程式SA。前景執行的應用程式為主應用程式MA。背景執行的應用程式為次應用程式SA。顯示器32顯示主應用程式MA及次應用程式SA的介面。在收合模式下，主應用程式MA的介面堆疊於一或多個次應用程式SA的界面上層。也就是，主應用程式MA及一或多個次應用程式的介面都呈現在實體顯示區域PDA內。 For example, Figure 7A is a schematic diagram illustrating window collapsing according to an embodiment of the present invention. Referring to Figure 7A, the display 32 displays multiple windowed applications. These applications include a main application MA and secondary applications SA. The application running in the foreground is the main application MA. The applications running in the background are the secondary applications SA. The display 32 displays the interfaces of the main application MA and the secondary applications SA. In collapsed mode, the interface of the main application MA is stacked on top of the interfaces of the one or more secondary applications SA. In other words, the interfaces of the main application MA and the one or more secondary applications are all presented within the physical display area PDA.

圖7B是依據本發明一實施例說明視窗展開的示意圖。請參照圖7B，在展開模式下，一或多個次應用程式SA的介面經由虛擬畫面VS分別呈現在虛擬顯示區域VDA1’~VDA3內。在一些應用情境中，較無相關於主應用程式MA的應用屬性或使用歷程的次應用程式SA位於實體顯示區域PDA內並被主應用程式MA覆蓋。應用屬性或使用歷程待後續實施例說明。 Figure 7B is a schematic diagram illustrating window expansion according to an embodiment of the present invention. Referring to Figure 7B, in expanded mode, the interfaces of one or more secondary applications SA are presented in the virtual display areas VDA1’~VDA3 via virtual screens VS. In some application scenarios, secondary applications SA that are less related to the main application MA in terms of application attributes or usage history remain in the physical display area PDA and are covered by the main application MA. Application attributes and usage history are described in subsequent embodiments.

在一實施例中，處理器15可接收開合操作。例如，透過輸入裝置接收用戶操作，且這用戶執行的開合操作用於執行展開模式或收合模式。處理器15可依據開合操作透過投影模組12投影一或多個應用程式的介面至一或多個虛擬顯示區域。如圖7B所示，開合操作對應於展開模式，因此次應用程式SA的介面分別呈現在虛擬顯示區域VDA1’~VDA3內。另一方面，處理器15可依據開合操作停止透過投影模組12投影一或多個應用程式的介面至一或多個虛擬顯示區域。如圖7A所示，開合操作對應於收合模式，投影模組12停止投影且僅剩顯示器32顯示影像，因此次應用程式SA的介面都呈現在顯示區域DA內。 In one embodiment, the processor 15 can receive an expand/collapse operation. For example, a user operation is received through an input device, and this expand/collapse operation is used to trigger the expanded mode or the collapsed mode. Based on the expand/collapse operation, the processor 15 can project the interfaces of one or more applications onto one or more virtual display areas through the projection module 12. As shown in Figure 7B, when the expand/collapse operation corresponds to the expanded mode, the interfaces of the secondary applications SA are presented in the virtual display areas VDA1’~VDA3, respectively. On the other hand, the processor 15 can stop projecting the interfaces of one or more applications onto the one or more virtual display areas through the projection module 12 based on the expand/collapse operation. As shown in Figure 7A, when the expand/collapse operation corresponds to the collapsed mode, the projection module 12 stops projecting and only the display 32 displays images, so the interfaces of the secondary applications SA are all presented within the display area DA.
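The expand/collapse behavior above can be sketched as a small placement function. This Python is illustrative only (not the patent's implementation); the mode strings, the "PDA" label for the physical display area, and the in-order hand-out of virtual areas are assumptions:

```python
def place_windows(mode, secondary_apps, virtual_areas):
    """Map secondary-application interfaces to display areas according
    to the expand/collapse operation. Returns {app: area name}, where
    "PDA" stands for the physical display area."""
    if mode == "collapsed":
        # Collapsed mode: everything stays on the physical screen,
        # stacked beneath the main application's interface.
        return {app: "PDA" for app in secondary_apps}
    # Expanded mode: hand out virtual areas in order; any overflow
    # remains on the physical screen.
    return {app: (virtual_areas[i] if i < len(virtual_areas) else "PDA")
            for i, app in enumerate(secondary_apps)}
```

With three virtual areas, a fourth secondary application would stay on the physical screen, matching the scenario where less-related applications remain covered by the main application.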

在一實施例中,在標記式的擴增實境成像中,處理器15可依據顯示區域DA的影像特徵決定穿戴式裝置10的姿態,並將影像放置在虛擬顯示區域內。 In one embodiment, in a marked augmented reality imaging, the processor 15 can determine the posture of the wearable device 10 based on the image features of the display area DA and place the image in the virtual display area.

在另一實施例中,在無標記式的擴增實境成像中,處理器15可使用時間、加速計、衛星定位及/或指南針資訊,進行自我定位,並使用影像擷取裝置11將影像疊加在虛擬顯示區域。 In another embodiment, in markerless augmented reality imaging, the processor 15 may use time, accelerometer, satellite positioning, and/or compass information to locate itself and use the image capture device 11 to overlay the image on the virtual display area.

在一實施例中，處理器15可依據應用程式的介面的顯示屬性決定一或多個虛擬顯示區域的尺寸及形狀。顯示屬性可以是尺寸、比例及模式。例如，應用程式的介面為1280*800像素，且其比例為16:10。虛擬顯示區域的形狀為長寬比為16:10的長方形，且其尺寸可依據1280*800像素縮放調整。以圖7B為例，位於圖面右方的次應用程式SA的介面為800*1280像素，且其比例為10:16。因此，虛擬顯示區域VDA1’為直立的長方形。然而，虛擬顯示區域VDA2、VDA3為橫躺的長方形，以供呈現位於圖面上方或左方的次應用程式SA。 In one embodiment, the processor 15 may determine the size and shape of one or more virtual display areas based on the display properties of an application's interface. The display properties may include size, aspect ratio, and mode. For example, if an application's interface is 1280*800 pixels with a 16:10 aspect ratio, the virtual display area is a rectangle with a 16:10 aspect ratio, and its size can be scaled relative to the 1280*800-pixel interface. Taking Figure 7B as an example, the interface of the secondary application SA on the right side of the figure is 800*1280 pixels with a 10:16 aspect ratio. Therefore, the virtual display area VDA1’ is an upright rectangle, whereas the virtual display areas VDA2 and VDA3 are horizontal rectangles for presenting the secondary applications SA located at the top or left of the figure.
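The aspect-ratio fitting above can be sketched in a few lines of Python. This is an illustration, not the patent's method; the function name and the cap on the longer side are assumptions:

```python
def fit_area(interface_px, max_side):
    """Shape a virtual display area to an application interface's
    aspect ratio. interface_px is the interface's (width, height) in
    pixels; max_side caps the longer side of the projected area. The
    result preserves the 16:10 (or 10:16) ratio of the interface."""
    w, h = interface_px
    scale = max_side / max(w, h)
    return (round(w * scale), round(h * scale))
```

A 1280*800 interface thus yields a horizontal rectangle, and an 800*1280 interface an upright one, as with VDA1’ versus VDA2/VDA3.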

在一實施例中,應用程式的相關參數包括使用歷程,提供多個應用程式,且提供多個虛擬顯示區域。圖8是依據本發明一實施例說明排序的流程圖。請參照圖8,處理器15可對這些虛擬顯示區域分別賦予優先權(步驟S810)。以圖7B為例,以右側優先原則,虛擬顯示區域VDA1’具有最高優先權,虛擬顯示區域VDA2具有第二優先權,且虛擬顯示區域VDA3具有第三優先權。 In one embodiment, application-related parameters include usage history, multiple applications, and multiple virtual display areas. Figure 8 is a flowchart illustrating sorting according to one embodiment of the present invention. Referring to Figure 8 , the processor 15 may assign priorities to the virtual display areas (step S810). Taking Figure 7B as an example, based on the right-side priority principle, virtual display area VDA1' has the highest priority, virtual display area VDA2 has the second priority, and virtual display area VDA3 has the third priority.

請參照圖8,處理器15可依據多個應用程式的使用歷程排序這些應用程式,以產生排序結果(步驟S820)。在一實施例中,使用歷程包括下列至少一者:多個應用程式的切換頻率;多個應用程式的同時顯示時間;及多個應用程式的使用量。 Referring to FIG. 8 , the processor 15 may sort the multiple applications based on their usage history to generate a sorting result (step S820 ). In one embodiment, the usage history includes at least one of the following: the switching frequency of the multiple applications; the simultaneous display time of the multiple applications; and the usage of the multiple applications.

切換頻率是兩個應用程式中的一者切換至前景執行且另一者切換至背景執行/非前景執行的統計次數。例如,在一統計期間(例如,一小時、一天或前次開機運行至關機的期間)中,處理器35統計由應用程式A切換成應用程式B及由應用程式B切換成應用程式A的次數。 The switching frequency is the number of times one of two applications switches to the foreground and the other switches to the background/non-foreground. For example, during a statistical period (e.g., an hour, a day, or the period from the last boot to the last shutdown), the processor 35 counts the number of times application A switches to application B and application B switches to application A.

同時顯示時間是兩個應用程式同時為前景執行的累計時間值。例如,在一統計期間(例如,三小時、半天或前次開機運行至關機的期間)中,處理器35統計應用程式C、D同時顯示的累計時間值。 The concurrent display time is the cumulative time that two applications are simultaneously running in the foreground. For example, during a statistical period (e.g., three hours, half a day, or the period from the last boot to the last shutdown), processor 35 calculates the cumulative time that applications C and D are concurrently displayed.

使用量是任一個應用程式被使用的時間值。例如,應用程式E的介面(例如,視窗)被開啟後到被關閉之間的時間值。 Usage is the time that any application is in use. For example, the time between when the interface (e.g., window) of application E is opened and when it is closed.
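As a concrete illustration of the switching-frequency metric defined above, the count of A→B and B→A foreground switches can be derived from a chronological foreground-focus log (a minimal sketch; the log format is an assumption):

```python
def switch_count(focus_log, app_a, app_b):
    """Count foreground switches between app_a and app_b in a
    chronological log of foreground-application names (one entry
    per foreground change)."""
    pairs = zip(focus_log, focus_log[1:])
    return sum(1 for prev, cur in pairs if {prev, cur} == {app_a, app_b})

# A->B, B->A and the final B->A count; A->C and C->B do not.
log = ["A", "B", "A", "C", "B", "A"]
count = switch_count(log, "A", "B")  # -> 3
```

The metric is symmetric by construction, matching the description that switches in either direction between the two applications are counted together.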

圖9是依據本發明一實施例的排序決定的流程圖。請參照圖9,處理器15可將切換頻率、同時顯示時間及使用量中的至少一者依據對應權重進行加權運算(步驟S910)。具體而言,處理器15分別對切換頻率、同時顯示時間及使用量賦予對應權重。例如,切換頻率的權重為60%,同時顯示時間的權重為30%,且使用量的權重為10%。應用程式與使用歷程的對應關係為: FIG9 is a flowchart of a ranking decision according to an embodiment of the present invention. Referring to FIG9 , the processor 15 may perform a weighted operation on at least one of the switching frequency, the concurrent display time, and the usage amount according to a corresponding weight (step S910). Specifically, the processor 15 assigns corresponding weights to the switching frequency, the concurrent display time, and the usage amount. For example, the switching frequency has a weight of 60%, the concurrent display time has a weight of 30%, and the usage amount has a weight of 10%. The corresponding relationship between the application and the usage history is:

網頁瀏覽器的加權運算的數值為20*60%+43.8*30%+13*10%=26.44,郵件程式的加權運算的數值為15*60%+36*30%+11*10%=20.9,且音樂播放程式的加權運算的數值為10*60%+24*30%+10.4*10%=14.24。然而,這些權重的數值仍可依據實際需求而改變。 The weighted value for the web browser is 20*60%+43.8*30%+13*10%=26.44, the weighted value for the email program is 15*60%+36*30%+11*10%=20.9, and the weighted value for the music player is 10*60%+24*30%+10.4*10%=14.24. However, these weights can be changed based on actual needs.
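The weighting of step S910 and the ordering of step S920 can be sketched together as follows (Python). The weights and the web-browser/mail values come from the example above; the music player's usage amount of 10.4 is back-computed from the stated total of 14.24 and is therefore an assumption:

```python
WEIGHTS = (0.60, 0.30, 0.10)  # switching frequency, simultaneous time, usage

HISTORY = {  # (switching frequency, simultaneous display time, usage amount)
    "web browser":  (20, 43.8, 13),
    "mail client":  (15, 36.0, 11),
    "music player": (10, 24.0, 10.4),  # usage 10.4 is an assumption
}

def weighted_score(metrics, weights=WEIGHTS):
    """Weighted sum of the usage-history metrics (step S910)."""
    return sum(m * w for m, w in zip(metrics, weights))

# Larger score -> earlier in the ordering (step S920).
ranking = sorted(HISTORY, key=lambda a: weighted_score(HISTORY[a]), reverse=True)
```

With these inputs the scores round to 26.44, 20.9, and 14.24, reproducing the ranking web browser → mail client → music player used in Figure 10.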

請參照圖9,處理器15可依據加權運算的數值決定排序結果(步驟S920)。具體而言,加權運算的數值越大者,排序越前面或越優先;加權運算的數值越小者,排序越後面或越次要/不優先。 Referring to Figure 9 , the processor 15 may determine the ranking result based on the weighted calculation value (step S920). Specifically, the larger the weighted calculation value, the higher the ranking or the higher the priority; the smaller the weighted calculation value, the lower the ranking or the less important/less priority.

請參照圖8,處理器15可依據排序結果投影多個應用程式的介面在對應優先權的虛擬顯示區域(步驟S830)。具體而言,排序越前面或越優先者,其對應優先權越高;排序越後面或越次要者,其對應優先權越低。 Referring to FIG. 8 , the processor 15 may project the interfaces of multiple application programs onto virtual display areas corresponding to their priorities based on the ranking results (step S830 ). Specifically, the higher the ranking or the higher the priority, the higher the corresponding priority; the lower the ranking or the less important the application, the lower the corresponding priority.

圖10是依據本發明一實施例說明依據使用歷程的視窗分配的示意圖。請參照圖10,如表(1)所示,網頁瀏覽器的對應優先權最高,郵件程式的對應優先權次高,且音樂播放程式的對應優先權最低。網頁瀏覽器的介面SA1經由虛擬畫面VS投影在具有最高優先權的虛擬顯示區域VDA1’內,郵件程式的介面SA2經由虛擬畫面VS投影在具有第二優先權的虛擬顯示區域VDA2內,且音樂播放程式的介面SA3經由虛擬畫面VS投影在具有第三優先權的虛擬顯示區域VDA3內。而至於其他排序的應用程式的介面可呈現在顯示區域DA、或虛擬顯示區域VDA1’~VDA3中的一者,並堆疊於主應用程式MA、或介面SA1~SA3的下層。 FIG10 is a schematic diagram illustrating window allocation according to a usage history according to an embodiment of the present invention. Referring to FIG10 , as shown in Table (1), the web browser has the highest corresponding priority, the mail program has the second highest corresponding priority, and the music player has the lowest corresponding priority. The web browser interface SA1 is projected via the virtual screen VS into the virtual display area VDA1′ having the highest priority, the mail program interface SA2 is projected via the virtual screen VS into the virtual display area VDA2 having the second priority, and the music player interface SA3 is projected via the virtual screen VS into the virtual display area VDA3 having the third priority. The interfaces of other, lower-ranked applications can be displayed in the display area DA or in one of the virtual display areas VDA1'-VDA3, stacked below the main application MA or the interfaces SA1-SA3.
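Step S830 — placing the i-th ranked interface in the i-th priority area — can be sketched as below (Python; the area names follow Figure 10, and the fallback of overflow applications to the physical display area DA mirrors the stacking behavior described above):

```python
AREAS_BY_PRIORITY = ["VDA1'", "VDA2", "VDA3"]  # highest priority first

def assign_interfaces(ranked_apps, areas=AREAS_BY_PRIORITY):
    """Pair the i-th ranked application with the i-th priority virtual
    display area; applications beyond the available areas fall back to
    the physical display area DA."""
    return {app: (areas[i] if i < len(areas) else "DA")
            for i, app in enumerate(ranked_apps)}

placement = assign_interfaces(
    ["web browser", "mail client", "music player", "notes"])
# -> web browser: VDA1', mail client: VDA2, music player: VDA3, notes: DA
```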

在一實施例中,應用程式的相關參數包括應用屬性,提供多個應用程式,且提供多個虛擬顯示區域。圖11是依據本發明一實施例的位置分配的流程圖。請參照圖11,處理器15可依據多個應用程式的應用屬性將這些應用程式分配到多個群組(步驟S1110)。具體而言,應用屬性相關於應用程式的功能。例如,應用屬性為編輯、通訊、設計、娛樂或瀏覽器類。每一種類別對應於一個群組。也就是說,一個群組內的多個應用程式屬於相同或相似應用屬性。 In one embodiment, application-related parameters include application attributes, multiple applications are provided, and multiple virtual display areas are provided. Figure 11 is a flowchart of location allocation according to one embodiment of the present invention. Referring to Figure 11 , the processor 15 may allocate multiple applications to multiple groups based on their application attributes (step S1110). Specifically, the application attributes relate to the application's functionality. For example, the application attributes may be editing, communication, design, entertainment, or browser. Each category corresponds to a group. In other words, multiple applications within a group have the same or similar application attributes.

表(2)~表(6)是多個應用屬性、應用程式及顯示模式的對應關係: Tables (2) to (6) show the correspondence between multiple application attributes, applications, and display modes:

Word、Excel、Power Point及Note分配到一個群組;Outlook、Line、Messages、Teams及Zoom分配到另一個群組,其依此類推。 Word, Excel, PowerPoint, and Notes are assigned to one group; Outlook, Line, Messages, Teams, and Zoom are assigned to another group, and so on.

請參照圖11,處理器15可透過投影模組12投影一或多個應用程式的介面在對應群組的虛擬顯示區域(步驟S1120)。具體而言,一或多個群組/應用屬性對應於一個虛擬顯示區域。投影模組12將每一應用程式投影到對應的虛擬顯示區域。 Referring to Figure 11 , the processor 15 can project the interfaces of one or more applications onto the virtual display area corresponding to the group via the projection module 12 (step S1120 ). Specifically, one or more group/application attributes correspond to a virtual display area. The projection module 12 projects each application onto the corresponding virtual display area.

例如,圖12是依據本發明一實施例說明依據應用屬性的視窗分配的示意圖。請參照圖12,應用屬性為編輯類及瀏覽器類應用程式對應於虛擬顯示區域VDA1,應用屬性為設計類及娛樂類的應用程式對應於虛擬顯示區域VDA2,且應用屬性為通訊類的應用程式對應於虛擬顯示區域VDA3。 For example, Figure 12 is a schematic diagram illustrating window allocation based on application attributes according to one embodiment of the present invention. Referring to Figure 12 , applications with the editing and browser attributes correspond to virtual display area VDA1, applications with the design and entertainment attributes correspond to virtual display area VDA2, and applications with the communication attribute correspond to virtual display area VDA3.
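The attribute-to-area mapping of Figure 12 can be sketched as a pair of lookup tables (Python; the attribute labels and the per-app assignments shown are an illustrative subset, not the full Tables (2)-(6)):

```python
ATTRIBUTE_AREA = {  # attribute -> virtual display area (Figure 12)
    "editing": "VDA1", "browser": "VDA1",
    "design": "VDA2", "entertainment": "VDA2",
    "communication": "VDA3",
}

APP_ATTRIBUTE = {  # per-app attribute (illustrative subset)
    "Word": "editing", "Excel": "editing", "Chrome": "browser",
    "Photoshop": "design", "Premiere": "design",
    "Line": "communication", "Teams": "communication",
}

def group_by_area(apps):
    """Bucket each application into the virtual display area that its
    application attribute maps to (step S1120)."""
    groups = {}
    for app in apps:
        area = ATTRIBUTE_AREA[APP_ATTRIBUTE[app]]
        groups.setdefault(area, []).append(app)
    return groups
```

For example, `group_by_area(["Word", "Chrome", "Line"])` places Word and Chrome in VDA1 and Line in VDA3, matching the figure.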

例如,Excel的介面SA1-1、Word的介面SA1-2、PDF瀏覽器的介面SA1-3、網頁瀏覽器的介面SA1-4、備忘錄的介面SA1-5及Power Point的介面SA1-6經由虛擬畫面VS呈現在虛擬顯示區域VDA1。After Effects的介面SA2-1、Sketch的介面SA2-2、Illustrator的介面SA2-3、Premiere的介面SA2-4、ProtoPie的介面SA2-5及Photoshop的介面SA2-6經由虛擬畫面VS呈現在虛擬顯示區域VDA2。Line的介面SA3-1、Messenger的介面SA3-2、Meet的介面SA3-3、Outlook的介面SA3-4、Teams的介面SA3-5及Zoom的介面SA3-6經由虛擬畫面VS呈現在虛擬顯示區域VDA3。此外,這些應用程式的介面SA1-1~SA1-6、SA2-1~SA2-6、SA3-1~SA3-6的顯示模式(例如,直立或橫躺模式)可維持其前次的模式/狀態。 For example, the Excel interface SA1-1, Word interface SA1-2, PDF viewer interface SA1-3, web browser interface SA1-4, memo interface SA1-5, and Power Point interface SA1-6 are displayed in virtual display area VDA1 via the virtual screen VS. The After Effects interface SA2-1, Sketch interface SA2-2, Illustrator interface SA2-3, Premiere interface SA2-4, ProtoPie interface SA2-5, and Photoshop interface SA2-6 are displayed in virtual display area VDA2 via the virtual screen VS. The Line interface SA3-1, Messenger interface SA3-2, Meet interface SA3-3, Outlook interface SA3-4, Teams interface SA3-5, and Zoom interface SA3-6 are presented on the virtual display area VDA3 via the virtual screen VS. Furthermore, the display mode (e.g., portrait or landscape) of these application interfaces SA1-1 through SA1-6, SA2-1 through SA2-6, and SA3-1 through SA3-6 can maintain their previous mode/status.

在一實施例中,處理器15可辨識擷取影像中的主應用程式的介面。例如,基於物件偵測技術或外部裝置30所提供的使用紀錄得知外部裝置30前景執行的主應用程式。接著,處理器15依據一或多個次應用程式的相關參數,透過投影模組12投影一或多個次應用程式的介面至一或多個虛擬顯示區域。相關參數以使用歷程為例,請參照圖10,主應用程式MA例如為Figma,則位於虛擬顯示區域VDA1’的應用程式是相關於Figma的使用歷程對應排序最高的。例如,經常在Chrome與Figma兩者之間切換,經常同時使用Chrome與Figma兩者,且/或Chrome的使用量最高。相關參數以應用屬性為例,請參照圖12,虛擬顯示區域VDA1~VDA3分別顯示一個與主應用程式MA相關的次應用程式SA,且依據所屬應用屬性分配這些次應用程式SA的位置。或者,虛擬顯示區域VDA1~VDA3只顯示與主應用程式MA相同應用屬性的次應用程式SA。 In one embodiment, the processor 15 can identify the interface of the main application in the captured image. For example, based on object detection technology or usage records provided by the external device 30, the main application running in the foreground of the external device 30 is known. Then, based on the relevant parameters of one or more secondary applications, the processor 15 projects the interfaces of one or more secondary applications to one or more virtual display areas through the projection module 12. Taking the usage history as an example of the relevant parameters, please refer to Figure 10. If the main application MA is Figma, then the application located in the virtual display area VDA1' is the one ranked highest by its usage history relative to Figma. For example, the user frequently switches between Chrome and Figma, frequently uses Chrome and Figma at the same time, and/or Chrome has the highest usage. Taking application attributes as an example, refer to Figure 12. Virtual display areas VDA1 through VDA3 each display a secondary application SA associated with the primary application MA, with the positions of these secondary application SAs assigned based on their application attributes. Alternatively, virtual display areas VDA1 through VDA3 can display only secondary application SAs with the same application attributes as the primary application MA.
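Selecting the secondary applications most related to the detected main application can be sketched as below (Python; the per-main-app co-usage scores are hypothetical and would in practice come from the weighted usage history described earlier):

```python
# Hypothetical co-usage scores relative to a given main application.
CO_USAGE = {
    "Figma": {"Chrome": 26.44, "Slack": 20.9, "Spotify": 14.24, "Mail": 9.1},
}

def related_secondaries(main_app, top_n=3, co_usage=CO_USAGE):
    """Return up to top_n secondary applications for the detected main
    application, ordered by co-usage score (highest first); the
    highest-scoring one would be placed in the top-priority area."""
    scores = co_usage.get(main_app, {})
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

secondaries = related_secondaries("Figma")
# -> ["Chrome", "Slack", "Spotify"]
```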

綜上所述,在本發明實施例的穿戴式裝置及虛實畫面整合方法中,依據顯示器的實體顯示區域定義投影模組的虛擬顯示區域的位置,並依據外部裝置所執行的應用程式的使用歷程或應用屬性投影應用程式的介面到虛擬顯示區域。 In summary, in the wearable device and virtual-reality screen integration method of the present embodiment, the position of the projection module's virtual display area is defined based on the display's physical display area, and the application interface is projected onto the virtual display area based on the usage history or application properties of the application executed by the external device.

本發明實施例至少提供以下特點:透過實體螢幕與虛擬視窗的整合,讓工作中的資料切換能夠更為順暢。透過學習主螢幕的內容與其他應用程式的使用習慣,依據參數權重自動切換虛擬視窗所擺放的內容與對應的畫面尺寸比例。 The present invention provides at least the following features: By integrating the physical screen with the virtual window, data switching during work is made smoother. By learning the content of the main screen and the usage habits of other applications, the virtual window's content and corresponding screen size ratio are automatically switched based on parameter weights.

雖然本發明已以實施例揭露如上,然其並非用以限定本發明,任何所屬技術領域中具有通常知識者,在不脫離本發明的精神和範圍內,當可作些許的更動與潤飾,故本發明的保護範圍當視後附的申請專利範圍所界定者為準。 Although the present invention has been disclosed above through embodiments, they are not intended to limit the present invention. Anyone with ordinary skill in the art may make minor modifications and improvements without departing from the spirit and scope of the present invention. Therefore, the scope of protection of the present invention shall be determined by the scope of the attached patent application.

S410~S430:步驟 S410~S430: Steps

Claims (16)

一種穿戴式裝置,包括:一影像擷取裝置,用以取得一擷取影像;一投影模組,用以投影一虛擬畫面;以及一處理器,耦接該影像擷取裝置及該投影模組,並經配置用以:辨識該擷取影像中的一實體顯示區域,其中該實體顯示區域對應於一顯示器的顯示畫面的尺寸,且一外部裝置包括該顯示器;依據該實體顯示區域的位置定義至少一虛擬顯示區域的位置,其中該虛擬畫面對應於該虛擬顯示區域;以及依據該外部裝置的至少一應用程式的相關參數,透過該投影模組投影該至少一應用程式的介面至該至少一虛擬顯示區域,其中該至少一應用程式的相關參數包括一使用歷程,該使用歷程包括下列至少一者:該些應用程式的一切換頻率;該些應用程式的一同時顯示時間;以及該些應用程式的一使用量,該至少一應用程式包括多個應用程式,該至少一虛擬顯示區域包括多個虛擬顯示區域,且該處理器更經配置用以:對該些虛擬顯示區域分別賦予一優先權;依據該些應用程式的該使用歷程排序該些應用程式,以產生一排序結果;以及依據該排序結果透過該投影模組投影該些應用程式的介面在對應優先權的該虛擬顯示區域。 A wearable device includes: an image capture device for acquiring a captured image; a projection module for projecting a virtual screen; and a processor coupled to the image capture device and the projection module and configured to: identify a physical display area in the captured image, wherein the physical display area corresponds to the size of a display screen of a display, and an external device includes the display; define the position of at least one virtual display area according to the position of the physical display area, wherein the virtual screen corresponds to the virtual display area; and project, through the projection module, the interface of the at least one application onto the at least one virtual display area according to relevant parameters of at least one application of the external device, wherein the parameters related to the at least one application include a usage history, the usage history including at least one of the following: a switching frequency of the applications; a time period during which the applications are displayed simultaneously; and a usage amount of the applications; the at least one application includes a plurality of applications, and the at least one virtual display area includes a plurality of virtual display areas.
The processor is further configured to: assign a priority to each of the virtual display areas; sort the applications according to the usage history to generate a sorting result; and project, via the projection module, the interfaces of the applications on the virtual display areas corresponding to the priorities according to the sorting result. 如請求項1所述的穿戴式裝置,其中該處理器更經配置用以:辨識該擷取影像中的至少二標記點;以及依據該至少二標記點的位置決定該實體顯示區域。 The wearable device of claim 1, wherein the processor is further configured to: identify at least two marker points in the captured image; and determine the physical display area based on the positions of the at least two marker points. 如請求項1所述的穿戴式裝置,其中該處理器更經配置用以:定義該至少一虛擬顯示區域位於該實體顯示區域的右側、左側及頂側中的至少一者。 The wearable device of claim 1, wherein the processor is further configured to: define the at least one virtual display area to be located at least one of the right side, the left side, and the top side of the physical display area. 如請求項1所述的穿戴式裝置,其中該處理器更經配置用以:依據該至少一應用程式的介面的顯示屬性決定該至少一虛擬顯示區域的尺寸及形狀。 The wearable device of claim 1, wherein the processor is further configured to: determine the size and shape of the at least one virtual display area based on display properties of the at least one application interface. 如請求項1所述的穿戴式裝置,其中該處理器更經配置用以:將該切換頻率、該同時顯示時間及該使用量中的至少一者依據對應權重進行一加權運算;以及依據該加權運算的數值決定該排序結果。 The wearable device of claim 1, wherein the processor is further configured to: perform a weighted calculation on at least one of the switching frequency, the simultaneous display time, and the usage amount according to a corresponding weight; and determine the ranking result according to a value obtained from the weighted calculation. 
如請求項1所述的穿戴式裝置,其中該至少一應用程式的相關參數包括一應用屬性,該至少一應用程式包括多個應用程式,該至少一虛擬顯示區域包括多個虛擬顯示區域,且該處理器更經配置用以:依據該些應用程式的該應用屬性將該些應用程式分配到多個群組,其中每一該群組對應於一該虛擬顯示區域;以及透過該投影模組投影該些應用程式的介面在對應群組的一該虛擬顯示區域。 The wearable device of claim 1, wherein the parameters associated with the at least one application include an application attribute, the at least one application includes a plurality of applications, the at least one virtual display area includes a plurality of virtual display areas, and the processor is further configured to: assign the applications to a plurality of groups based on the application attributes, wherein each group corresponds to a virtual display area; and project, via the projection module, the interfaces of the applications onto a virtual display area corresponding to a group. 如請求項1所述的穿戴式裝置,其中該至少一應用程式包括一主應用程式及至少一次應用程式,且該處理器更經配置用以:辨識該擷取影像中的該主應用程式的介面;以及依據該至少一次應用程式的相關參數,透過該投影模組投影該至少一次應用程式的介面至該至少一虛擬顯示區域。 The wearable device of claim 1, wherein the at least one application includes a main application and at least one secondary application, and the processor is further configured to: recognize an interface of the main application in the captured image; and project the interface of the at least one secondary application onto the at least one virtual display area via the projection module based on parameters related to the at least one secondary application.
如請求項1所述的穿戴式裝置,其中該至少一應用程式包括一主應用程式及至少一次應用程式,該顯示器顯示該主應用程式的介面,且該處理器更經配置用以:接收一開合操作;以及依據該開合操作透過該投影模組投影該至少一應用程式的介面至該至少一虛擬顯示區域;或者依據該開合操作停止透過該投影模組投影該至少一應用程式的介面至該至少一虛擬顯示區域。 The wearable device of claim 1, wherein the at least one application includes a main application and at least one secondary application, the display displays the interface of the main application, and the processor is further configured to: receive an opening and closing operation; and project the interface of the at least one application onto the at least one virtual display area via the projection module in response to the opening and closing operation; or stop projecting the interface of the at least one application onto the at least one virtual display area via the projection module in response to the opening and closing operation. 一種虛實畫面整合方法,適用於一穿戴式裝置,該穿戴式裝置包括一影像擷取裝置、一投影模組、及一處理器,該虛實畫面整合方法包括:透過該處理器辨識該影像擷取裝置取得的一擷取影像中的一實體顯示區域,其中該實體顯示區域對應於一顯示器的顯示畫面的尺寸,且一外部裝置包括該顯示器;透過該處理器依據該實體顯示區域的位置定義至少一虛擬顯示區域的位置,其中該投影模組投影的一虛擬畫面對應於該虛擬顯示區域;以及透過該處理器依據該外部裝置的至少一應用程式的相關參數,經由該投影模組投影該至少一應用程式的介面至該至少一虛擬顯示區域,其中該至少一應用程式的相關參數包括一使用歷程,該使用歷程包括下列至少一者:該些應用程式的一切換頻率;該些應用程式的一同時顯示時間;以及該些應用程式的一使用量,該至少一應用程式包括多個應用程式,該至少一虛擬顯示區域包括多個虛擬顯示區域,且依據該外部裝置的該至少一應用程式的相關參數投影該至少一應用程式的介面至該至少一虛擬顯示區域的步驟包括:透過該處理器對該些虛擬顯示區域分別賦予一優先權;透過該處理器依據該些應用程式的該使用歷程排序該些應用程式,以產生一排序結果;以及透過該處理器依據該排序結果透過該投影模組投影該些應用程式的介面在對應優先權的一該虛擬顯示區域。 A virtual and real image integration method is applicable to a wearable device, the wearable device including an image capture device, a projection module, and a processor.
The virtual and real image integration method includes: identifying, by the processor, a physical display area in a captured image obtained by the image capture device, wherein the physical display area corresponds to the size of a display screen of a display, and an external device includes the display; defining, by the processor, a position of at least one virtual display area according to the position of the physical display area, wherein a virtual screen projected by the projection module corresponds to the virtual display area; and projecting, by the processor via the projection module, an interface of the at least one application onto the at least one virtual display area according to relevant parameters of the at least one application of the external device, wherein the relevant parameters of the at least one application include a usage history, the usage history including at least one of the following: a switching frequency of the applications; a simultaneous display time of the applications; and a usage amount of the applications; the at least one application includes a plurality of applications, the at least one virtual display area includes a plurality of virtual display areas, and the step of projecting the interface of the at least one application onto the at least one virtual display area according to the relevant parameters of the at least one application of the external device includes: assigning, by the processor, a priority to each of the virtual display areas; sorting, by the processor, the applications based on the usage history of the applications to generate a sorting result; and projecting, by the projection module according to the sorting result, the interfaces of the applications on the virtual display area corresponding to the priority.
如請求項9所述的虛實畫面整合方法,其中辨識該擷取影像中的該實體顯示區域的步驟包括:透過該處理器辨識該擷取影像中的至少二標記點;以及透過該處理器依據該至少二標記點的位置決定該實體顯示區域。 The virtual-reality image integration method of claim 9, wherein the step of identifying the physical display area in the captured image comprises: identifying at least two marker points in the captured image by the processor; and determining the physical display area by the processor based on the positions of the at least two marker points. 如請求項9所述的虛實畫面整合方法,其中依據該實體顯示區域的位置定義該至少一虛擬顯示區域的位置的步驟包括:透過該處理器定義該至少一虛擬顯示區域位於該實體顯示區域的右側、左側及頂側中的至少一者。 The virtual and real screen integration method of claim 9, wherein the step of defining the position of the at least one virtual display area based on the position of the physical display area includes: defining, by the processor, that the at least one virtual display area is located at least one of the right side, the left side, and the top side of the physical display area. 如請求項9所述的虛實畫面整合方法,更包括:透過該處理器依據該至少一應用程式的介面的顯示屬性決定該至少一虛擬顯示區域的尺寸及形狀。 The virtual and real screen integration method of claim 9 further comprises: determining, by the processor, the size and shape of the at least one virtual display area based on the display properties of the at least one application interface. 如請求項9所述的虛實畫面整合方法,其中依據該排序結果透過該投影模組投影該些應用程式的介面在對應優先權的一該虛擬顯示區域的步驟包括:透過該處理器將該切換頻率、該同時顯示時間及該使用量中的至少一者依據對應權重進行一加權運算;以及透過該處理器依據該加權運算的數值決定該排序結果。 The virtual and real screen integration method of claim 9, wherein the step of projecting the application interfaces onto a virtual display area corresponding to the priority using the projection module based on the ranking result comprises: performing a weighted calculation on at least one of the switching frequency, the simultaneous display time, and the usage amount based on a corresponding weight; and determining the ranking result based on the value of the weighted calculation using the processor. 
如請求項9所述的虛實畫面整合方法,其中該至少一應用程式的相關參數包括一應用屬性,該至少一應用程式包括多個應用程式,該至少一虛擬顯示區域包括多個虛擬顯示區域,且依據該外部裝置的該至少一應用程式的相關參數投影該至少一應用程式的介面至該至少一虛擬顯示區域的步驟包括:透過該處理器依據該些應用程式的該應用屬性將該些應用程式分配到多個群組,其中每一該群組對應於該虛擬顯示區域;以及透過該投影模組投影該些應用程式的介面在對應群組的一該虛擬顯示區域。 The method for integrating virtual and real screens as described in claim 9, wherein the parameters related to the at least one application include an application attribute, the at least one application includes a plurality of applications, the at least one virtual display area includes a plurality of virtual display areas, and the step of projecting the interface of the at least one application onto the at least one virtual display area based on the parameters related to the at least one application of the external device comprises: assigning, by the processor, the applications into a plurality of groups based on the application attributes, wherein each group corresponds to the virtual display area; and projecting, by the projection module, the interfaces of the applications onto a virtual display area corresponding to the group. 如請求項9所述的虛實畫面整合方法,其中該至少一應用程式包括一主應用程式及至少一次應用程式,且依據該外部裝置的該至少一應用程式的相關參數投影該至少一應用程式的介面至該至少一虛擬顯示區域的步驟包括:透過該處理器辨識該擷取影像中的該主應用程式的介面;以及透過該處理器依據該至少一次應用程式的相關參數,透過該投影模組投影該至少一次應用程式的介面至該至少一虛擬顯示區域。 The method for integrating virtual and real images as described in claim 9, wherein the at least one application includes a main application and at least one secondary application, and the step of projecting the interface of the at least one application onto the at least one virtual display area based on parameters related to the at least one application from the external device comprises: identifying the interface of the main application in the captured image by the processor; and projecting the interface of the at least one secondary application onto the at least one virtual display area by the projection module based on the parameters related to the at least one secondary application by the processor.
如請求項9所述的虛實畫面整合方法,其中該至少一應用程式包括一主應用程式及至少一次應用程式,該顯示器顯示該主應用程式的介面,且該虛實畫面整合方法更包括:透過該處理器接收一開合操作;以及 依據該開合操作透過該投影模組投影該至少一應用程式的介面至該至少一虛擬顯示區域;或者依據該開合操作停止透過該投影模組投影該至少一應用程式的介面至該至少一虛擬顯示區域。 The method for integrating virtual and real screens as described in claim 9, wherein the at least one application includes a main application and at least one secondary application, the display displays the interface of the main application, and the method further comprises: receiving an opening and closing operation via the processor; and projecting the interface of the at least one application onto the at least one virtual display area via the projection module in response to the opening and closing operation; or ceasing to project the interface of the at least one application onto the at least one virtual display area via the projection module in response to the opening and closing operation.
TW113104883A 2023-02-17 2024-02-07 Wearable apparatus and integrating method of virtual and real images TWI899841B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202363446363P 2023-02-17 2023-02-17
US63/446,363 2023-02-17

Publications (2)

Publication Number Publication Date
TW202439091A TW202439091A (en) 2024-10-01
TWI899841B true TWI899841B (en) 2025-10-01

Family

ID=94081658

Family Applications (1)

Application Number Title Priority Date Filing Date
TW113104883A TWI899841B (en) 2023-02-17 2024-02-07 Wearable apparatus and integrating method of virtual and real images

Country Status (1)

Country Link
TW (1) TWI899841B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI437464B (en) * 2011-09-01 2014-05-11 Ind Tech Res Inst Head mount personal computer and interactive system using the same
TWI629507B (en) * 2017-05-11 2018-07-11 宏達國際電子股份有限公司 Head-mounted display devices and adaptive masking methods thereof
CN113342433A (en) * 2021-05-08 2021-09-03 杭州灵伴科技有限公司 Application page display method, head-mounted display device and computer readable medium
TWI745955B (en) * 2020-05-06 2021-11-11 宏碁股份有限公司 Augmented reality system and anchor display method thereof
CN113711175A (en) * 2019-09-26 2021-11-26 苹果公司 Wearable electronic device presenting a computer-generated real-world environment
US20220253263A1 (en) * 2021-02-08 2022-08-11 Multinarity Ltd Temperature-controlled wearable extended reality appliance

Also Published As

Publication number Publication date
TW202439091A (en) 2024-10-01

Similar Documents

Publication Publication Date Title
US10832086B2 (en) Target object presentation method and apparatus
US10115015B2 (en) Method for recognizing a specific object inside an image and electronic device thereof
US9245193B2 (en) Dynamic selection of surfaces in real world for projection of information thereon
US11538096B2 (en) Method, medium, and system for live preview via machine learning models
US11158057B2 (en) Device, method, and graphical user interface for processing document
CN111541907B (en) Article display method, apparatus, device and storage medium
US11270485B2 (en) Automatic positioning of textual content within digital images
US9721391B2 (en) Positioning of projected augmented reality content
US11295495B2 (en) Automatic positioning of textual content within digital images
US20200304713A1 (en) Intelligent Video Presentation System
US20170076428A1 (en) Information processing apparatus
US10705720B2 (en) Data entry system with drawing recognition
CN118945309A (en) Camera-based transparent display
TWI899841B (en) Wearable apparatus and integrating method of virtual and real images
US20220293067A1 (en) Information processing apparatus, information processing method, and program
CN115150606A (en) Image blurring method and device, storage medium and terminal equipment
WO2023272495A1 (en) Badging method and apparatus, badge detection model update method and system, and storage medium
US20250124580A1 (en) Method and apparatus for image processing, electronic device and storage medium
KR20230108885A (en) Display apparatus and method for controlling display apparatus
JP6914369B2 (en) Vector format small image generation
JP2025183148A (en) Device, display method, program, display system
KR20250179941A (en) Electronic device and method for the same
JP2013149023A (en) Display system, display program, and display method
CN119449978A (en) Image display method and device
CN112541948A (en) Object detection method and device, terminal equipment and storage medium