TWI877952B - System, method and computer program product for synchronizing virtual and real world scene content - Google Patents
- Publication number: TWI877952B
Description
The present disclosure relates to virtual reality (VR) and augmented reality (AR) technology and, more particularly, to a system, method, and computer program product for synchronizing virtual and real-world scene content.
Holding exhibitions and performances in a virtual world has become a new form of presentation. Combined with remote rendering, it can deliver high-quality visual effects, letting audiences enjoy realistic, finely detailed virtual events and giving users a new viewing option.
For example, when providing an online game, a server can stream virtual scene images to a user's augmented reality (AR) or virtual reality (VR) device through remote rendering, creating an immersive experience. An AR or VR device can also interact with the virtual scene. Further, a system can capture footage of an event held in the real world, reconstruct it in the virtual world, and let spectators participate as virtual characters. However, these prior techniques only describe how to interact with virtual scenes or virtual objects; they do not address how to use remote rendering to let people in the real world interact with people in the virtual world.
To solve the above and other problems, the present disclosure provides a system, method, computer program product, and computer-readable recording medium for synchronizing virtual and real-world scene content.
The system for synchronizing virtual and real-world scene content disclosed herein comprises: an avatar rendering module for rendering an avatar of a virtual reality device; a virtual-representative rendering module for rendering a virtual representative of an augmented reality device; a virtual scene rendering module for rendering a virtual scene and placing the avatar and the virtual representative in it; a parameter receiving and applying module that receives parameters and timestamps from the virtual reality device and applies them to the avatar, so that the avatar's virtual camera updates its viewpoint image according to those parameters and timestamps, and that likewise receives parameters and timestamps from the augmented reality device and applies them to the virtual representative, so that the virtual representative's virtual camera updates its viewpoint image according to those parameters and timestamps; and a streaming module that transmits the avatar's virtual-camera viewpoint image to the virtual reality device and the virtual representative's virtual-camera viewpoint image to the augmented reality device, so that the users of the virtual reality device and the augmented reality device interact on the same timeline.
The method for synchronizing virtual and real-world scene content disclosed herein comprises: rendering a virtual scene; rendering an avatar of a virtual reality device and placing the avatar in the virtual scene; rendering a virtual representative of an augmented reality device and placing the virtual representative in the virtual scene; receiving parameters and timestamps from the virtual reality device and applying them to the avatar, so that the avatar's virtual camera updates its viewpoint image according to those parameters and timestamps; receiving parameters and timestamps from the augmented reality device and applying them to the virtual representative, so that the virtual representative's virtual camera updates its viewpoint image according to those parameters and timestamps; and transmitting the avatar's virtual-camera viewpoint image to the virtual reality device and the virtual representative's virtual-camera viewpoint image to the augmented reality device, so that the users of the virtual reality device and the augmented reality device interact on the same timeline.
The computer program product disclosed herein, when loaded into and executed by a computer, performs the disclosed method for synchronizing virtual and real-world scene content.
The computer-readable recording medium disclosed herein stores instructions that, when executed by a computing device or computer through a processor and/or memory, perform the disclosed method for synchronizing virtual and real-world scene content.
In one embodiment, an image of a user wearing the augmented reality device, captured by an on-site image capture and transmission device, is received together with a timestamp; the user's position and body pose are analyzed from it, the virtual representative is generated according to the analyzed pose, and the virtual representative is placed in the virtual scene according to the analyzed position.
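The per-frame analysis step above can be sketched as follows. This is a minimal illustration, assuming the capture device delivers named 2D skeleton keypoints; the keypoint names, the pose labels, and the record layout are assumptions, not taken from the patent.

```python
def classify_pose(keypoints):
    """Return a coarse pose label from named 2D keypoints (x, y; y grows downward)."""
    # A wrist above the head in image coordinates is treated as a raised hand.
    if keypoints["right_wrist"][1] < keypoints["head"][1]:
        return "hand_raised"
    return "standing"

def make_representative(user_id, keypoints, position, timestamp):
    """Build the state record used to place a virtual representative in the scene."""
    return {
        "user_id": user_id,
        "pose": classify_pose(keypoints),
        "position": position,    # scene coordinates from the position analysis
        "timestamp": timestamp,  # kept so all devices share one timeline
    }
```

A real system would obtain the keypoints from a pose-estimation model rather than receive them directly.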
In one embodiment, before the virtual representative's virtual-camera viewpoint image is transmitted, the background is removed from the avatar in that image to form a background-removed avatar image, which is then streamed to the augmented reality device so that the device can overlay it on the live image captured by its own camera.
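The overlay step can be sketched as a plain alpha blend: the server sends the avatar as an RGBA image whose background pixels are fully transparent, and the AR device blends it over its own camera frame. The pixel layout (lists of rows of tuples) and the function name are assumptions for illustration.

```python
def composite(avatar_rgba, camera_rgb):
    """Alpha-blend a background-removed RGBA image over an RGB camera frame.

    Both images are lists of rows of pixel tuples and must share dimensions.
    """
    out = []
    for fg_row, bg_row in zip(avatar_rgba, camera_rgb):
        row = []
        for (fr, fg, fb, fa), (br, bg_, bb) in zip(fg_row, bg_row):
            a = fa / 255.0  # alpha 0 = background (camera shows through)
            row.append((
                round(fr * a + br * (1 - a)),
                round(fg * a + bg_ * (1 - a)),
                round(fb * a + bb * (1 - a)),
            ))
        out.append(row)
    return out
```

In practice the blend would run on the GPU per frame, but the arithmetic per pixel is the same.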
In one embodiment, an image of a spectator captured by an on-site image capture and transmission device is received together with a timestamp; the spectator's position and body pose are analyzed from it, a virtual character is generated according to the analyzed pose, and the virtual character is placed in the virtual scene according to the analyzed position. When the virtual representative's virtual-camera viewpoint image is transmitted, the background is first removed from the virtual character in that image to form a background-removed virtual-character image, which is then streamed to the augmented reality device so that the device can overlay it on the live image captured by its own camera.
In one embodiment, the parameters of the virtual reality device are its own degrees-of-freedom (DoF) data, and the parameters of the augmented reality device are its own DoF data; the VR device's DoF data controls the viewpoint of the avatar's virtual camera, while the AR device's DoF data controls the viewpoint of the virtual representative's virtual camera.
In one embodiment, the parameters of the virtual reality device are its controller data, and the parameters of the augmented reality device are its controller data; the VR device's controller data controls the avatar's actions, while the AR device's controller data controls the virtual representative's actions.
In other words, in the disclosed system, method, computer program product, and computer-readable recording medium, a remote rendering server (the system described herein) receives images and timestamps of live footage captured by several cameras within a specific area of a real-world event. The server renders the virtual scene and the avatars of VR device users, and from the camera images it detects each spectator and their pose, generating virtual characters (for people not using AR devices) and virtual representatives (for AR device users) and placing them in the virtual scene. The AR/VR devices apply their DoF data, controller data, and timestamps in real time to the virtual cameras of the virtual representatives and avatars, so each virtual camera's viewpoint updates as the user moves the device. The server then streams the viewpoint or interaction images captured by those virtual cameras back to the users' AR/VR devices in a synchronized manner, ensuring that the images seen by users of both devices lie on the same timeline.
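The server-side loop described above can be sketched as follows: the server keeps, per device, only the most recent stamped parameter update, and each render tick produces one frame per device stamped with the same server time, which is what puts both users on one timeline. The class and method names are illustrative, not taken from the patent.

```python
class SyncRenderServer:
    """Minimal sketch of a remote rendering server that keeps AR and VR
    viewpoints on a shared timeline."""

    def __init__(self):
        self.latest = {}  # device_id -> (timestamp, params)

    def receive_update(self, device_id, timestamp, params):
        """Keep only the newest update per device; out-of-order packets are dropped."""
        current = self.latest.get(device_id)
        if current is None or timestamp > current[0]:
            self.latest[device_id] = (timestamp, params)

    def render_tick(self, now):
        """Render one frame per device, all stamped with the same server time."""
        return {dev: {"frame_time": now, "applied": params}
                for dev, (_, params) in self.latest.items()}
```

Here "rendering" is reduced to echoing the applied parameters; a real server would rasterize each virtual camera's view instead.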
10: Image capture and transmission device
20: System
21: Virtual scene rendering module
22: Avatar rendering module
23: Virtual representative rendering module
24: Image receiving and analysis module
25: DoF parameter receiving and applying module
26: Action parameter receiving and applying module
27: Streaming module
30: AR device
31: DoF parameter transmission module
32: Action parameter transmission module
33: Image stream receiving module
34: Virtual-real image synchronization and overlay module
35: Camera module
40: VR device
41: DoF parameter transmission module
42: Action parameter transmission module
43: Image stream receiving module
S201~S214: Steps
1: Camera
2: Remote rendering server
3: AR device
4: VR device
FIG. 1 is a block diagram of an embodiment of the system for synchronizing virtual and real-world scene content of the present disclosure.
FIG. 2 is a flow chart of an embodiment of the method for synchronizing virtual and real-world scene content of the present disclosure.
FIG. 3 is a schematic diagram of a specific embodiment of the system and method for synchronizing virtual and real-world scene content of the present disclosure.
The following specific embodiments illustrate the implementation of the present disclosure; those skilled in the art can readily understand its other advantages and effects from the content disclosed herein. The structures, ratios, and sizes depicted in the accompanying drawings are provided only to accompany the disclosed content for the understanding of those skilled in the art, and are not intended to limit the conditions under which the disclosure may be practiced. Any modification, change, or adjustment that does not affect the effects and objectives achievable by the disclosure shall still fall within the scope covered by the disclosed technical content.
As used herein, the terms "include", "comprise", "have", "contain", or any variants thereof are intended to cover a non-exclusive inclusion. Unless otherwise indicated, singular forms such as "a", "an", and "the" also cover the plural, and terms such as "or" and "and/or" are used interchangeably.
Referring to FIG. 1, a schematic architecture diagram of the system for synchronizing virtual and real-world scene content, the system 20 includes a virtual scene rendering module 21, an avatar rendering module 22, a virtual representative rendering module 23, an image receiving and analysis module 24, a DoF parameter receiving and applying module 25, an action parameter receiving and applying module 26, and a streaming module 27.
The system 20 may be a server (e.g., a remote rendering server), and each of its modules or units may be software, hardware, or firmware. If hardware, a module may be a processing unit, processor, or computer host with data processing and computing capabilities; if software or firmware, it may include instructions executable by a processing unit, processor, or computer, and may be installed on one hardware device or distributed across several.
The virtual scene rendering module 21 renders the virtual scene. In one embodiment, the virtual scene may be a three-dimensional space consistent with the real world (e.g., a metaverse).
The avatar rendering module 22 renders the avatar, and the virtual scene rendering module 21 places the avatar in the virtual scene. In one embodiment, the avatar corresponds to the user wearing the VR device 40.
The virtual representative rendering module 23 renders the virtual representative, and the virtual scene rendering module 21 places the virtual representative in the virtual scene. In one embodiment, the virtual representative corresponds to the user wearing the AR device 30.
The image receiving and analysis module 24 receives live images and timestamps from the image capture and transmission device 10 (e.g., an on-site camera) and analyzes from them the positions and poses of the on-site audience, so that the system 20 can generate a virtual representative or virtual character according to the analyzed pose and place it in the virtual scene according to the analyzed position.
In one embodiment, for an on-site spectator wearing the AR device 30, the image receiving and analysis module 24 receives that user's image and timestamp from the image capture and transmission device 10 and analyzes the user's position and pose; the virtual representative rendering module 23 generates the virtual representative according to the analyzed pose, and the virtual scene rendering module 21 places the virtual representative in the virtual scene according to the analyzed position.
In another embodiment, for an on-site spectator not wearing an AR device 30, the image receiving and analysis module 24 receives the spectator's image and timestamp from the image capture and transmission device 10 and analyzes the spectator's position and pose; the system 20 generates a virtual character according to the analyzed pose, and the virtual scene rendering module 21 places the virtual character in the virtual scene according to the analyzed position.
In other words, the system 20 applies image recognition to the received camera images to identify each spectator's coordinate position and pose, and applies the analyzed pose (e.g., a raised hand) to the corresponding virtual character. The coordinate information can be computed from multiple camera images taken at the same moment from different angles, yielding the spectator's distance from the cameras and relative position within the venue; the virtual character is then placed at the corresponding position in the virtual three-dimensional space (which maps to the real-world scene). Similarly, avatars are rendered at designated coordinates in the same virtual three-dimensional space.
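The multi-angle position computation above can be sketched, in a simplified 2D form, as intersecting the bearing rays from two cameras whose venue positions are known. Real systems use full camera calibration and triangulation over more views; the function below is a minimal geometric sketch under those stated simplifications.

```python
import math

def locate(cam_a, bearing_a, cam_b, bearing_b):
    """Intersect two 2D bearing rays (angles in radians, measured from +x)
    to estimate a spectator's venue position (x, y)."""
    ax, ay = cam_a
    bx, by = cam_b
    dax, day = math.cos(bearing_a), math.sin(bearing_a)
    dbx, dby = math.cos(bearing_b), math.sin(bearing_b)
    denom = dax * dby - day * dbx  # cross product of the two ray directions
    if abs(denom) < 1e-9:
        raise ValueError("bearings are parallel; another camera angle is needed")
    # Solve cam_a + t * d_a = cam_b + s * d_b for t.
    t = ((bx - ax) * dby - (by - ay) * dbx) / denom
    return (ax + t * dax, ay + t * day)
```

With more than two cameras, a least-squares intersection of all rays would make the estimate robust to per-camera detection noise.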
The parameter receiving and applying module includes the DoF parameter receiving and applying module 25 and the action parameter receiving and applying module 26.
The DoF parameter receiving and applying module 25 receives degrees-of-freedom (DoF) data and timestamps from the DoF parameter transmission module 41 of the VR device 40 and applies them to the avatar, so that the avatar's virtual camera updates its viewpoint image according to those parameters and timestamps; that is, module 25 uses the DoF data sent by the VR device 40 to control the viewpoint of the avatar's virtual camera. Likewise, module 25 receives DoF data and timestamps from the DoF parameter transmission module 31 of the AR device 30 and applies them to the virtual representative, so that the virtual representative's virtual camera updates its viewpoint image accordingly; that is, module 25 uses the DoF data sent by the AR device 30 to control the viewpoint of the virtual representative's virtual camera.
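Applying a stamped DoF update to a virtual camera can be sketched as below. The angle convention (yaw 0 looks along +z, positive pitch looks up) and the dictionary-based camera state are assumptions for illustration; a real engine would build a full view matrix from the headset's orientation quaternion.

```python
import math

def camera_forward(yaw_deg, pitch_deg):
    """Unit forward vector for a virtual camera from yaw/pitch angles in degrees."""
    yaw = math.radians(yaw_deg)
    pitch = math.radians(pitch_deg)
    return (
        math.sin(yaw) * math.cos(pitch),  # x
        math.sin(pitch),                  # y (up)
        math.cos(yaw) * math.cos(pitch),  # z
    )

def apply_dof(camera, position, yaw_deg, pitch_deg, timestamp):
    """Apply a stamped 6-DoF update to a camera state dict; newest timestamp wins."""
    if timestamp <= camera.get("timestamp", -1):
        return camera  # stale update: keep the current viewpoint
    camera.update(position=position,
                  forward=camera_forward(yaw_deg, pitch_deg),
                  timestamp=timestamp)
    return camera
```

The timestamp guard is what keeps out-of-order network packets from snapping the viewpoint backwards.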
The action parameter receiving and applying module 26 receives controller data and timestamps from the action parameter transmission module 42 of the VR device 40 and applies them to the avatar, so that the avatar updates its actions according to that data and timestamp; that is, module 26 uses the controller data sent by the VR device 40 to control the avatar's actions. Likewise, module 26 receives controller data and timestamps from the action parameter transmission module 32 of the AR device 30 and applies them to the virtual representative, so that the virtual representative updates its actions accordingly; that is, module 26 uses the controller data sent by the AR device 30 to control the virtual representative's actions.
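The controller-data branch can be sketched as mapping stamped controller events to avatar or representative actions, applied in timestamp order. The gesture names and the `ACTIONS` table are assumptions for illustration, not from the patent.

```python
# Hypothetical mapping from controller gestures to avatar actions.
ACTIONS = {"trigger": "grab", "thumbs_up": "wave", "stick_forward": "walk"}

def apply_controller_events(events):
    """Map (timestamp, gesture) controller events to a time-ordered action list.

    Unrecognized gestures fall back to "idle" so a bad packet cannot
    leave the avatar in an undefined state.
    """
    ordered = sorted(events, key=lambda e: e[0])  # replay in timestamp order
    return [(ts, ACTIONS.get(gesture, "idle")) for ts, gesture in ordered]
```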
In this way, for users of either the AR device 30 or the VR device 40, because their corresponding virtual representatives and avatars occupy the same virtual three-dimensional space, the virtual camera attached to each can capture the other's movements and viewpoint within the scene.
The streaming module 27 transmits the viewpoint image of the avatar's virtual camera to the VR device 40, where the image stream receiving module 43 receives and plays it. Likewise, the streaming module 27 transmits the viewpoint image of the virtual representative's virtual camera to the AR device 30, where the image stream receiving module 33 receives and plays it. Users of the VR device 40 and the AR device 30 can thereby interact on the same timeline, achieving interaction between the virtual world and the real world.
In one embodiment, when transmitting the virtual representative's virtual-camera viewpoint image, the streaming module 27 removes the background from the avatar in that image to form a background-removed avatar image and streams it to the AR device 30, where the virtual-real image synchronization and overlay module 34 overlays it on the live image captured by the camera module 35 of the AR device 30.
In another embodiment, the streaming module 27 removes the background from the virtual character in the virtual representative's virtual-camera viewpoint image to form a background-removed virtual-character image and streams it to the AR device 30, where the virtual-real image synchronization and overlay module 34 overlays it on the live image captured by the camera module 35 of the AR device 30.
In one embodiment, the AR device 30 and VR device 40 shown in FIG. 1 (e.g., AR/VR headsets or MR helmets) may be, for example, an Apple Vision Pro (primarily AR, secondarily VR) or a Meta Oculus Quest 3 (primarily VR, secondarily AR), each with a built-in or installed metaverse application. Through this application, the DoF data of the AR device 30 and VR device 40 is transmitted in real time by the DoF parameter transmission modules 31 and 41 to the DoF parameter receiving and applying module 25 and applied to the virtual cameras of the virtual representative and avatar, updating their viewpoints. The application also transmits motion data detected by the devices' controllers (e.g., gestures) in real time through the action parameter transmission modules 32 and 42 to the action parameter receiving and applying module 26, where it is applied to the virtual representative and avatar, thereby ensuring that the viewpoint or interaction images presented on the AR device 30 and VR device 40 stay on the same timeline.
In addition, the image capture and transmission device 10 can capture live footage including people and objects, along with live sound, and transmit it to the system 20, which renders a virtual scene including image and sound and streams it to the AR device 30 and the VR device 40.
According to one or more embodiments of the disclosed system for synchronizing virtual and real-world scene content, remote rendering and synchronization allow users in the real world and the virtual world to see and interact with each other through AR and VR devices. That is, when a user logs into the metaverse through a VR device, the user becomes an avatar and can interact with other avatars, virtual scenes, virtual objects, virtual representatives, and virtual characters, while a user in the real world, even without logging into the virtual world, can still interact through a virtual representative with the avatars, virtual scenes, virtual objects, virtual representatives, and virtual characters in the virtual world.
Referring to FIG. 2, a flow chart of the steps of the method for synchronizing virtual and real-world scene content. Note that the method shown in FIG. 2 may be executed by the system 20 or server shown in FIG. 1.
Step S201: render the virtual scene. In one embodiment, the virtual scene may be a virtual three-dimensional space modeled on a real scene, or another virtual world, and the rendered scene may include sound or other multimedia information.
Steps S202~S207 concern the virtual representative of the AR device; steps S208~S213 concern the avatar of the VR device.
Steps S202 and S203: render the virtual representative of the AR device and place it in the virtual scene. Steps S208 and S209: render the avatar of the VR device and place it in the virtual scene.
In one embodiment, the virtual representative is generated from the pose analyzed in images of the on-site spectator wearing the AR device, captured by on-site cameras for analysis, and is placed at the corresponding position in the virtual scene according to the position analyzed from those images.
In another embodiment, virtual characters corresponding to on-site spectators or virtual objects corresponding to on-site objects may also be rendered; a virtual character is generated from the pose analyzed in the spectator's images and, similarly, the rendered character or object is placed in the virtual scene according to the analyzed position.
In yet another embodiment, the avatar may be chosen by the user of the VR device.
Steps S204 and S205: receive parameters and timestamps from the AR device and apply them to the virtual representative. Steps S210 and S211: receive parameters and timestamps from the VR device and apply them to the avatar.
In one embodiment, the parameters and timestamps are the DoF data and timestamps of the AR or VR device itself, received to control the viewpoint image of the virtual representative or avatar. In another embodiment, they are the controller data and timestamps of the AR or VR device, received to control the actions of the virtual representative or avatar.
步驟S206,虛擬代表的虛擬攝影機根據參數及時間戳更新虛擬代表的虛擬攝影機的視角畫面;步驟S212,虛擬化身的虛擬攝影機根據參數及時間戳更新虛擬化身的虛擬攝影機的視角畫面。亦即,AR裝置或VR裝置的使用者移動頭部時,對應的虛擬代表的虛擬攝影機的視角畫面也會同步改變。此時,不論是AR裝置或VR裝置的使用者,由於它們所對應到的虛擬代表和虛擬化身皆處於同一個虛擬三維空間中,故彼此身上的虛擬攝影機可捕捉到對方的動作以及所在的場景視角。 In step S206, the virtual representative's virtual camera updates its view according to the parameters and timestamps; in step S212, the virtual avatar's virtual camera updates its view according to the parameters and timestamps. That is, when the user of the AR device or VR device moves his or her head, the view of the corresponding virtual camera changes synchronously. Since the virtual representatives and virtual avatars of all users, whether of AR devices or VR devices, reside in the same virtual three-dimensional space, the virtual camera attached to each can capture the other's movements and viewpoint within the scene.
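以下以 Python 簡示「僅接受時間戳較新之參數」的虛擬攝影機更新方式;此為示意性假設,`PoseSample` 欄位與丟棄過期樣本的規則皆為說明用途,並非本案之專利方法。 A minimal sketch of applying timestamped DoF samples to an avatar's virtual camera, assuming a simple last-writer-wins policy keyed on device timestamps; the `PoseSample` fields and the drop-stale rule are illustrative assumptions, not the patented method.

```python
from dataclasses import dataclass

# 示意性假設 / hypothetical sketch: timestamped 6-DoF samples from the
# AR/VR device drive the avatar's virtual camera on the server.

@dataclass
class PoseSample:
    timestamp_ms: int  # device timestamp of this DoF sample
    position: tuple    # (x, y, z) in the shared virtual space
    rotation: tuple    # (yaw, pitch, roll) in degrees

class VirtualCamera:
    """Virtual camera attached to an avatar; follows the newest pose."""
    def __init__(self):
        self.position = (0.0, 0.0, 0.0)
        self.rotation = (0.0, 0.0, 0.0)
        self.last_timestamp_ms = -1

    def apply(self, sample: PoseSample) -> bool:
        # Discard samples that arrive out of order, so the view only
        # ever moves forward on the shared timeline.
        if sample.timestamp_ms <= self.last_timestamp_ms:
            return False
        self.position = sample.position
        self.rotation = sample.rotation
        self.last_timestamp_ms = sample.timestamp_ms
        return True

cam = VirtualCamera()
cam.apply(PoseSample(100, (0.0, 1.6, 0.0), (0.0, 0.0, 0.0)))
cam.apply(PoseSample(90, (9.0, 9.0, 9.0), (9.0, 9.0, 9.0)))   # stale sample, ignored
cam.apply(PoseSample(120, (0.2, 1.6, 0.0), (15.0, 0.0, 0.0)))
```

如此一來,即使網路傳輸造成樣本亂序,視角畫面仍沿同一時間軸前進。 This way the view advances along a single timeline even when network delivery reorders samples.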
步驟S207,將虛擬代表的虛擬攝影機的視角畫面傳輸至AR裝置;步驟S213,將虛擬化身的虛擬攝影機的視角畫面傳輸至VR裝置。 Step S207, transmitting the view angle of the virtual camera of the virtual representative to the AR device; Step S213, transmitting the view angle of the virtual camera of the virtual avatar to the VR device.
於一實施例中,該虛擬化身的虛擬攝影機的視角畫面及聲音可透過WebRTC視訊串流協議串流到VR裝置上,而該虛擬代表的虛擬攝影機視角畫面則串流虛擬化身、虛擬角色的去背影像至AR裝置,則AR裝置將這些去背影像疊合至其攝影機拍到的實景,呈現AR效果。 In one embodiment, the view and sound of the virtual avatar's virtual camera are streamed to the VR device via the WebRTC video streaming protocol, while for the virtual representative's virtual camera view, background-removed images of the virtual avatar and virtual characters are streamed to the AR device; the AR device then superimposes these background-removed images on the real scene captured by its camera to present an AR effect.
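以下以 Python 簡示 AR 裝置將去背影像逐像素以 alpha 混合疊合至實景的做法;此為示意性假設,以 tuple 串列表示像素僅為使範例自足,並非實際影像格式。 A sketch of the per-pixel alpha-over blend an AR device might apply when superimposing the streamed background-removed (RGBA) layer onto its camera frame; representing pixels as lists of tuples is an assumption made only to keep the sketch self-contained.

```python
# 示意性假設 / hypothetical sketch: alpha-composite a background-removed
# RGBA overlay onto the RGB camera frame, pixel by pixel.

def composite(background_rgb, overlay_rgba):
    """Blend one row of RGBA overlay pixels onto an RGB background row.

    background_rgb: list of (r, g, b) tuples from the device camera.
    overlay_rgba:   list of (r, g, b, a) tuples streamed from the remote
                    rendering server; a == 0 marks removed background,
                    so the real scene shows through.
    """
    out = []
    for (br, bg, bb), (orr, og, ob, a) in zip(background_rgb, overlay_rgba):
        alpha = a / 255.0
        out.append((
            round(orr * alpha + br * (1 - alpha)),
            round(og * alpha + bg * (1 - alpha)),
            round(ob * alpha + bb * (1 - alpha)),
        ))
    return out

# A fully transparent overlay pixel keeps the real scene; a fully
# opaque one replaces it with the rendered avatar.
row = composite([(10, 20, 30), (10, 20, 30)],
                [(200, 0, 0, 0), (200, 0, 0, 255)])  # [(10, 20, 30), (200, 0, 0)]
```

實際裝置通常以 GPU 著色器完成相同混合;此處僅呈現數學關係。 A real device would perform the same blend in a GPU shader; only the arithmetic is shown here.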
步驟S214,AR裝置與VR裝置的使用者在同一時間軸上進行互動,達到虛擬世界與真實世界之間的互動。 In step S214, the users of the AR device and the VR device interact on the same timeline to achieve interaction between the virtual world and the real world.
除了上述一或多個實施例之外,本案提供一種電腦程式產品,經由電腦載入程式後執行上述方法。另外,電腦程式(產品)除可儲存於記錄媒體外,亦可在網路上直接傳輸提供,電腦程式(產品)係為載有電腦可讀取之程式且不限外在形式之物。所述電腦包括但不限於具有處理器之電子裝置,例如手機或平板等。 In addition to one or more of the above embodiments, the present application provides a computer program product that, when loaded into a computer, executes the above method. The computer program (product) may be stored on a recording medium or provided directly over a network; it is any article carrying a computer-readable program, regardless of its physical form. The computer includes, but is not limited to, any electronic device having a processor, such as a mobile phone or tablet.
此外,本案還提供一種電腦可讀取記錄媒體,係應用於具有處理器及/或記憶體之計算設備或電腦中,且電腦可讀取記錄媒體儲存有指令,並可利用計算設備或電腦透過處理器及/或記憶體執行電腦可讀取記錄媒體,以於執行電腦可讀取記錄媒體時執行上述方法及/或內容。所述電腦可讀取紀錄媒體(例如硬碟、軟碟、光碟、USB隨身碟)係儲存有該電腦程式(產品)。在一實施例中,該電腦可讀取記錄媒體係非暫態(non-transitory)的電腦可讀取記錄儲存媒體。 In addition, the present application provides a computer-readable recording medium for use in a computing device or computer having a processor and/or memory. The computer-readable recording medium stores instructions that, when executed by the computing device or computer through the processor and/or memory, carry out the above method and/or content. The computer-readable recording medium (for example, a hard disk, floppy disk, optical disc, or USB flash drive) stores the computer program (product). In one embodiment, the computer-readable recording medium is a non-transitory computer-readable storage medium.
接著,根據以上本案之虛擬與真實世界場景內容同步之系統、方法、電腦程式產品、電腦可讀取記錄媒體之一或多個實施例,在此提供示範情境之具體實施例。 Next, based on one or more of the above embodiments of the system, method, computer program product, and computer-readable recording medium for synchronizing virtual and real-world scene content, a specific exemplary scenario is provided.
請參閱圖3,數台攝影機1架設在活動現場之特定區域周邊以拍攝區域內的觀眾,透過網際網路將影像以及時間戳記傳送至遠端渲染伺服器2進行解析。 Referring to FIG. 3, several cameras 1 are set up around a specific area of the event venue to capture images of the audience within the area; the images, together with timestamps, are transmitted over the Internet to the remote rendering server 2 for analysis.
遠端渲染伺服器2將接收到的影像透過影像辨識的方式辨識出觀眾的位置以及動作姿態,將分析出的動作姿態(如:舉手)套用到虛擬角色上,座標資訊則可由同一時間點的攝影機影像進行多角度的計算,計算出觀眾本體距離攝影機1的位置以及位於場域內的相對位置,最後將虛擬角色放置在虛擬三維空間的相對位置中。類似地,觀眾中有配戴AR裝置3者,對應其所生成的為虛擬代表,也放置在虛擬三維空間的相對位置中。遠端渲染伺服器2渲染出與真實世界一致的虛擬三維空間以及渲染出VR裝置4的虛擬化身,再放入該虛擬三維空間。 The remote rendering server 2 identifies the positions and postures of the audience from the received images through image recognition and applies the analyzed postures (e.g., a raised hand) to the virtual characters. Coordinate information is computed from multiple angles using camera images taken at the same point in time, yielding each spectator's distance from camera 1 and relative position within the venue; the virtual characters are then placed at the corresponding positions in the virtual three-dimensional space. Similarly, for audience members wearing an AR device 3, corresponding virtual representatives are generated and likewise placed at their relative positions in the virtual three-dimensional space. The remote rendering server 2 renders a virtual three-dimensional space consistent with the real world, renders the virtual avatar of the VR device 4, and places it in that space.
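以下以 Python 簡示多角度定位的一種簡化計算:兩台已知位置的攝影機各自回報朝向觀眾的方位角,將兩條射線交會即得觀眾在場域平面上的位置;此為示意性假設,實際系統會使用完整的攝影機校正而非此二維簡化。 A sketch of the multi-view position step under simplifying assumptions: two cameras at known ground-plane positions each report a bearing angle toward the spectator, and the two rays are intersected; real systems would use full camera calibration rather than this 2D reduction.

```python
import math

# 示意性假設 / hypothetical sketch: intersect two ground-plane bearing
# rays from cameras at known positions to estimate a spectator's
# relative position in the venue.

def locate(cam_a, bearing_a_deg, cam_b, bearing_b_deg):
    """Return the (x, y) intersection of two rays, or None if parallel.

    cam_a, cam_b: camera positions on the venue floor plane.
    bearing_*_deg: direction from each camera toward the spectator,
    measured counterclockwise from the +x axis.
    """
    ax, ay = cam_a
    bx, by = cam_b
    dax = math.cos(math.radians(bearing_a_deg))
    day = math.sin(math.radians(bearing_a_deg))
    dbx = math.cos(math.radians(bearing_b_deg))
    dby = math.sin(math.radians(bearing_b_deg))
    denom = dax * dby - day * dbx      # 2D cross product of the ray directions
    if abs(denom) < 1e-9:
        return None                    # parallel lines of sight: no unique fix
    t = ((bx - ax) * dby - (by - ay) * dbx) / denom
    return (ax + t * dax, ay + t * day)

# Two cameras 4 m apart, both sighting the same spectator:
pos = locate((0, 0), 45, (4, 0), 135)  # ≈ (2.0, 2.0)
```

由交會出的位置即可將虛擬角色或虛擬代表放入虛擬三維空間的相對位置。 The intersected position is what would place the virtual character or representative at its relative position in the virtual three-dimensional space.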
AR裝置3或VR裝置4指可內建元宇宙應用程序(APP)或透過網際網路下載元宇宙APP,真實世界活動的觀眾與正在體驗虛擬活動的觀眾,互動時可分別配戴AR裝置3和VR裝置4,元宇宙APP將AR裝置3和VR裝置4的DoF(Degrees of Freedom)數據以及控制器或手勢的資訊透過網際網路即時傳送至遠端渲染伺服器2上。 The AR device 3 and the VR device 4 each run a metaverse application (app), either built in or downloaded over the Internet. When interacting, spectators at the real-world event and spectators experiencing the virtual event wear the AR device 3 and the VR device 4, respectively; the metaverse app transmits the degrees-of-freedom (DoF) data of the AR device 3 and the VR device 4, together with controller or gesture information, to the remote rendering server 2 in real time over the Internet.
遠端渲染伺服器2將AR裝置3和VR裝置4的控制器所偵測到手勢或動作分別套用至虛擬代表和虛擬化身身上,使虛擬代表和虛擬化身能夠根據所偵測到手勢或動作做出相對應的變化,而AR裝置3和VR裝置4的DoF數據則套用至虛擬代表和虛擬化身身上的虛擬攝影機,隨著使用者移動戴在頭上的AR裝置3和VR裝置4便能即時更新虛擬攝影機的視角畫面,此時不論是AR裝置3和VR裝置4的使用者,由於它們所對應到的虛擬代表和虛擬化身皆處於同一個虛擬三維空間中,彼此身上的虛擬攝影機即可捕捉到對方的動作以及所在的場景視角,達到虛擬世界與真實世界之間的互動。 The remote rendering server 2 applies the gestures or motions detected by the controllers of the AR device 3 and the VR device 4 to the virtual representative and the virtual avatar, respectively, so that each makes the corresponding change in response. The DoF data of the AR device 3 and the VR device 4 is applied to the virtual cameras on the virtual representative and the virtual avatar, so that as a user moves the head-worn AR device 3 or VR device 4, the view of the virtual camera is updated in real time. Since the virtual representative and the virtual avatar corresponding to the users of the AR device 3 and the VR device 4 reside in the same virtual three-dimensional space, the virtual camera on each can capture the other's movements and viewpoint within the scene, achieving interaction between the virtual world and the real world.
虛擬代表和虛擬化身身上的虛擬攝影機畫面經由遠端渲染伺服器2進行編碼,再透過WebRTC通訊協議將畫面及聲音即時串流到AR裝置3和VR裝置4內的APP進行解碼,最後呈現在AR裝置3和VR裝置4內的顯示幕上,因此可確保AR裝置3和VR裝置4的使用者雙方的互動畫面皆是即時與同步的。對VR裝置4使用者而言,其可透過裝置內的顯示幕看到虛擬場景、對應觀眾的虛擬角色、對應AR裝置3的虛擬代表,也就是說,VR裝置4的使用者是以第一人稱視角。而對於AR裝置3使用者,則遠端渲染伺服器2只會串流具有虛擬化身的去背影像,也就是說,AR視角畫面可看到觀眾本身,接著,AR裝置3將串流的去背影像與相機所拍攝到的實景影像進行疊合,呈現出AR效果。 The images from the virtual cameras on the virtual representative and the virtual avatar are encoded by the remote rendering server 2, streamed with sound in real time over the WebRTC protocol to the apps in the AR device 3 and the VR device 4 for decoding, and finally presented on the displays of the AR device 3 and the VR device 4, ensuring that the interactive images seen by both users are real-time and synchronized. The user of the VR device 4 sees, on the device's display, the virtual scene, the virtual characters corresponding to the audience, and the virtual representative corresponding to the AR device 3; that is, the VR device 4 user views the scene in the first person. For the user of the AR device 3, the remote rendering server 2 streams only background-removed images containing the virtual avatar, since the real audience is already visible in the AR view; the AR device 3 then superimposes the streamed background-removed images on the real-scene images captured by its camera to present the AR effect.
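以下以 Python 簡示如何以時間戳將兩路串流的畫面配對,以確認互動畫面落在同一時間軸上;此為示意性假設,已排序的時間戳串列與貪婪配對策略皆為說明用途。 A sketch, under the assumptions of sorted timestamp lists and a tolerance window, of how frames from the two streams might be paired to keep both views on one timeline; the greedy pairing policy is an illustrative assumption.

```python
# 示意性假設 / hypothetical sketch: pair frames of the two streams by
# nearest timestamp so both views stay on one shared timeline.

def pair_frames(ar_ts, vr_ts, tolerance_ms=20):
    """Greedily pair two sorted lists of frame timestamps (ms).

    A pair is emitted when the timestamps differ by at most
    tolerance_ms; otherwise the older frame is dropped as stale.
    """
    pairs, i, j = [], 0, 0
    while i < len(ar_ts) and j < len(vr_ts):
        delta = ar_ts[i] - vr_ts[j]
        if abs(delta) <= tolerance_ms:
            pairs.append((ar_ts[i], vr_ts[j]))
            i += 1
            j += 1
        elif delta < 0:
            i += 1   # AR frame too old to ever match: drop it
        else:
            j += 1   # VR frame too old to ever match: drop it
    return pairs

matched = pair_frames([0, 33, 66, 99], [5, 40, 95])  # [(0, 5), (33, 40), (99, 95)]
```

實際系統中,WebRTC 本身亦提供以 RTP 時間戳為基礎的同步機制,上述僅為概念示意。 In a real system WebRTC itself provides RTP-timestamp-based synchronization; the above is only a conceptual illustration.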
綜上所述,本案之用於虛擬與真實世界場景內容同步之系統、方法、電腦程式產品、電腦可讀取記錄媒體具有以下特點或功效: In summary, the system, method, computer program product, and computer-readable recording medium for synchronizing virtual and real-world scene content of the present application have the following features or effects:
1.本案能將真實世界活動中的觀眾,透過攝影機拍攝以及遠端渲染伺服器解析,依據分析結果將本體與其姿態建立為虛擬代表和虛擬角色,配置於虛擬世界場景之中。 1. Spectators at a real-world event are captured by cameras and analyzed by the remote rendering server; based on the analysis results, their bodies and postures are built into virtual representatives and virtual characters, which are placed in the virtual-world scene.
2.本案能讓VR裝置的使用者,在虛擬世界中對應配置有虛擬化身,與真實世界中AR裝置的使用者藉由某虛擬代表為媒介進行互動。 2. A user of a VR device is given a corresponding virtual avatar in the virtual world and can interact with a user of an AR device in the real world through a virtual representative as the medium.
3.本案能將AR/VR裝置的DoF數據即時套用至遠端渲染伺服器上的虛擬代表及虛擬化身身上的虛擬攝影機,並將虛擬攝影機所拍攝到的互動畫面同步串流回到AR/VR裝置上呈現,達到虛擬世界與真實世界之間的互動。 3. The DoF data of the AR/VR devices is applied in real time to the virtual cameras on the virtual representative and the virtual avatar on the remote rendering server, and the interactive images captured by the virtual cameras are synchronously streamed back to the AR/VR devices for display, achieving interaction between the virtual world and the real world.
上述實施例僅例示性說明本案之功效,而非用於限制本案,任何熟習此項技藝之人士均可在不違背本案之精神及範疇下對上述該些實施態樣進行修飾與改變。因此本案之權利保護範圍,應如後述之申請專利範圍所列。 The above embodiments are merely illustrative of the effects of the present application and are not intended to limit it; anyone skilled in the art may modify and vary the above embodiments without departing from the spirit and scope of the present application. The scope of protection of the present application shall therefore be as defined by the appended claims.
S201~S214:步驟 S201~S214: Steps
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| TW112148817A TWI877952B (en) | 2023-12-14 | 2023-12-14 | System, method and computer program product for synchronizing virtual and real world scene content |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| TWI877952B (en) | 2025-03-21 |
| TW202524422A TW202524422A (en) | 2025-06-16 |