TWI669682B - Image processing system and image processing method - Google Patents
- Publication number
- TWI669682B (application TW107117977A)
- Authority
- TW
- Taiwan
- Prior art keywords
- virtual
- information
- virtual object
- real environment
- image processing
- Prior art date
Landscapes
- Processing Or Creating Images (AREA)
Abstract
An image processing system includes a camera, a positioning device, a pose estimation device, and a processor. The camera captures a real environment. The positioning device detects an imaging position of the camera. The pose estimation device detects an imaging pose of the camera. The processor infers light source information from time information and latitude information, and applies the imaging position, the imaging pose, real environment information corresponding to the real environment, the light source information, first virtual information of a first virtual object, and a ray tracing algorithm to render a reflection of the real environment on the first virtual object.
Description
The present disclosure relates to an image processing system and an image processing method, and more particularly to an image processing system and an image processing method applied to augmented reality.
In general, augmented reality is prone to situations in which a virtual object fails to blend with the real environment or does not look realistic enough. Such defects usually arise because the state of the real environment is not taken into account when rendering the virtual object. For example, when a user views an augmented reality scene, the light and shadow on a virtual object are not adjusted to follow the direction or angle of the camera.
Making virtual objects in augmented reality match the real environment more closely has therefore become one of the problems to be solved.
According to one aspect of the present disclosure, an image processing system is provided, including: a camera, a positioning device, a pose estimation device, and a processor. The camera captures a real environment. The positioning device detects an imaging position of the camera. The pose estimation device detects an imaging pose of the camera. The processor infers light source information from time information and latitude information, and applies the imaging position, the imaging pose, real environment information corresponding to the real environment, the light source information, first virtual information of a first virtual object, and a ray tracing algorithm to render a reflection of the real environment on the first virtual object.
According to another aspect of the present disclosure, an image processing method is provided, including: capturing a real environment with a camera; detecting an imaging position of the camera with a positioning device; detecting an imaging pose of the camera with a pose estimation device; inferring, with a processor, light source information from time information and latitude information; and applying, with the processor, the imaging position, the imaging pose, real environment information corresponding to the real environment, the light source information, first virtual information of a first virtual object, and a ray tracing algorithm to render a reflection of the real environment on the first virtual object.
In summary, the image processing system and image processing method of the present disclosure combine the imaging position, the imaging pose, the real environment information, the light source information, and the virtual information of the virtual objects with a ray tracing algorithm. Factors such as the position of the sun in world coordinates, the color temperature of the light source, the position and orientation of the camera, and the material and/or reflectivity of the real and virtual objects are all considered, so that reflections of the real environment are rendered on the virtual objects and the virtual objects in the augmented reality scene exhibit light and shadow closer to reality.
Please refer to FIG. 1A, a block diagram of an image processing system 100a according to an embodiment of the present disclosure. In one embodiment, the image processing system 100a includes a camera 10, a positioning device 20, a pose estimation device 30, and a processor 40. In one embodiment, the processor 40 is coupled to the camera 10, the positioning device 20, and the pose estimation device 30, respectively.
In one embodiment, the camera 10 may use a charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) image sensor. The positioning device 20 may be a Global Positioning System (GPS) receiver used to obtain the position of the camera 10. The pose estimation device 30 may be implemented with an inertial measurement unit (IMU), which can detect the orientation of the camera 10 (for example, whether it faces north or south, and its angle of elevation or depression). The processor 40 may be implemented as a microcontroller, a microprocessor, a digital signal processor, an application-specific integrated circuit (ASIC), or a logic circuit.
Please refer to FIG. 1B, a block diagram of an image processing system 100b according to another embodiment of the present disclosure. Compared with the image processing system 100a of FIG. 1A, the image processing system 100b of FIG. 1B further includes a display 50 coupled to the processor 40. The display 50 may be the display of a handheld electronic device (for example, a phone or a tablet) or the display of a head-mounted device. In one embodiment, the camera 10, the positioning device 20, the pose estimation device 30, the processor 40, and the display 50 may be integrated into a single device (such as a handheld electronic device).
Please refer to FIG. 2, a flowchart of an image processing method 200 according to an embodiment of the present disclosure. The flow of the image processing method 200 is detailed below. The elements mentioned in the image processing method 200 can be implemented by the elements described in FIG. 1A or FIG. 1B.
In step 210, the camera 10 captures a real environment.
In one embodiment, real environment information corresponding to the real environment is obtained from a precise map, and includes three-dimensional information, reflectivity, color, or material information for each real object in the real environment. For example, when the camera 10 captures an office scene, the processor 40 can obtain, from the precise map corresponding to that scene, the individual three-dimensional information, reflectivity, color, or material of real objects such as desks, chairs, and windows.
In one embodiment, the material information and reflectivity in the real environment information serve, during rendering, as references for ray tracing, to determine from which directions light arrives at the real environment and/or the virtual objects.
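The per-object record of the "precise map" described above can be sketched as a simple data structure. This is an illustrative sketch only; the field names (`mesh_vertices`, `reflectivity`, and so on) are assumptions, not terms from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class RealObjectInfo:
    """One entry of the precise map: geometry plus surface properties
    of a real object, as listed in the embodiment above."""
    name: str
    mesh_vertices: list          # 3-D information of the real object
    reflectivity: float          # 0.0 (matte) .. 1.0 (mirror-like)
    color: tuple = (1.0, 1.0, 1.0)   # RGB, each component in [0, 1]
    material: str = "unknown"

# Example: a desk in the office scene of the embodiment
desk = RealObjectInfo(name="desk", mesh_vertices=[],
                      reflectivity=0.2, color=(0.6, 0.4, 0.2),
                      material="wood")
```

During rendering, a renderer could consult `reflectivity` and `material` of each such record when deciding how rays bounce off that object.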
In step 220, the positioning device 20 detects an imaging position of the camera 10.
In one embodiment, the positioning device 20 is a GPS receiver used to obtain an imaging position of the camera 10 (such as the GPS coordinates of the camera 10).
In step 230, the pose estimation device 30 detects an imaging pose of the camera 10.
In one embodiment, the pose estimation device 30 is an inertial measurement unit that can detect the orientation of the camera 10 (for example, whether it faces north or south, and its angle of elevation or depression), from which the imaging pose of the camera 10 can be determined.
The order of steps 210~230 above can be adjusted according to the actual implementation.
In step 240, the processor 40 infers light source information from time information (such as the current time and date) and latitude information (such as the latitude at which the camera 10 is located). In one embodiment, the time information and latitude information can be obtained from the positioning device 20 or over the network.
In one embodiment, once the processor 40 obtains the time information and latitude information from the positioning device 20 or the network, it can compute the light source information or retrieve it from a lookup table. The light source information includes a light source position (such as the position of the sun in world coordinates) or color temperature information (such as the color of the light source).
In one embodiment, the processor 40 can maintain a table mapping weather conditions to color temperature, for example recording the color temperatures for weather conditions and time periods such as a clear morning, a clear evening, an overcast morning, and an overcast evening. When the processor 40 obtains the time information and latitude information, it can determine the weather at that location and look up the corresponding color temperature.
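Step 240 can be sketched as follows: a textbook solar-declination approximation gives the sun's elevation from day of year, latitude, and local solar time, and a small table maps weather and time of day to color temperature. The patent does not specify a formula or table values; the ones below are illustrative assumptions only.

```python
import math

def solar_elevation_deg(day_of_year, latitude_deg, solar_hour):
    """Approximate solar elevation angle (degrees) from the date,
    latitude, and local solar time, using a standard declination
    approximation (not a formula from the patent)."""
    # Solar declination in degrees (simple cosine approximation)
    decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    hour_angle = 15.0 * (solar_hour - 12.0)  # degrees, 0 at solar noon
    lat, dec, ha = (math.radians(x) for x in (latitude_deg, decl, hour_angle))
    sin_elev = (math.sin(lat) * math.sin(dec)
                + math.cos(lat) * math.cos(dec) * math.cos(ha))
    return math.degrees(math.asin(sin_elev))

# Hypothetical (weather, time-of-day) -> color temperature table, in Kelvin
COLOR_TEMP_K = {
    ("clear", "morning"): 4000,
    ("clear", "noon"): 5500,
    ("clear", "evening"): 3000,
    ("overcast", "morning"): 6500,
    ("overcast", "evening"): 7000,
}
```

At the equator on an equinox at solar noon, `solar_elevation_deg(80, 0.0, 12.0)` returns a value close to 90 degrees, consistent with the sun being nearly overhead.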
In step 250, the processor 40 applies the imaging position, the imaging pose, the real environment information, the light source information, virtual information of a virtual object, and a ray tracing algorithm to render a reflection of the real environment on the virtual object.
In one embodiment, the processor 40 obtains the imaging position, the imaging pose, the real environment information, the light source information, and the virtual information of the virtual object from steps 210~240, and in step 250 takes all of this information into account together with the ray tracing algorithm to render the reflection of the real environment on the virtual object. In one embodiment, because the precise map includes the colors of the real objects, reflections carrying those colors can be rendered on the virtual object, so that virtual objects in the augmented reality scene exhibit light and shadow closer to reality.
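The overall flow of steps 210~250 can be sketched as one function, with each device of FIGS. 1A/1B supplied as a callable so the control flow stays visible. All names here are illustrative stand-ins, not APIs from the patent.

```python
from dataclasses import dataclass

@dataclass
class Position:
    """Minimal stand-in for the GPS fix of positioning device 20."""
    latitude: float
    longitude: float

def render_frame(capture, locate, estimate_pose, infer_light, ray_trace, now):
    """One pass of method 200: steps 210-250 in order."""
    frame = capture()                                 # step 210: shoot real env.
    position = locate()                               # step 220: imaging position
    pose = estimate_pose()                            # step 230: imaging pose
    light = infer_light(now(), position.latitude)     # step 240: light source info
    return ray_trace(frame, position, pose, light)    # step 250: render reflections

# Wire the pipeline with trivial stubs just to show the data flow
result = render_frame(
    capture=lambda: "frame",
    locate=lambda: Position(25.0, 121.5),
    estimate_pose=lambda: {"yaw": 0.0, "pitch": -10.0},
    infer_light=lambda t, lat: {"time": t, "latitude": lat},
    ray_trace=lambda f, p, q, light: ("rendered", light["latitude"]),
    now=lambda: "2018-05-25T12:00",
)
```

In a real implementation the stubs would be replaced by the camera 10, positioning device 20, pose estimation device 30, and a full ray tracer running on processor 40.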
The application of the ray tracing algorithm is described below.
Please refer to FIG. 3, a schematic diagram of applying the ray tracing algorithm according to an embodiment of the present disclosure. In one embodiment, the virtual information of a virtual object OBJ1 is preset information, including a virtual position of the virtual object OBJ1 and a reflectivity of the virtual object OBJ1.
In FIG. 3, the processor 40 applies the ray tracing algorithm to derive the brightness and color that each pixel of the display 50a (for example, at positions P1 and P2) should present. In this example, the eye position P0 can be replaced by the position of the camera 10. The ray tracing algorithm casts a ray, as seen from the eye position P0, through each pixel position on the display 50a (for example, positions P1 and P2), and computes the reflection, refraction, and/or shadowing effects as the ray strikes the real environment EN and the virtual object OBJ1. Based on the reflection and refraction paths of the rays through the space, each pixel on the display 50a corresponds to one piece of light information.
For example, the processor 40 simulates a ray cast from position P1 that strikes the real environment EN (for example, a mirror 60); a reflected ray from the real environment EN then strikes the virtual object OBJ1, which produces another reflected ray toward the light source SC1. From this path, the processor 40 can derive the brightness and color temperature that position P1 should display.
As another example, the processor 40 simulates a ray cast from position P2 that strikes the ground shadow SD2 of the virtual object OBJ1; reflected rays from the ground shadow SD2 then reach the virtual object OBJ1 and the light source SC2. From this path, the processor 40 can derive the brightness and color temperature that position P2 should display.
The space depicted in FIG. 3 actually contains many more reflected, refracted, diffused, or shadowed rays (for example, for the shadows SD1 and SD2); for ease of explanation, only the rays involved in the examples above are drawn.
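The two geometric primitives at the heart of the per-pixel tracing just described, mirror reflection of a ray and ray-sphere intersection, can be sketched as follows. This is a generic textbook formulation, not code from the patent; vectors are plain 3-tuples to keep it self-contained.

```python
import math

def reflect(d, n):
    """Mirror-reflect direction d about unit normal n: r = d - 2(d.n)n."""
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2.0 * dot * ni for di, ni in zip(d, n))

def hit_sphere(origin, direction, center, radius):
    """Nearest positive ray parameter t where origin + t*direction meets
    the sphere, or None if the ray misses (quadratic discriminant test)."""
    oc = tuple(o - c for o, c in zip(origin, center))
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    return t if t > 1e-6 else None
```

A tracer in the spirit of FIG. 3 would cast a ray from P0 through each pixel, call `hit_sphere` (or an equivalent test per object) against the real environment EN and the virtual object OBJ1, and follow `reflect`ed rays until a light source such as SC1 is reached, accumulating brightness and color temperature along the path.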
Next, the following describes examples in which the processor 40 applies the imaging position, the imaging pose, the real environment information, the light source information, the virtual information of the virtual objects, and the ray tracing algorithm to render a reflection of the real environment on a virtual object, and to render a reflection of one virtual object on another virtual object.
Please refer to FIGS. 4~6, schematic diagrams of applying the image processing method according to an embodiment of the present disclosure. Note that the arrow directions in FIG. 4 indicate the computation direction of the ray tracing algorithm, which is opposite to the actual direction of illumination from the light source SC'. In FIG. 4, the virtual object OBJa is a virtual polished metal ball suspended above a floor FL. The light source SC' illuminates the real environment EN' and the virtual object OBJa, and the real environment EN' in turn reflects light onto the virtual object OBJa, so that a reflection of the real environment EN' appears on the virtual object OBJa; the light on the virtual object OBJa is then reflected toward the camera 10. On this basis, applying the ray tracing algorithm, the processor 40 can use the imaging position (for example, where the camera 10 is placed), the imaging pose (for example, the orientation of the camera 10), the real environment information of EN', the light source information of SC', and the virtual information of OBJa to determine the influence of the light source SC' and the real environment EN' on the virtual object OBJa, and thereby render the reflection of the real environment EN' on the virtual object OBJa.
FIG. 5 follows a principle similar to FIG. 4. In this example, the virtual object OBJa is a virtual polished metal ball placed on the floor FL. The light source SC' illuminates the walls WL1~WL4 (the real environment) and the virtual object OBJa, and the walls WL1~WL4 in turn reflect light onto the virtual object OBJa; the light on the virtual object OBJa is then reflected toward the camera 10. On this basis, applying the ray tracing algorithm, the processor 40 can use the imaging position (for example, where the camera 10 is placed), the imaging pose (for example, the orientation of the camera 10), the real environment information of the walls WL1~WL4, the light source information of SC', and the virtual information of OBJa to determine the influence of the light source SC' and the walls WL1~WL4 on the virtual object OBJa, thereby rendering the reflections of the walls WL1~WL4 on the virtual object OBJa along with the virtual shadow SDa of the virtual object OBJa.
In one embodiment, once the reflection of the real environment (for example, the walls WL1~WL4) has been rendered on a virtual object (for example, the virtual object OBJa), the virtual object is called a rendered object, and the display 50 of FIG. 1B presents the real environment and the rendered object together.
FIG. 6 follows a principle similar to FIG. 5. In this example, the virtual objects OBJa and OBJb are both virtual polished metal balls placed on the floor FL. The light source SC' illuminates the walls WL1~WL4 (the real environment) and the virtual objects OBJa and OBJb, and the walls WL1~WL4 in turn reflect light onto the virtual objects OBJa and OBJb. In one embodiment, light reflected onto one of the virtual objects OBJa and OBJb can be reflected again onto the other, so that a reflection of the virtual object OBJb appears on the virtual object OBJa, or a reflection of the virtual object OBJa appears on the virtual object OBJb.
In other words, for the virtual object OBJa, the processor 40 can use the imaging position (for example, where the camera 10 is placed), the imaging pose (for example, the orientation of the camera 10), the real environment information of the walls WL1~WL4, the light source information of SC', the virtual information of each of OBJa and OBJb, and the ray tracing algorithm to determine the influence of the light source SC', the walls WL1~WL4, and the virtual object OBJb on the virtual object OBJa, and thereby render the reflections of the walls WL1~WL4 and the virtual object OBJb on the virtual object OBJa, along with the virtual shadow SDa of the virtual object OBJa.
Conversely, for the virtual object OBJb, the processor 40 can use the imaging position (for example, where the camera 10 is placed), the imaging pose (for example, the orientation of the camera 10), the real environment information of the walls WL1~WL4, the light source information of SC', the virtual information of each of OBJa and OBJb, and the ray tracing algorithm to determine the influence of the light source SC', the walls WL1~WL4, and the virtual object OBJa on the virtual object OBJb, and thereby render the reflections of the walls WL1~WL4 and the virtual object OBJa on the virtual object OBJb, along with the virtual shadow SDb of the virtual object OBJb.
In one embodiment, when a reflection of the real environment (for example, the walls WL1~WL4) is rendered on a virtual object (for example, the virtual object OBJa), the virtual object is called a rendered object. When a reflection of that virtual object is rendered on another virtual object (for example, the virtual object OBJb), the other virtual object is also a rendered object. The display 50 of FIG. 1B can present the real environment and the rendered objects together, so that multiple virtual objects in the augmented reality scene can all exhibit light and shadow closer to reality.
In summary, the image processing system and image processing method of the present disclosure combine the imaging position, the imaging pose, the real environment information, the light source information, and the virtual information of the virtual objects with a ray tracing algorithm. Factors such as the position of the sun in world coordinates, the color temperature of the light source, the position and orientation of the camera, and the material and/or reflectivity of the real and virtual objects are all considered, so that reflections of the real environment are rendered on the virtual objects and the virtual objects in the augmented reality scene exhibit light and shadow closer to reality.
Although the present disclosure has been described above through the embodiments, they are not intended to limit it. Anyone skilled in the art may make various changes and refinements without departing from the spirit and scope of the present disclosure; the scope of protection is therefore defined by the appended claims.
100a, 100b‧‧‧image processing system
10‧‧‧camera
20‧‧‧positioning device
30‧‧‧pose estimation device
40‧‧‧processor
200‧‧‧image processing method
210~250‧‧‧steps
OBJ1, OBJa, OBJb‧‧‧virtual objects
P0‧‧‧eye position
P1, P2‧‧‧positions
50, 50a‧‧‧display
60‧‧‧mirror
EN, EN'‧‧‧real environment
SC1, SC2, SC'‧‧‧light sources
SD1, SD2, SDa, SDb‧‧‧shadows
FL‧‧‧floor
WL1~WL4‧‧‧walls
To make the above and other objects, features, advantages, and embodiments of the present disclosure more comprehensible, the accompanying drawings are described as follows: FIG. 1A is a block diagram of an image processing system according to an embodiment of the present disclosure; FIG. 1B is a block diagram of an image processing system according to another embodiment of the present disclosure; FIG. 2 is a flowchart of an image processing method according to an embodiment of the present disclosure; FIG. 3 is a schematic diagram of applying a ray tracing algorithm according to an embodiment of the present disclosure; FIG. 4 is a schematic diagram of applying the image processing method according to an embodiment of the present disclosure; FIG. 5 is a schematic diagram of applying the image processing method according to an embodiment of the present disclosure; and FIG. 6 is a schematic diagram of applying the image processing method according to an embodiment of the present disclosure.
Claims (14)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| TW107117977A TWI669682B (en) | 2018-05-25 | 2018-05-25 | Image processing system and image processing method |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| TWI669682B true TWI669682B (en) | 2019-08-21 |
| TW202004675A TW202004675A (en) | 2020-01-16 |
Family
ID=68316322
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| TW107117977A TWI669682B (en) | 2018-05-25 | 2018-05-25 | Image processing system and image processing method |
Country Status (1)
| Country | Link |
|---|---|
| TW (1) | TWI669682B (en) |
Citations (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN102419631A (en) * | 2010-10-15 | 2012-04-18 | 微软公司 | Fusing virtual content into real content |
| CN103472909A (en) * | 2012-04-10 | 2013-12-25 | 微软公司 | Realistic occlusion for a head mounted augmented reality display |
| US20140368535A1 (en) * | 2013-06-18 | 2014-12-18 | Tom G. Salter | Hybrid world/body locked hud on an hmd |
| US8933965B2 (en) * | 2006-07-27 | 2015-01-13 | Canon Kabushiki Kaisha | Method for calculating light source information and generating images combining real and virtual images |
| TW201626046A (en) * | 2014-10-15 | 2016-07-16 | 精工愛普生股份有限公司 | Head-mounted display device, method of controlling head-mounted display device, and computer program |
| TW201633104A (en) * | 2015-03-06 | 2016-09-16 | 新力電腦娛樂股份有限公司 | Tracking system for head mounted display |
| CN207096572U (en) * | 2017-06-16 | 2018-03-13 | 全球能源互联网研究院 | A kind of augmented reality intelligent helmet based on Binocular displays function |
| CN108010118A (en) * | 2017-11-28 | 2018-05-08 | 网易(杭州)网络有限公司 | Virtual objects processing method, virtual objects processing unit, medium and computing device |
- 2018-05-25: TW application TW107117977A filed; patent TWI669682B (status: active)
Also Published As
| Publication number | Publication date |
|---|---|
| TW202004675A (en) | 2020-01-16 |