TWI843315B - Method of panoramic image fusion in augmented reality and computer program product implementing the same - Google Patents
Description
The present disclosure relates to augmented reality technology and, more particularly, to a method of fusing panoramic images in augmented reality and a computer program product implementing the method.

Augmented reality (AR) presents information, images, objects, audio, and video in a virtual form within the real environment through electronic products such as mobile devices, head-mounted displays, or head-up displays, achieving an effect in which the virtual and the real are integrated and presented simultaneously.

As the computing power of mobile electronic devices improves, augmented reality is used ever more widely. Beyond games, it is applied in exhibitions, route guidance, and online shopping guidance; for example, virtual route-guidance arrows can be displayed over the real exhibition-hall environment on a device's screen to guide or recommend a visiting route.

However, the prior art cannot fuse an image that is hidden behind a plane onto that plane, nor update the fused image dynamically with the position of the user's mobile device and the changing viewing angle of the camera on that device, so that the user can properly see a see-through view of the corresponding image from different positions and angles.

How to overcome these shortcomings, and/or to develop further augmented reality application scenarios, is therefore an issue the industry urgently needs to address.
To solve the above and other problems, the present disclosure provides a method of fusing panoramic images in augmented reality and a computer program product that executes the method.

The disclosed method of fusing panoramic images in augmented reality includes the following steps: locating the position of a mobile device in a pre-established spatial map according to a real-time image captured by the mobile device, and, according to that position, searching a plurality of panoramic images pre-acquired on the basis of the spatial map for the panoramic image corresponding to the position of the mobile device; extracting an overlay image from the panoramic image according to the position of the mobile device, the spatial position of the panoramic image, and the azimuth and/or viewing-angle information of the mobile device; and identifying a plane according to the position of the mobile device, the spatial position of the panoramic image, and the azimuth and/or viewing-angle information of the mobile device, so as to determine the overlay range of the overlay image on the plane and to overlay the overlay image on the plane according to that range.
In one embodiment, the step of extracting an overlay image from the panoramic image according to the position of the mobile device, the spatial position of the panoramic image, and the azimuth and/or viewing-angle information of the mobile device includes: determining the size of the overlay image according to the distance between the position of the mobile device and the spatial position of the panoramic image, the distance between the spatial position of the panoramic image and a given point in the panoramic image, and the distance between the position of the mobile device and that given point.

In one embodiment, the step of extracting an overlay image from the panoramic image according to the position of the mobile device, the spatial position of the panoramic image, and the azimuth and/or viewing-angle information of the mobile device includes: determining the horizontal image range and vertical image range on the panoramic image according to the position of the mobile device, the spatial position of the panoramic image, and the horizontal and vertical viewing-angle ranges of the mobile device, and extracting the overlay image from the panoramic image according to those ranges.
In one embodiment, the step of identifying a plane according to the position of the mobile device and the spatial position of the panoramic image, so as to determine the overlay range of the overlay image on the plane, includes: connecting the position of the mobile device and the spatial position of the panoramic image with a straight line and identifying the plane between the two positions along the direction of that line; and connecting the position of the mobile device with the endpoints of the horizontal image range and the endpoints of the vertical image range on the panoramic image to determine the horizontal and vertical overlay ranges on the plane.

In one embodiment, the step of overlaying the overlay image on the plane according to the overlay range includes: filtering out the portion of the plane corresponding to the overlay range, so that the overlay image is overlaid on that portion of the plane. In another embodiment, the step includes: determining the transparency of the portion of the plane corresponding to the overlay range according to the distance between the position of the mobile device and the plane, the transparency being inversely proportional to the distance. In yet another embodiment, the step includes: rendering the overlay image transparent and overlaying the transparent overlay image on the plane.

In one embodiment, the real-time image of the mobile device carries the azimuth and/or viewing-angle information of the mobile device. In another embodiment, each of the plurality of panoramic images carries its own spatial-coordinate, azimuth, and/or depth information with respect to the spatial map.
The computer program product disclosed herein, once loaded into a computer, executes the above method of fusing panoramic images in augmented reality.

With the disclosed method and the computer program product executing it, an image that would otherwise be hidden behind a plane can be fused onto that plane and updated dynamically with the position of the user's mobile device and the changing viewing angle of the camera on that device, so that the user can properly see a see-through view of the corresponding image from different positions and angles.
S201~S208: Steps
P1: position of the mobile device
P2: position of the plane
P2-1: position of a point on the plane
P2-2: position of another point on the plane
P3: spatial position of the panoramic image
P4: intersection point
P5: a given point in the panoramic image
E1, E2: horizontal-range endpoints on the panoramic image
E3, E4: vertical-range endpoints on the panoramic image
R1: horizontal image range
R2: vertical image range
d1: depth of field
d2: distance
d3: depth of field
d4: distance
d5: distance
θ1: direction angle
FIG. 1 is a flowchart of the method of fusing panoramic images in augmented reality of the present disclosure.

FIG. 2 is a schematic diagram of extracting the overlay image from the panoramic image in the method of the present disclosure.

FIG. 3 is a schematic diagram of extracting the overlay image from the panoramic image in the method of the present disclosure.

FIG. 4 is a schematic diagram of determining the size of the overlay image in the method of the present disclosure.

FIG. 5 is a schematic diagram of determining the size of the overlay image in the method of the present disclosure.

FIG. 6 is a schematic diagram of determining the transparency of the portion of the plane corresponding to the overlay range in the method of the present disclosure.
The following specific embodiments illustrate how the present disclosure may be implemented; those skilled in the art can readily understand its further advantages and effects from the content disclosed herein. The structures, ratios, and sizes shown in the accompanying drawings serve only to support the disclosure for the understanding and reading of those skilled in the art and do not limit the conditions under which the disclosure may be practiced; any modification, change, or adjustment that does not affect the effects the disclosure can produce or the purposes it can achieve still falls within the scope of the technical content disclosed herein.

As used herein, the terms "include", "comprise", "have", "contain", and any variants thereof are intended to cover non-exclusive inclusion. For example, a method consisting of a series of steps is not necessarily limited to those steps and may include other steps not explicitly listed. In addition, unless otherwise specified, singular forms such as "a", "an", and "the" also cover the plural, and "or" and "and/or" may be used interchangeably.
Referring to FIG. 1, a flowchart of the method of fusing panoramic images in augmented reality of the present disclosure, the method mainly includes steps S201 to S208.

In step S201, a spatial map is established. In one embodiment, the map coordinate system of a space is pre-established by capturing multiple feature points of the space. The process then proceeds to step S202.

In step S202, a plurality of panoramic images based on the spatial map are acquired. For example, a panoramic image is captured at one spatial position in the coordinate system of the spatial map, and this is repeated to acquire further panoramic images based on the map. Each panoramic image carries reference-point coordinates in the spatial map for later search and positioning, and each carries azimuth and/or depth information, or information from which azimuth and/or depth can be derived. The process then proceeds to step S203.

In step S203, the position of the mobile device is located in the spatial map according to the real-time image captured by the mobile device. Specifically, the real-time image carries the azimuth and/or viewing-angle information of the camera on the mobile device, and the spatial coordinates of the device in the spatial map are obtained during localization. The process then proceeds to step S204.
In step S204, the panoramic image corresponding to the position of the mobile device is searched for among the plurality of panoramic images according to that position, where the spatial position of the corresponding panoramic image must lie within the viewing-angle range of the camera on the mobile device. In addition, if the located position of the mobile device (i.e., its spatial coordinates) is too far from the spatial position of the panoramic image (i.e., the reference-point coordinates of that image), namely farther than a preset value, that panoramic image is not displayed. The process then proceeds to step S205.
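As an illustration only, and not part of the disclosed embodiments, the selection rule of step S204 can be sketched as follows, assuming 2-D map coordinates and a hypothetical `select_panorama` helper; the preset distance value and the camera's half field of view are parameters:

```python
import math

def select_panorama(device_pos, device_yaw_deg, half_fov_deg, panoramas, max_dist):
    """Pick the panorama whose capture point lies inside the camera's
    horizontal field of view and within max_dist of the device.
    `panoramas` is a list of dicts with a 'pos' (x, y) entry in map coordinates."""
    best, best_d = None, max_dist
    for p in panoramas:
        dx, dy = p["pos"][0] - device_pos[0], p["pos"][1] - device_pos[1]
        d = math.hypot(dx, dy)
        if d == 0 or d > best_d:
            continue  # farther than the preset value: the panorama is not shown
        bearing = math.degrees(math.atan2(dy, dx))
        diff = (bearing - device_yaw_deg + 180) % 360 - 180  # signed angle to view axis
        if abs(diff) <= half_fov_deg:  # capture point inside the camera's view
            best, best_d = p, d
    return best
```

Among the candidates that pass both tests, the sketch keeps the nearest one; the disclosure itself does not specify a tie-breaking rule, so that choice is an assumption.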
In step S205, the overlay image is extracted from the panoramic image according to the position of the mobile device, the spatial position of the panoramic image, and the azimuth and/or viewing-angle information of the mobile device. In one embodiment, the horizontal image range and vertical image range on the panoramic image are first determined according to the position of the mobile device, the spatial position of the panoramic image, and the horizontal and vertical viewing-angle ranges of the mobile device, and the overlay image is then extracted from the panoramic image according to those ranges. In another embodiment, the size of the overlay image is determined according to the distance between the position of the mobile device and the spatial position of the panoramic image, the distance between the spatial position of the panoramic image and a given point in the panoramic image, and the distance between the position of the mobile device and that given point, so that distant objects appear small and near objects large. The process then proceeds to step S206.
In step S206, the plane is identified according to the position of the mobile device and the spatial position of the panoramic image. In one embodiment, the position of the mobile device and the spatial position of the panoramic image are connected with a straight line, and the plane is identified between the two positions along the direction of that line. The process then proceeds to step S207.

In step S207, the overlay range of the overlay image on the plane is determined according to the position of the mobile device, the spatial position of the panoramic image, and the azimuth and/or viewing-angle information of the mobile device. In one embodiment, the position of the mobile device is connected with the endpoints of the horizontal image range and the endpoints of the vertical image range on the panoramic image to determine the horizontal and vertical overlay ranges on the plane. In another embodiment, the overlay range on the plane may be circular, or any shape containing the horizontal and vertical overlay ranges. The process then proceeds to step S208.

In step S208, the overlay image is overlaid on the plane according to the overlay range. In one embodiment, the portion of the plane corresponding to the overlay range is filtered out, so that the overlay image is presented opaquely on the plane. In another embodiment, the transparency of the portion of the plane corresponding to the overlay range is determined according to the distance between the position of the mobile device and the plane, the transparency being inversely proportional to the distance. In yet another embodiment, the overlay image is rendered transparent, and the transparent overlay image is overlaid on the plane.
As the embodiment of FIG. 1 shows, the present disclosure adds a see-through effect to the augmented reality environment. Because it uses panoramic images captured at actual positions and adjusts the see-through range with the user's position and viewing angle, it achieves a realistic effect, and it can make effective use of blank planar surfaces in augmented reality, for example as advertising space.

In one embodiment, the method of the present disclosure may be executed on a single device or a set of devices, such as a server, a computer, or other equipment with data-processing, computing, storage, and network-communication capabilities, where the server, computer, or equipment includes a central processing unit, a hard disk, memory, and so on.

In addition, the computer program product of the present disclosure executes the method after the program is loaded into a computer. Besides being stored on a recording medium, the computer program (product) may also be delivered directly over a network; it is anything carrying a computer-readable program, regardless of its outward form.

The present disclosure further provides a computer-readable recording medium for use in a computing device or computer having a processor and/or memory. The recording medium stores instructions, and the computing device or computer executes them through the processor and/or memory so that the above method and/or content is carried out. The computer-readable recording medium (for example, a hard disk, floppy disk, optical disc, or USB flash drive) stores the computer program (product).

In one embodiment, steps S201 to S208 may be executed on a cloud computer that includes a computer processor and a computer program (product) (i.e., a non-transitory computer-readable storage medium) executed by that processor to perform the steps. In step S203, the client mobile device may use an application (e.g., an app) loaded on an electronic product (such as a mobile device, head-mounted display, or head-up display) to transmit its captured real-time image to the cloud computer, which localizes the client device; at or before step S206, the cloud computer may transmit the spatial position of the panoramic image to the client device, which uses it to identify the plane.

Referring next to FIG. 2 through FIG. 6, which illustrate specific embodiments of the method: FIG. 2 and FIG. 3 show extracting the overlay image from the panoramic image, FIG. 4 and FIG. 5 show determining the size of the overlay image, and FIG. 6 shows determining the transparency of the portion of the plane corresponding to the overlay range.
As shown in FIG. 2 and FIG. 3, the position P1 of the mobile device and the spatial position P3 of the panoramic image are connected with a straight line, which is extended to intersect the panoramic image at intersection point P4. From P4, along the horizontal plane, a horizontal image range R1 of about 120 degrees in total is taken clockwise and counterclockwise (i.e., a horizontal range matching human binocular vision suffices); from P4, along the vertical plane, a vertical image range R2 of about 55 degrees in total is taken upward and downward (i.e., a vertical range matching human binocular vision suffices). This yields the horizontal-range endpoints E1 and E2 and the vertical-range endpoints E3 and E4 on the panoramic image.
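For illustration only, and assuming the panoramic image is stored in equirectangular format (a common format, but an assumption not stated in the text), the pixel ranges corresponding to R1 and R2 around the intersection point P4 could be computed with a hypothetical helper such as:

```python
def crop_ranges(width, height, yaw_deg, pitch_deg, h_fov_deg=120.0, v_fov_deg=55.0):
    """Pixel ranges on an equirectangular panorama (width x height) for a window
    centred on the intersection point P4 at (yaw_deg, pitch_deg).
    Returns ((x1, x2), (y1, y2)); x1 > x2 means the crop wraps around 360 degrees."""
    px_per_deg_x = width / 360.0
    px_per_deg_y = height / 180.0
    cx = (yaw_deg % 360.0) * px_per_deg_x
    cy = (90.0 - pitch_deg) * px_per_deg_y      # pitch +90 maps to the top row
    x1 = (cx - h_fov_deg / 2 * px_per_deg_x) % width
    x2 = (cx + h_fov_deg / 2 * px_per_deg_x) % width
    y1 = max(0.0, cy - v_fov_deg / 2 * px_per_deg_y)
    y2 = min(float(height), cy + v_fov_deg / 2 * px_per_deg_y)
    return (x1, x2), (y1, y2)
```

A horizontal result with x1 > x2 simply means the 120-degree window wraps across the 0/360-degree seam of the panorama and must be read as two strips.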
In addition, as shown in FIG. 2 and FIG. 3, with the straight line connecting the position P1 of the mobile device and the spatial position P3 of the panoramic image, a plane can be identified along the direction of that line by means of a camera that provides depth and software that recognizes planes in space (e.g., ARKit/ARCore); the position of the plane is P2. Connecting P1 with the horizontal-range endpoints E1 and E2 and with the vertical-range endpoints E3 and E4 on the panoramic image produces four intersection points on the plane, which delimit the overlay range on that plane; the overlay range may be of any planar shape, as long as it contains the four intersection points. In one embodiment, the computation of the overlay range may be performed by the cloud computer or the client device, for example on the cloud computer.
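The four intersection points on the plane can be obtained by standard ray-plane intersection. The sketch below is illustrative only; `overlay_corner` is a hypothetical helper, and the plane is assumed to be reported as a point plus normal, as plane-detection frameworks such as ARKit/ARCore can provide:

```python
import numpy as np

def overlay_corner(p1, endpoint, plane_point, plane_normal):
    """Intersect the ray from the device position p1 through one range endpoint
    (E1..E4 on the panorama) with the identified plane, giving one corner of
    the overlay range. Returns None when no forward intersection exists."""
    p1, endpoint = np.asarray(p1, float), np.asarray(endpoint, float)
    n, q = np.asarray(plane_normal, float), np.asarray(plane_point, float)
    d = endpoint - p1                      # ray direction P1 -> endpoint
    denom = n.dot(d)
    if abs(denom) < 1e-9:
        return None                        # ray parallel to the plane
    t = n.dot(q - p1) / denom
    if t < 0:
        return None                        # plane lies behind the device
    return p1 + t * d
```

Calling this once per endpoint E1 through E4 yields the four corners that delimit the overlay range on the plane.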
As shown in FIG. 4, the distance from the position P1 of the mobile device to the position P2 of the plane is the depth of field d1, which can be obtained from the spatial-recognition software (e.g., ARKit/ARCore); the distance between P1 and the spatial position P3 of the panoramic image is d2, which can be computed because both positions use the same positioning coordinate system. In one embodiment, the computation in FIG. 4 may be performed by the cloud computer or the client device, for example on the cloud computer.

In addition, the distance from the spatial position P3 of the panoramic image to a given point P5 in the panoramic image is the depth of field d3, which can be obtained from the panoramic image. The straight line from the position P1 of the mobile device to P3 intersects the panoramic image at intersection point P4; the distance between P3 and P4 is d4, and between P4 and the given point P5 there is a direction angle θ1. By trigonometry, as shown in FIG. 5, the distance d5 from the position P1 of the mobile device to the given point P5 in the panoramic image can be derived as follows:
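The equation itself appears to have been lost from this text (it was likely rendered as an image in the original). One reconstruction consistent with the surrounding description, treating θ1 as the angular offset of P5 from the intersection point P4 as seen from the panorama centre P3, so that the angle ∠P1P3P5 equals 180° − θ1, is the law of cosines:

```latex
d_5 = \sqrt{d_2^{2} + d_3^{2} + 2\,d_2\,d_3\cos\theta_1},
\qquad \text{since } \cos\!\bigl(180^\circ - \theta_1\bigr) = -\cos\theta_1 .
```

This reading is an assumption and should be checked against FIG. 5 of the original document.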
Given the distance d5 from the position P1 of the mobile device to the given point P5 in the panoramic image, the overlay image to be overlaid on the plane can be scaled appropriately, so that distant objects appear small and near objects large. This avoids the problem of the panoramic image looking the same size from every viewpoint, increasing the realism of the see-through effect. In one embodiment, the scaling of the overlay image may be performed by the cloud computer or the client device, for example on the cloud computer.
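As a sketch only, assuming θ1 is the angular offset of P5 from P4 as seen from P3, the distance d5 and a simple inverse-distance scale factor can be computed as below. The scaling policy itself is not specified in the text, so the reference-distance form of `overlay_scale` is an assumption:

```python
import math

def point_distance(d2, d3, theta1_deg):
    """Distance d5 from the device position P1 to a panorama point P5,
    via the law of cosines on triangle P1-P3-P5 with the interior angle
    at P3 equal to 180 degrees minus theta1."""
    theta = math.radians(theta1_deg)
    return math.sqrt(d2 * d2 + d3 * d3 + 2.0 * d2 * d3 * math.cos(theta))

def overlay_scale(d5, reference_distance):
    """Inverse-distance scale: nearer content is rendered larger."""
    return reference_distance / d5
```

Sanity check: with θ1 = 0, P5 lies straight beyond P3, so d5 reduces to d2 + d3; with θ1 = 180 degrees it reduces to |d3 − d2|.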
The see-through effect can be realized in several ways: the overlay image can be blended onto the plane with transparency; part of the real image of the plane can be filtered out so that the overlay image behind the plane shows through directly, like a video wall; or, more precisely, the differing depth-of-field values from the camera to the identified plane can serve as the transparency parameter. As shown in FIG. 6, the parts of the overlay image closer to the camera of the mobile device are given higher transparency and the farther parts lower transparency. This encourages the user, wanting to see the fully transparent augmented reality effect, to walk toward the plane until the depth of field from the camera of the mobile device to the plane is uniform; and the closer the user gets, the more the overlaid panoramic content is enlarged and the more clearly it can be seen. The transparency of the overlay image can be set by the following formula:

transparency = smaller depth of field / larger depth of field = d1-1 / d1-2.

For example, the distance from the position P1 of the mobile device to a point P2-1 on the plane is d1-1, and the distance from P1 to another point P2-2 on the plane is d1-2. If d1-1 is 1 meter and d1-2 is 2 meters, the image transparency at d1-1 is 100%, while that at d1-2 is only 50%. In one embodiment, the computation in FIG. 6 may be performed by the cloud computer or the client device, for example on the cloud computer.
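The stated rule (transparency equals the smaller depth of field divided by the larger) can be applied per point on the plane; the following is a minimal sketch with a hypothetical `transparency` helper:

```python
def transparency(depths):
    """Per-point transparency on the plane: the nearest depth divided by
    each point's own depth, so the closest point is fully transparent (1.0)
    and farther points fade toward opacity."""
    nearest = min(depths)
    return [nearest / d for d in depths]
```

With depths of 1 m and 2 m this reproduces the example above: 100% transparency at the nearer point and 50% at the farther one.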
In summary, the method of fusing panoramic images in augmented reality and the computer program product executing it can be applied to guided tours, shopping guidance, or merchant advertising: during guidance, a see-through-wall effect enriches the content of augmented reality (AR) navigation, increasing its convenience and practicality and producing effects the prior art could not achieve. For example, during an indoor tour, a see-through view of merchandise can attract users into a shop; in a department-store space lined with merchants, a mixed reality (MR) transparent-showcase effect can be provided for the stores.

The above embodiments merely illustrate the effects of the present disclosure and are not intended to limit it; anyone skilled in the art may modify and vary them without departing from the spirit and scope of the disclosure. The scope of protection should therefore be as listed in the claims below.
S201~S208: Steps
Claims (10)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| TW111146604A TWI843315B (en) | 2022-12-05 | 2022-12-05 | Method of panoramic image fusion in augmented reality and computer program product implementing the same |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| TW111146604A TWI843315B (en) | 2022-12-05 | 2022-12-05 | Method of panoramic image fusion in augmented reality and computer program product implementing the same |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| TWI843315B true TWI843315B (en) | 2024-05-21 |
| TW202424901A TW202424901A (en) | 2024-06-16 |
Family
ID=92077109
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| TW111146604A TWI843315B (en) | 2022-12-05 | 2022-12-05 | Method of panoramic image fusion in augmented reality and computer program product implementing the same |
Country Status (1)
| Country | Link |
|---|---|
| TW (1) | TWI843315B (en) |
Citations (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| TWI450132B (en) * | 2008-06-03 | 2014-08-21 | Shimane Prefectural Government | A portrait recognition device, an operation judgment method, and a computer program |
| TW201508547A (en) * | 2013-08-22 | 2015-03-01 | Chunghwa Telecom Co Ltd | Interactive reality system and method featuring the combination of on-site reality and virtual component |
| CN104487916A (en) * | 2012-07-26 | 2015-04-01 | 高通股份有限公司 | Interactions of tangible and augmented reality objects |
| TWI675583B (en) * | 2018-07-23 | 2019-10-21 | 緯創資通股份有限公司 | Augmented reality system and color compensation method thereof |
| TWI697317B (en) * | 2019-08-30 | 2020-07-01 | 國立中央大學 | Digital image reality alignment kit and method applied to mixed reality system for surgical navigation |
| TWI700671B (en) * | 2019-03-06 | 2020-08-01 | 廣達電腦股份有限公司 | Electronic device and method for adjusting size of three-dimensional object in augmented reality |
| TW202215370A (en) * | 2020-08-14 | 2022-04-16 | 美商海思智財控股有限公司 | Systems and methods for superimposing virtual image on real-time image |
| TW202242805A (en) * | 2021-04-22 | 2022-11-01 | 政威資訊顧問有限公司 | Positioning method and server end for presenting facility objects based on augmented reality view wherein the locations of the presented facility objects can be accurately overlaid with the images of the facility objects in the augmented reality view |
- 2022-12-05 TW TW111146604A patent/TWI843315B/en active
Patent Citations (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| TWI450132B (en) * | 2008-06-03 | 2014-08-21 | Shimane Prefectural Government | A portrait recognition device, an operation judgment method, and a computer program |
| CN104487916A (en) * | 2012-07-26 | 2015-04-01 | 高通股份有限公司 | Interactions of tangible and augmented reality objects |
| CN104487916B (en) | 2012-07-26 | 2017-09-19 | 高通股份有限公司 | Physical objects are interacted with augmented reality object |
| TW201508547A (en) * | 2013-08-22 | 2015-03-01 | Chunghwa Telecom Co Ltd | Interactive reality system and method featuring the combination of on-site reality and virtual component |
| TWI675583B (en) * | 2018-07-23 | 2019-10-21 | 緯創資通股份有限公司 | Augmented reality system and color compensation method thereof |
| TWI700671B (en) * | 2019-03-06 | 2020-08-01 | 廣達電腦股份有限公司 | Electronic device and method for adjusting size of three-dimensional object in augmented reality |
| TWI697317B (en) * | 2019-08-30 | 2020-07-01 | 國立中央大學 | Digital image reality alignment kit and method applied to mixed reality system for surgical navigation |
| TW202215370A (en) * | 2020-08-14 | 2022-04-16 | 美商海思智財控股有限公司 | Systems and methods for superimposing virtual image on real-time image |
| TW202242805A (en) * | 2021-04-22 | 2022-11-01 | 政威資訊顧問有限公司 | Positioning method and server end for presenting facility objects based on augmented reality view wherein the locations of the presented facility objects can be accurately overlaid with the images of the facility objects in the augmented reality view |
Also Published As
| Publication number | Publication date |
|---|---|
| TW202424901A (en) | 2024-06-16 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN101911128B (en) | Method and system for providing three-dimensional web map service using augmented reality | |
| US10755485B2 (en) | Augmented reality product preview | |
| US10089794B2 (en) | System and method for defining an augmented reality view in a specific location | |
| CN105046752B (en) | Method for describing virtual information in the view of true environment | |
| JP6050518B2 (en) | How to represent virtual information in the real environment | |
| US9224237B2 (en) | Simulating three-dimensional views using planes of content | |
| CN104995666B (en) | A method for representing virtual information in a real environment | |
| WO2019242262A1 (en) | Augmented reality-based remote guidance method and device, terminal, and storage medium | |
| TWI410608B (en) | Use the point of interest information to display the system and method of the smartphone lens image | |
| CN111833458B (en) | Image display method and device, equipment and computer readable storage medium | |
| CN110478901A (en) | Exchange method and system based on augmented reality equipment | |
| CN105659295A (en) | Method for representing a point of interest in a view of a real environment on a mobile device and a mobile device for the method | |
| CN106304842A (en) | For location and the augmented reality system and method for map building | |
| CN109978753B (en) | Method and device for drawing panoramic heat map | |
| JP2015001875A (en) | Image processing apparatus, image processing method, program, print medium, and set of print medium | |
| WO2023124693A1 (en) | Augmented reality scene display | |
| US20180350103A1 (en) | Methods, devices, and systems for determining field of view and producing augmented reality | |
| KR20150106879A (en) | Method and apparatus for adding annotations to a plenoptic light field | |
| JP2014115957A (en) | Augmented reality building simulation device | |
| KR20180120456A (en) | Apparatus for providing virtual reality contents based on panoramic image and method for the same | |
| WO2023124698A1 (en) | Display of augmented reality scene | |
| CN116858215B (en) | AR navigation map generation method and device | |
| JP2018010599A (en) | Information processor, panoramic image display method, panoramic image display program | |
| TWI843315B (en) | Method of panoramic image fusion in augmented reality and computer program product implementing the same | |
| CN115797602A (en) | Method and device for adding AR explanation based on object positioning |