
TWI889317B - 3d around view monitoring system - Google Patents



Publication number
TWI889317B
Authority
TW
Taiwan
Prior art keywords
cloud data
point
coordinate
point cloud
computer device
Prior art date
Application number
TW113115985A
Other languages
Chinese (zh)
Other versions
TW202543291A (en)
Inventor
高立人
邱瓏峻
鄭達源
羅宇翔
Original Assignee
國立臺北科技大學
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 國立臺北科技大學 filed Critical 國立臺北科技大學
Priority to TW113115985A priority Critical patent/TWI889317B/en
Application granted granted Critical
Publication of TWI889317B publication Critical patent/TWI889317B/en
Publication of TW202543291A publication Critical patent/TW202543291A/en

Landscapes

  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

A three-dimensional around view monitoring system includes multiple camera modules, multiple optical radars (lidars), and a computer device. Using pre-stored coordinate conversion matrices, the computer device converts a first coordinate of each pixel of the still image captured by each camera module at a given time point into a second coordinate of the point cloud data scanned by the corresponding optical radar at the same time point, and sets the color of the second coordinate equal to the color of the pixel at the first coordinate, thereby generating corresponding color point cloud data. The computer device then stitches the color point cloud data of the optical radars at the same time point to produce stitched color point cloud data that serves as a three-dimensional surround image.

Description

Three-Dimensional Around View Monitoring System

The present invention relates to an around view monitoring system, and more particularly to a three-dimensional around view monitoring system capable of providing accurate distance information.

Most existing vehicle around view monitoring (AVM) systems use four cameras installed at the front, rear, left, and right of the vehicle to capture the surroundings and generate four images. A processor or computing chip then stitches the four images together to obtain a 360-degree surround image. With this conventional approach, the visibility and recognizability of captured objects degrade under insufficient ambient light, and the four images are prone to ghosting or distortion at the seams. Whether another vehicle AVM system can overcome these drawbacks of the prior art, or even provide additional image information, therefore remains a problem to be solved.

Accordingly, an object of the present invention is to provide a three-dimensional around view monitoring system capable of providing accurate distance information.

The present invention therefore provides a three-dimensional around view monitoring system that is applicable to a vehicle and includes a plurality of camera modules, a plurality of optical radars (lidars), and a computer device. The camera modules are mounted on the vehicle and photograph the vehicle's surroundings to respectively generate a plurality of dynamic images, each of which includes a plurality of static images at different time points. The optical radars are mounted on the vehicle and scan the vehicle's surroundings to generate a plurality of point cloud data at different time points.

The computer device is mounted on the vehicle, is electrically connected to the camera modules and the optical radars, and stores a plurality of coordinate conversion matrices respectively corresponding to the optical radars. Each coordinate conversion matrix converts any coordinate point in the static image of the corresponding camera module into a coordinate point in the point cloud data of the corresponding optical radar.

The computer device converts a first coordinate of each pixel of the static image of each camera module at a given time point, via the corresponding coordinate conversion matrix, into a second coordinate of the point cloud data of the corresponding optical radar at the same time point, and sets the color of the second coordinate equal to the color of the pixel at the first coordinate, thereby generating corresponding color point cloud data. The computer device then stitches the color point cloud data of the optical radars at the same time point to generate stitched color point cloud data that serves as a three-dimensional surround image.

In some embodiments, in response to a setting instruction, the computer device additionally displays in the three-dimensional surround image at least one distance between at least one point selected by the setting instruction and the vehicle, so that a user learns the distance between the vehicle and at least one specific object corresponding to the selected point.

In other embodiments, the computer device performs a calibration procedure in advance for each camera module and optical radar pair whose coordinates are to be converted. In the calibration procedure, the camera module and the optical radar respectively photograph and scan a feature pattern, and the computer device matches features of the pattern in the static image against features of the pattern in the corresponding point cloud data to obtain the corresponding coordinate conversion matrix.

In some embodiments, the feature pattern includes a plurality of checkerboard squares, and the computer device matches the corners of the checkerboard squares in the static image against those in the point cloud data. The coordinate conversion matrix includes a rotation matrix and a translation vector: the first coordinate in the static image is rotated by the rotation matrix and then translated by the translation vector to the second coordinate in the corresponding point cloud data.
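The rotate-then-translate conversion described above can be sketched in a few lines. This is an illustration only; the function name and the example rotation and translation values are not from the patent, and the coordinates are treated as 3D points for demonstration.

```python
import numpy as np

def convert_coordinate(p_first, R, t):
    """Apply the coordinate conversion described above: rotate the
    first coordinate by the rotation matrix R, then translate it by
    the translation vector t to obtain the second coordinate."""
    return R @ np.asarray(p_first, dtype=float) + t

# Illustrative values (not from the patent): a 90-degree rotation
# about the z-axis followed by a 1-unit shift along x.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([1.0, 0.0, 0.0])
p2 = convert_coordinate([1.0, 0.0, 0.0], R, t)  # rotated then shifted
```

In a deployed system one such (R, t) pair would be stored per camera-lidar pair, obtained from the calibration procedure described below.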

The effect of the present invention is that, by using each coordinate conversion matrix to convert coordinates between the corresponding camera module and optical radar, the color of each pixel of the static images of the dynamic images captured by the camera modules is displayed at the correct position in the point cloud data scanned by the optical radars, thereby producing a three-dimensional surround image that provides accurate distance information.

Before the present invention is described in detail, it should be noted that in the following description, similar elements are denoted by the same reference numerals.

Referring to FIG. 1, an embodiment of the three-dimensional around view monitoring system of the present invention is applicable to a vehicle 9 and includes four camera modules 31-34, four optical radars 21-24, and a computer device 1.

The four camera modules 31-34 are respectively installed at the front, rear, left, and right of the vehicle 9 and use wide-angle lenses to photograph the four directions around the vehicle 9, thereby generating four dynamic images. Each dynamic image includes multiple static images at different time points. In other words, stitching these four dynamic images would yield a conventional prior-art surround image.

The four optical radars 21-24 are respectively installed at the front, rear, left, and right of the vehicle 9 (around its periphery), for example above the four camera modules 31-34, and respectively scan the four directions around the vehicle 9 to generate multiple point cloud data at different time points; that is, each of the optical radars 21-24 produces multiple point cloud data at different time points. Each point cloud data carries a time tag indicating the time point to which it corresponds.
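A time-tagged scan and the grouping of scans that share one time tag can be sketched as follows. The type and function names are hypothetical; the patent only specifies that each point cloud carries a time tag.

```python
from collections import defaultdict
from dataclasses import dataclass
import numpy as np

@dataclass
class PointCloudFrame:
    """One lidar scan tagged with the time point it was captured at,
    mirroring the time tag carried by each point cloud data above."""
    timestamp: float   # the time tag of this scan
    lidar_id: int      # which lidar produced this scan
    points: np.ndarray # (N, 3) array of x, y, z coordinates

def group_by_time(frames):
    """Collect the scans from all lidars that share one time tag,
    so the scans of a single instant can later be fused together."""
    groups = defaultdict(list)
    for f in frames:
        groups[f.timestamp].append(f)
    return dict(groups)

frames = [PointCloudFrame(0.1, 21, np.zeros((5, 3))),
          PointCloudFrame(0.1, 22, np.zeros((7, 3))),
          PointCloudFrame(0.2, 21, np.zeros((5, 3)))]
by_time = group_by_time(frames)
```

Grouping by the time tag is what lets the later stitching step operate only on scans captured "at the same time point".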

The computer device 1 is, for example, an onboard vehicle computer or another computing device, and includes a processing unit (such as a processor), an input unit, and a display unit; the input unit and the display unit are, for example, integrated as a touch screen. The computer device 1 is installed on the vehicle 9 and is electrically connected to the camera modules 31-34 and the optical radars 21-24 to receive the four dynamic images and the point cloud data, and it stores four coordinate conversion matrices respectively corresponding to the four optical radars 21-24.

More specifically, for each corresponding pair of camera module and optical radar (for example 31 with 21, 32 with 22, 33 with 23, and 34 with 24), the computer device 1 performs in advance a calibration procedure for coordinate conversion between the two coordinate systems. In the calibration procedure, the camera module (e.g., 31) and the optical radar (e.g., 21) respectively photograph and scan a feature pattern that includes a plurality of checkerboard squares, for example alternating black and white rectangular squares, although the pattern is not limited thereto. The computer device 1 matches the features of the pattern (i.e., the corners of the checkerboard squares) in the static image of the camera module (e.g., 31) against those in the point cloud data of the optical radar (e.g., 21) to obtain a corresponding coordinate conversion matrix. This coordinate conversion matrix converts any coordinate point in the static image of the camera module (e.g., 31) into a coordinate point in the point cloud data of the optical radar (e.g., 21).

The coordinate conversion matrix between the static image of each camera module and the point cloud data of the corresponding optical radar includes a rotation matrix R and a translation vector t. The Efficient Perspective-n-Point (EPnP) algorithm converts the 2D-3D correspondence problem into a 3D-3D matching problem, after which the rotation matrix R and the translation vector t are obtained with the Iterative Closest Point (ICP) algorithm.

Solving with the EPnP algorithm requires knowing n 3D reference points in the world coordinate system (i.e., the coordinate system of the optical radar), the corresponding reference points in the 2D image (i.e., the static image), and the intrinsic parameters K of the camera (i.e., the camera module). First, four virtual control points c_w are chosen from the 3D point cloud p_w in the world coordinate system: the first is the centroid of the point cloud, and the other three are typically determined by principal component analysis (PCA), which identifies the three principal axes of the point cloud distribution and thereby yields the remaining three control points.
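The control-point selection just described (centroid plus PCA axes) can be sketched with numpy. The function name and the choice to offset each axis by its standard deviation are illustrative assumptions; the patent only specifies centroid plus PCA.

```python
import numpy as np

def epnp_control_points(p_w):
    """Choose the four virtual control points described above: the
    first is the centroid of the point cloud, and the other three
    are placed along the principal axes found by PCA."""
    p_w = np.asarray(p_w, dtype=float)
    c0 = p_w.mean(axis=0)                   # centroid of the cloud
    centered = p_w - c0
    cov = centered.T @ centered / len(p_w)  # covariance matrix for PCA
    eigvals, eigvecs = np.linalg.eigh(cov)  # axes of the distribution
    ctrl = [c0]
    for i in range(3):
        # Offset the centroid along each principal axis by its spread
        # (one standard deviation along that axis, by assumption).
        ctrl.append(c0 + np.sqrt(max(eigvals[i], 0.0)) * eigvecs[:, i])
    return np.array(ctrl)                   # shape (4, 3)

rng = np.random.default_rng(0)
cloud = rng.normal(size=(100, 3))
ctrl = epnp_control_points(cloud)
```

Choosing control points that span the cloud's principal axes keeps the subsequent linear system well conditioned, which is the usual motivation given for this PCA-based choice.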

From the relationship between the 3D point cloud and the virtual control points c_w, the homogeneous barycentric coordinates α (a weight vector) are computed. Since the camera projection model, the barycentric coordinates α, the camera intrinsics K, and the 2D point coordinates are all known, the control points c_c in the camera coordinate system can be solved by singular value decomposition (SVD). The 3D points p_c in the camera coordinate system are then obtained from the control points c_c and the corresponding barycentric coordinates α.
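The barycentric step above solves, for each point p, the weights α with Σα_i = 1 such that p = Σα_i c_i. A minimal sketch (function name assumed):

```python
import numpy as np

def barycentric_weights(p, ctrl):
    """Solve for the homogeneous barycentric weights alpha such that
    p == sum_i alpha[i] * ctrl[i] and sum(alpha) == 1, as in the
    EPnP step described above."""
    ctrl = np.asarray(ctrl, dtype=float)
    # Stack the three coordinate equations with the sum-to-one row.
    A = np.vstack([ctrl.T, np.ones(len(ctrl))])
    b = np.append(np.asarray(p, dtype=float), 1.0)
    alpha, *_ = np.linalg.lstsq(A, b, rcond=None)
    return alpha

# Four control points: the origin plus the three unit axes.
ctrl = np.array([[0.0, 0.0, 0.0],
                 [1.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0],
                 [0.0, 0.0, 1.0]])
alpha = barycentric_weights([0.25, 0.25, 0.25], ctrl)
```

Because the same weights α are preserved under rigid motion, expressing every point through the four control points is what reduces the pose problem to recovering only the control points in the camera frame.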

Finally, the ICP algorithm for 3D-3D point cloud matching yields a rotation matrix R' and a translation vector t'. Combining the 2D-3D conversion stage (EPnP) with the 3D-3D matching stage (ICP) produces the conversion matrix required for the final 2D-3D correspondence, comprising the rotation matrix R and the translation vector t.
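The core of the 3D-3D matching stage, recovering R' and t' from matched point pairs, has the closed-form SVD (Kabsch) solution that ICP applies at each iteration. The sketch below shows that single step; full ICP would re-match nearest points and repeat. Function name and test values are illustrative.

```python
import numpy as np

def rigid_align(src, dst):
    """Given matched 3D point pairs, recover the rotation R' and
    translation t' mapping src onto dst (the per-iteration update
    used by ICP; point re-matching is omitted here)."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)           # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

# Verify on a known rigid motion.
rng = np.random.default_rng(1)
src = rng.normal(size=(30, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.5, -1.0, 2.0])
dst = src @ R_true.T + t_true
R_est, t_est = rigid_align(src, dst)
```

With exact correspondences, as in the calibration setup where checkerboard corners are matched, a single such step already recovers the motion; ICP's iteration matters only when the pairing itself must be estimated.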

The computer device 1 converts a first coordinate of each pixel of the static image of each camera module 31-34 at a given time point, via the corresponding coordinate conversion matrix, into a second coordinate of the point cloud data of the corresponding optical radar 21-24 at the same time point, and sets the color (e.g., RGB or other color values) of the second coordinate equal to the color (e.g., RGB or other color values) of the pixel at the first coordinate, thereby generating corresponding color point cloud data.

The computer device 1 stitches the color point cloud data of the optical radars 21-24 captured at the same time point to generate stitched color point cloud data. In other words, the computer device 1 converts the point cloud data, which originally carried no color information at the various time points, into color point cloud data with color information, and then displays the stitched color point cloud data on the display unit in order of their time tags, thereby presenting a three-dimensional surround image that provides accurate distance information.
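The stitch-then-order-by-time-tag flow can be sketched as below. Plain concatenation is a minimal assumption; handling of overlapping points between adjacent lidars is omitted, and the function name is hypothetical.

```python
import numpy as np

def stitched_frames(colored_by_time):
    """Splice the colored clouds that share one time tag into a
    single surround cloud, and return the frames sorted by time tag
    so they can be shown in sequence on the display unit."""
    return [(ts, np.vstack(clouds))
            for ts, clouds in sorted(colored_by_time.items())]

# Two time points; each cloud row is x, y, z, r, g, b.
by_time = {0.2: [np.ones((2, 6)), np.ones((3, 6))],
           0.1: [np.zeros((4, 6))]}
frames = stitched_frames(by_time)
```

Iterating over the returned frames in order and rendering each stitched cloud reproduces the sequential display by time tag described above.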

For example, in response to a setting instruction entered by the user through the touch screen (such as "show the nearest distance to objects behind"), the computer device 1 additionally displays in the three-dimensional surround image the shortest distance between the vehicle 9 and an object behind it (i.e., one point of the stitched color point cloud data at the displayed time point). As another example, in response to a different setting instruction (such as "show the nearest distances to objects on the left and right"), the computer device 1 additionally displays the two shortest distances between the vehicle 9 and, respectively, an object on its left and an object on its right.
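A nearest-distance query over the stitched cloud can be sketched as below. Measuring from a single reference origin is a simplification; a real system might measure from the vehicle's body outline. The function name is hypothetical.

```python
import numpy as np

def nearest_distance(cloud_xyz, origin=(0.0, 0.0, 0.0)):
    """Distance from the vehicle (taken as a single origin point, by
    assumption) to the closest point of the given cloud region, as
    displayed when the user requests it, plus that point's index."""
    pts = np.asarray(cloud_xyz, dtype=float)
    d = np.linalg.norm(pts - np.asarray(origin, dtype=float), axis=1)
    i = int(np.argmin(d))
    return d[i], i

# Two candidate points behind the vehicle (illustrative values).
dist, idx = nearest_distance([[3.0, 4.0, 0.0], [6.0, 8.0, 0.0]])
```

Filtering the cloud to the region behind, left of, or right of the vehicle before calling this function would implement the per-direction queries of the examples above.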

It should be noted that in this embodiment the four optical radars 21-24 are all solid-state lidars with wide-angle scanning ranges. In other embodiments, the optical radars may instead be a single mechanical lidar installed on top of the vehicle 9 or at a high point of the body, or a single mechanical lidar on top of the vehicle 9 combined with two solid-state lidars installed on its left and right sides. Moreover, any other number of optical radars may be used, and the number need not match the number of camera modules 31-34; the deployment principle is simply that the optical radars and camera modules together capture the environment on every side of the vehicle 9 (i.e., the dynamic images and the point cloud data).

Furthermore, in other embodiments, when the checkerboard features of the pattern are incomplete during the calibration procedure, for example when low hardware resolution leaves some or all of the checkerboard corners incomplete, the corresponding camera module and optical radar can be treated as a single assembly whose relative position remains fixed; by sweeping this assembly while photographing and scanning, the complete features of the pattern can be obtained through pre-calibration.

In summary, by using each coordinate conversion matrix to convert coordinates between the corresponding camera module and optical radar, the color of each pixel of the static images of the dynamic images captured by the camera modules is displayed at the correct position in the point cloud data scanned by the optical radars. By displaying the stitched color point cloud data together with the selected distance information, a three-dimensional around view monitoring system that provides accurate distance information is realized, and the object of the present invention is thus achieved.

The foregoing, however, is merely an embodiment of the present invention and shall not limit the scope of its implementation; all simple equivalent changes and modifications made according to the claims and the specification of the present invention remain within the scope covered by this patent.

1: computer device; 21-24: optical radar; 31-34: camera module; 9: vehicle

Other features and effects of the present invention will become clear in the embodiments described with reference to the drawings, in which: FIG. 1 is a block diagram illustrating an embodiment of the three-dimensional around view monitoring system of the present invention.


Claims (2)

1. A three-dimensional around view monitoring system applicable to a vehicle, comprising: a plurality of camera modules installed on the vehicle, photographing the surroundings of the vehicle to respectively generate a plurality of dynamic images, each dynamic image including a plurality of static images at different time points; a plurality of optical radars installed on the vehicle, scanning the surroundings of the vehicle to generate a plurality of point cloud data at different time points; and a computer device installed on the vehicle, electrically connected to the camera modules and the optical radars, and storing a plurality of coordinate conversion matrices respectively corresponding to the optical radars, each coordinate conversion matrix converting any coordinate point in the static image of the corresponding camera module into a coordinate point in the point cloud data of the corresponding optical radar, wherein the computer device converts a first coordinate of each pixel of the static image of each camera module at a given time point, via the corresponding coordinate conversion matrix, into a second coordinate of the point cloud data of the corresponding optical radar at the same time point, and sets the color of the second coordinate equal to the color of the pixel at the first coordinate, thereby generating corresponding color point cloud data, and the computer device stitches the color point cloud data of the optical radars at the same time point to generate stitched color point cloud data serving as a three-dimensional surround image, wherein the computer device performs in advance a calibration procedure for coordinate conversion between the camera module and the optical radar, in which the camera module and the optical radar respectively photograph and scan a feature pattern, and the computer device matches features of the pattern in the static image against features of the pattern in the corresponding point cloud data to obtain the corresponding coordinate conversion matrix, wherein the feature pattern includes a plurality of checkerboard squares, the computer device matches the corners of the checkerboard squares in the static image against those in the point cloud data, the coordinate conversion matrix includes a rotation matrix and a translation vector, and the first coordinate in the static image is rotated by the rotation matrix and then translated by the translation vector to the second coordinate in the corresponding point cloud data, and wherein the coordinate conversion matrix between the static image of each camera module and the point cloud data of the corresponding optical radar includes the rotation matrix R and the translation vector t, and the computer device converts the 2D-3D correspondence problem into a 3D-3D matching problem with the Efficient Perspective-n-Point (EPnP) algorithm so as to obtain the rotation matrix R and the translation vector t with the Iterative Closest Point (ICP) algorithm.

2. The three-dimensional around view monitoring system of claim 1, wherein, in response to a setting instruction, the computer device additionally displays in the three-dimensional surround image at least one distance between at least one point selected by the setting instruction and the vehicle, so that a user learns the distance between the vehicle and at least one specific object corresponding to the selected point.
TW113115985A 2024-04-29 2024-04-29 3d around view monitoring system TWI889317B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW113115985A TWI889317B (en) 2024-04-29 2024-04-29 3d around view monitoring system


Publications (2)

Publication Number Publication Date
TWI889317B true TWI889317B (en) 2025-07-01
TW202543291A TW202543291A (en) 2025-11-01

Family

ID=97227871

Family Applications (1)

Application Number Title Priority Date Filing Date
TW113115985A TWI889317B (en) 2024-04-29 2024-04-29 3d around view monitoring system

Country Status (1)

Country Link
TW (1) TWI889317B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104268935A (en) * 2014-09-18 2015-01-07 华南理工大学 Feature-based airborne laser point cloud and image data fusion system and method
CN105818763A (en) * 2016-03-09 2016-08-03 乐卡汽车智能科技(北京)有限公司 Method, device and system for confirming distance of object around vehicle
CN106097348A (en) * 2016-06-13 2016-11-09 大连理工大学 A Fusion Method of 3D Laser Point Cloud and 2D Image
CN110235026A (en) * 2017-01-26 2019-09-13 御眼视觉技术有限公司 Vehicle Navigation Based on Aligned Image and Lidar Information
US20190311546A1 (en) * 2018-04-09 2019-10-10 drive.ai Inc. Method for rendering 2d and 3d data within a 3d virtual environment
CN111238494A (en) * 2018-11-29 2020-06-05 财团法人工业技术研究院 Carrier, carrier positioning system and carrier positioning method


Also Published As

Publication number Publication date
TW202543291A (en) 2025-11-01
