
TWI382267B - Auto depth field capturing system and method thereof - Google Patents

Auto depth field capturing system and method thereof

Info

Publication number
TWI382267B
Authority
TW
Taiwan
Prior art keywords
depth
images
camera
field capture
image
Prior art date
Application number
TW97136687A
Other languages
Chinese (zh)
Other versions
TW201013292A (en)
Inventor
Liang Gee Chen
Wan Yu Chen
Yu Lin Chang
Chao Chung Cheng
Original Assignee
Univ Nat Taiwan
Priority date
Filing date
Publication date
Application filed by Univ Nat Taiwan filed Critical Univ Nat Taiwan
Priority to TW97136687A priority Critical patent/TWI382267B/en
Publication of TW201013292A publication Critical patent/TW201013292A/en
Application granted granted Critical
Publication of TWI382267B publication Critical patent/TWI382267B/en


Landscapes

  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Studio Devices (AREA)

Description

Automatic depth of field capture system and automatic depth of field capture method

The present invention relates to an image capture system, and more particularly to an automatic depth of field capture system and an automatic depth of field capture method for a camera.

Television has evolved since the early twentieth century from black-and-white through color to today's digital broadcasts, a continuing challenge to the limits of human vision and a showcase of technical refinement. In the twenty-first century, display technology keeps advancing beyond ever more vivid and detailed pictures toward greater realism, in the hope of bringing about yet another revolution in visual experience.

Conventional display media have been predominantly two-dimensional: cathode-ray-tube televisions, computer monitors, and today's LCD and plasma screens all present flat pictures. To human viewers, however, binocular vision is more realistic and natural. Under this trend, stereoscopic content will become a necessity for digital photography in the new era. A stereoscopic image presents different viewpoints to the viewer's two eyes; on a dual-view three-dimensional television system it delivers a genuine sense of depth, which is especially valuable when presenting concerts, movies, sports, and sightseeing footage. To create this stereoscopic effect, knowing the "depth" of the subject is the most important requirement.

According to the prior art, taking a stereoscopic photograph generally requires two lenses and two pieces of film. In the digital era, two lenses with two image sensors can be used instead; once the camera positions are calibrated to suitable locations, a stereoscopic photograph can be taken. To address these issues, U.S. Patent No. 6,959,253 discloses a calibration method for a machine-vision measurement system that uses more than one camera. In addition, U.S. Patent No. 6,781,618 discloses a method for constructing a three-dimensional scene model: a first camera captures a scene with unknown features to obtain a first image, and a second camera captures another scene with known features to obtain a corresponding second image. The first and second cameras have a fixed physical relationship to each other, and the three-dimensional model is derived by analyzing the corresponding orientations of the two cameras together with that fixed relationship.

Referring to FIG. 1, which shows a block diagram of the circuit structure of the stereoscopic image capturing device of U.S. Patent No. 6,977,674. As shown in FIG. 1, a CCD with an RGB on-chip color filter 14 is used as the imaging device 11; that is, the color filters applied to the apertures 22R, 22G, and 22B correspond to the color filters used by the imaging device 11. The image signal detected by the imaging device (CCD) 11 is supplied to the image processing unit 30, where it is converted from an analog signal into a digital signal and subjected to predetermined signal processing. Both the image capturing operation of the stereoscopic image capturing device and the recording of image data to the recording medium M are controlled through the operation switch group 34. Although only one camera lens is used to capture stereoscopic images, that lens must be designed with a specific and complex shape and realized with many constrained components.

The devices described above rely on several calibration lenses to capture stereoscopic images, which makes them larger and more complex. Image calibration is therefore regarded as a way to reconstruct three-dimensional images. One three-dimensional reconstruction method comprises the following steps: temporarily storing a front-view image of a scene, combining photogrammetric images with an industrial drawing of the scene to form registered front-view and perspective images, and then reconstructing the three-dimensional image from the COP images. However, the photogrammetric images must be taken in advance by several cameras.

U.S. Patent No. 6,724,930 discloses a three-dimensional position and orientation sensing device. Referring to FIG. 2, which shows a block diagram of that device. As shown in FIG. 2, a plurality of markers 2 with unique geometric features (referred to as code markers) are placed on or near the object whose three-dimensional position and orientation are to be estimated. The code markers 2 are photographed by the image capturing device 3, and the captured image 5 is transmitted to the computer 4. In principle, the object 1 and the image capturing device 3 each have their own coordinate system, and the image 5 captured by the image capturing device 3 defines the camera image plane; the image 5, however, must be further processed by the computer. After the computer 4 receives the image 5, it locates the candidate regions in the image that correspond to the code markers 2, analyzes those candidate regions in detail, and computes the geometric features of the codes associated with them. Once a code is recognized, the computer registers its position in the image by treating that region as a marker region. Finally, using the two-dimensional image positions of the code markers 2 registered in the image and the three-dimensional positions of the code markers 2 on the object 1, the computer 4 calculates the three-dimensional position and orientation of the object 1 relative to the image capturing device 3. This computer analysis, however, involves a large number of operations.

In practice, the prior art is difficult to implement because three-dimensional depth capture requires more than one camera, many complicated calibration lenses, or extensive computer processing. There is therefore a need for a system and method that obtains the depth of an object purely through digital signal processing: a disparity-to-depth conversion module is supplied with a disparity vector and the camera's extrinsic parameters to recover the depth. This simplifies the overall structure and workflow, allows the depth of an object to be obtained automatically without modifying the camera itself, makes it easier for users to take stereoscopic pictures, and remedies the shortcomings of the prior art described above.

This section summarizes certain features of the present invention; other exemplary embodiments that embody its features and advantages are described in detail in the following description. It should be understood that the invention can be varied in many respects without departing from its scope, and that the description and drawings are illustrative in nature and not intended to limit the invention.

As noted above, the prior art is limited by the problems described. One object of the present invention is to provide an automatic depth of field capture system for a camera that obtains the depth of an object through digital signal processing: a disparity-to-depth conversion module receives a disparity vector and the camera's extrinsic parameters and derives the depth from them. This simplifies the overall structure and workflow, allows the depth of an object to be obtained automatically without modifying the camera itself, makes it easier for users to take stereoscopic pictures, and remedies the shortcomings of the prior art.

According to one aspect of the present invention, an automatic depth of field capture system for a camera comprises: a camera lens for capturing a plurality of images; a camera calibration device including an epipolar estimation module for estimating a plurality of epipolar data of the images and a camera extrinsic parameter estimation module for estimating position data; at least one frame buffer for temporarily storing the images; an image rectification module for rectifying the images according to the epipolar data and obtaining a plurality of rectified images; a disparity estimation module, connected to the camera calibration device and the image rectification module, for receiving the position data and obtaining a disparity vector of the rectified images; a disparity-to-depth conversion module for obtaining a depth in response to the disparity vector and the position data; and a depth image rendering module for rendering a three-dimensional image corresponding to the depth.

According to the present invention, the plurality of images may be captured by the camera at different positions.

According to the present invention, the plurality of epipolar data are a plurality of epipolar lines and epipoles.

According to the present invention, the epipolar estimation module further generates, by means of a tracking algorithm, a matrix corresponding to the plurality of epipolar lines and epipoles.

According to the present invention, the matrix comprises a relative movement vector and a relative direction vector between at least two of the plurality of images.

According to the present invention, the position data are the three-dimensional position and angle of the camera.

Another object of the present invention is to provide an automatic depth of field capture method that obtains the depth of an object through digital signal processing: a disparity-to-depth conversion module receives a disparity vector and the camera's extrinsic parameters and derives the depth from them. This simplifies the overall structure and workflow, allows the depth of an object to be obtained automatically without modifying the camera itself, makes it easier for users to take stereoscopic pictures, and remedies the shortcomings of the prior art.

According to this aspect of the present invention, an automatic depth of field capture method for a camera comprises the following steps: a) capturing a plurality of images; b) estimating a plurality of epipolar data of the images to obtain a matrix; c) estimating position data in response to the epipolar data and the matrix; d) rectifying the images according to the epipolar data to obtain a plurality of rectified images; e) processing the position data to obtain a disparity vector of the rectified images; f) obtaining a depth in response to the disparity vector and the position data; and g) rendering a three-dimensional image corresponding to the depth.

According to the present invention, step a) is performed via the camera lens.

According to the present invention, the plurality of images are captured by the camera at different positions.

According to the present invention, step b) is performed via an epipolar estimation module.

According to the present invention, the plurality of epipolar data are the epipolar lines and epipoles of the plurality of images.

According to the present invention, the fundamental matrix corresponding to the epipolar lines and epipoles is obtained by means of a tracking algorithm.

According to the present invention, the fundamental matrix comprises a relative movement vector and a relative direction vector between at least two of the plurality of images.

According to the present invention, step c) is performed via a camera extrinsic parameter estimation module.

According to the present invention, the position data are the three-dimensional position and angle of the camera.

According to the present invention, step d) is performed via an image rectification module.

According to the present invention, step e) is performed via a disparity estimation module.

According to the present invention, step f) is performed via a disparity-to-depth conversion module.

According to the present invention, step g) is performed via a depth image rendering module.

According to the present invention, the automatic depth of field capture method further comprises step b1): providing at least one frame buffer for temporarily storing the plurality of images.

Those skilled in the art will gain a clearer understanding of the above aspects and advantages of the present invention from the following drawings and embodiments.

The present invention discloses a system and method for obtaining the depth of an object by means of digital signal processing. Those skilled in the art will gain a clearer understanding of its aspects and advantages from the following drawings and embodiments. The embodiments described in this section illustrate the invention but do not limit it.

Referring to FIG. 3, which shows an automatic depth of field capture system for a camera according to the present invention. As shown in FIG. 3, the capture system of the present invention comprises: a camera lens 401 for capturing a plurality of images; a camera calibration device 41 including an epipolar estimation module 411 for estimating a plurality of epipolar data of the images and a camera extrinsic parameter estimation module 412 for estimating position data; at least one frame buffer 413 for temporarily storing the images; an image rectification module 42 for rectifying the images according to the epipolar data and obtaining a plurality of rectified images; a disparity estimation module 43, connected to the camera calibration device 41 and the image rectification module 42, for receiving the position data and obtaining a disparity vector of the rectified images; a disparity-to-depth conversion module 44 for obtaining a depth in response to the disparity vector and the position data; and a depth image rendering module 45 for rendering a three-dimensional image corresponding to the depth.

在實際應用時,複數個影像為相機在不同方位下所各別拍攝。在相機鏡頭401拍攝至少兩個影像之後,影像將暫時儲存到圖框緩衝器413。儲存於影像緩衝器413之影像會再提供給相機校準裝置41之外極估計模組411。在此實 施例中,複數個外極資料為複數個外極線及外極點。外極估計模組411會產生一矩陣,諸如下述方程式中由旋轉矩陣R 及平移矩陣t 所構成之矩陣 其回應於複數個外極線及外極點,並以追蹤演算法的方式來取得。該矩陣在至少兩個複數個影像之間包含一相對移動向量及一相對方向向量。舉例來說,基本矩陣可當成矩陣使用並描述為:F~K2 -1 EK1 -1 ,其中K1 及K2 皆為由不同角度所拍攝之兩個影像的相機參數。為了即時運算考量,得以反向先以全域差別估計取得外極線的方向,再以外極線去反推出基本矩陣。如此即可增加效率。所算出的矩陣將會被傳送至相機外部參數估計模組412。In practical applications, multiple images are taken separately by the camera in different orientations. After the camera lens 401 captures at least two images, the images are temporarily stored in the frame buffer 413. The image stored in the image buffer 413 is again supplied to the external calibration module 411 of the camera calibration device 41. In this embodiment, the plurality of outer pole data are a plurality of outer pole lines and outer poles. The outer pole estimation module 411 generates a matrix, such as a matrix composed of a rotation matrix R and a translation matrix t in the following equations. It responds to a plurality of outer and outer poles and is obtained by means of a tracking algorithm. The matrix includes a relative motion vector and a relative direction vector between at least two of the plurality of images. For example, the basic matrix can be used as a matrix and described as: F~K 2 -1 EK 1 -1 , where K 1 and K 2 are camera parameters of two images taken at different angles. For the purpose of real-time calculation, it is possible to reverse the direction of the outer pole line by the global difference estimation, and then deduct the basic matrix by the outer pole line. This will increase efficiency. The calculated matrix will be passed to the camera external parameter estimation module 412.

After receiving the epipolar lines and the matrix, the camera extrinsic parameter estimation module 412 generates the position data of the camera, namely the camera's three-dimensional position and angle relative to where the earlier images were taken. The position data are supplied to the disparity estimation module 43 and the disparity-to-depth conversion module 44.
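The patent does not spell out how the extrinsic parameters are extracted from the matrix. One standard possibility, shown below as a hedged NumPy sketch rather than the patent's actual procedure, is to decompose the essential matrix by SVD into its four candidate rotation/translation pairs and keep the pair that places triangulated points in front of both cameras.

```python
import numpy as np

def decompose_essential(E):
    """Recover the four (R, t) candidates encoded in an essential matrix.
    The physically valid pair is the one that places triangulated points
    in front of both cameras (cheirality check, omitted here)."""
    U, _, Vt = np.linalg.svd(E)
    # Enforce proper rotations (determinant +1).
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0, -1, 0],
                  [1,  0, 0],
                  [0,  0, 1]])
    R1 = U @ W @ Vt
    R2 = U @ W.T @ Vt
    t = U[:, 2]          # translation is known only up to sign and scale
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]
```

Because the translation is recovered only up to scale, an absolute baseline (for example from the camera's motion sensors or a known displacement) is needed before metric depth can be reported.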

影像修正模組42會修正至少兩個對應於複數個外極資料之複數個影像,並取得複數個修正影像,其中兩個影像之外極線被修正為同一個;且給移動之兩個影像一個中心點,並命名為捕捉影像中心點。將相機外部參數估計模組412之修正影像及位置資料輸入到差異估計模組43。在 本發明中,外極估計模組411、相機外部參數估計模組412、及圖框緩衝器413可整合到相機校準裝置41,其為一較小尺寸之IC裝置,以取代大型電腦。The image correcting module 42 corrects at least two images corresponding to the plurality of outer pole data, and obtains a plurality of corrected images, wherein the outer lines of the two images are corrected to be the same; and the two images are moved A center point and named to capture the center point of the image. The corrected image and position data of the camera external parameter estimation module 412 are input to the difference estimation module 43. in In the present invention, the external pole estimation module 411, the camera external parameter estimation module 412, and the frame buffer 413 can be integrated into the camera calibration device 41, which is a smaller size IC device to replace the large computer.
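For a concrete picture of what "mapping the epipolar lines of the two images onto the same lines" involves, the sketch below computes a pair of rectifying homographies in the style of the classic Fusiello-Trucco-Verri construction. It assumes both views share the intrinsic matrix K and have negligible lens distortion, assumptions that are made only for this example.

```python
import numpy as np

def rectify_pair(K, R1, t1, R2, t2):
    """Compute rectifying homographies so that epipolar lines become
    horizontal and corresponding rows align (Fusiello-style sketch).
    (R_i, t_i) are extrinsics with x_cam = R_i @ X_world + t_i."""
    # Optical centers of the two views in world coordinates.
    c1 = -R1.T @ t1
    c2 = -R2.T @ t2
    # New x-axis: along the baseline joining the two centers.
    x = (c2 - c1) / np.linalg.norm(c2 - c1)
    # New y-axis: orthogonal to x and to the old optical axis of view 1.
    y = np.cross(R1[2, :], x)
    y = y / np.linalg.norm(y)
    # New z-axis completes the right-handed frame.
    z = np.cross(x, y)
    R_new = np.vstack([x, y, z])       # common rotation for both views
    # Homographies that rotate each image plane about its own center.
    H1 = K @ R_new @ R1.T @ np.linalg.inv(K)
    H2 = K @ R_new @ R2.T @ np.linalg.inv(K)
    return H1, H2
```

Warping each image with its homography (for example with an inverse-mapping resampler) then yields the rectified pair whose corresponding rows are aligned.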

After the disparity estimation module 43 receives the rectified images and the position data from the camera extrinsic parameter estimation module 412, it generates a disparity vector and passes it to the disparity-to-depth conversion module 44. The disparity-to-depth conversion module 44 produces a depth in response to the disparity vector and the position data, and the depth image rendering module 45 then renders the three-dimensional image corresponding to that depth.
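For a rectified pair, the conversion performed by the disparity-to-depth module reduces to Z = f·B/d, where f is the focal length in pixels, B the baseline between the two shooting positions, and d the disparity. The sketch below is an illustrative brute-force block matcher plus that conversion, not the module's actual algorithm; the window size and disparity range are arbitrary example values.

```python
import numpy as np

def disparity_map(left, right, max_disp=64, block=7):
    """Brute-force SAD block matching on rectified grayscale images:
    because the rows are aligned, the search is purely horizontal."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [np.sum(np.abs(
                     patch - right[y - half:y + half + 1,
                                   x - d - half:x - d + half + 1]))
                     for d in range(max_disp)]
            disp[y, x] = np.argmin(costs)   # best-matching horizontal shift
    return disp

def disparity_to_depth(disp, focal_px, baseline_m):
    """Depth from disparity on a rectified pair: Z = f * B / d."""
    with np.errstate(divide="ignore"):
        return np.where(disp > 0, focal_px * baseline_m / disp, 0.0)
```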

As described above, the present invention can be carried out with a camera. FIG. 4 shows an automatic depth of field capture system for a camera according to the present invention. As shown in FIG. 4, the digital camera 50 contains the system of the present invention described above and additionally has a camera lens with an image sensor 501. When the user half-presses the shutter button 502, the digital camera 50 starts the process of capturing a three-dimensional image, and the result can be displayed on a two-dimensional or three-dimensional screen of the digital camera 50 (not shown in FIG. 5). To capture more images, the digital camera 50 can be moved or rotated in several directions 60. When the user is satisfied with the result, the user fully presses the shutter button 502 to finish capturing the three-dimensional image, and the result is output or stored. FIG. 5 further shows a display screen for the camera of the present invention: the display screen 70 may include a marked region 701 that defines the focus area serving as the main target of the capture process of the present invention.

In accordance with the system described above, the present invention further provides an automatic depth of field capture method for a camera. Referring to FIG. 6, which shows the method according to the present invention. With reference to FIG. 3 and FIG. 6, the method comprises the following steps: a) capturing a plurality of images via the camera lens 401, as shown in step S601, the images being temporarily stored in the frame buffer 413; b) estimating the epipolar data of the images via the epipolar estimation module 411 to obtain the fundamental matrix, as shown in step S602; c) estimating the position data in response to the epipolar data and the fundamental matrix via the camera extrinsic parameter estimation module 412, as shown in step S603; d) rectifying the images according to the epipolar data via the image rectification module 42 to obtain the rectified images, as shown in step S604; e) processing the position data via the disparity estimation module 43 to obtain the disparity vector of the rectified images, as shown in step S605; f) obtaining the depth in response to the disparity vector and the position data via the disparity-to-depth conversion module 44, as shown in step S606; and g) rendering the three-dimensional image corresponding to the depth via the depth image rendering module 45, as shown in step S607.
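For completeness, the following sketch strings steps S601-S607 together using standard OpenCV routines. It is one possible software realization under the assumption of a single shared intrinsic matrix K, not the hardware module pipeline of FIG. 3, and the function name capture_depth is purely illustrative.

```python
import cv2
import numpy as np

def capture_depth(img1, img2, K, dist=None):
    """Illustrative end-to-end pipeline for steps S601-S607 (assumed setup:
    two shots of the same scene, shared intrinsics K, small baseline)."""
    dist = np.zeros(5) if dist is None else dist
    g1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)

    # S602: track features between the two shots, then estimate epipolar geometry.
    p1 = cv2.goodFeaturesToTrack(g1, maxCorners=500, qualityLevel=0.01,
                                 minDistance=8)
    p2, status, _ = cv2.calcOpticalFlowPyrLK(g1, g2, p1, None)
    p1, p2 = p1[status.ravel() == 1], p2[status.ravel() == 1]
    F, _ = cv2.findFundamentalMat(p1, p2, cv2.FM_RANSAC, 1.0, 0.99)

    # S603: camera extrinsics (relative R, t) from the essential matrix.
    E = K.T @ F @ K
    _, R, t, _ = cv2.recoverPose(E, p1, p2, K)

    # S604: rectify both images so epipolar lines become horizontal.
    size = g1.shape[::-1]
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K, dist, K, dist, size, R, t)
    m1x, m1y = cv2.initUndistortRectifyMap(K, dist, R1, P1, size, cv2.CV_32FC1)
    m2x, m2y = cv2.initUndistortRectifyMap(K, dist, R2, P2, size, cv2.CV_32FC1)
    r1 = cv2.remap(g1, m1x, m1y, cv2.INTER_LINEAR)
    r2 = cv2.remap(g2, m2x, m2y, cv2.INTER_LINEAR)

    # S605-S606: disparity, then depth via the reprojection matrix Q.
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=5)
    disparity = sgbm.compute(r1, r2).astype(np.float32) / 16.0
    depth = cv2.reprojectImageTo3D(disparity, Q)[:, :, 2]

    # S607: the depth map would feed a depth-image-based rendering stage.
    return depth
```

Using sparse feature tracking for the epipolar estimate mirrors the description's remark that a tracking algorithm produces the matrix, while the dense disparity step runs only on the rectified pair.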

In practice, the images are captured by the camera at different positions. After the camera lens 401 has captured at least two images, they are temporarily stored in the frame buffer 413 and then supplied to the epipolar estimation module 411 of the camera calibration device 41. In this embodiment, the epipolar data are epipolar lines and epipoles. The epipolar estimation module 411 generates, by means of a tracking algorithm, a fundamental matrix corresponding to the epipolar lines and epipoles, where the matrix encodes a relative movement vector and a relative direction vector between at least two of the images. As noted above, the method of the present invention obtains the depth of an object automatically without modifying the camera itself, which makes it easier for users to take stereoscopic pictures.

In summary, the present invention provides an automatic depth of field capture system for a camera that obtains the depth of an object through digital signal processing. Supplying the disparity-to-depth conversion module with a disparity vector and the camera's extrinsic parameters yields the depth, which simplifies the overall structure and workflow and allows the depth of an object to be obtained automatically without modifying the camera itself. This makes it easier for users to capture three-dimensional images, remedies the shortcomings of the prior art, and solves the problems described above.

Although the present invention has been described in detail through the embodiments above and may be modified in various ways by those skilled in the art, all such modifications remain within the scope of protection sought by the appended claims.

1‧‧‧object

2‧‧‧code marker

3‧‧‧image capturing device

4‧‧‧computer

5‧‧‧image

11‧‧‧imaging device

22R, 22G, 22B‧‧‧color filters

30‧‧‧image processing unit

34‧‧‧operation switch group

41‧‧‧camera calibration device

42‧‧‧image rectification module

43‧‧‧disparity estimation module

44‧‧‧disparity-to-depth conversion module

45‧‧‧depth image rendering module

401‧‧‧camera lens

411‧‧‧epipolar estimation module

412‧‧‧camera extrinsic parameter estimation module

413‧‧‧frame buffer

50‧‧‧digital camera

501‧‧‧image sensor

502‧‧‧shutter button

60‧‧‧directions

70‧‧‧display screen

701‧‧‧marked region

FIG. 1 is a block diagram of the circuit structure of a stereoscopic image capturing device according to the prior art;
FIG. 2 is a block diagram of the structure of a three-dimensional position and orientation sensing device according to the prior art;
FIG. 3 shows an automatic depth of field capture system for a camera according to the present invention;
FIG. 4 shows an automatic depth of field capture system for a camera according to the present invention;
FIG. 5 further shows a display screen for the camera of the present invention; and
FIG. 6 shows an automatic depth of field capture method for a camera according to the present invention.


Claims (20)

1. An automatic depth of field capture system for a camera, comprising: a camera lens for capturing a plurality of images; a camera calibration device comprising an epipolar estimation module for estimating a plurality of epipolar data of the plurality of images, and a camera extrinsic parameter estimation module for estimating position data; at least one frame buffer for temporarily storing the plurality of images; an image rectification module for rectifying the plurality of images according to the plurality of epipolar data and obtaining a plurality of rectified images; a disparity estimation module, connected to the camera calibration device and the image rectification module, for receiving the position data and obtaining a disparity vector of the rectified images; a disparity-to-depth conversion module for obtaining a depth in response to the disparity vector and the position data; and a depth image rendering module for rendering a three-dimensional image corresponding to the depth.
2. The automatic depth of field capture system of claim 1, wherein the plurality of images are captured by the camera at different positions.
3. The automatic depth of field capture system of claim 1, wherein the plurality of epipolar data are a plurality of epipolar lines and epipoles.
4. The automatic depth of field capture system of claim 3, wherein the epipolar estimation module further generates, by means of a tracking algorithm, a matrix corresponding to the plurality of epipolar lines and epipoles.
5. The automatic depth of field capture system of claim 4, wherein the matrix comprises a relative movement vector and a relative direction vector between at least two of the plurality of images.
6. The automatic depth of field capture system of claim 1, wherein the position data are the three-dimensional position and angle of the camera.
7. An automatic depth of field capture method for a camera, comprising the following steps: a) capturing a plurality of images; b) estimating a plurality of epipolar data of the plurality of images to obtain a matrix; c) estimating position data in response to the plurality of epipolar data and the matrix; d) rectifying the plurality of images according to the plurality of epipolar data to obtain a plurality of rectified images; e) processing the position data to obtain a disparity vector of the rectified images; f) obtaining a depth in response to the disparity vector and the position data; and g) rendering a three-dimensional image corresponding to the depth.
8. The automatic depth of field capture method of claim 7, wherein step a) is performed via a camera lens.
9. The automatic depth of field capture method of claim 7, wherein the plurality of images are captured by the camera at different positions.
10. The automatic depth of field capture method of claim 7, wherein step b) is performed via an epipolar estimation module.
11. The automatic depth of field capture method of claim 7, wherein the plurality of epipolar data are a plurality of epipolar lines and epipoles of the plurality of images.
12. The automatic depth of field capture method of claim 11, wherein the matrix corresponding to the plurality of epipolar lines and epipoles is obtained by means of a tracking algorithm.
13. The automatic depth of field capture method of claim 7, wherein the matrix comprises a relative movement vector and a relative direction vector between at least two of the plurality of images.
14. The automatic depth of field capture method of claim 7, wherein step c) is performed via a camera extrinsic parameter estimation module.
15. The automatic depth of field capture method of claim 7, wherein the position data are the three-dimensional position and angle of the camera.
16. The automatic depth of field capture method of claim 7, wherein step d) is performed via an image rectification module.
17. The automatic depth of field capture method of claim 7, wherein step e) is performed via a disparity estimation module.
18. The automatic depth of field capture method of claim 7, wherein step f) is performed via a disparity-to-depth conversion module.
19. The automatic depth of field capture method of claim 7, wherein step g) is performed via a depth image rendering module.
20. The automatic depth of field capture method of claim 7, further comprising step b1) of providing at least one frame buffer for temporarily storing the plurality of images.
TW97136687A 2008-09-24 2008-09-24 Auto depth field capturing system and method thereof TWI382267B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW97136687A TWI382267B (en) 2008-09-24 2008-09-24 Auto depth field capturing system and method thereof


Publications (2)

Publication Number Publication Date
TW201013292A TW201013292A (en) 2010-04-01
TWI382267B true TWI382267B (en) 2013-01-11

Family

ID=44829285

Family Applications (1)

Application Number Title Priority Date Filing Date
TW97136687A TWI382267B (en) 2008-09-24 2008-09-24 Auto depth field capturing system and method thereof

Country Status (1)

Country Link
TW (1) TWI382267B (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6724930B1 (en) * 1999-02-04 2004-04-20 Olympus Corporation Three-dimensional position and orientation sensing system
CN1464970A (en) * 2000-03-23 2003-12-31 捷装技术公司 Self-Calibrating, Multi-Camera Machine Vision Measurement System
US6771810B1 (en) * 2000-06-16 2004-08-03 Microsoft Corporation System and method for estimating the epipolar geometry between images
US6977674B2 (en) * 2001-05-21 2005-12-20 Pentax Corporation Stereo-image capturing device
US6781618B2 (en) * 2001-08-06 2004-08-24 Mitsubishi Electric Research Laboratories, Inc. Hand-held 3D vision system

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106610553A (en) * 2015-10-22 2017-05-03 深圳超多维光电子有限公司 A method and apparatus for auto-focusing
CN106610553B (en) * 2015-10-22 2019-06-18 深圳超多维科技有限公司 A kind of method and device of auto-focusing

Also Published As

Publication number Publication date
TW201013292A (en) 2010-04-01

Similar Documents

Publication Publication Date Title
CN106875339B (en) Fisheye image splicing method based on strip-shaped calibration plate
US8208048B2 (en) Method for high dynamic range imaging
US8274552B2 (en) Primary and auxiliary image capture devices for image processing and related methods
CN107666606B (en) Method and device for acquiring binocular panoramic image
CN101884222B (en) The image procossing presented for supporting solid
CN102227746B (en) Stereoscopic image processing device, method, recording medium and stereoscopic imaging apparatus
JP4657313B2 (en) Stereoscopic image display apparatus and method, and program
CN103493484B (en) Imaging device and imaging method
TWI433530B (en) Camera system and image-shooting method with guide for taking stereo photo and method for automatically adjusting stereo photo
JP5814692B2 (en) Imaging apparatus, control method therefor, and program
US20130113898A1 (en) Image capturing apparatus
CN107925751A (en) Systems and methods for multi-view noise reduction and high dynamic range
JP2017108387A (en) Image calibrating, stitching and depth rebuilding method of panoramic fish-eye camera and system thereof
WO2015192547A1 (en) Method for taking three-dimensional picture based on mobile terminal, and mobile terminal
CN114331835A (en) Panoramic image splicing method and device based on optimal mapping matrix
CN111866523B (en) Panoramic video synthesis method and device, electronic equipment and computer storage medium
CN109785225B (en) A method and device for image correction
JP2012085252A (en) Image generation device, image generation method, program, and recording medium with program recorded thereon
Gurrieri et al. Stereoscopic cameras for the real-time acquisition of panoramic 3D images and videos
TWI382267B (en) Auto depth field capturing system and method thereof
CN107743222A (en) A collector-based image data processing method and a three-dimensional panoramic VR collector
TWI504936B (en) Image processing device
JP2017103695A (en) Image processing apparatus, image processing method, and program thereof
WO2012014695A1 (en) Three-dimensional imaging device and imaging method for same
JP5689693B2 (en) Drawing processor