
TW201803342A - Apparatus and method for combining with wavelet transformer and edge detector to generate a depth map from a single image can convert 2D planar imaging signals into 3D stereoscopic image signals


Info

Publication number: TW201803342A
Authority: TW (Taiwan)
Prior art keywords: image, depth, algorithm, wavelet transformation, edge detection
Application number: TW105121679A
Other languages: Chinese (zh)
Other versions: TWI613903B (en)
Inventors: 陳昱翔, 黃德豐
Original Assignee: 龍華科技大學
Application filed by 龍華科技大學
Priority to TW105121679A
Publication of TW201803342A
Application granted
Publication of TWI613903B

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

An apparatus and method that combine a wavelet transformer and an edge detector to generate a depth map from a single image provide a conversion system for converting 2D planar image signals into 3D stereoscopic image signals. In the depth map generated by the depth map establishment method of the invention, each image block is assigned its own depth value. The method of the invention comprises the following steps: inputting an original image (S1); executing wavelet transformation and edge detection on the original image (S2); establishing a defocus map according to the wavelet transformation results and the edge detection results (S3); executing depth prediction according to the wavelet coefficients of the wavelet transformation (S4); mapping the predicted depth values onto the defocus map and executing depth diffusion (S5); and finally generating a depth map (S6).

Description

Device and method for establishing a single-image depth map by combining wavelet transformation and edge detection

The invention relates to video systems, and in particular to a depth map generation device and method for converting two-dimensional image data into three-dimensional image data.

Since the release of the 3D film Avatar in 2009, audiences have pursued the entertainment value of 3D display technology; the 3D broadcasts of the 2010 World Cup and the virtual reality headsets of 2016 all show that the entertainment industry has been shifting from 2D to 3D. People are no longer satisfied with the visual effects of 2D and have turned to 3D display technology. At present, with the commercialization of 3D display technology and the growing number of services related to 3D content, user demand for 3D has increased correspondingly. However, the production of 3D content has made no comparable progress. By contrast, a very large amount of 2D images and video already exists, and the images people shoot themselves are also 2D, all waiting to be exploited effectively by converting them into 3D video applications.

Accordingly, Chinese Patent Publication No. CN 103559701 A, "Depth estimation method for two-dimensional single-view images based on DCT coefficient entropy," proposes predicting depth from a single image with depth of field. For every pixel of the image to be processed, a window centred on that pixel is taken as a sub-image; these sub-images are wavelet-transformed, the wavelet coefficient values are quantized, and the coefficient entropy is computed as the blurriness of that pixel. The entropy values are then linearly mapped onto an 8-bit depth range to obtain a pixel-level depth map. In addition, Chinese Patent No. CN 10247539B, "Method for converting 2D video images to 3D," uses the wavelet transform to predict depth from a single image with depth of field. The original image is wavelet-transformed to extract its high-frequency coefficients and is divided into several blocks, and the number of non-zero coefficients in each block is counted as that block's blurriness. At the same time, based on its colour characteristics, the original image is segmented into three sets of pixels. The average blurriness of each pixel set is then compared: the set with the largest value is treated as the foreground, the set with the second largest value as the middle ground, and the set with the smallest value as the background. Finally, a system with preset depths of field assigns different depth values to the foreground, middle ground and background to obtain a depth map.
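For orientation, the block-wise blurriness measure used by the second prior-art method above can be sketched as follows. This is an illustrative reading only: the block size, the choice of a Haar wavelet and the zero threshold are assumptions, not parameters taken from the cited patent.

```python
# Sketch of the prior-art blur measure: wavelet-transform the image, keep the
# high-frequency sub-bands, and count the non-zero coefficients per block;
# blocks with few non-zero coefficients are treated as blurrier (farther).
import numpy as np
import pywt

def block_nonzero_count(gray, block=16, eps=1e-3):
    # Single-level 2D DWT; combine the three high-frequency sub-bands.
    _, (cH, cV, cD) = pywt.dwt2(np.asarray(gray, dtype=float), 'haar')
    high = np.abs(cH) + np.abs(cV) + np.abs(cD)      # half-resolution response
    rows, cols = high.shape[0] // block, high.shape[1] // block
    counts = np.zeros((rows, cols), dtype=int)
    for r in range(rows):
        for c in range(cols):
            tile = high[r * block:(r + 1) * block, c * block:(c + 1) * block]
            counts[r, c] = int(np.count_nonzero(tile > eps))
    return counts                                     # higher count = sharper block
```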

As the prior art disclosed above shows, the known depth map generation methods have several drawbacks. For example, the size of the window centred on each pixel must be set manually and cannot be adjusted automatically for different images. Furthermore, using the colour characteristics of the original image to segment it into three sets of pixels divides the image into only a foreground, a middle ground and a background, which clearly differs from the multi-level depth information present in the rich images we normally see, so a correct depth map cannot be produced.

In view of this, the inventors, drawing on many years of work and research in the related field, have studied and analysed the existing depth map generation methods in the hope of devising a method that remedies these known drawbacks. Accordingly, the main object of the present invention is a device and method for establishing a single-image depth map by combining wavelet transformation and edge detection that requires no manual intervention and matches the depth information perceived by the human eye.

To achieve the above object, the device and method for establishing a single-image depth map by combining wavelet transformation and edge detection according to the present invention comprise: an image capture unit for capturing or inputting an original image; an image analysis unit, communicatively linked to the image capture unit, for performing image analysis on the original image captured or input by the image capture unit, the image analysis executing image analysis algorithms such as wavelet transformation and edge detection; an image synthesis unit, communicatively linked to the image analysis unit, which, after the image analysis unit has executed the image analysis algorithms, synthesizes the analysis results to produce a defocus map; and a depth calculation unit, communicatively linked to the image synthesis unit, which, after the image synthesis unit has produced the defocus map, executes a depth prediction algorithm based on the analysis results of the image analysis unit and maps the predicted depths onto the defocus map, so that the depth calculation unit can execute a depth diffusion algorithm on the result of mapping the depth prediction onto the defocus map. The depth diffusion algorithm may be, but is not limited to, a Laplacian interpolation technique or a global interpolation algorithm.
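A minimal skeleton of the four cooperating units described above is sketched below. All class and function names are hypothetical, and the per-unit logic is reduced to the data flow the text describes (capture, analysis, synthesis of a defocus map, then depth prediction and diffusion); it is not the patent's implementation.

```python
# Hypothetical skeleton mirroring the unit structure described in the text.
import numpy as np

class ImageCaptureUnit:
    def capture(self, image):
        return np.asarray(image, dtype=float)             # the original 2D image

class ImageAnalysisUnit:
    def analyze(self, image, wavelet_fn, edge_fn):
        # Runs the chosen analysis algorithms (wavelet transform, edge detection).
        return wavelet_fn(image), edge_fn(image)

class ImageSynthesisUnit:
    def defocus_map(self, wavelet_result, edge_result):
        # Keep the wavelet response only where an edge was detected
        # (both inputs are assumed to be full-resolution arrays here).
        return np.where(edge_result > 0, wavelet_result, 0.0)

class DepthCalculationUnit:
    def depth_map(self, defocus, predict_fn, diffuse_fn):
        sparse_depth = predict_fn(defocus)                # depth prediction
        return diffuse_fn(sparse_depth)                   # depth diffusion
```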

1‧‧‧Device for establishing a single-image depth map by combining wavelet transformation and edge detection
11‧‧‧Image capture unit
12‧‧‧Image analysis unit
13‧‧‧Image synthesis unit
14‧‧‧Depth calculation unit
S1‧‧‧Input original image step
S2‧‧‧Image analysis step
S22‧‧‧Edge detection step
S23‧‧‧Wavelet transformation step
S231‧‧‧Convert to grayscale image
S232‧‧‧Find local maxima step
S233‧‧‧Map local maxima to wavelet coefficient values
S234‧‧‧Threshold calculation result
S3‧‧‧Establish defocus map step
S4‧‧‧Depth prediction step
S41‧‧‧Count local maxima of histogram step
S42‧‧‧Establish window according to the count step
S43‧‧‧Compute blurriness from the wavelet transformation result step
S44‧‧‧Depth prediction result step
S5‧‧‧Depth diffusion step
S6‧‧‧Generate depth map step

Fig. 1 is a structural schematic diagram of the present invention.
Fig. 2 is a flowchart of the steps of the present invention.
Fig. 3 is a schematic diagram of an implementation of the present invention.
Fig. 4 is a schematic diagram of an implementation of the present invention (1).
Fig. 5 is a schematic diagram of an implementation of the present invention (2).
Fig. 6 is a schematic diagram of an embodiment of the present invention.
Fig. 7 is a schematic diagram of an embodiment of the present invention (1).
Fig. 8 is a schematic diagram of an embodiment of the present invention (2).

In the description that follows, the term "depth map" refers to a two-dimensional matrix of depth values, in which each depth value corresponds to a relative position in a scene and represents the distance from a specific reference position to that relative position. If every pixel of a 2D image has its own depth value, the 2D image can be displayed using 3D techniques.
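As a purely illustrative example of this definition, a depth map can be held as a 2D array with one entry per image pixel; the 0-255 convention below (larger value means nearer) is an assumption for illustration, not one fixed by the text.

```python
# Tiny illustration: one depth value per pixel of a 4x6 image.
import numpy as np

image = np.zeros((4, 6, 3), dtype=np.uint8)          # a 4x6 RGB image
depth = np.array([[ 30,  30,  60,  60,  90,  90],
                  [ 30,  60,  90, 120, 120,  90],
                  [ 60,  90, 150, 200, 150,  90],
                  [ 90, 120, 200, 255, 200, 120]], dtype=np.uint8)
assert depth.shape == image.shape[:2]                # a 2D matrix of depth values
```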

To enable the examiners to further understand the objects, technical means and effects that the present invention seeks to achieve, preferred embodiments are described below together with the accompanying drawings.

Please refer to Fig. 1, a structural schematic diagram of the present invention. As shown, the device 1 for establishing a single-image depth map by combining wavelet transformation and edge detection according to the present invention mainly consists of an image capture unit 11, an image analysis unit 12, an image synthesis unit 13 and a depth calculation unit 14. The image capture unit 11 captures an original image, which is a 2D image or video. The image analysis unit 12 is communicatively linked to the image capture unit 11 and, after receiving the original image, executes a plurality of image analysis algorithms, where an image analysis algorithm may be a wavelet transformation algorithm, an edge detection algorithm, or a combination thereof. The wavelet transformation algorithm may be a discrete wavelet transform or a continuous wavelet transform, and the edge detection algorithm may be one of the Roberts cross operator, the Prewitt operator, the Sobel operator, the Canny operator, a compass operator, the Marr-Hildreth operator, or the wavelet transform; any edge detection algorithm capable of detecting edges in the original image falls within the scope of the present invention, but the invention is not limited thereto. The image synthesis unit 13 is communicatively linked to the image analysis unit 12 and performs image synthesis on the results analysed by the image analysis unit 12 to produce a defocus map. The depth calculation unit 14 is communicatively linked to the image synthesis unit 13: it executes a depth prediction algorithm on the results analysed by the image analysis unit 12, the prediction result is combined with the defocus map through the image synthesis unit 13, and the depth calculation unit 14 then executes a depth diffusion algorithm, which may be, but is not limited to, a Laplacian interpolation technique or a global interpolation algorithm, to produce a depth map matching the original image.
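As one illustration of how an edge detection algorithm from the list above could be plugged into the image analysis unit 12, the sketch below uses the Sobel operator; the kernel size and magnitude threshold are illustrative assumptions, and any of the other listed operators could be substituted.

```python
# One interchangeable edge-detector choice (Sobel gradient magnitude).
import cv2
import numpy as np

def sobel_edges(gray_uint8, thresh=60.0):
    gx = cv2.Sobel(gray_uint8, cv2.CV_64F, 1, 0, ksize=3)   # horizontal gradient
    gy = cv2.Sobel(gray_uint8, cv2.CV_64F, 0, 1, ksize=3)   # vertical gradient
    magnitude = np.hypot(gx, gy)
    return (magnitude > thresh).astype(np.uint8) * 255      # binary edge map
```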

Continuing from the above, please refer to Fig. 2, a flowchart of the steps of the present invention. As shown, the invention is carried out as follows. An input original image step S1: the original image is input through the image capture unit 11. An image analysis step S2, comprising a wavelet transformation step S23 and an edge detection step S22, performs a wavelet transformation analysis and an edge detection analysis on the original image. The wavelet transformation step S23 executes a wavelet transformation algorithm to produce a wavelet transformation analysis result; the wavelet transformation algorithm may be, but is not limited to, a discrete wavelet transform or a continuous wavelet transform. The edge detection step S22 executes an edge detection algorithm to produce an edge detection result; the edge detection algorithm may be one of the Roberts cross operator, the Prewitt operator, the Sobel operator, the Canny operator, a compass operator, the Marr-Hildreth operator, or the wavelet transform. Please also refer to Fig. 3, a schematic diagram of an implementation of the present invention, which shows the binarized result of the wavelet transformation analysis. An establish defocus map step S3: the image synthesis unit 13 combines the wavelet transformation analysis result and the edge detection result to produce a defocus map; please refer to Fig. 4, a schematic diagram of an implementation of the present invention (1), which shows the defocus map. In this synthesis, the edge detection result is mapped onto the wavelet transformation analysis result so as to extract, for each edge pixel, the corresponding coefficient value in the wavelet transformation analysis result. A depth prediction step S4: the depth calculation unit 14 executes a depth prediction algorithm on the wavelet transformation result produced in the image analysis step S2 to predict the depth of the original image; after the depth prediction algorithm has been executed, its result is combined, through the image synthesis unit 13, with the defocus map produced in the establish defocus map step S3 to produce a defocused depth map. In this synthesis, the depth prediction result is mapped onto the defocus map and replaces the corresponding values in it. A depth diffusion step S5: a depth diffusion algorithm is executed on the defocused depth map; the depth diffusion algorithm may be the Laplacian interpolation technique or the global interpolation algorithm. Finally, in a generate depth map step S6, after the depth calculation unit 14 has executed the depth diffusion algorithm, a depth map is produced; please refer to Fig. 5, a schematic diagram of an implementation of the present invention (2), which shows the depth map.
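A minimal sketch of steps S2-S3 is given below: the original image is wavelet-transformed and edge-detected, and the defocus map keeps the wavelet-derived response only at detected edge pixels. The Haar wavelet, the Canny thresholds and the nearest-neighbour upsampling of the half-resolution sub-bands are assumptions made for illustration; the text itself leaves these choices open.

```python
# Sketch of S2 (wavelet transform + edge detection) and S3 (defocus map).
import cv2
import numpy as np
import pywt

def build_defocus_map(gray_uint8):
    h, w = gray_uint8.shape
    # S2a: single-level DWT; combine the three high-frequency sub-bands.
    _, (cH, cV, cD) = pywt.dwt2(gray_uint8.astype(float), 'haar')
    high = np.abs(cH) + np.abs(cV) + np.abs(cD)
    # Nearest-neighbour upsample of the half-resolution response to image size.
    high_full = np.kron(high, np.ones((2, 2)))[:h, :w]
    # S2b: edge detection (any operator listed in the text would do here).
    edges = cv2.Canny(gray_uint8, 100, 200) > 0
    # S3: keep the wavelet response only at edge pixels.
    return np.where(edges, high_full, 0.0)
```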

Continuing from the above, please also refer to Fig. 6, a schematic diagram of an embodiment of the present invention. As shown, the depth prediction algorithm executed in the depth prediction step S4 proceeds as follows. A count local maxima of histogram step S41 finds the number of peaks in the grey-level histogram of the original image. An establish window according to the count step S42 establishes a computation window according to that number of peaks. A compute blurriness from the wavelet transformation result step S43 performs, on the wavelet transformation result, a neighbourhood computation centred on the centre pixel of the window, computing the wavelet transformation result in the neighbourhood of that centre pixel. A depth prediction result step S44 performs depth prediction according to the result of the neighbourhood computation at the centre pixel, i.e. the blurriness.
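The steps S41-S44 can be sketched as follows. The mapping from the number of histogram peaks to a window size, and the use of the mean wavelet response in each window as the blurriness score, are assumed heuristics for illustration; the text does not give closed-form expressions for either, and the wavelet response is assumed to have been brought to full image resolution beforehand.

```python
# Sketch of S41-S44: histogram peaks -> window size -> per-window blurriness.
import numpy as np
from scipy.signal import find_peaks

def predict_depth(gray_uint8, wavelet_response):
    # S41: number of local maxima (peaks) of the 256-bin grey-level histogram.
    hist, _ = np.histogram(gray_uint8, bins=256, range=(0, 256))
    n_peaks = max(len(find_peaks(hist)[0]), 1)
    # S42: derive an odd window size from the peak count (assumed heuristic:
    # more peaks, i.e. more depth layers, gives a smaller window).
    win = max((gray_uint8.shape[0] // (2 * n_peaks)) | 1, 3)
    half = win // 2
    # S43/S44: mean wavelet response around each window centre as sharpness;
    # blurry (low-response) regions are treated as farther away.
    h, w = wavelet_response.shape
    depth = np.zeros((h, w))
    for y in range(half, h - half, win):
        for x in range(half, w - half, win):
            patch = wavelet_response[y - half:y + half + 1, x - half:x + half + 1]
            depth[y - half:y + half + 1, x - half:x + half + 1] = patch.mean()
    if depth.max() > 0:                                  # normalise to 0-255
        depth = 255.0 * depth / depth.max()
    return depth.astype(np.uint8)                        # image borders stay 0
```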

Continuing from the above, please also refer to Fig. 7, a schematic diagram of an embodiment of the present invention (1). As shown, the wavelet transformation step S23 further comprises a wavelet transformation threshold setting procedure, which includes the following steps. A convert to grayscale image step S231 converts the original image into a grayscale image. A find local maxima step S232 builds a grey-level histogram from the grayscale image of the original image and finds the grey levels at which the histogram peaks. A map local maxima to wavelet coefficient values step S233 maps the positions in the original image of all pixels having those peak grey levels to the corresponding positions in the wavelet transformation result and extracts all of the coefficient values at those positions. A threshold calculation result step S234 sets the wavelet transformation threshold by applying numerical analysis to the extracted coefficient values, where the numerical analysis may be Simpson's rule, and the threshold calculation function is as follows:

(Equation (1): the threshold Th is computed from f(x), the extracted coefficient values, by taking the three largest coefficient values and combining them according to Simpson's rule; the original equation image is not reproduced here.)

Once the threshold calculation is complete, the following function is satisfied:

B(m, n) = 255, if I(m, n) ≥ Th;  B(m, n) = 0, if I(m, n) < Th   (2)

where I(m, n) is the wavelet transformation result, Th is the computed threshold, and B(m, n) denotes the thresholded result (equation (2) is reconstructed from the description, as the original equation image is not reproduced). Please refer to Fig. 3, a schematic diagram of an implementation of the present invention, which shows the binarized wavelet transformation result before the threshold has been set, and to Fig. 8, a schematic diagram of an embodiment of the present invention (2), which shows the result after the threshold has been set: results greater than or equal to the threshold are set to 255, the white regions, and results below the threshold are set to 0, the black regions.
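A sketch of the threshold setting and the binarization of equation (2) is given below. The Simpson's-rule combination of the three largest sampled coefficients is only an assumed reading of equation (1), whose exact form is not reproduced in this text; the 255/0 binarization follows the description directly.

```python
# Sketch of S234 (threshold from sampled coefficients) and equation (2).
import numpy as np

def wavelet_threshold(sampled_coeffs):
    # Assumed reading of equation (1): combine the three largest sampled
    # coefficient values with Simpson's 1/3-rule weights (requires >= 3 values).
    f0, f1, f2 = np.sort(np.asarray(sampled_coeffs, dtype=float).ravel())[-3:]
    return (f0 + 4.0 * f1 + f2) / 6.0

def binarize(wavelet_result, th):
    # Equation (2): 255 where the wavelet response reaches the threshold
    # (white regions), 0 elsewhere (black regions).
    return np.where(wavelet_result >= th, 255, 0).astype(np.uint8)
```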

In summary, the device and method for establishing a single-image depth map by combining wavelet transformation and edge detection according to the present invention mainly use image analysis algorithms to analyse an original image so that, after the depth calculation unit has executed the depth prediction algorithm, it can execute the depth diffusion algorithm to produce the depth map. Because the image analysis algorithms run quickly and accurately and require no complex computation, measurement efficiency is high; and because the invention requires no complex, large-scale computation, its cost is also comparatively low. The main object of the present invention is thus achieved: a device and method for establishing a single-image depth map by combining wavelet transformation and edge detection that requires no manual intervention and matches the depth information perceived by the human eye.
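Finally, the depth diffusion of step S5 can be realised, for example, with a Laplacian (harmonic) interpolation that spreads the sparse predicted depths to every pixel. The sketch below is a generic formulation of that idea with uniform 4-neighbour weights and a data-fidelity weight lam; it is one standard way to implement the Laplacian-interpolation option named in the text, not the patent's exact formulation.

```python
# Generic Laplacian-interpolation sketch for depth diffusion (step S5).
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def diffuse_depth(sparse_depth, known_mask, lam=100.0):
    h, w = sparse_depth.shape
    n = h * w
    idx = np.arange(n).reshape(h, w)
    rows, cols, vals = [], [], []
    # Build the 4-neighbour graph Laplacian L (unit edge weights assumed).
    for dy, dx in ((0, 1), (1, 0)):
        a = idx[:h - dy, :w - dx].ravel()
        b = idx[dy:, dx:].ravel()
        rows += [a, b, a, b]
        cols += [b, a, a, b]
        vals += [-np.ones_like(a, dtype=float), -np.ones_like(a, dtype=float),
                 np.ones_like(a, dtype=float), np.ones_like(a, dtype=float)]
    L = sp.csr_matrix((np.concatenate(vals),
                       (np.concatenate(rows), np.concatenate(cols))), shape=(n, n))
    # Data term: keep the known (predicted) depths close to their values.
    D = sp.diags(lam * known_mask.ravel().astype(float))
    rhs = lam * sparse_depth.ravel() * known_mask.ravel()
    depth = spla.spsolve((L + D).tocsc(), rhs)   # solve (L + lam*M) d = lam*M*d0
    return depth.reshape(h, w)
```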

Although the present invention has been disclosed above by way of preferred embodiments, these are not intended to limit the scope of the claims of the present invention. Those skilled in the art may make various changes and modifications without departing from the spirit and scope of the present invention, and the scope of protection of the present invention is therefore not limited thereto.


Claims (10)

A device for establishing a single-image depth map by combining wavelet transformation and edge detection, comprising: an image capture unit for inputting an original image; an image analysis unit, communicatively linked to the image capture unit, for executing an image analysis algorithm and analysing the original image input by the image capture unit; an image synthesis unit, communicatively linked to the image analysis unit, for performing image synthesis on the plurality of analysis results obtained by the image analysis unit from the original image to produce a defocus map; and a depth calculation unit, communicatively linked to the image synthesis unit, which executes a depth prediction algorithm according to the plurality of analysis results, then executes a depth diffusion algorithm and, after the depth diffusion algorithm has been executed, produces a depth map.

The device for establishing a single-image depth map by combining wavelet transformation and edge detection as described in claim 1, wherein the image analysis algorithm may be one of a wavelet transformation algorithm and an edge detection algorithm, or a combination thereof.

The device for establishing a single-image depth map by combining wavelet transformation and edge detection as described in claim 2, wherein the wavelet transformation algorithm may be one of a discrete wavelet transform and a continuous wavelet transform.

The device for establishing a single-image depth map by combining wavelet transformation and edge detection as described in claim 2, wherein the edge detection algorithm may be one of the Roberts cross operator, the Prewitt operator, the Sobel operator, the Canny operator, a compass operator, the Marr-Hildreth operator and the wavelet transform.

The device for establishing a single-image depth map by combining wavelet transformation and edge detection as described in claim 1, wherein the depth prediction algorithm executed by the depth calculation unit establishes a window according to the number of local maxima of a grey-level histogram of the original image and performs a blurriness computation according to the result of the image analysis.

The device for establishing a single-image depth map by combining wavelet transformation and edge detection as described in claim 3, wherein the wavelet transformation algorithm further comprises a wavelet transformation threshold setting step.

A method for establishing a single-image depth map by combining wavelet transformation and edge detection, applied to a device for establishing a single-image depth map by combining wavelet transformation and edge detection that comprises an image capture unit, an image analysis unit, an image synthesis unit and a depth calculation unit, the units being communicatively linked to one another, the method comprising: an input original image step, in which an original image is input through the image capture unit; an image analysis step, in which the image analysis unit executes a plurality of image analysis algorithms to analyse the original image; an establish defocus map step, in which, after the image analysis step is completed, the image synthesis unit performs image synthesis on the results obtained from the original image in the image analysis step to produce a defocus map; a depth prediction step, in which, after the defocus map has been produced, the depth calculation unit executes a depth prediction algorithm according to the results of the image analysis step; a depth diffusion step, in which, after the depth calculation unit has executed the depth prediction algorithm, it executes a depth diffusion algorithm; and a generate depth map step, in which, after the depth diffusion algorithm is completed, a depth map is produced.

The method for establishing a single-image depth map by combining wavelet transformation and edge detection as described in claim 8, wherein the image analysis algorithm may be one of a wavelet transformation algorithm and an edge detection algorithm, or a combination thereof.

The method for establishing a single-image depth map by combining wavelet transformation and edge detection as described in claim 8, wherein the depth prediction algorithm establishes a window according to the number of local maxima of a grey-level histogram of the original image and performs a blurriness computation according to the result of the image analysis.

The method for establishing a single-image depth map by combining wavelet transformation and edge detection as described in claim 8, wherein the depth diffusion algorithm may be one of a Laplacian interpolation technique and a global interpolation algorithm.

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW105121679A TWI613903B (en) 2016-07-11 2016-07-11 Apparatus and method for combining with wavelet transformer and edge detector to generate a depth map from a single image


Publications (2)

Publication Number Publication Date
TW201803342A 2018-01-16
TWI613903B 2018-02-01

Family

ID=61725230


Country Status (1)

Country Link
TW (1) TWI613903B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI722297B (en) * 2018-06-28 2021-03-21 國立高雄科技大學 Internal edge detection system and method thereof for processing medical images

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI314832B (en) * 2006-10-03 2009-09-11 Univ Nat Taiwan Single lens auto focus system for stereo image generation and method thereof
TWI368183B (en) * 2008-10-03 2012-07-11 Himax Tech Ltd 3d depth generation by local blurriness estimation
WO2014174587A1 (en) * 2013-04-23 2014-10-30 新日鐵住金株式会社 Spring steel having excellent fatigue characteristics and process for manufacturing same
CN103559702B (en) * 2013-09-26 2016-04-20 哈尔滨商业大学 Based on the two-dimensional single-view image depth estimation method of wavelet coefficient entropy
TWM535848U (en) * 2016-07-11 2017-01-21 Lunghwa Univ Of Science And Tech Apparatus for combining with wavelet transformer and edge detector to generate a depth map from a single image

Also Published As

Publication number Publication date
TWI613903B (en) 2018-02-01

Similar Documents

Publication Publication Date Title
US8718356B2 (en) Method and apparatus for 2D to 3D conversion using scene classification and face detection
CN102741879B (en) Method and system for generating depth map from monocular image
CN101765022B (en) Depth representing method based on light stream and image segmentation
TWI457853B (en) Image processing method for providing depth information and image processing system using the same
CN109889799B (en) Monocular structure light depth perception method and device based on RGBIR camera
JP2017527011A (en) Method and apparatus for upscaling an image
CN102905136A (en) Video coding and decoding method and system
US12374117B2 (en) Method, system and computer readable media for object detection coverage estimation
CN101873506B (en) Image processing method and image processing system for providing depth information
CN108830804B (en) Virtual-real fusion fuzzy consistency processing method based on line spread function standard deviation
CN111465937B (en) Face detection and recognition method using light field camera system
CN104982032B (en) The method and apparatus of 3D rendering data segmentation
KR20140026078A (en) Apparatus and method for extracting object
TWM535848U (en) Apparatus for combining with wavelet transformer and edge detector to generate a depth map from a single image
TWI613903B (en) Apparatus and method for combining with wavelet transformer and edge detector to generate a depth map from a single image
Taha et al. Partial differential equations and digital image processing: A review
CN113487487A (en) Super-resolution reconstruction method and system for heterogeneous stereo image
TWI610271B (en) Apparatus and method for combining with wavelet transformer and corner point detector to generate a depth map from a single image
KR101626679B1 (en) Method for generating stereoscopic image from 2D image and for medium recording the same
CN104981841B (en) The method and apparatus of 3D rendering data segmentation
TWM542833U (en) Apparatus for combining with wavelet transformer and corner point detector to generate a depth map from a single image
Balure et al. A Survey--Super Resolution Techniques for Multiple, Single, and Stereo Images
JP3992607B2 (en) Distance image generating apparatus and method, program therefor, and recording medium
Yan et al. Depth map super resolution and edge enhancement by utilizing RGB information
KR101458616B1 (en) Image contrast improvement method by discrete cosine transform and histogram processing

Legal Events

Date Code Title Description
MM4A Annulment or lapse of patent due to non-payment of fees