
TW201034441A - Method of establishing depth of field data for a three-dimensional (3D) image and a system thereof - Google Patents

Method of establishing depth of field data for a three-dimensional (3D) image and a system thereof Download PDF

Info

Publication number
TW201034441A
TW201034441A TW98108318A
Authority
TW
Taiwan
Prior art keywords
eye image
depth
image
pixel
offset vector
Prior art date
Application number
TW98108318A
Other languages
Chinese (zh)
Inventor
Meng-Chao Kao
Chun-Chueh Chiu
Chien-Hung Chen
Hsiang-Tan Lin
Original Assignee
Chunghwa Picture Tubes Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chunghwa Picture Tubes Ltd filed Critical Chunghwa Picture Tubes Ltd
Priority to TW98108318A priority Critical patent/TW201034441A/en
Publication of TW201034441A publication Critical patent/TW201034441A/en

Links

Landscapes

  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

A method of establishing depth of field data for a three-dimensional (3D) image, and a system thereof, applied to a 3D image having a first eye image and a second eye image. The system includes an offset vector matrix, an offset operator, and a comparator. The offset vector matrix includes data fields equal in number to, and corresponding in location with, the pixels of the first eye image. The offset operator takes the a-th pixel of the first eye image as centre to divide out a base frame, and finds from the second eye image a target frame such that the base frame and the target frame have a minimum gray-level difference, from which an offset vector value is calculated. The comparator determines whether the offset vector values of all a-th pixels have been recorded, in order to convert the offset vector matrix into a depth map.

Description

201034441

VI. Description of the Invention:

[Technical Field]

The present invention relates to a method of establishing depth data, and in particular to a method and system that establish depth data by computing the offsets between two eye images of different viewing angles so as to obtain a depth map.

[Prior Art]

In general, a stereoscopic image is composed of two sets of image data of different viewing angles, one set corresponding to the left-eye viewing angle and the other to the right-eye viewing angle. The image corresponding to the left-eye viewing angle is called the left-eye image, and the image corresponding to the right-eye viewing angle is called the right-eye image.

In the prior art there are three main ways of producing stereoscopic images.

The first uses virtual-reality software to build a three-dimensional (3D) scene containing virtual characters, virtual objects, virtual buildings and so on, and then uses the camera suite of the virtual-reality software to shoot the scene from different viewing angles, the virtual camera being rotated about three mutually perpendicular axes under the control of the virtual-reality software.

The second shoots the same scene with two cameras at different positions to produce a left-eye image and a right-eye image. During playback the images are presented so that the viewer's left eye sees only the left-eye image and the right eye only the right-eye image; the viewer's brain fuses the two into stereoscopic vision, so that the viewer feels he is seeing a real solid object.

The third shoots a scene with a capture device that has an infrared sensor. The sensor emits infrared light, which is reflected when it strikes an object; the sensor receives the reflected light and, from conditions such as the time and frequency of reception, judges the distance between the scene and the capture device, determines the depth variation of the actual scene's outer contour, and thereby computes depth data for the scene to be integrated into the captured image.

However, building a stereoscopic scene with virtual-reality software and then shooting it requires the virtual scene to be designed and a 3D animation to be produced first, which is very time-consuming and cannot be applied to the shooting of actual objects (including people or articles).

Next, shooting two different-viewing-angle images of the same scene and combining them into a stereoscopic image lets the viewer perceive the solidity of objects, but such a stereoscopic image provides no depth data or depth signal.

Moreover, although a capture device with an infrared sensor can use infrared light to sense how near or far the scene is and so compute depth data, the sensing distance of an infrared sensor is quite limited; when the capture device is too far from the actual scene, the sensor cannot sense the depth variation of the scene's outer contour, and valid depth data cannot be obtained correctly.

How to obtain the depth data of a stereoscopic scene effectively is therefore an issue every manufacturer should consider.

[Summary of the Invention]

In view of the above, the problem the present invention seeks to solve is to provide a fast and effective method and system for obtaining the depth data of a stereoscopic image.

To solve the above method problem, the disclosed technique is a method of establishing depth data of a stereoscopic image, applied to a stereoscopic image comprising a first eye image and a second eye image. In this method, an offset vector matrix is established; the matrix comprises a plurality of data fields, each corresponding to one of the n first pixels of the first eye image, n being a natural number. The a-th first pixel of the first eye image is obtained, a being an integer between 1 and n. A base frame is established on the first eye image from a pixel-selection block; the base frame contains a plurality of first pixels and is centred on the a-th first pixel. According to the base frame of the a-th first pixel, a target frame is searched for in the second eye image; the target frame and the base frame have a minimum gray-level difference, from which an offset vector value is computed. In this way the offset vector value of every a-th first pixel is found and recorded in the offset vector matrix, and the offset vector matrix is converted into a depth map.

To solve the above device problem, the disclosed technique is a system for establishing depth data of a stereoscopic image, applied to a stereoscopic image comprising a first eye image and a second eye image. The system comprises a storage module, an offset operator and a comparator.

The storage module records an offset vector matrix comprising a plurality of data fields, each corresponding to one of the n first pixels of the first eye image, n being a natural number. The offset operator establishes, from a pixel-selection block, a base frame on the first eye image covering a plurality of first pixels and centred on the a-th first pixel, and searches the second eye image for a target frame having a minimum gray-level difference with respect to the base frame, from which an offset vector value is computed. The comparator records the offset vector value of each first pixel in the corresponding data field of the offset vector matrix, and when it judges that the offset vector values of all a-th first pixels have been recorded in the matrix, converts the offset vector matrix into a depth map.

With the disclosed method and system, when conventional 3D left-eye and right-eye images are converted into 2D images the above depth map is produced rapidly, allowing an image display device to present, from the 2D image and the depth map, a stereoscopic image and the stereoscopic effect of a plurality of viewpoints of that image. Moreover, because the offset vector matrix records the offset vector value of each first pixel on the second eye image, the converted depth map, when combined with the original stereoscopic image, effectively improves the synthesis of the stereoscopic image. In addition, the disclosed method and system can process not only images produced by capture equipment but also moving or still frames that were not captured by a camera, further extending the practical scope and range of application of the invention.

[Embodiments]

To give a further understanding of the object, structural features and functions of the present invention, a detailed description is given below with the relevant embodiments and drawings.

Referring to Fig. 1, an example of the system block diagram of the invention, the system comprises a first imaging module 21, a second imaging module 22, a storage module 25, an offset operator 23 and a comparator 24.

The first imaging module 21 shoots a scene 1 to produce a first eye image 11, and the second imaging module 22 shoots the same scene 1 to produce a second eye image 12. The storage module 25 records an offset vector matrix 13 comprising a plurality of data fields, the number of which equals the number n of first pixels of the first eye image 11 for which offsets are to be computed.

The offset operator 23 takes the a-th first pixel 41 of the first eye image 11 as centre and establishes a base frame 31 from a pixel-selection block; besides the a-th first pixel 41 the base frame 31 covers several further first pixels. From this base frame 31 the offset operator 23 finds a target frame on the second eye image 12 whose second pixels have a minimum gray-level difference with respect to the first pixels of the base frame 31, and computes from the minimum gray-level difference an offset vector value of the a-th first pixel 41 on the second eye image 12.

The comparator 24 records each offset vector value in a data field of the offset vector matrix 13; that is, the offset vector value of the a-th first pixel 41 is recorded in the a-th data field. When the comparator 24 judges that every a-th data field has been filled with the offset vector value of the corresponding a-th first pixel 41, it converts the offset vector matrix 13 into a depth map.

Note that the pixels mentioned above may be ordinary pixels or sub-pixels.

Referring to Fig. 2, an example flowchart of the method of establishing depth data of a stereoscopic image of the invention (see also the system block diagram of Fig. 1): before the method is performed, the first imaging module 21 and the second imaging module 22 each shoot a scene 1 to form a stereoscopic image comprising a first eye image 11 and a second eye image 12. The first eye image 11 may be a left-eye image and the second eye image 12 a right-eye image, or vice versa; in this embodiment the right-eye image is taken as the first eye image 11 and the left-eye image as the second eye image 12. The method comprises the following steps.

An offset vector matrix 13 is established (step S110); it comprises a plurality of data fields corresponding to the n first pixels of the first eye image 11, n being a natural number. As in Fig. 1, a matrix is established in the storage module 25; it may be one-dimensional or two-dimensional, but the number of data fields must equal the number of first pixels of the first eye image 11 for which offset vectors are to be computed. Here both numbers are n.

The a-th first pixel 41 of the first eye image 11 is obtained (step S120), a being an integer between 1 and n. In this step the first pixels of the first eye image 11 are ordered from left to right and top to bottom: the top-left first pixel is the 1st first pixel and the bottom-right first pixel is the last, i.e. the n-th.

Taking the a-th first pixel 41 as centre, a base frame 31 is established on the first eye image 11 from a pixel-selection block (step S130); besides the centre, the base frame 31 contains further first pixels for gray-level comparison. The base frame 31 may be square, with a side of three, five, seven or nine pixels, i.e. an odd number of pixels.

Referring to Fig. 3, which illustrates the division of the base frame 31, in this embodiment the base frame 31 is a 5×5 square with the 1st first pixel at its centre. The base frame 31 may, however, extend beyond the boundary of the first eye image 11; in that case the values of the boundary first pixels of the first eye image 11 fill the part of the base frame 31 that lies outside. For example, with the first eye image 11 of size (x, y): where the base frame 31 extends above the image, the values of the first pixels from (0,0) to (x,0) fill in; beyond the left, those from (0,0) to (0,y); beyond the bottom, those from (0,y) to (x,y); and beyond the right, those from (x,0) to (x,y).

Referring to Fig. 4, an example of the structure of the base frame 31: the pixel coordinate of the a-th first pixel 41 of the first eye image 11 is R(i, j), i and j being natural numbers, where R indicates that the first eye image 11 of this embodiment is the right-eye image. The pixel coordinates covered by the base frame 31 therefore range from R(i−2, j−2) to R(i+2, j+2), ordered left to right and top to bottom. Supposing the current a-th first pixel is the 1st first pixel with coordinate (0,0), the base frame 31 covers the coordinates R(−2,−2) to R(2,2).

According to the base frame 31 of the a-th first pixel 41, a target frame is searched for in the second eye image 12, the target frame and the base frame 31 having a minimum gray-level difference between them (step S140).

Referring to Fig. 5, a detailed flowchart of the method, and Fig. 6, an example of the arrangement of the preselection frames 32 on the second eye image 12: in this step, a plurality of preselected second pixels 43 are obtained from the a-th second pixel 42 of the second eye image 12 and an offset pixel value (step S141). Let the offset pixel value be x; the preselected second pixels 43 range from the (a−x)-th to the (a+x)-th second pixel, x being an integer between 0 and n. Supposing the centre of the base frame 31 is the 1st first pixel and the offset pixel value is 10, the offset operator 23 selects the 1st second pixel of the second eye image 12 and takes the (1−10)-th through (1+10)-th second pixels as the preselected second pixels 43, i.e. the −9th to the 11th second pixels.

Taking each preselected second pixel 43 as centre, the offset operator 23 divides out a plurality of preselection frames 32 on the second eye image 12 according to the pixel-selection block, each preselection frame 32 containing a plurality of second pixels (step S142).

Referring to Fig. 7, an example of the structure of the preselection frame 32: in this embodiment each preselection frame 32 is constructed like the base frame 31 of Fig. 4, as a 5×5 square. Supposing the pixel coordinate of the a-th second pixel 42 of the second eye image 12 is L(i, j), i and j being natural numbers, L indicating that the second eye image 12 of this embodiment is the left-eye image, the second pixels contained in the preselection frame 32 of the a-th second pixel 42 have coordinates ranging from L(i−2, j−2) to L(i+2, j+2), ordered left to right and top to bottom. If the a-th second pixel is the 1st second pixel, at L(0,0), its preselection frame 32 covers L(−2,−2) to L(2,2). Likewise, when it is the 2nd second pixel, at L(1,0), the frame covers L(−1,−2) to L(3,2); when it is the 11th second pixel, at L(10,0), the frame covers L(8,−2) to L(12,2); and when it is the −9th second pixel, at L(−10,0), the frame covers L(−12,−2) to L(−8,2).

A preselection frame 32 may likewise extend beyond the boundary of the second eye image 12, in which case the pixel values at the boundary of the second eye image 12 fill in. For example, with the second eye image 12 of pixel size (p, q): where a preselection frame 32 extends above the image, the second pixels from (0,0) to (p,0) fill in; beyond the left, those from (0,0) to (0,q); beyond the bottom, those from (0,q) to (p,q); and beyond the right, those from (p,0) to (p,q).

The offset operator 23 matches the positions of all first pixels of the base frame 31 with all second pixels of each preselection frame 32, computes the gray-level differences of the position-matched pixels and sums them, obtaining a gray-level difference total for each preselection frame 32 (step S143). For example, the offset operator 23 obtains the gray-level values of all first pixels, R(−2,−2) to R(2,2), of the base frame 31 of the 1st first pixel, and selects any preselection frame 32, such as that of the 11th second pixel (offset pixel value x = 10), obtaining the gray-level values of all its second pixels.

Referring to Fig. 8, an example of the format coding of the pixel-selection block: as described above, the offset operator 23 divides the base frame 31 and the preselection frames 32 on the first eye image 11 and the second eye image 12 with the same pixel-selection block, so it computes the gray-level difference of each pair of first and second pixels whose positions correspond under the same format coding, and sums all the squared differences into the gray-level difference total of the corresponding preselection frame 32. The calculation formula is

D(x) = [L(i−2+x, j−2) − R(i−2, j−2)]² + [L(i−1+x, j−2) − R(i−1, j−2)]² + … + [L(i+x, j) − R(i, j)]² + … + [L(i+2+x, j+2) − R(i+2, j+2)]²

For instance, the gray-level difference total between the base frame 31 of the 1st first pixel and the preselection frame 32 of the 11th second pixel is

D(10) = [L(i−2+10, j−2) − R(i−2, j−2)]² + [L(i−1+10, j−2) − R(i−1, j−2)]² + … + [L(i+10, j) − R(i, j)]² + … + [L(i+2+10, j+2) − R(i+2, j+2)]²

Likewise, the gray-level difference totals between the base frame 31 of the 1st first pixel and the preselection frames 32 of the other preselected second pixels 43 (the 10th down to the −9th second pixels, i.e. offset pixel values 9 down to −10) are D(9), D(8), …, D(0), …, D(−8), D(−9) and D(−10), each obtained from the same formula with the corresponding offset.

The offset operator 23 takes the minimum gray-level difference value among all the totals; the preselection frame 32 to which this minimum belongs is the target frame (step S144).

The offset operator 23 then computes from the minimum gray-level difference the offset vector value of the a-th first pixel on the second eye image 12 (step S145). In this embodiment, supposing D(−8) is the minimum, −8 is the offset vector value of the 1st first pixel on the second eye image 12.

The comparator 24 records the offset vector value in the a-th data field of the offset vector matrix 13 (step S150). In this embodiment the a-th first pixel 41 is the 1st first pixel and the offset vector value obtained is the offset of the 1st first pixel on the second eye image 12, so the comparator 24 records the offset vector value of the 1st first pixel (−8, as above) in the 1st data field of the offset vector matrix 13.

The comparator 24 judges whether the offset vector values of all a-th first pixels 41 have been recorded in the offset vector matrix 13 (step S160). In this embodiment, the comparator 24 judges whether the a-th first pixel 41 currently used for the offset computation is the last first pixel of the first eye image 11, i.e. the n-th first pixel.

When the comparator 24 judges that the a-th first pixel 41 is not the n-th first pixel, the offset vector values of the first pixels of the first eye image 11 have not yet all been obtained. The comparator 24 makes the (a+1)-th first pixel the new a-th first pixel 41 (step S163); in the embodiment above, the original a-th first pixel 41 was the 1st first pixel, so the (a+1)-th first pixel is the 2nd first pixel. After step S163 the comparator 24 treats the 2nd first pixel as the a-th first pixel 41, the 3rd as the (a+1)-th, the 1st as the (a−1)-th, and so on. Steps S130 to S163 are then repeated until the offset vector values of all a-th first pixels 41 are fully recorded in the offset vector matrix 13.

When the comparator 24 judges that the a-th first pixel 41 is the n-th first pixel, the offset vector values of all first pixels have been recorded in the offset vector matrix 13, and the comparator 24 converts the offset vector matrix 13 into a depth map (step S162).

Referring to Fig. 9, an example of the offset vector matrix 13 of the invention, described here as a two-dimensional matrix A: the number of data fields is n, equal to the number of first pixels, and each data field is denoted A(i, j). As in Fig. 9, the data fields of the offset vector matrix 13 are arranged like the first pixels of the first eye image 11, left to right and top to bottom; each data field corresponds to a first pixel of the first eye image 11 and, as above, the offset vector value of the a-th first pixel 41 is recorded in the a-th data field. Each recorded offset vector value lies between the negative and positive offset pixel value, i.e. between −x and x. Supposing the offset pixel values range from −10 to 10 and the resolution of the first eye image 11 is 640×480, giving 307200 first pixels: if the offset vector value of the 1st first pixel is −8, the 1st data field is A(0,0) = −8. Likewise, if the offset vector value of the 640th first pixel is 6, the 640th data field is A(639,0) = 6; if that of the 641st first pixel is −7, the 641st data field is A(0,1) = −7; and so on, until the offset vector value of the 307200th first pixel, 9, gives the 307200th data field A(639,479) = 9. When every data field has recorded the offset vector value of its a-th first pixel 41, the offset vector matrix A can be regarded as a preliminary depth map A, which an image display device combines with the first eye image 11 and the second eye image 12 to form an image with depth.

Referring to Fig. 10, an example of the offset vector matrix Z (see also Fig. 9 and Fig. 11, the latter being another example of the method of establishing depth data of a stereoscopic image): lest other manufacturers or image display devices lack the ability to use depth map A, before the comparator 24 converts the offset vector matrix into a depth map (step S162), the comparator 24 may convert all offset vector values of the offset vector matrix 13 into gray-level values conforming to a gray-level recording rule (step S161). The conversion formula is

Z(i, j) = [A(i, j) + x] × (255 / 2x)

where x is the offset pixel value and Z(i, j) denotes the offset vector matrix Z converted from the offset vector matrix A. Each offset vector value is thus converted into a gray-level value conforming to the gray-level numeric rule, an integer between 0 and 255. The comparator 24 then converts the offset vector matrix Z into a Z depth map; in general, the numeric offset vector matrix Z may itself be regarded as the Z depth map.

As shown in Fig. 10, in the original offset vector matrix A the 1st data field is A(0,0) = −8, the 640th A(639,0) = 6, the 641st A(0,1) = −7 and the 307200th A(639,479) = 9. After conversion into the offset vector matrix Z, the 1st data field is Z(0,0) = 25, the 640th Z(639,0) = 204, the 641st Z(0,1) = 38 and the 307200th Z(639,479) = 242. The offset vector matrix Z converted from matrix A, and its Z depth map, can then be used by other manufacturers or by commercially available image display devices.

Referring to Figs. 12, 13 and 14, which are, respectively, an example of the first eye image 11 of a shot scene 1, of the second eye image 12 of the scene, and of the Z depth map of the invention: in this embodiment the first eye image 11 is the right-eye image and the second eye image 12 the left-eye image. With the above method of establishing depth data and the system using it, the offset vector value of each first pixel of the first eye image 11 on the second eye image 12 can be computed and recorded as the offset vector matrix A. For the convenience of other manufacturers or image display devices, matrix A can be converted into the gray-level-format offset vector matrix Z, and then into the Z depth map shown in Fig. 14. Other manufacturers or image display devices can then combine the first eye image 11 and the second eye image 12 with the Z depth map to display a stereoscopic image with depth.

Referring in turn to Figs. 15, 16, 17 and 18, the first to fourth viewing-angle views, from right to left, of the stereoscopic image of the invention: comparing the marked regions of the four figures, when the viewer observes the stereoscopic image from different angles the different pixel offsets can clearly be seen, confirming the stereoscopic effect the image presents at different viewpoints.

Although the invention is disclosed above by the preferred embodiments, they are not intended to limit the invention; equivalent modifications and refinements made by those skilled in the art without departing from the spirit and scope of the invention remain within the scope of its patent protection.

[Brief Description of the Drawings]

Fig. 1 is an example of the system block diagram of the invention;
Fig. 2 is an example flowchart of the method of establishing depth data of the invention;
Fig. 3 illustrates the division of the base frame of the invention;
Fig. 4 is an example of the structure of the base frame of the invention;
Fig. 5 is a detailed flowchart of the method of establishing depth data of the invention;
Fig. 6 is an example of the arrangement of the preselection frames on the second eye image of the invention;
Fig. 7 is an example of the structure of the preselection frame of the invention;
Fig. 8 is an example of the format coding of the pixel-selection block of the invention;
Fig. 9 is an example of the offset vector matrix of the invention;
Fig. 10 is an example of the offset vector matrix Z of the invention;
Fig. 11 is another example of the method of establishing depth data of a stereoscopic image of the invention;
Fig. 12 is an example of the first eye image of a shot scene of the invention;
Fig. 13 is an example of the second eye image of a shot scene of the invention;
Fig. 14 is an example of the Z depth map of the invention;
Fig. 15 is the first viewing-angle view, from right to left, of the stereoscopic image of the invention;
Fig. 16 is the second viewing-angle view, from right to left, of the stereoscopic image of the invention;
Fig. 17 is the third viewing-angle view, from right to left, of the stereoscopic image of the invention; and
Fig. 18 is the fourth viewing-angle view, from right to left, of the stereoscopic image of the invention.

[Description of Reference Numerals]

1: scene; 11: first eye image; 12: second eye image; 13: offset vector matrix; 21: first imaging module; 22: second imaging module; 23: offset operator; 24: comparator; 25: storage module; 31: base frame; 32: preselection frame; 41: a-th first pixel; 42: a-th second pixel; 43: preselected second pixel; S110, S120, S130, S140, S150, S160, S161, S162, S163: steps.
The third: using a camera with an infrared sensor to shoot a scene, the infrared sensor emits an infrared light, the infrared light touches When the object is reflected, the infrared sensor receives the reflected infrared light, and determines the distance between the scene and the camera according to the time and frequency of receiving the infrared light, and determines the depth change of the outer contour of the actual scene, thereby calculating The depth data of the scene is integrated into the captured image. However, using the virtual reality software to create a stereoscopic scene and then shooting, you need to design the virtual scene and shoot 3D stereoscopic motion. It is very time consuming and cannot be applied to the shooting of actual objects (including human bodies or objects). Secondly, two different perspective images are taken for the same scene, and then synthesized into stereo images, and viewers can feel objects from the stereoscopic images. The three-dimensional image, but this stereo image does not have depth of field data or depth of field signals. Moreover, when shooting with an infrared sensor, the infrared light can be used to sense the depth and depth of the scene to calculate the relevant depth of field. However, the sensing distance of the infrared sensor is quite limited. When the camera is too far away from the actual scene, the infrared sensor cannot sense the depth change of the outer contour of the actual scene, that is, the effective depth of field data cannot be obtained correctly. How to effectively obtain the depth of field data of a stereoscopic scene is a subject that various manufacturers should consider. 201034441 [Invention] In view of the above, the problem to be solved by the present invention is to provide a fast and effective way to obtain depth of field data of a stereoscopic image. Method and system. 
To solve the above method problem, this issue The technical means provided by the present invention discloses a method for establishing depth of field data of a stereoscopic image, which is applied to a stereoscopic image, the stereoscopic image comprising a first eye image and a second eye image. In this method, an offset vector is established. a matrix, the offset vector matrix comprises a plurality of data blocks, each data column corresponds to n first pixels of the first eye image, and η is a natural number. The first first image of the first eye image is obtained. I, i is an integer between 1 and η. A reference frame is used to create a reference frame in the first eye image, and the reference frame includes a plurality of first pixels, and the first first element Centering on the reference frame of the first first element, searching for a target frame in the second eye image, the target frame and the reference frame having a minimum grayscale difference value to calculate a minimum grayscale difference value Offset vector value. In this way, the offset vector values corresponding to all the first first pixels are found, and ® is recorded in the offset vector matrix. Convert the offset vector matrix to a depth map. In order to solve the above-mentioned device problems, the technical means provided by the present invention discloses a stereoscopic image depth of field data establishing system for applying to a stereoscopic image, the stereoscopic image comprising a first eye image and a second eye image. The system includes a storage module, an offset operator and a comparator. The storage module is configured to record an offset vector matrix, the offset vector matrix includes a plurality of data blocks, and each data column corresponds to n first pixels of the first eye image, and η is a natural number. 
The offset operator is configured to create a reference frame in the first eye image according to the selected block of a pixel 6 201034441, the reference frame includes a plurality of first pixels, and is centered on the first first pixel, and according to the first a reference frame corresponding to the first element, searching for a target frame in the second eye image, the target frame and the reference frame having a minimum gray level difference to calculate an offset vector value according to the minimum gray level difference. The comparator is configured to record the offset vector values corresponding to the first pixels in the offset vector matrix, and determine that the offset vector values of each of the first first pixels are all recorded in the offset vector matrix. The conversion offset vector matrix is a depth map. The method and system of the present invention enable the conventional 3D left-eye image and the right-eye image to be quickly generated when converted into a 2D image, for the image display device to have a stereoscopic effect according to the 2D image and the depth map. A stereoscopic image and displaying a stereoscopic effect corresponding to a plurality of viewpoints of the stereoscopic image. Moreover, the offset vector matrix records the offset vector values of the first pixels on the second eye image, so that the converted depth map can effectively improve the synthesis effect of the stereo image when combined with the original stereo image. Moreover, the method and system disclosed by the present invention can not only process the image produced by the photographing device, but also process the dynamic kneading surface or the static kneading surface which is not photographed, further expanding the present invention. Practical range, applicable occasions and application level. 
[Embodiment] In order to further understand the end point, structural features and functions of the present invention, the following embodiments and drawings are described in detail as follows: 201034441 Please refer to FIG. 1 , which is an example of a block diagram of the system of the present invention. The system includes a first imaging module 21, a second imaging module 22, a storage module 25, an offset operator 23 and a comparator 24. The first imaging module 21 captures a scene 1 to generate a first eye image. The second imaging module 22 captures the same scene 1 to generate a second eye image 12. The storage module 25 is configured to record an offset vector matrix 13 comprising a plurality of data columns, the number of data columns being the same as the number of first pixels to be offset calculated by the first eye image 11 Here, set ❿ to η. The offset operator 23 centers on the first first pixel 41 of the first eye image 11 to establish a reference frame 31 according to a pixel selection block. The reference frame 31 is divided by the a first first element 41. In addition, there are multiple first pixels. Based on the reference frame 31, the offset operator 23 finds a target frame on the second eye image 12, and the second pixel of the target frame and the first pixel of the reference frame 31 have a minimum gray level. The difference is used to calculate an offset vector value of the a-th first pixel 41 on the second eye image 12 by the minimum gray-scale difference meter. The comparator 24 is for recording the offset vector values to the data field of the offset vector matrix 13, i.e., the offset vector value of the a-th first pixel 41 is recorded in the a-th data field. The comparator 24 determines that each of the a-th data blocks is filled with the offset vector value of each of the a-th first pixels 41, and converts the offset vector matrix 13 into a depth map. 
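The dataflow the offset operator and comparator carry out — for every pixel of the first eye image, find the offset whose block in the second eye image minimises the summed squared gray-level difference, and record it in the offset vector matrix — can be sketched compactly. This is a minimal illustrative sketch, not the patent's implementation: it assumes gray images stored as equal-sized nested Python lists, border-replicated sampling, a horizontal search range of ±x, and a (2*half+1)-pixel square frame; the function name is our own.

```python
def depth_from_stereo(right, left, x=10, half=2):
    """Offset-vector matrix (preliminary depth map) from a right/left
    image pair, by block matching with a search range of [-x, x]."""
    h, w = len(right), len(right[0])

    def px(img, r, c):
        # border-replicated fetch: out-of-range coordinates are clamped,
        # mirroring the boundary padding described in the embodiment
        return img[min(max(r, 0), h - 1)][min(max(c, 0), w - 1)]

    def cost(r, c, d):
        # summed squared gray-level difference for candidate offset d
        return sum((px(left, r + di, c + dj + d) - px(right, r + di, c + dj)) ** 2
                   for di in range(-half, half + 1)
                   for dj in range(-half, half + 1))

    # one offset vector value per first pixel, left-to-right, top-to-bottom
    return [[min(range(-x, x + 1), key=lambda d: cost(r, c, d))
             for c in range(w)] for r in range(h)]
```

A single-row pair whose feature is shifted by one pixel recovers an offset of 1 at the feature position, which shows the sign convention used here.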
Herein, the types of the above-mentioned respective halogens may be generally known as alizarin 8 201034441 or sub-halogen. Please refer to FIG. 2, which is an example of a flow chart of a method for establishing depth of field data of a stereoscopic image of the present invention. Please also refer to the system block diagram shown in Figure 1 for easy understanding. Before the method is implemented, the first imaging module 21 and the second imaging module 22 are respectively used to capture a scene 1 to form a stereoscopic image. The stereoscopic image includes a first eye image 11 and a second eye image 12. Here, the first eye image 11 is a left eye image, and the second eye image 12 is a right eye image; or the first eye image 11 is a right eye image, and the second eye image 12 is a left eye image. . In this embodiment, the right eye image is regarded as the first eye image 11, and the left eye image is regarded as the second eye image 12. The method comprises the steps of: establishing an offset vector matrix 13 (step S110), the offset vector matrix 13 comprising a plurality of data columns, the data column corresponding to the first first pixels of the first eye image 11, and η being a natural number . As shown in FIG. 1, a matrix is established in the storage module 25, and the matrix may be a one-dimensional matrix or a two-dimensional matrix, but the data field needs to be the first element in the first eye image 11 to calculate an offset vector. The number of ® is high, or equal. Here, the matrix is the offset vector matrix 13, and the number of data fields is η, and the number of first pixels to be calculated by the first eye image 11 is η. The a-th first element 41 of one of the first-eye images 11 is obtained (step S120), and i is an integer between 1 and η. 
In this step, the first pixel arrangement of the first-eye image 11 is from left to right, from top to bottom, and the first pixel on the upper left is the first first element of the first eye image 11, The first element in the lower right is the last first element of the first eye image 11. 9 201034441 Taking the first a-thoratin 41 as the center of the heart, a reference frame 31 is created on the first eye image 11 according to the pixel selection block (step S130), and the reference frame 31 has a plurality of hearts for gray scale comparison. The first pixel. The reference frame 31 can be square and can be three pixels long, five pixel lengths, seven pixel lengths or nine pixel lengths, i.e., a single pixel length. Please refer to FIG. 3, which is a schematic diagram of the reference frame 31 of the present invention. In the embodiment, the reference frame 31 is a square of 5x5, and the first one is a ❹~. However, the reference; ^ 31 may exceed the boundary of the first-eye image u, where the first pixel of the boundary of the first-eye image 11 may be added to the range of the reference frame 3! For example, if the pixel selection block of the first eye image 11 is (x, y), if the reference frame 31 is beyond the first eye and the top of the image, the first is (〇, 〇) to (10). The value of the element is =, and when it is beyond the left side of the first-eye image U, the value is complemented by the first pixel of (〇〇) to (〇, y), and when it is below the first-eye image n,第, y) to (x, y) of the 昼-昼 is numerically replenished, beyond the tree on the first eye image 11, and is numerically complemented by the first pixel of (10) to (x, y). Please refer to FIG. 4' which is an example of the structure of the reference frame 31 of the date of the month. The pixel coordinate value of the a-th first pixel 41 of the first-eye image U is R (10), where U is a natural number, and R represents that the first eye shadow 11 of the present embodiment is a right-eye image. 
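The Fig. 3 boundary handling — filling the part of a frame that lies outside the image with the values of the boundary pixels — amounts to clamped, edge-replicated indexing. A small sketch under the assumption of row-major nested lists; the function names are illustrative, not from the patent:

```python
def clamp(v, lo, hi):
    # restrict an index to the valid range [lo, hi]
    return max(lo, min(v, hi))

def frame(img, ci, cj, half=2):
    """Extract the (2*half+1) x (2*half+1) frame centred on (ci, cj).
    Coordinates outside the image are clamped to the border, so the
    boundary pixels' values fill the part of the frame that would
    otherwise fall outside."""
    h, w = len(img), len(img[0])
    return [[img[clamp(ci + di, 0, h - 1)][clamp(cj + dj, 0, w - 1)]
             for dj in range(-half, half + 1)]
            for di in range(-half, half + 1)]
```

On a 2×2 image, a 3×3 frame centred on the top-left pixel repeats the first row and column, exactly the replication the text describes.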
Therefore, all the 第 昼 昼 座 座 基准 基准 基准 基准 基准 , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , . Assume that the current a-th pixel is the ith 第-昼素' 昼 座 coordinates (〇, 〇), then all the 201034441 first 昼 昼 昼 座 座 基准 基准 基准 基准 2010 It is R(-2,-2) to R(2,2). According to the reference frame 31 to which the a first first element 41 belongs, a target frame is searched for in the second eye image 12, and a minimum grayscale difference value between the target frame and the reference frame 31 is obtained (step S140). Please refer to FIG. 5 , which is a detailed flowchart of the method for establishing the depth of field data of the present invention. Please refer to FIG. 6 for facilitating understanding. FIG. 6 is an example of a configuration diagram of the preselected frame 32 of the present invention in the second eye image 12 . In this step, a plurality of preselected second pixels 43 are obtained based on the a second second element 42 of the second eye image 12 and an offset pixel value (step S141). Let the offset pixel value be X, and the preselected second element 43 is selected from the ax second element to the a+x second element, where X is an integer between 0 and η . It is assumed that when the center of the reference frame 31 is the first first pixel and the offset pixel value is 10, the offset operator 23 selects the first second pixel from the second eye image 12, and The first 1-10 second halogen to the first + ten second halogen are used as the preselected second halogen 43 , that is, the -9th second halogen to the eleventh second halogen. The offset operator 23 is centered on each preselected second element 43 and divides a plurality of preselected boxes 32 in the second eye image 12 according to the pixel selection block. Each preselection box 32 includes a plurality of second elements (steps). S142). Please refer to FIG. 7, which is an example of a structural diagram of the preselection box 32 of the present invention. 
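The preselected second pixels of step S141 are simply the indices a−x through a+x; for a = 1 and x = 10 this gives the −9th through 11th second pixels, matching the worked example in the text (out-of-range indices are handled later by the border padding). A one-line sketch with an illustrative name:

```python
def preselect(a, x):
    """Indices of the preselected second pixels for the a-th pixel:
    the (a-x)-th through (a+x)-th, per step S141."""
    return list(range(a - x, a + x + 1))
```

There are always 2x+1 candidates, one preselection frame per candidate.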
In this embodiment, the structure of each preselection frame 32 is the same as the reference frame 31 shown in FIG. 4, which is a square of 5x5. Assume that the azimuth coordinate of the a second second element 42 of the second eye image 12 is L(i,j), where i,j is a natural number, and L represents the second eye image of the present embodiment 11 201034441 * 12 is the left eye image. - all the second pixels contained in the preselected box 32 to which the a second second element 42 belongs, whose pixel coordinate values range from L(i-2, j-2) to L(i+2, j+ 2), the order is from left to right, from top to bottom. Assume that the current second and second morpheme are the first 昼-昼, and the pixel coordinates are L (〇, 〇), then all the seconds contained in the pre-selection box 32 of the i-th 昼-昼 所属The pixel of the pixel is L (-2, -2) to L (2, 2).瘳(五)理' When the second and the second element are the second element - the main element, the coordinates are [(Μ)', then the pre-capture 32 inner cover all ^ book = the prime coordinate range is called, sister L (3, 2). When the first: second morpheme, the enthalpy coordinates are L (1M), and the eigenvalues of all the second pixels in mi are ^, 2). When the a second second pixel is the _9th second syllabus), it is marked as L (-1 〇, 0), then the box 3 is preselected. :, Xinmiao Yi Yitong and * The second picture of all the paintings in the town. The heart value of the Sussex is L (-12, -2) to L (-8, 2). However, any of the preselected boxes 32 is beyond the second use of the boundary of the image 2 of the second eye image 12, and thus is used as a complement. (4) said that the included pixel value (P, q), if the length of the pixel of the preselected box 32 / shirt 12 is (〇, 〇) to _ the second check H 艮 secret 12, ] 2 . . . — 1 is numerically replenished. When it is beyond the left side of the _ 12th, it is numerically replenished with a second element that is beyond the second eye image 12' (4). 
The time is numerically replenished. When the second picture is above the first picture (1 to ru), the value is (p 〇) 12 201034441 to (p, q) - de * ang ~ 昼Replenishment. Offset transport crying 1 Preselection box 32 ° 23 Position all the first elements of the reference map individually with all the second elements of the first pixel of each φ, and calculate the position matching should wait for pre-selection The gray level difference of the second element is summed up to obtain the total number of gray level differences of the individual pair of examples I (step S143). The reference frame 31, the shifting operator 23 obtains the first first pixel. Each of the first _ contains all the first 昼 ' ', ie, R (-2, -2) to R (2, 2), preselected the second: the gray scale value corresponding to the morpheme. The offset operator 23 selects Pre-owned by any genus The box 32' takes the eleventh, second, and second columns (i.e., the offset pixel value χ=10), and the offset operator 23 selects the second element of the second element. Block 32, which contains all of the second examples of the figure, is shown in Fig. 8 which is the format coding block of the pixel selection block of the present invention in the flute. As described above, the 'offset operator 23 is selected by the same pixel. Pre-compared 3:7艮 image U and second eye image 12 are divided into reference frame _, so that the offset operator 23 will select the stone according to the pixel selection block of the pixel. That is, the first & plus: the calculation of the gray level difference corresponding to the pixel position, and then all the gray level difference values are added to the total value of the gray level difference corresponding to the preselected pivot 32. The calculation formula is as follows. 
, + [L(i + 2 + x,j + 2)~ R(i + 2,j + 2)]2 Check the '1st first-quality element frame 31 and the 11th The total grayscale difference between the preselection boxes 32 is 201034441 £)(10) = [L(i - 2 +1 〇, 7 - 2) -/?(/- 2, y - 2)]2 + [L(i -1 +10, y - 2) - R(i -1, y _ 2)f + ... + [L(i +10, _/) I and (i,J·)]2 + ...+ [I(i + 2 +10,)+ 2) - i?(i + 2, _/ + 2)]2 - Similarly, the first first element of the reference frame 31 and the other preselected second element 43 (ie the 10th second element The total grayscale difference between the preselected boxes 32 up to the -9th second element, offsetting the pixel value between -10 and 9 is = [L{i -2 + 9, j-2 )- R(i -2,j- 2)f + [L(i -1 + 9,7 - 2) - R(i -\,j~ 2)f + ...+ [Ζ〇· + 9, y )—Deng, y)]2 + ...+ [I(z_ + 2 + 9, + 2)-Qiu + 2, J + 2)]2 D(8) = [L(i - 2 + 8, j - 2) - R(i ~2,j- 2)f + [L(i -1 + 8,; - 2) -R(i ~\,j~2)]2 + ...+ [I〇· + 8, _/) - i?(z·,y)]2 + ...+ [I(!_ + 2 + 8, _/· + 2)-yeah + 2, + 2)]2 ❹ £>(9)=[Z</ - 2, y - 2) - Λ(ί - 2,7· - 2)]2 + between -1,/ - 2)----1, / - 2)]2 + .. + [L(i, j) - R(i, j)f + · · · + [L(i + 2,j + 2)~ R(i + 2,j + 2)]2 D(-8 ) = [L(i -2-8, j-2)-R(i -2,j-2)f + [L(i-1-S,j-2)-R(i 2)]2 . .. + [L{i - 8, j) - R(i, j)]2 + · · · + [£(/ + 2 - 8, y + 2) - R(i + 2,7 + 2) ]2 + £>(-9) =(10)-2-9,)-2)---2,y-2)]2+[you-ujuo· · ... + [L(i - 9, j) - R(i, 7)]2 + · · · + [L(i + 2 - 9s y + 2) - R(i + 2,j + 2)]2 + ❹ ... + [L{i -10, j) - R(i, 7)]2 + · · · + [L(i + ^~\0,j + 2) -R{i + 2, j + 2)] 2 , ) + The offset operator 23 obtains a minimum gray scale difference value from all the gray scale difference total values, and the minimum gray scale difference value belongs to the preselection box 32. That is the target box (Niu ^ S144). The offset operator 23 calculates the offset vector value of the ^th pixel-to-second image 12 from the minimum grayscale difference obtained (step s(4)). 
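Written out as code, the gray-level difference total D of step S143 and the minimum search of steps S144 and S145 look as follows. A sketch assuming the patent's coordinate convention (i horizontal, j vertical, images indexed as img[j][i]) and a pixel far enough from the border that no padding is needed; the function names are ours:

```python
def D(right, left, i, j, d, half=2):
    """D(d) = sum over the (2*half+1)-square window (5x5 in the patent)
    of [L(i+k+d, j+l) - R(i+k, j+l)]^2."""
    return sum((left[j + l][i + k + d] - right[j + l][i + k]) ** 2
               for l in range(-half, half + 1)
               for k in range(-half, half + 1))

def offset_vector(right, left, i, j, x, half=2):
    # steps S144/S145: the offset whose D total is minimal is the
    # offset vector value of pixel (i, j)
    costs = {d: D(right, left, i, j, d, half) for d in range(-x, x + 1)}
    return min(costs, key=costs.get)
```

For a feature shifted two pixels to the right between the right and left images, the minimum falls at d = 2.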
In the present embodiment, it is assumed that D(−8) is the minimum gray-level difference value, so −8 is the offset vector value of the a-th first pixel on the second-eye image 12. The comparator 24 records the offset vector value in the a-th data field of the offset vector matrix 13 (step S150). In this embodiment, the a-th first pixel 41 refers to the first first pixel, and the obtained offset vector value is accordingly the offset of the first first pixel on the second-eye image 12; the comparator 24 therefore records the offset vector value (−8 in the example above) corresponding to the first first pixel in the first data field of the offset vector matrix 13. The comparator 24 then judges whether the offset vector values of all of the a-th first pixels 41 have been recorded in the offset vector matrix 13 (step S160). In this embodiment, the comparator 24 determines whether the a-th first pixel 41 currently used for calculating the offset vector value is the last first pixel of the first-eye image 11, that is, the n-th first pixel. When the comparator 24 judges that the a-th first pixel 41 is not the n-th first pixel, the offset vector values of the first pixels of the first-eye image 11 have not yet all been obtained, and the comparator 24 takes the (a+1)-th first pixel as the a-th first pixel 41 (step S163). In the above embodiment, the a-th first pixel 41 is the first first pixel and the (a+1)-th first pixel is the second first pixel; after step S163, the comparator 24 regards the second first pixel as the a-th first pixel 41 and the third first pixel as the (a+1)-th first pixel, and so on. Thereafter, the comparator 24 re-executes steps S130 to S163 until the offset vector values of all of the a-th first pixels 41 are recorded in the offset vector matrix 13.
When the comparator 24 judges that the a-th first pixel 41 is the n-th first pixel, that is, the offset vector values of all of the first pixels have been recorded in the offset vector matrix 13, the comparator 24 converts the offset vector matrix 13 into a depth map (step S162). Please refer to FIG. 9, which is an example of a schematic diagram of the offset vector matrix 13 of the present invention. Here the matrix is expressed in two-dimensional form: let the offset vector matrix be A; then the number of data fields is n, equal to the number of first pixels, and each data field is denoted by the function A(i, j). As shown in FIG. 9, the data fields of the offset vector matrix 13 are arranged in the same order as the first pixels of the first-eye image 11, from left to right and from top to bottom, so that the data fields correspond one-to-one with the first pixels of the first-eye image 11. As described above, the offset vector value of the a-th first pixel is recorded in the a-th data field, and every recorded offset vector value lies between the positive and negative of the offset pixel value, that is, between −x and x. Assume the offset pixel value runs from −10 to 10 and the resolution of the first-eye image 11 is 640×480, giving 307,200 first pixels. If the offset vector value of the first first pixel is −8, the first data field is A(0, 0) = −8. Similarly, if the offset vector value of the 640th first pixel is 6, the 640th data field is A(639, 0) = 6; if the 641st first pixel has an offset vector value of −7, the 641st data field is A(0, 1) = −7; and so on, until the 307,200th first pixel has an offset vector value of 9, so the 307,200th data field is A(639, 479) = 9. When all of the data fields hold the offset vector values of the a-th first pixels 41, the offset vector matrix A can be regarded as a depth map A.
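The one-to-one correspondence between the 1-indexed a-th first pixel and the (i, j) data field of the offset vector matrix A is a row-major index split. A minimal sketch, assuming the 640×480 example above; the helper name `data_field_index` is an illustrative assumption, not a name from the patent:

```python
def data_field_index(a, width=640):
    # Map the 1-indexed a-th first pixel to the (i, j) data field of the
    # offset vector matrix A, laid out left-to-right, top-to-bottom.
    i = (a - 1) % width    # column index, 0 .. width-1
    j = (a - 1) // width   # row index
    return i, j
```

This reproduces the document's examples: pixel 1 maps to A(0, 0), pixel 640 to A(639, 0), pixel 641 to A(0, 1), and pixel 307,200 to A(639, 479).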
This preliminary depth map A can be used by an image display device, together with the first-eye image 11 and the second-eye image 12, to form a stereoscopic image with depth of field. Please refer to FIG. 10, which is an example of the offset vector matrix Z of the present invention, together with FIG. 9 and FIG. 11; FIG. 11 is another example of the flowchart of the method of establishing depth-of-field data for a stereoscopic image of the present invention. So that other manufacturers or image display devices can utilize the depth map A, before the comparator 24 converts the offset vector matrix into a depth map (step S162), the comparator 24 converts all of the offset vector values of the offset vector matrix 13 into a plurality of gray-scale difference values conforming to a gray-scale value recording rule (step S161). The conversion formula is as follows:

Z(i, j) = [A(i, j) + x] × (255 / 2x)

where x is the offset pixel value and Z(i, j) denotes the offset vector matrix Z obtained by converting the offset vector matrix A. Each offset vector value is converted into a gray-scale difference value conforming to the gray-scale value recording rule, each gray-scale difference value being an integer between 0 and 255. Thereafter, the comparator 24 performs step S162 to convert the offset vector matrix Z into a Z depth map; in effect, the offset vector matrix Z may be regarded as a quantized Z depth map. As shown in FIG. 10, in the original offset vector matrix A, the first data field is A(0, 0) = −8, the 640th data field A(639, 0) = 6, the 641st data field A(0, 1) = −7, and the 307,200th data field A(639, 479) = 9. When the offset vector matrix A is converted into the offset vector matrix Z, the first data field becomes Z(0, 0) = 25, the 640th data field Z(639, 0) = 204, the 641st data field Z(0, 1) = 38, and the 307,200th data field Z(639, 479) = 242.
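The conversion of step S161 can be sketched in a few lines of Python. The function name `to_gray` is an assumption, and the patent does not specify a rounding mode, only integer results in 0..255; truncation toward zero via `int()` happens to reproduce the Z values listed above:

```python
def to_gray(a_val, x=10):
    # Z(i, j) = [A(i, j) + x] * (255 / 2x): shift the offset vector value
    # from [-x, x] into [0, 2x], then scale onto the 0..255 gray range.
    return int((a_val + x) * (255 / (2 * x)))
```

With x = 10 this maps −8 to 25, 6 to 204, −7 to 38, and 9 to 242, matching FIG. 10, with −10 and 10 landing on the extremes 0 and 255.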
The offset vector matrix Z converted from the offset vector matrix A, and its Z depth map, can be used by other manufacturers or commercially available image display devices. Please refer to FIG. 12, FIG. 13, and FIG. 14: FIG. 12 is an example of the first-eye image 11 of a scene 1 of the present invention, FIG. 13 is an example of the second-eye image 12 of the scene of the present invention, and FIG. 14 is an example of a Z depth map of the present invention. In this embodiment, the first-eye image 11 is a right-eye image and the second-eye image 12 is a left-eye image. According to the above-described depth-of-field data establishing method and the system used therefor, the offset vector value of each of the first pixels of the first-eye image 11 on the second-eye image 12 can be calculated and recorded as the offset vector matrix A. For the convenience of other manufacturers or image display devices, the offset vector matrix A can be converted into an offset vector matrix Z conforming to the gray-scale format, which in turn can be converted into the Z depth map shown in FIG. 14. Thereafter, other manufacturers or image display devices can combine the Z depth map with the first-eye image 11 and the second-eye image 12 to display a stereoscopic image having depth of field. Please refer to FIG. 15, FIG. 16, FIG. 17, and FIG. 18: FIG. 15 is a first view of a stereoscopic image of the present invention from right to left, FIG. 16 is a second view of the stereoscopic image of the present invention from right to left, FIG. 17 is a third view of the stereoscopic image of the present invention from right to left, and FIG. 18 is a fourth view of the stereoscopic image of the present invention from right to left. As shown in FIG. 15 to FIG. 18, please refer to the contents of the red frames in the four figures.
When the viewer views the stereoscopic image from different angles, the different pixel offsets can be clearly seen, so the stereoscopic image presents different views. While the present invention has been described above in terms of its preferred embodiments, the description is not intended to limit the invention; any equivalent modification or variation made without departing from the spirit and scope of the present invention still falls within the scope of patent protection of the present invention.

[Brief Description of the Drawings]
FIG. 1 is an example of a block diagram of the system of the present invention;
FIG. 2 is an example of a flowchart of the method of establishing depth-of-field data of the present invention;
FIG. 3 is a schematic diagram of reference frame division of the present invention;
FIG. 5 is a detailed flowchart of the method of establishing depth-of-field data of the present invention;
FIG. 6 is an example of a configuration diagram of the preselection frames of the present invention in the second-eye image;
FIG. 7 is an example of a preselection frame structure diagram of the present invention;
FIG. 8 is an example of the format coding diagram of the pixel selection block of the present invention;
FIG. 9 is an example of the offset vector matrix diagram of the present invention;
FIG. 10 is an example of the offset vector matrix Z of the present invention;
FIG. 11 is another example of the flowchart of the method of establishing depth-of-field data for a stereoscopic image of the present invention;
FIG. 12 is an example of the first-eye image of a scene of the present invention;
FIG. 13 is an example of the second-eye image of a scene of the present invention;
FIG. 14 is an example of a Z depth map of the present invention;
FIG. 15 is a first view of a stereoscopic image of the present invention from right to left;
FIG. 16 is a second view of a stereoscopic image of the present invention from right to left;
FIG. 17 is a third view of a stereoscopic image of the present invention from right to left; and
FIG. 18 is a fourth view of a stereoscopic image of the present invention from right to left.

[Description of Main Component Symbols]
1 Scene
11 First-eye image
12 Second-eye image
13 Offset vector matrix
21 First imaging module
22 Second imaging module
23 Offset operator
24 Comparator
25 Storage module
31 Reference frame
32 Preselection frame
41 a-th first pixel
42 a-th second pixel
43 Preselected second pixel
S110, S120, S130, S140, S150, S160, S161, S162, S163: Steps

Claims (1)

Scope of the Patent Application:

1. A method of establishing depth-of-field data for a stereoscopic image, applied to a stereoscopic image comprising a first-eye image and a second-eye image, the method comprising:
establishing an offset vector matrix comprising a plurality of data fields, the data fields corresponding to n first pixels of the first-eye image, n being a natural number;
obtaining an a-th first pixel of the first-eye image, a being an integer between 1 and n;
taking the a-th first pixel as a center, establishing a reference frame in the first-eye image according to a pixel selection block, the reference frame comprising a plurality of the first pixels;
searching for a target frame in the second-eye image according to the reference frame to which the a-th first pixel belongs, a minimum gray-scale difference value existing between the target frame and the reference frame;
calculating an offset vector value of the a-th first pixel according to the minimum gray-scale difference value;
recording the offset vector value in an a-th data field of the offset vector matrix;
judging whether the offset vector values of all of the a-th first pixels have been recorded;
when it is judged that all have been recorded, converting the offset vector matrix into a depth map; and
when it is judged that not all have been recorded, taking an (a+1)-th first pixel as the a-th first pixel and returning to the step of establishing a reference frame in the first-eye image according to the pixel selection block.

2. The method of establishing depth-of-field data for a stereoscopic image of claim 1, wherein the first-eye image is a left-eye image and the second-eye image is a right-eye image.

3. The method of establishing depth-of-field data for a stereoscopic image of claim 1, wherein the first-eye image is a right-eye image and the second-eye image is a left-eye image.

4. The method of establishing depth-of-field data for a stereoscopic image of claim 1, wherein the step of searching for a target frame in the second-eye image according to the reference frame to which the a-th first pixel belongs comprises:
obtaining a plurality of preselected second pixels according to an a-th second pixel of the second-eye image and an offset pixel value;
taking each of the preselected second pixels as a center, establishing a plurality of preselection frames in the second-eye image according to the pixel selection block, each of the preselection frames comprising a plurality of second pixels;
position-matching the first pixels of the reference frame individually with the second pixels of each of the preselection frames, and calculating and summing the gray-scale differences between the position-matched first pixels and second pixels to obtain a plurality of gray-scale difference totals corresponding to the preselection frames;
obtaining the minimum gray-scale difference value from the gray-scale difference totals, the preselection frame to which the minimum gray-scale difference value belongs being the target frame; and
calculating the offset vector value according to the minimum gray-scale difference value.

5. The method of establishing depth-of-field data for a stereoscopic image of claim 4, wherein the offset pixel value is x, and the preselected second pixels are an (a−x)-th second pixel through an (a+x)-th second pixel, x being an integer between 0 and n.

6. The method of establishing depth-of-field data for a stereoscopic image of claim 5, wherein each offset vector value is an integer between −x and x.

7. The method of establishing depth-of-field data for a stereoscopic image of claim 1, wherein the pixel selection block is square, and the side length of the square is three pixels, five pixels, seven pixels, or nine pixels.

8. The method of establishing depth-of-field data for a stereoscopic image of claim 1, further comprising, before the step of converting the offset vector matrix into a depth map:
converting the offset vector values of the offset vector matrix into a plurality of gray-scale difference values conforming to a gray-scale format.

9. The method of establishing depth-of-field data for a stereoscopic image of claim 8, wherein each of the gray-scale difference values is an integer between 0 and 255.

10. A system for establishing depth-of-field data for a stereoscopic image, applied to a stereoscopic image comprising a first-eye image and a second-eye image, the system comprising:
a storage module for recording an offset vector matrix, the offset vector matrix comprising a plurality of data fields corresponding to n first pixels of the first-eye image, n being a natural number;
an offset operator for taking the a-th first pixel as a center and establishing a reference frame in the first-eye image according to a pixel selection block, the reference frame comprising a plurality of the first pixels, and for searching for a target frame in the second-eye image according to the reference frame to which the a-th first pixel belongs, the target frame and the reference frame having a minimum gray-scale difference value, so as to calculate an offset vector value of the a-th first pixel according to the minimum gray-scale difference value; and
a comparator for recording the offset vector value in an a-th data field of the offset vector matrix, for taking an (a+1)-th first pixel as the a-th first pixel and returning it to the offset operator when it is judged that the data fields of the offset vector matrix have not all been filled with values, and for converting the offset vector matrix into a depth map when it is judged that the offset vector values of all of the a-th first pixels have been recorded.

11. The system for establishing depth-of-field data for a stereoscopic image of claim 10, wherein the first-eye image is a left-eye image and the second-eye image is a right-eye image.

12. The system for establishing depth-of-field data for a stereoscopic image of claim 10, wherein the first-eye image is a right-eye image and the second-eye image is a left-eye image.

13. The system for establishing depth-of-field data for a stereoscopic image of claim 10, wherein, when searching for the target frame, the comparator performs the following steps:
obtaining a plurality of preselected second pixels according to an a-th second pixel of the second-eye image and an offset pixel value;
taking each of the preselected second pixels as a center, establishing a plurality of preselection frames in the second-eye image according to the pixel selection block, each of the preselection frames comprising a plurality of second pixels;
position-matching the first pixels of the reference frame individually with the second pixels of each of the preselection frames, and calculating and summing the gray-scale differences between the position-matched first pixels and second pixels to obtain a plurality of gray-scale difference totals corresponding to the preselection frames;
obtaining the minimum gray-scale difference value from the gray-scale difference totals, the preselection frame to which the minimum gray-scale difference value belongs being the target frame; and
calculating the offset vector value according to the minimum gray-scale difference value.

14. The system for establishing depth-of-field data for a stereoscopic image of claim 13, wherein the offset pixel value is x, and the preselected second pixels are an (a−x)-th second pixel through an (a+x)-th second pixel, x being an integer between 0 and n.

15. The system for establishing depth-of-field data for a stereoscopic image of claim 14, wherein the offset pixel value is x, and the preselected second pixels are an (a−x)-th second pixel through an (a+x)-th second pixel, x being an integer between 0 and n.

16. The system for establishing depth-of-field data for a stereoscopic image of claim 14, wherein the pixel length and pixel width of the pixel selection block are three pixels, five pixels, seven pixels, or nine pixels.

17. The system for establishing depth-of-field data for a stereoscopic image of claim 10, wherein, before converting the offset vector matrix into a depth map, the comparator converts the offset vector values of the offset vector matrix into a plurality of gray-scale difference values conforming to a gray-scale value recording rule.

18. The system for establishing depth-of-field data for a stereoscopic image of claim 17, wherein each of the gray-scale difference values is an integer between 0 and 255.
TW98108318A 2009-03-13 2009-03-13 Method of establishing the depth of filed data for three-dimensional (3D) image and a system thereof TW201034441A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW98108318A TW201034441A (en) 2009-03-13 2009-03-13 Method of establishing the depth of filed data for three-dimensional (3D) image and a system thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW98108318A TW201034441A (en) 2009-03-13 2009-03-13 Method of establishing the depth of filed data for three-dimensional (3D) image and a system thereof

Publications (1)

Publication Number Publication Date
TW201034441A true TW201034441A (en) 2010-09-16

Family

ID=44855510

Family Applications (1)

Application Number Title Priority Date Filing Date
TW98108318A TW201034441A (en) 2009-03-13 2009-03-13 Method of establishing the depth of filed data for three-dimensional (3D) image and a system thereof

Country Status (1)

Country Link
TW (1) TW201034441A (en)


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI686711B (en) * 2017-03-07 2020-03-01 美商谷歌有限責任公司 Method and computer storage medium for performing neural network computations, and neural network system
US10699182B2 (en) 2017-03-07 2020-06-30 Google Llc Depth concatenation using a matrix computation unit
US10896367B2 (en) 2017-03-07 2021-01-19 Google Llc Depth concatenation using a matrix computation unit

Similar Documents

Publication Publication Date Title
US9619933B2 (en) Model and sizing information from smartphone acquired image sequences
CN109615703B (en) Augmented reality image display method, device and equipment
US6160909A (en) Depth control for stereoscopic images
US7643025B2 (en) Method and apparatus for applying stereoscopic imagery to three-dimensionally defined substrates
CN102282857B (en) Imaging device and method
US10547822B2 (en) Image processing apparatus and method to generate high-definition viewpoint interpolation image
JP7058277B2 (en) Reconstruction method and reconfiguration device
US20120300041A1 (en) Image capturing device
CN102428707A (en) Stereovision-Image Position Matching Apparatus, Stereovision-Image Position Matching Method, And Program Therefor
TW201709718A (en) Method and apparatus for displaying a light field based image on a user's device, and corresponding computer program product
JP2020506487A (en) Apparatus and method for obtaining depth information from a scene
CN101729920B (en) Method for displaying stereoscopic video with free visual angles
US20100302234A1 (en) Method of establishing dof data of 3d image and system thereof
JP5467993B2 (en) Image processing apparatus, compound-eye digital camera, and program
JP7479729B2 (en) Three-dimensional representation method and device
JP5824953B2 (en) Image processing apparatus, image processing method, and imaging apparatus
WO2017183470A1 (en) Three-dimensional reconstruction method
WO2012153447A1 (en) Image processing device, image processing method, program, and integrated circuit
CN111108742A (en) Information processing device, information processing method, program, and interchangeable lens
JP2014095808A (en) Image creation method, image display method, image creation program, image creation system, and image display device
KR20230074179A (en) Techniques for processing multi-planar images
JP5929922B2 (en) Image processing apparatus and image processing method
JP4862004B2 (en) Depth data generation apparatus, depth data generation method, and program thereof
TW201034441A (en) Method of establishing the depth of filed data for three-dimensional (3D) image and a system thereof
JP2014164497A (en) Information processor, image processing method and program