TW201228382A - Capturing and processing of images using monolithic camera array with heterogeneous imagers
- Publication number
- TW201228382A (application TW99147177A)
- Authority
- TW
- Taiwan
- Prior art keywords
- lens
- imager
- array
- image
- imagers
- Prior art date
Landscapes
- Studio Devices (AREA)
- Transforming Light Signals Into Electric Signals (AREA)
- Solid State Image Pick-Up Elements (AREA)
Description
201228382 六、發明說明: 【發明所屬之技術領域】 本發明係關於一種包含複數個異質的成像器的影像感 測器,更特別地,關於一種具有内含不同架構的客製濾片、 感測器和光學儀器的複數個晶圓級成像器的影像感測器。 【先前技術】 影像感測器被使用於相機和其它成像裝置中以捕捉影 像。在一典型成像裝置中,光透過該成像裝置一端的開口(孔 徑)進入並由例如一透鏡的光學構件來導引至一影像感測 器。在多數成像裝置中,一或更多層光學構件被放置於該 孔徑和該影像感測器間以將光聚焦於該影像感測器上。該 影像感測器由透過該光學構件來接收光以產生訊號的像素 所構成》常用的影像感測器包含CCD(電荷耦合裝置)影像感 測器和CMOS(互補式金屬氧化物半導體)感測器。 濾片時常被運用於該影像感測器以選擇性地傳送某些 波長的光至像素上。一貝爾濾片馬赛克時常形成於該影像 感測器。’亥貝爾濾片係一彩色濾片陣列以將該些紅綠藍三 色遽片巾的一者安排於該些彩色像素中的每—個。該貝爾 濾片圖案包含百分之五十的綠色濾片、百分之二十五的紅 色慮片及百刀之—十五藍色濾片。既然每一個像素產生一 訊號以代表該光t的-色彩成分強度,但不是全部色彩範 圍,去馬賽克技術被執行以内㈣於每—個影像像素的— 組紅色、綠色及藍色值。 201228382 該些影像感測器受到各種執行效率限制。該些影像感 測器的執行效率限制尤其包含動態範圍、訊雜比(SNR)及低 光靈敏度。該動態範圍係定義為—像素可捕捉到的最大可 能訊號對該總雜訊的比值。典型土也,一影像感測器的井容 量限制該影像感測器可捕捉到的最大可能訊號。接著,該 最大可能訊號係視該入射照明強度及曝光持續期間(例如, 整合時間及快門寬度)而定。該動態範圍可以分貝W…表示 為一無維度數量: DR= 全井容量 均方根雜訊 公式(1) 典型地,在該捕捉影像中的雜訊位準影響該動態範圍 底限。因此,對於一八位开芎後‘ 位兀〜像而s,假設該均分根雜訊 立準為-位元,則該最佳例子會是48分貝。然而,實際上, 該f均分根雜訊位準係高於-位元,且這個進-步降低該 從衫像的訊雜比(SNR)在-大範圍上為-影像口 ㈣量值。大體上’該像素捕捉到更多光時該訊雜比更高二 一捕捉影像的訊雜比通常係關於該像素的集光能力。 大體亡,貞爾遽波感測器具有低光靈敏度。在低光位 低訊號:=Γ集光能力係受到射在每一個像素上的 …此外’在該像素上方的彩色據片進- 二後t錢素的訊號。IR(紅外線)據片同時降低來自近 紅外線訊號的光響應,其可攜帶有用資訊。 、 5 201228382 由於料限制天性之故,在為了行㈣統所設計的相 機中,這些影像感測器的執行效率限制被大大地放大。行 動相機的像素典型地係遠小於轂 , 於数位相機(DSC)的像素。由於 集光能力的限制'降低的訊雜& 叼㈣比 '該動態範圍的限制及降 低的低光場景敏感度之故,行動 仃動相棧中的相機顯現不良的 執行效率。 【發明内容】 相機陣列、包含一相機陣列的成像裝置及/或-種運用 複數個成像器來捕捉-影像的方法被揭示,其中,每一個 成像器包含複數個感測器構件及可根據本發明實施例來使 用於相機陣列中的透鏡堆疊陣列。該複數個成像器可包含 至少-第-成像器及一第二成像器,纟中,該第一成像器 及該第二成像器可具有相同成像特徵或不同成像特徵。 在-實施例中’該第一成像器和該第二成像器具有不 同成像特徵1些成像特徵尤其可包含該成像器尺寸 '該 成像器中所含像素類型、該成像器外形、與該成像器相關 的濾片、該成像器的曝光時間、與該成像器相關的孔徑大 小、與该成像器相關的光學構件架構、該成像器的增益、 忒成像器的解析度及該成像器的操作時序。 在一實施例中’該第一成像器包含用於傳送光譜的濾 片。該第二成像器也包含用於傳送與該第一成像器相同光 譜的同類型濾片,但卻捕捉由該第一成像器所捕捉影像中 進行次像素相位移所產生的影像。來自該第一成像器及該 201228382 第二成像器的影像係使用一超解析度方法來結合以得到更 高解析度的影像。 在一實施例中,該第一成像器包含用於傳送一第一光 "曰的第;慮片,且5亥第二成像器包含用於傳送一第二光譜 的第一濾片。來自第一及第二成像器的影像接著被處理以 得到一較高品質的影像。 在貫施例中’透鏡構件被提供以導引並聚光於該些 成像器上《該些透鏡構件構成透鏡堆疊以產生光學通道, 且每一個透鏡堆疊將光聚焦於一成像器上。因為每一個透 鏡構件係與一成像器有關’每一個透鏡構件可被設計並架 構以提供一窄光譜。進一步,該透鏡構件厚度可被降低以 減少該相機陣列的整體厚度。在這類實施例中,該些透鏡 構件可使用任何合適製造技術,例如,使用晶圓級光學 (WLO)技術、射出成型及/或玻璃模造來製造之。 在一實施例中,該複數個成像器包含專用於接收近 IR(紅外線)光譜的至少一近紅外線成像器。由該近紅外線成 像器所產生的影像可混合由具有彩色濾片的其它成像器所 產生的影像’以降低雜訊並增加該些影像的品質。在另一 這類實施例中’涵蓋包含遠紅外線及紫外線光譜的其它光 譜範圍的成像器也可被納入。 在一實施例中,該複數個成像器可結合提供變焦能力 的透鏡構件。在一這類實施例中,不同成像器可結合不同 焦距的透鏡以具有不同視野並提供不同程度的變焦能力。 不同視野也可使用具有不同感測器尺寸/格式的成像器,藉 g 7 201228382 由不同像素大小或不同像素/感光構件數量而得之。—機構 可被提供以提供自一變焦級至另一變焦級的平滑轉移。 在一或更多實施例中,該複數個成像器被協調操作以 得到一高動態範圍影像、一全景影像、一高光譜影像一 至物體的距離及一高晝面速率的視訊令至少其中之_。 根據本發明一實施例,一成像裝置包含至少—成像陣 列,且該陣列中的每一個成像器包括複數個感光構件及包 含至少一透鏡表面的一透鏡堆疊,其中,該透鏡堆疊被架 構以在該些感光構件上形成一影像;控制電路,被架構以 捕捉形成於該些成像器中的每一個成像器的感光構件上的 影像;及一超解析度處理模組,被架構以使用複數個捕捉 影像來產生至少一較高解析度超解像影像。 根據本發明另一實施例,一透鏡堆疊陣列包含形成於 被間隔物所分開的基板上的透鏡構件,其中,該些透鏡構 件、基板及間隔物被架構以構成複數個光學通道、位在每 一個光學通道内的至少一孔徑、位在每一個光學通道内的 至少一光譜濾片、和位在該透鏡堆疊陣咧内以光學性地隔 離3亥些光學通道的擋光材料,其中,每·^一個光譜減片被竿 構以通過一特定光譜帶。 該說明書所示特徵及優勢並未包括全部,尤其,基於 該些圖式、說明及申請專利範圍,許多額外特徵及優勢對 一熟知此項技術之人士會是顯而易見。甚至,應注意,該 ^明書所使用語言原則上係基於易讀及教學目的而選擇, 並非選來描述或限制本發明内容。 201228382 【實施方式】 現在本發明實施例 伯述,其中 似參考號指示一模一樣或功能類似的構士 ^ 圖形中,每-個參考號最左邊的數字^在°亥些 考號數字。 先被使用的參 〇貫施例關於使用捕捉使用不同成像特徵的複數個 個成像器所產生的影像的分佈式方法。可以每一個綱 捕捉被偏移一次像素量的影像的這類方式來架構每一個成 像器,像與其它成像器所捕捉的影像具有類似成像特 徵。母-個成像器也可包含具有不同濾片的獨立光學儀器 並以不同操作參數(例如’曝光時間)進行操作。該些成像器 所產生的不同影像被處理以得到一強化影像。在許多實施 例中,整合至每-個成像器中的獨立光學儀器係使用一透 鏡堆疊陣列來配置。該透鏡堆疊陣列可包含使用晶圓級光 學(WLO)技術所製造的一或更多光學構件。 一感測器構件或像素參考至一成像器内的個別感光構 件。該感光構件可為傳統CIS(互補式金屬氧化物半導體影 像感測盗)、CCD(電荷耦合裝置)、高動態範圍像素、多光 邊像素及其各種替代構件,但不限於此。 一感測器參考至用於捕捉由該成像器的光學儀器形成 於該感測器上的影像的二維像素陣列。每一個感測器的感 測器構件具有類似物理特性並透過相同光學元件來接收 光。進一步’每一個感測器内的感測器構件可結合相同的 彩色濾片。 201228382 一相機陣列參考至被計充杏 。… 1 又彳以兗田早-兀件的大量成像 窃。该相機陣列可被製造於單一晶片 々% # 平日日乃上从安裝或或設置於 各種裝置中。 相機陣列的陣列參考至二或更多相機陣列的聚集。二 ,更多相機陣列可共同操作以提供單—相機陣列的延伸功 能,例如,立體聲解析度之類β -成像器的成像特徵參考至與影像捕捉有關的成像器 
的任何特徵或參數。該成像特徵尤其可包含該成像器尺 :、該成像n中所含像素類型、該成像器外形、與該成像 益相關的遽片、該成像考的腹伞拉Ρ弓 广 又1豕态的曝先時間、與該成像器相關的 孔杻大小、與該成像器相關的光學構件架構(例如,構件數 量、該些透鏡表面的外形、輪廓及大小,包含曲率半徑' 非球狀係數、該些物鏡的焦距及視野、色彩校正、孔徑比/ 2距等等)、該成像器的增益、該成像器的解析度及該成像 器的操作時序。 相機陣列結構 圖1係根據一貫施例的具有成像器i Α至ΝΜ的相機陣 列1 〇〇的平面圖。s玄相機陣列100係製造於一半導體晶片 上以包含複數個成像器1A至NM。該些成像器1人至]^]^ 中的每一個可包含複數個像素(例如,0·32百萬像素)。在一 貫施例中,該些成像器1Α至ΝΜ被安排成圖1所示的網狀 格式。在其它實施例中,該些成像器係安排成一非網狀格 式。例如’該些成像器可被安排成一環狀圖案、鋸齒狀圖 案或散射圖案或包含次像素偏移的不規則圖案。 201228382 5亥相機陣列可包含二或更多異質成像器類型,每一個 成像益包含二或更多感測器構件或像素。該些成像器中的 每一個可具有不同成像特徵。替代性地,可具有二或更多 不同成像《類型’其中’相同成像器類型分享相同成像特 徵。 在貝施例中’每一個成像器ία至ΝΜ具有它自已的 濾片及/或光學構件(例如,透鏡)。特別地,該些成像器以 至ΝΜ中的每一個或一群成像器可結合光譜彩色濾片來接 收某些光波長。示範濾片包含在該貝爾圖案(紅色、綠色、 藍色或它們的補色(青、洋紅、黃)中所使用的傳統遽片、紅 外線遽片、近紅外、㈣片、偏光據片及適合高錢成像需 求的客製濾片。一些成像器可以沒有濾片以允許整個可見 光〜及近紅外線兩者的接收,其增加該成像器的訊雜比。 不同據片數量可與該相機陣列内的成像器數量一樣多。進 步,邊些成像器1Α至ΝΜ中的每一個或一群成像器可透 過具有不同光學特徵(例如,焦距)或不同孔徑大小的透鏡來 接收光。 a在-實施例中,該相機陣列包含其它相關電路。該其 匕電路尤其可包含控制成像參數的電路及感冑物理參數的 感測器。該控制電路可控制例如曝光時間、增錢黑階偏 移的成像參數。該感測器可包含暗像素以估測在操作溫度 時的暗電流。該暗電流可被測量以對該基板可能受到任何 熱潛變的損害進行連動補償。替代性地,W如因為該透鏡 材料的折射率變化之類與該光學儀器有關的熱效應補償可 201228382 藉由权正不同溫度的點擴散函數而得。 在一實施例中’用於控制成像參數的電路可單獨或以 同步方式來觸發母一個成像器。該相機陣列中的各種成像 器的曝光週期的啟動(類似於打開快門)可以一重疊方式來 交錯安排,使得該些場景被依序取樣而令一些成像器同時 曝光。在一傳統攝影機以每秒N次曝光來取樣一場景時, 每個樣本的曝光時間係限制為1/N秒》利用複數個成像器, 因為多個成像器可被操作以交錯安排方式來捕捉影像,故 沒有這類曝光時間的限制。 母一個成像器可被獨立操作。與每一個別成像器有關 的整體或多數操作可被個別處理。在一實施例中,一主設 定參數被程式化且每一個成像器的這類主設定參數的誤差 (也就是,偏移或增益)被架構。該些誤差可反應例如高動態 範圍、增益設定參數、整合時間設定參數、數位處理設定 參數或其結合的函數。這些誤差可標示該特定相機陣列為 一低位準(例如’該增益誤差)或一高位準(例如,該開放系 統連結編號上的差異,其接著自動轉換成用於增益的三角 積分、整合時間或内文/主控制暫存器所示的其它方面)。藉 由設定該些主控值及該些主控值的誤差,較高級的控制抽 象概念可被取得,有助於用於許多操作的較簡單程式模 組。在一實施例中,該些成像器的參數對於一目標應用而 言係任意固定。在另一實施例中,該些參數被架構以允許 尚度彈性及可程式性β 在一實施例中’該相機陣列被設計成一直接替代元件 12 201228382 以取代使用於手機及直它扞翻驻恶七以 口。心太日从'、匕订動裝置中的現有相機影像感測 斋。基於本w,儘管在許多攝影情形中,所得相機 的解析度可能超過傳統影像感測器,該相機陣列仍可被設201228382 VI. Description of the Invention: [Technical Field] The present invention relates to an image sensor comprising a plurality of heterogeneous imagers, and more particularly to a custom filter having a different architecture, sensing Image sensor for a plurality of wafer level imagers of optical instruments and optical instruments. [Prior Art] Image sensors are used in cameras and other imaging devices to capture images. In a typical imaging device, light enters through an opening (aperture) at one end of the imaging device and is guided to an image sensor by an optical member such as a lens. In most imaging devices, one or more layers of optical components are placed between the aperture and the image sensor to focus light onto the image sensor. The image sensor is composed of pixels that receive light through the optical member to generate a signal. A commonly used image sensor includes a CCD (Charge Coupled Device) image sensor and a CMOS (Complementary Metal Oxide Semiconductor) sensing. Device. Filters are often used in the image sensor to selectively deliver light of certain wavelengths to the pixel. A Bell filter mosaic is often formed in the image sensor. The Hebel filter is a color filter array to arrange one of the red, green and blue three-color wipes for each of the color pixels. The Bell filter pattern contains fifty percent green filter, twenty-five percent red patch, and one hundred and fifteen blue filter. Since each pixel produces a signal to represent the intensity of the color component of the light t, but not the full color range, the demosaicing technique is performed within (four) the red, green, and blue values of each image pixel. 201228382 These image sensors are subject to various execution efficiency limitations. 
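Before turning to those performance limits, the demosaicing step mentioned just above can be made concrete: behind a Bayer mosaic each pixel records only one colour component, so a full set of red, green and blue values has to be interpolated at every pixel location. The sketch below is a deliberately simple bilinear demosaicing routine in NumPy; the RGGB tiling and the function names are illustrative assumptions, not part of the patent text.

```python
import numpy as np

def box_sum_3x3(img):
    """Sum of the 3x3 neighbourhood around every pixel (zero padding)."""
    padded = np.pad(img, 1)
    h, w = img.shape
    out = np.zeros((h, w), dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += padded[dy:dy + h, dx:dx + w]
    return out

def demosaic_bilinear(raw):
    """Very simple bilinear demosaicing of an RGGB Bayer mosaic.

    raw: 2D array where rows 0, 2, ... hold R,G samples and rows 1, 3, ... hold G,B.
    Returns an (H, W, 3) RGB image; real pipelines use edge-aware interpolation.
    """
    h, w = raw.shape
    r_mask = np.zeros((h, w), dtype=bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), dtype=bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)

    rgb = np.zeros((h, w, 3), dtype=float)
    for channel, mask in enumerate((r_mask, g_mask, b_mask)):
        samples = np.where(mask, raw, 0.0)
        interpolated = box_sum_3x3(samples) / np.maximum(box_sum_3x3(mask.astype(float)), 1e-9)
        # Keep the measured value where this colour was actually sampled.
        rgb[..., channel] = np.where(mask, raw, interpolated)
    return rgb
```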
The performance efficiencies of these image sensors include, inter alia, dynamic range, signal-to-noise ratio (SNR), and low light sensitivity. The dynamic range is defined as the ratio of the maximum possible signal that the pixel can capture to the total noise. Typically, the image sensor's well capacity limits the maximum possible signal that the image sensor can capture. The maximum possible signal is then dependent on the intensity of the incident illumination and the duration of the exposure (e.g., integration time and shutter width). The dynamic range can be expressed in decibels W... as a dimensionless number: DR = full well capacity rms noise formula (1) Typically, the noise level in the captured image affects the dynamic range floor. Therefore, for an eight-bit ‘ ‘ 像 像 像 像 s, assuming that the average root noise is aligned as a bit, the best example would be 48 decibels. However, in practice, the f-averaged noise level is higher than the -bit, and this step-by-step reduces the signal-to-noise ratio (SNR) of the slave image over a large range - the image port (four) magnitude . In general, the signal-to-noise ratio is higher when the pixel captures more light. The signal-to-noise ratio of the captured image is usually related to the light collecting capability of the pixel. In general, the Muir Chopper sensor has low light sensitivity. In the low light level, the low signal: = Γ light collecting capability is received by each pixel. In addition, the color data above the pixel enters the signal of the second. The IR (infrared) film simultaneously reduces the photoresponse from the near-infrared signal, which can carry useful information. 5 201228382 Due to the limited nature of the material, the efficiency limits of these image sensors are greatly magnified in the camera designed for the line (4). The pixels of a mobile camera are typically much smaller than the hub, the pixels of a digital camera (DSC). Due to the limitation of the light collecting capability, the reduced signal & 叼 (4) is less effective than the 'dynamic range limitation and reduced low-light scene sensitivity, and the camera in the action phase stack exhibits poor execution efficiency. SUMMARY OF THE INVENTION A camera array, an imaging device including a camera array, and/or a method of capturing images using a plurality of imagers are disclosed, wherein each imager includes a plurality of sensor components and can be used according to the present invention. Inventive embodiments are used for lens stack arrays in camera arrays. The plurality of imagers can include at least a first-imager and a second imager, wherein the first imager and the second imager can have the same imaging features or different imaging features. In an embodiment, the first imager and the second imager have different imaging features, and the imaging features may include, in particular, the imager size, a pixel type contained in the imager, the imager shape, and the imaging Filter associated with the filter, exposure time of the imager, aperture size associated with the imager, optical component architecture associated with the imager, gain of the imager, resolution of the imager, and operation of the imager Timing. In an embodiment the first imager comprises a filter for transmitting a spectrum. The second imager also includes the same type of filter for transmitting the same spectrum as the first imager, but captures images resulting from sub-pixel phase shifts in the image captured by the first imager. 
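The dynamic-range expression labelled formula (1) above reads, in conventional notation, DR(dB) = 20 * log10(full-well capacity / RMS noise), which is where the quoted best case of roughly 48 dB for an 8-bit signal with one bit of RMS noise comes from. A small check of that arithmetic follows; the second pair of well-capacity and noise figures is assumed purely for illustration and does not come from the patent.

```python
import math

def dynamic_range_db(full_well_e, rms_noise_e):
    """Pixel dynamic range in dB, per DR = 20 * log10(full well / RMS noise)."""
    return 20.0 * math.log10(full_well_e / rms_noise_e)

# Best case quoted in the text: 8-bit signal with 1 LSB of RMS noise -> about 48 dB.
print(round(dynamic_range_db(256, 1), 1))    # 48.2
# Illustrative (assumed) small mobile pixel: 4500 e- well, 3 e- RMS noise.
print(round(dynamic_range_db(4500, 3), 1))   # 63.5
```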
Images from the first imager and the 201228382 second imager are combined using a super-resolution method to obtain a higher resolution image. In one embodiment, the first imager includes a first sheet for transmitting a first light ", and the second imager includes a first filter for transmitting a second spectrum. Images from the first and second imagers are then processed to obtain a higher quality image. In a preferred embodiment, 'lens members are provided to direct and condense on the imagers." The lens members form a lens stack to create optical channels, and each lens stack focuses light onto an imager. Because each lens member is associated with an imager' each lens member can be designed and framed to provide a narrow spectrum. Further, the thickness of the lens member can be reduced to reduce the overall thickness of the camera array. In such embodiments, the lens members can be fabricated using any suitable fabrication technique, for example, using wafer level optical (WLO) technology, injection molding, and/or glass molding. In an embodiment, the plurality of imagers comprise at least one near infrared imager dedicated to receiving a near IR (infrared) spectrum. The image produced by the near infrared imager can be blended with images produced by other imagers having color filters to reduce noise and increase the quality of the images. Imagers that encompass other spectral ranges including far infrared and ultraviolet spectra may also be incorporated in another such embodiment. In an embodiment, the plurality of imagers can incorporate a lens member that provides zoom capability. In one such embodiment, different imagers can combine lenses of different focal lengths to have different fields of view and provide varying degrees of zoom capability. Imagers with different sensor sizes/formats can also be used for different fields of view, which are derived from different pixel sizes or different pixels/photosensitive members by g 7 201228382. - A mechanism can be provided to provide a smooth transition from one zoom level to another zoom level. In one or more embodiments, the plurality of imagers are coordinated to obtain at least one of a high dynamic range image, a panoramic image, a hyperspectral image, a distance to the object, and a high aspect rate video command. . In accordance with an embodiment of the invention, an imaging device includes at least an imaging array, and each of the imagers includes a plurality of photosensitive members and a lens stack including at least one lens surface, wherein the lens stack is structured to Forming an image on the photosensitive member; a control circuit configured to capture an image formed on the photosensitive member of each of the imagers; and an ultra-resolution processing module configured to use a plurality of The image is captured to produce at least one higher resolution super resolution image. In accordance with another embodiment of the present invention, a lens stack array includes lens members formed on a substrate separated by spacers, wherein the lens members, the substrate, and the spacers are structured to form a plurality of optical channels, each at each At least one aperture in an optical channel, at least one spectral filter positioned in each optical channel, and a light blocking material positioned within the lens stacking array to optically isolate three optical channels, wherein each • A spectral subtraction is clamped to pass a specific spectral band. 
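The summary above states that the sub-pixel-shifted images from several imagers are combined by a super-resolution procedure into one higher-resolution image, without giving the algorithm at this point. Purely as an illustration of the idea, under the simplest possible assumptions (known shifts that are multiples of the upsampling step, a single colour plane, no parallax correction or deblurring), a shift-and-add reconstruction might look like the sketch below; it is not the claimed method.

```python
import numpy as np

def shift_and_add(low_res_images, shifts, factor):
    """Naive super-resolution: place each low-res sample on an upsampled grid.

    low_res_images: list of 2D arrays, all the same shape.
    shifts: list of (dy, dx) sub-pixel offsets of each imager, in low-res pixels.
    factor: integer upsampling factor (e.g. 2 for a 2x2 group of imagers).
    Assumes the shifts are multiples of 1/factor; a real pipeline also resolves
    parallax and applies a deconvolution / regularisation step afterwards.
    """
    h, w = low_res_images[0].shape
    acc = np.zeros((h * factor, w * factor))
    cnt = np.zeros_like(acc)
    ys, xs = np.mgrid[0:h, 0:w]
    for img, (dy, dx) in zip(low_res_images, shifts):
        ty = np.clip(np.round(ys * factor + dy * factor).astype(int), 0, h * factor - 1)
        tx = np.clip(np.round(xs * factor + dx * factor).astype(int), 0, w * factor - 1)
        np.add.at(acc, (ty, tx), img)
        np.add.at(cnt, (ty, tx), 1.0)
    filled = cnt > 0
    acc[filled] /= cnt[filled]
    # Grid positions that no imager sampled are left at zero here; a real
    # implementation would interpolate them.
    return acc
```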
The features and advantages of the present invention are not intended to be exhaustive, and many additional features and advantages will be apparent to those skilled in the art. It should be noted that the language used in the specification is in principle selected based on the ease of reading and teaching purposes, and is not intended to describe or limit the invention. 201228382 [Embodiment] Now, the embodiment of the present invention is described above, wherein the reference number indicates the same or similar function. In the figure, the leftmost digit of each reference number is in the number of the number. The first discussed embodiment is a distributed method for capturing images produced by a plurality of imagers using different imaging features. Each of the imagers can be constructed in such a way that each image captures an image that is offset by a single pixel amount, like images similar to those captured by other imagers. The mother-imager can also include separate optical instruments with different filters and operate with different operating parameters (e.g., 'exposure time). The different images produced by the imagers are processed to obtain a enhanced image. In many embodiments, the individual optical instruments integrated into each of the imagers are configured using a mirror stack array. The array of lens stacks can include one or more optical components fabricated using wafer level optical (WLO) technology. A sensor member or pixel is referenced to an individual photosensitive member within an imager. The photosensitive member may be a conventional CIS (Complementary Metal Oxide Semiconductor Image Sensing), a CCD (Charge Coupled Device), a high dynamic range pixel, a multi-edge pixel, and various alternative members thereof, but is not limited thereto. A sensor is referenced to a two-dimensional array of pixels for capturing images formed by the optical instrument of the imager on the sensor. The sensor components of each sensor have similar physical properties and receive light through the same optical components. Further, the sensor components within each sensor can incorporate the same color filter. 201228382 A camera array reference to the apricot. ... 1 And I used a lot of imaging smashing in the field. The camera array can be fabricated from a single wafer #%# on a daily basis or installed in or installed in various devices. The array of camera arrays is referenced to the aggregation of two or more camera arrays. Second, more camera arrays can operate together to provide an extended function of the single-camera array, for example, imaging features of a beta-imager such as stereo resolution refer to any feature or parameter of the imager associated with the image capture. The imaging feature may include, in particular, the imager scale: a type of pixel included in the image n, a shape of the imager, a cymbal associated with the imaging benefit, and a wide and a slanted slanting umbrella of the imaging test Exposure time, aperture size associated with the imager, optical component architecture associated with the imager (eg, number of components, shape, contour, and size of the lens surfaces, including radius of curvature 'non-spherical coefficients, The focal length and field of view of the objective lens, color correction, aperture ratio / 2 distance, etc.), the gain of the imager, the resolution of the imager, and the operational timing of the imager. 
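One informal way to picture the heterogeneous array described above is as a grid of per-channel configuration records, one for each imager, holding the characteristics just enumerated (filter, resolution, pixel pitch, exposure, gain, optics). The field names, numbers and the 5x5 layout below are assumptions made only for illustration and are not definitions from the patent.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class ImagerConfig:
    """Configuration of one imager (one optical channel plus its sensor)."""
    spectral_filter: str          # e.g. "red", "green", "blue", "near-IR", "none"
    resolution: Tuple[int, int]   # pixels (width, height)
    pixel_pitch_um: float
    exposure_s: float
    analog_gain: float
    f_number: float
    focal_length_mm: float

# Purely illustrative 5x5 arrangement (G = green, R = red, B = blue, i = near-IR).
LAYOUT_5X5 = [
    "G i G i G".split(),
    "R G B G R".split(),
    "G i G i G".split(),
    "B G R G B".split(),
    "G i G i G".split(),
]

def build_array(layout):
    names = {"R": "red", "G": "green", "B": "blue", "i": "near-IR"}
    # All numeric values below are placeholders, not patent dimensions.
    return [[ImagerConfig(names[c], (640, 512), 1.4, 1 / 60, 1.0, 2.8, 2.0)
             for c in row] for row in layout]
```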
Camera Array Structure Figure 1 is a plan view of a camera array 1 具有 having imagers i Α to 根据 according to a consistent embodiment. The sinusoidal camera array 100 is fabricated on a semiconductor wafer to include a plurality of imagers 1A through NM. Each of the imagers 1 to ^^^ may comprise a plurality of pixels (eg, 0. 32 megapixels). In one embodiment, the imagers 1 to ΝΜ are arranged in the mesh format shown in Fig. 1. In other embodiments, the imagers are arranged in a non-mesh format. For example, the imagers can be arranged in an annular pattern, a zigzag pattern or a scattering pattern or an irregular pattern containing sub-pixel shifts. The 201228382 5 camera array can contain two or more heterogeneous imager types, each of which includes two or more sensor components or pixels. Each of the imagers can have different imaging features. Alternatively, there may be two or more different imaging "types" in which the same imager type shares the same imaging characteristics. In the case of Bayes, each imager has its own filter and/or optical member (e.g., a lens). In particular, the imagers may incorporate certain spectral wavelengths in conjunction with a spectral color filter for each or a group of imagers. The demonstration filter contains the traditional cymbal, infrared cymbal, near-infrared, (four), polarized film and high suitable for use in the Bell pattern (red, green, blue or their complementary colors (cyan, magenta, yellow). Custom imaging filters for money imaging. Some imagers may have no filter to allow reception of both the visible and near infrared rays, which increases the signal to noise ratio of the imager. The number of different images can be correlated with the camera array. The number of imagers is as much as it is advanced. Each of the imagers 1 to 1 or a group of imagers can receive light through lenses having different optical characteristics (eg, focal length) or different aperture sizes. The camera array includes other associated circuitry. The circuitry may include, inter alia, circuitry for controlling imaging parameters and sensors that sense physical parameters. The control circuitry may control imaging parameters such as exposure time, increased black-order offset. The sensor can include dark pixels to estimate dark current at the operating temperature. The dark current can be measured to damage the substrate that may be subject to any thermal creep. Motion compensation. Alternatively, W, such as the thermal effect compensation associated with the optical instrument due to changes in the refractive index of the lens material, may be obtained by weighting the point spread function at different temperatures. In one embodiment, 'for The circuitry that controls the imaging parameters can trigger the parent imager individually or in a synchronized manner. The activation of the exposure periods of various imagers in the camera array (similar to opening the shutter) can be staggered in an overlapping manner such that the scenes are Simultaneous sampling allows some imagers to be exposed at the same time. When a conventional camera samples a scene with N exposures per second, the exposure time of each sample is limited to 1/N second, using multiple imagers because multiple The imager can be operated to capture images in a staggered arrangement, so there is no such exposure time limitation. The parent imager can be operated independently. The overall or majority of operations associated with each individual imager can be processed individually. 
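As a rough illustration of the staggered-exposure idea described above, the sketch below computes overlapping exposure windows for a group of imagers so that the scene is sampled more often than any single exposure period would allow. All timing values are assumed, not taken from the patent.

```python
def staggered_schedule(num_imagers, frame_period_s, exposure_s):
    """Start and stop times for overlapping, staggered exposures in one period.

    Each imager starts frame_period_s / num_imagers after the previous one, so
    the scene is sampled num_imagers times per period even when each individual
    exposure is longer than that sampling interval.
    """
    step = frame_period_s / num_imagers
    return [(i, round(i * step, 6), round(i * step + exposure_s, 6))
            for i in range(num_imagers)]

# Example: 4 imagers, 1/30 s frame period, 1/60 s exposure each (windows overlap).
for imager, start, stop in staggered_schedule(4, 1 / 30, 1 / 60):
    print(f"imager {imager}: exposes {start:.4f}s - {stop:.4f}s")
```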
In an embodiment, a main set parameter is programmed and the error of such a main set parameter of each imager (ie, offset or increase) Is configured to reflect, for example, a high dynamic range, a gain setting parameter, an integrated time setting parameter, a digital processing setting parameter, or a combination thereof. These errors may indicate that the particular camera array is at a low level (eg, the gain Error) or a high level (eg, a difference in the open system link number, which is then automatically converted to a delta integral for gain, integration time, or other aspects as shown in the context/master control register). Setting the master values and the errors of the master values, higher level control abstractions can be obtained, facilitating simpler program modules for many operations. In one embodiment, the imagers are The parameters are arbitrarily fixed for a target application. In another embodiment, the parameters are architected to allow for flexibility and programmability. In one embodiment, the camera array is designed as a direct replacement component 12 201228382 Instead of using it on mobile phones and straightforward, it’s overwhelming. The heart is too far from the existing camera image in the ', 匕 装置 。 。. Based on this w, although in many photographic situations, the resolution of the resulting camera may exceed that of a conventional image sensor, the camera array can still be set
計成與大致相同解析声沾俏此& & a、, X 啊度的傳統影像感測器實體上相容。取 得該增加的執行效率優匏,椒械士 ^ 人午馒勢,根據本發明實施例的相機陣 相較於傳統影像感測器可包含較〉 ^3权 > 像素而得到相同或更佳 品質影像。替代性地,在兮忐德 在該成像益中的像素大小相較於傳 統影像感測器中的像素可被降低而取得可觀成果。 為了 傳統衫像感測器的原始像素總數卻不增加 _面積’經常用㈣些個別成像器的邏輯電路較佳地係限 制於該石夕面積内。方_杳# &丨1 ^ 貫施例中,許多像素控制邏輯電路 係單一函數集,共用於 、州於具有了應用於母一個成像器的較小 函數集的成像器中的全部式客勃。产士 — J王σ丨或夕數。在本實施例中,因為該 些成像器的資料輪出$合^ 出不會.4者地增加’故用於該成像器的 傳統外部介面可被使用。 在貫她例中,包含該些成像器的相機陣列取代M百 萬像素的傳統影像^器。該相機陣列包含Νχ像器, Μ 每個感測器包含iV像素。該相機陣列中的每一個成像器 也/、有/、所取代的傳統景》像感測器相同的長寬比。表1列 出取代傳統影像感測器的本發明相機陣列的示範架構。 13 201228382 表1The conventional image sensor that is calculated to be substantially identical to the resolution of the sound is a physical compatibility with the conventional image sensor of the & a, X degree. Obtaining the increased execution efficiency is superior, and the camera array according to the embodiment of the present invention may have the same or better than the conventional image sensor. Quality image. Alternatively, the pixel size in the imaging benefit of Jude can be reduced compared to the pixels in the conventional image sensor to achieve considerable results. In order to reduce the total number of original pixels of the conventional image sensor without increasing the area, the logic of the individual imagers is preferably limited to the area of the stone. __################################################################################################# Bo.产士 — J 王σ丨 or 夕数. In this embodiment, the conventional external interface for the imager can be used because the data of the imagers is not increased. In her example, a camera array containing these imagers replaces the traditional image device of M million pixels. The camera array includes a keyer, Μ each sensor contains iV pixels. Each imager in the camera array also has the same aspect ratio as the /, replaced by the traditional scene sensor. Table 1 lists an exemplary architecture of a camera array of the present invention that replaces conventional image sensors. 13 201228382 Table 1
表1中的超解析度係數係估測I,且該些有效解析度 值可依據處理所得的實際超解析度係數而有所不同。又 在該相機陣列内的成像器數量尤其可依據⑴解析度、 (ii)視差、(iii)靈敏度及(iv)動態範圍因素來決定。用於成像 器尺寸的第-因素係解析度。由解析度觀點來看,較佳成 像器數里範圍由2x2至6x6,此因大於6x6的陣列大小很可 能破壞頻率資訊而無法藉由該超解析度程序重新產生之 故。例如,配合2x2成像器的8百萬像素解析度會需要每 個成像态具有2百萬像素。類似地’配合5χ5成像器的8 百萬像素解析度會需要每一個成像器具有〇 32百萬像素。 在許多實施例巾’在該陣列中的成像器數量係依據一特定 應用需求來決定。 、可限制成像器數量的第二因素係視差及遮蔽議題。觀 於〜像中所捕捉的物體,該成像器視野被遮蔽的背景部 14 201228382 ^可被稱A遮蔽組當二成像器自二不同位置捕捉該物體 日寸’每-個成像器的遮蔽組係不同。因此,只可有一成像 益所捕捉的%景像素β A 了解決本遮蔽議題,—給予成像 态類型在一疋程度上要包含最少成像器組,並將該些成像 器對稱地分佈於該相機陣列的中心軸四周。 可對成像器數量設下限的第三因素係在低照明條件下 的感光度4題。為了改善低光靈敏度,用於偵測近紅外線 光《曰的成像器可被要求。在該相機陣列中的成像器數量可 能需要增加以容納這類近紅外線成像器。 決定該成像器尺寸的第四因素係動態範圍。為了提供 則目機陣列中的動態範圍,提供—些㈣遽片類型(色度或 党度)的成像器係有利的。每—個相同濾片類型成像器接著 可同時地搭配不同曝光時間來操作。^曝光時間所捕捉 到的影像可被處理以產生一高動態範圍影像。 依據這些因素,較佳成像器數量為2χ2至6χ6。4χ4及 5x5架構係比2χ2Λ3χ3架構更佳,此因前者很可能提供足 夠的成像器數量來解決遮蔽議題,增加感光度並增加該動 態範圍。此外,矩形陣列也是較佳的。同時,相較於該㈣ 陣列:需的計算負荷量,恢復這些陣列大小的解析度所需 的計算負荷量會是適度的。然而,大⑨W的陣列也許會 被使用以提供例如光學變焦及多光譜成像的額外特徵。在 此雖只描述正方形成像器’如同稍後會更詳加說明地,這 類成像器可具有不同X維及y維。 另一考量係專用於亮度取樣的成像器數量。藉由確保 15 201228382 該陣列中專用於近紅外線取樣的成像器不會降低該獲取的 解析度,來自該些近紅外線成像器的資訊被加至該些亮度 成像器所捕捉到的解析度中。基於本目的’至少百分之50 的成像器可被使用於取樣該亮度及/或近紅外線光譜。在一 4x4成像器的實施例中,4個成像器取樣亮度,4個成像器 取樣近紅外線,且剩餘8個成像器取樣二色度(紅色及藍 色)。在另一 5x5成像器的實施例中,9個成像器取樣亮度, 8個成像器取樣近紅外線,且剩餘8個成像器取樣二色度(紅 色及藍色進一步,具有這些濾片的成像器可被對稱安排 於該相機陣列内以對付因為視差所造成的遮蔽β在一進一 步的5x5成像器實施例中,17個成像器取樣亮度,4個成 像器取樣紅色’且4個成像器取樣藍色。 在一實施例中,在該相機陣列内的成像器係空間上彼 此分隔一預定距離。藉由增加該空間間隔,該些成像器所 捕捉到的影像間的視差會增加。該增加的視差在更精確距 離資訊係重要的地方是有利的。二成像器間的間隔也可增 加以接近一對人眼間的間隔。藉由接近人眼間的間隔,一 逼真立體3D影像可被提供以在一適當立體顯示裝置上呈現 該產生的影像。 在一實施例中’多個相機陣列被提供於一裝置上的不 同位置以克服空間限制。一相機陣列可被設計以安裝於一 有限工間而另一相機陣列可被放置於該裝置的另一有限空 間内。例如,若總共需要2〇個成像器但是可用空間只允許 在一裝置的每一側上提供一1x10成像器的相機陣列,每一 16 201228382 個包含10個成傻哭& t , 的一相機陣列可被放置於該裝置兩 可用空間上。每一個知秘咕, m 個相機陣列可被製造於一基板上且牢 固定至一裝置的主機4ε;4·、 機板或其它部件。此外,這類成像器不 具有同質尺寸且可能具有不^維和y維。自多個相機陣 列所收集的影像可被處理以產生想要的解析度和執行效率 的影像。 用於單-成像器的設計可被施用至各包含其它成像器 類型的不同相機陣列。在該相機陣列中的其它變數,例如, 卫間距離、形色濾片及相同或不同感測器的結合,可被修 改以產生具有不同成像特徵的相機陣列。在本方式中,相 機陣列的各式各樣混合可被產生並保有經濟規模的好處。 晶圓級光學整合 在貫施例中,該相機陣列運用晶圓級光學(WL〇)技 術。儘官在許多實施例中,類似光學通道可使用包含射出 成型、玻璃模造及/或這些技術與包含晶圓級光學技術的結 :的各種技術中的任一者來建構之,但不限於此。晶圓級 光學技術它本身係包括一些製程的技術,該些製程包含例 如在玻璃晶圓上模造光學儀器(例如,透鏡模組陣列及那些 透鏡陣列的陣列)、以適當間隔物堆疊那些晶圓(包含具有複 製於該基板任一側上的透鏡的晶圓)、不是在一晶圓級就是 在晶粒級接著將具有該成像器的光學儀器直接封裝至一整 體式整合模組中。 除了別的程序外,該晶圓級光學程序還可涉及使用一 鑽石車削模件在一玻璃基板上產生每一個聚合物透鏡構The super-resolution coefficients in Table 1 are estimates I, and the effective resolution values may vary depending on the actual super-resolution factor obtained by the process. The number of imagers in the camera array can be determined in particular by (1) resolution, (ii) parallax, (iii) sensitivity, and (iv) dynamic range factor. The factor-factor resolution for the size of the imager. From a resolution point of view, the preferred number of imagers ranges from 2x2 to 6x6, which is more likely to corrupt the frequency information than the 6x6 array size and cannot be regenerated by the hyper-resolution program. For example, an 8 megapixel resolution with a 2x2 imager would require 2 megapixels per imaging state. Similarly, the 8 megapixel resolution of a 5 χ 5 imager would require 成像 32 megapixels per imager. In many embodiments, the number of imagers in the array is determined by a particular application requirement. The second factor that limits the number of imagers is parallax and shadowing issues. Viewing the object captured in the image, the imager is shaded by the background portion 14 201228382 ^ can be called the A shadow group when the two imagers capture the object from two different positions of the shadow group of each imager The system is different. 
Therefore, there is only one % of the scene pixels β A captured by the imaging benefit to solve the problem of shielding. The imaging state type is to include a minimum of imager groups to a certain extent, and the imagers are symmetrically distributed to the camera array. Around the center axis. The third factor that sets the lower limit on the number of imagers is the sensitivity 4 under low lighting conditions. In order to improve the low light sensitivity, an imager for detecting near-infrared light can be required. The number of imagers in the camera array may need to be increased to accommodate such near infrared imagers. The fourth factor that determines the size of the imager is the dynamic range. In order to provide dynamic range in the eyepiece array, it is advantageous to provide an imager of (4) cymbal type (chroma or party). Each of the same filter type imagers can then be operated simultaneously with different exposure times. ^The image captured by the exposure time can be processed to produce a high dynamic range image. Based on these factors, the number of preferred imagers is 2χ2 to 6χ6. The 4χ4 and 5x5 architectures are better than the 2χ2Λ3χ3 architecture, which is likely to provide sufficient number of imagers to solve the shadowing problem, increase sensitivity and increase the dynamic range. In addition, a rectangular array is also preferred. At the same time, the amount of computational load required to restore the resolution of these array sizes would be modest compared to the (four) array: the amount of computational load required. However, large 9W arrays may be used to provide additional features such as optical zoom and multi-spectral imaging. Although only square imagers are described herein, such imagers may have different X and y dimensions as will be explained in more detail later. Another consideration is the number of imagers dedicated to luminance sampling. By ensuring that the imager dedicated to near-infrared sampling in the array does not reduce the resolution of the acquisition, the information from the near-infrared imagers is added to the resolution captured by the luminance imagers. At least 50 percent of the imager based on the present purpose can be used to sample the brightness and/or near infrared spectrum. In an embodiment of a 4x4 imager, four imagers sample the brightness, four imagers sample the near infrared, and the remaining eight imagers sample the two chrominance (red and blue). In another 5x5 imager embodiment, 9 imagers sample brightness, 8 imagers sample near infrared, and the remaining 8 imagers sample dichroism (red and blue further, imagers with these filters) Can be symmetrically arranged within the camera array to account for shadowing due to parallax. In a further 5x5 imager embodiment, 17 imagers sample brightness, 4 imagers sample red' and 4 imager samples blue In an embodiment, the imagers in the camera array are spatially separated from each other by a predetermined distance. By increasing the spatial spacing, the parallax between the images captured by the imagers is increased. Parallax is advantageous where the more precise distance information is important. The spacing between the two imagers can also be increased to approximate the spacing between a pair of human eyes. By approaching the interval between human eyes, a realistic stereoscopic 3D image can be provided. The resulting image is rendered on a suitable stereoscopic display device. 
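The spacing discussion above notes that a larger separation between imagers produces larger parallax and therefore more accurate distance information. Under the usual rectified pinhole-pair assumptions (which the patent does not spell out here), depth follows from disparity as Z = f * B / d; a minimal sketch with assumed example values:

```python
def depth_from_disparity(focal_length_px, baseline_m, disparity_px):
    """Distance to a point given its disparity between two rectified imagers.

    focal_length_px: focal length expressed in pixels.
    baseline_m:      centre-to-centre spacing of the two imagers, in metres.
    disparity_px:    horizontal shift of the point between the two images.
    """
    if disparity_px <= 0:
        return float("inf")  # at or beyond the range where parallax is measurable
    return focal_length_px * baseline_m / disparity_px

# Assumed values: 1400 px focal length, 2.5 mm baseline, 3.5 px disparity.
print(round(depth_from_disparity(1400, 0.0025, 3.5), 3), "m")  # 1.0 m
```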
In one embodiment, multiple camera arrays are provided at different locations on a device to overcome spatial limitations. The array can be designed to be mounted in a limited workspace while another camera array can be placed in another limited space of the device. For example, if a total of 2 imagers are required but the available space is only allowed for each device A camera array with a 1x10 imager is provided on the side, and each of the 16 201228382 arrays containing 10 idiots and amps can be placed on the two available spaces of the device. Each of the secrets, m cameras The array can be fabricated on a substrate and secured to a host 4 ε; 4, board or other component of a device. Furthermore, such imagers do not have a homogenous size and may have a non-dimensional and y-dimensional. The images collected by the array can be processed to produce images of desired resolution and efficiency of execution. The design for the single-imager can be applied to different camera arrays each containing other imager types. Other variables, such as inter-set distances, color filters, and combinations of the same or different sensors, can be modified to produce a camera array with different imaging features. In this manner, the camera array A wide variety of columns can be created and preserved on an economic scale. Wafer-level optical integration In a common example, the camera array utilizes wafer level optics (WL〇) technology. In many embodiments, Similar optical channels can be constructed using any of a variety of techniques including injection molding, glass molding, and/or these techniques and junctions containing wafer level optical technology: but not limited to this. Wafer-level optical technology itself Included are some process techniques, including, for example, molding optical instruments on a glass wafer (eg, an array of lens modules and arrays of those lens arrays), stacking those wafers with appropriate spacers (including having replicated on the substrate) The wafer of the lens on either side, either at the wafer level or at the grain level, then directly encapsulates the optical instrument with the imager into a monolithic integrated module. Among other procedures, the wafer level optical process may involve the use of a diamond turning module to create each polymer lens structure on a glass substrate.
S 17 201228382 件。更特別地’在晶圓級光學技術中的製程鏈大體上包含 產生一鑽石車削透鏡基板(在個別及陣列級兩者上),接著產 生負模來複製那個基板(亦稱之為壓模或工具),並接著最後 在一玻璃基板上形成一聚合物複本,其已利用例如孔徑、 擋光材料、濾片等類的適當支撐光學構件來構造。 圖2A係根據一實施例的具有晶圓級光學儀器2丨〇及一 感測器陣列230的相機陣列組件200的透視圖。該晶圓級 光干儀器210包含複數個透鏡構件220,每一個透鏡構件 220含有該感測器陣列230的二十五個成像器24〇中的一 者。/主思,相較於含有該整個感測器陣列2 3 〇的單一大型 透鏡,該相機陣列組件200具有佔據非常少空間的較小透 鏡構件陣列。也應注意,該些透鏡中的每一個可為一不同 類型。例如,每一個基板層級可包含繞射、折射、菲涅爾 或其結合的透鏡。應進一步注意,在該相機陣列内,一透 鏡構件220可包括彼此間軸向安排的一或多個獨立光學透 鏡構件。最後,應注意,對於多數透鏡材料而言,會是該 材料折射率的熱引發的變化,其必須校正以得到良好影像 品質。-溫度正規化程序會於稍後章節更詳加描述。圖2B 係根據一實施例的相機陣列組件25〇的剖面圖。該相機組 件250包含-頂部透鏡晶圓262、—底部透鏡晶圓268、形 成於其上的多個感測器和相關感光構件的基板278、及間隔 物258、264和270。該相機陣列组件⑽係封裝於一密封 物254内…光學頂部間隔& 258可放置於該密封物w 及該頂部透鏡晶圓262之間1而’它對於該相機組件25〇 18 201228382 的建構並不重要。光學構件288係形成於該頂部透鏡晶圓 262上。儘管圖2B中所示這些光學構件288係一模一樣, 但應了解,不同構件類型、大小和外形仍可被使用。一中 間間隔物264被放置於該頂部透鏡晶圓262和一底部透鏡 晶圓268之間。另一組光學構件286係形成於該底部透鏡 晶圓268上。一底部間隔物27〇被放置於該底部透鏡晶圓 268和該基板278之間。直通矽晶穿孔274也被提供至路徑 以傳送來自該些成像器的訊號。該頂部透鏡晶圓262可部 分塗佈著擋光材料284(見下面的討論)以擋光。該頂部透鏡 晶圓262中未塗佈著擋光材料284的部分充當讓光穿透而 至s玄底部透鏡晶圓268和該些感光構件的孔徑攔。雖然圖 2B所提供的本實施例中只顯示單一孔徑欄,但應了解,額 外孔徑欄可由配置於該相機組件的基板面中的任一者或全 部上的不透明層來形成以改善雜散光執行效率和降低光學 串a °光學串音抑制的完整討論係提供於下。此外,儘管 上面霄施例係顯示間隔物2 5 8、2 6 4和2 7 0,然該間隔物功 能也可藉由修改該些透鏡結構(或基板)以使該些透鏡可直 接相互連接而被直接實行。在這類實施例中,該透鏡高度 可被延伸’且該透鏡直接黏接至該上基板,藉此消除對間 隔物層的需求。 在圖2B實施例中,濾片282係形成於該底部透鏡晶圓 268上。擔光材料280也可塗佈於該底部透鏡268上以充.當 一光學隔離器。一擔光材料280也可塗佈於該基板278上 以保護該些感測器電子儀器遠離射入光線。間隔物2 8 3也 19 201228382 可置於该底部透鏡晶圓268和該基板278間及該些透鏡晶 圓262、268間。在許多實施例中,該間隔物283係類似於 該些間隔物264和270。在—些實施例中,每一間隔物層係 使用-單板來配置。雖未示於圖2B巾,但本發明許多實施 例也包含位在該頂部透鏡晶圓262的頂上的每一個光學通 道間的間隔物’其係類似於或配置於單層中在該透鏡堆疊 陣列邊緣處所示的間隔物258。如下所進一步討論地,該些 間隔物可由擋光材料所建構及/或塗佈擋光材料以隔離該晶 圓級光學儀器所形成的光學通道。基於本應用目的,合適 撞光材料可包含任何不透明材料,例如,像鈦和鉻的金屬 材料、或像黑鉻(鉻或鉻氧化物)或黑矽的這些材料的氧化 物、或像一黑基質聚合物(布魯爾科技公司的pSK2〇〇〇)的黑 色微粒填充光阻劑之類。該基板底部表面係涵蓋著一背部 重分佈層(“RDL”)及錫球276。在一實施例中,該相機陣列 組件250包含5x5成像器陣列。該相機陣列25〇具有一宽 度W為7.2毫米及一長度為8.6毫米。在該相機陣列中的每 一個成像器可具有一寬度3為K4毫米。該些光學元件的總 高度tl接近1.26毫米,且該相機陣列組件的總高度t2係小 於2毫米。不同透鏡設計可具有不同高度tl&t2 光學串音抑制 如上所述,該相機陣列組件250係Λ夕Vrn ^ & 保由多個成像器所構 成,如圖2A和圖2B所示地,每一個且古 取 昇有—相對應光學路 徑或通道以導引光自該場景經過該頂部透鏡晶圓262、該中 間間隔物264 '該底部透鏡晶圓268、該底部間隔物27〇至 20 201228382 形成置於該基板278上的感測器24〇的複數個感光構件 上。撞擊在任何特定成像器上的光只來自於它所指定的光 學路徑或通道對最終影像品質係重要的。當射在—成像器 頂部上的光同時被該陣列内的另一成像器的感光構件所: 收時,可視為發生光學串音。來自例如繞射的光學通道間 的任何串音及/或來自該相機内部構件的光散射可弓丨起在= 影像上的瑕疵。尤其,光學通道間的串音意謂著—成像°器 會感測來自該成像器上的來源通量,其係與那個僧測器影 像的重新建構位置和該影像位置不一致。這個導致遺 像資料並引進無法與真實影像資料做區分的重疊雜訊兩 者。據此,該相機陣列的所有光學通道應被光學隔離^使 來自-透鏡或光學通it的光線不能自—光學通道跨越至另 -光學通道。在圖2C所示實施例中,不透明間隔物28ι或 不透明垂直壁282係置於各光學通道284之間。儘管不透 明間隔物提供-光學串音抑制級,但不透明垂直壁係較 佳’此因在.這類實施财’基板間的空間和該些基板它們 本身的相關區段兩者係呈現不透明之故。 該光學串音抑制式不透明垂直壁可使用提供該相機陣 列組件286的光學通道284間的不透明表面或材料的引進 的任何合適技術來製造之。在—實施例中,該不透明垂直 壁係藉由將溝槽全部或部分引入至該相機陣列組件286的 透鏡堆疊288中而形成。較佳的,不要切割完整地透過該 透鏡堆疊的溝槽以保持該相機陣列組件的機械完整性。這 類溝槽可由例如使用一晶圓切割機(碟片/刀片)切入該透鏡 21 201228382 陣列堆疊286的前面或背部、或雷射切割技術、或水喷射 切割技術之類的任何合適技術所引人。—旦該些溝槽被形 成,它們被填充著一擋光材料。替代性地,該些溝槽的内 壁可塗佈著-擋光材料’且該溝槽的其餘部分可塗佈著呈 有低收縮特性的另一材料。如上所述,—擋光材料係任;可 不透月材料’例如,一金屬材料、一金屬氧化物、黑矽或 像一黑基質聚合物的黑微粒填充光阻劑之類。 在圖2D概示的另一實施例中,光學串音抑制係藉由產 生由一串堆疊孔徑所形成的虛擬不透明壁而得。在本實施 例中 串孔徑攔係藉由將該些基板塗佈著配備有一窄開 口或孔徑296的不透明層294來形成於該相機陣列組件292 的不同基板層級290上。若形成足夠的這些孔徑時,則可 模擬一不透明垂直壁所提供的光學隔離。在這類系統中, 一垂直壁會是在彼此頂上堆疊孔徑的數學限制。較佳地, 以彼此間分開充足空間的方式,儘可能提供許多孔徑以便 產生這類虛擬不透明壁。對於任何相機陣列組件而言,用 以形成這類虛擬垂直壁所需的不透明層數量和配置可透過 一光線執跡分析而定。 在圖2E概示的進一步實施例中,光學串音抑制係使用 由不透明材料所建構的間隔物295而得。在圖2F概示的再 一貫施例中,光學串音抑制係使用塗佈著一不透明塗層2 9 7 的間隔物296而得》圖2E和圖2F所示實施例包含與圖2D 所示堆疊孔徑294類似的堆疊孔徑294。在一些實施例中, 光學串音抑制係不使用堆疊孔徑而得。在許多實施例中, 22 201228382 各類擋光材料中的任一者可被使用於間隔物的建構或塗佈 以得到光學隔離。 透鏡特性 圖3A和圖3B係說明隨xy平面維度變化而改變的一透 牛π»度t。圖3B的透鏡構件320相較於圖3A的透鏡構 的比例為l/η。注意’在度量期間保持相同孔徑比以 象特H不改變係重要的。在該透鏡構件似的直徑[Μ :比該直徑q、n因子時,該透鏡構件32q的高度*也是 2透鏡構件310的高度小n因子。因此,藉由使用較小 ^鏡構件㈣列,該相機陣列組件的高度可被顯著地降 低 6亥相機陣列組件所降低的古洚-Γ 1丄 低的问度可破使用以設計具有例 
如改善的主光線角、減少的變 特性的平滑透鏡。 “及改善的色差之較佳光學 圖3C說明藉由降低該相機陣列組件厚度來改善一主光 、=CRA)。主光線角1係涵蓋-整個相機陣列的單一透鏡 的主光線角。雖然該主光線角可 早透鏡 透鏡間的距離來降低,但該: : 亥相機陣列和該 係大的,因而降低光學執行陣列的主光線角1 列中的成像—其係按二:::::: 設計。該主光線角2仍與該傳統相機陣列的主光::例來 同且該主光線角並未改善。然而,藉由如圖3二: 該成像器及該透鏡間的距離, 厅不地修改 角3相較於主光線肖1或主:拽陣列組件中的主光線 先線角2可被降低,而產生較S 17 201228382 pieces. More particularly, the process chain in wafer level optics generally involves producing a diamond turning lens substrate (both at the individual and array levels), and then creating a negative mode to replicate that substrate (also known as a stamp or Tool), and finally a polymer replica is formed on a glass substrate that has been constructed using suitable supporting optical members such as apertures, light blocking materials, filters, and the like. 2A is a perspective view of a camera array assembly 200 having a wafer level optical instrument 2 and a sensor array 230, in accordance with an embodiment. The wafer level optical dry instrument 210 includes a plurality of lens members 220, each lens member 220 containing one of twenty-five imagers 24 of the sensor array 230. / In conclusion, the camera array assembly 200 has a smaller array of lens members occupying very little space compared to a single large lens containing the entire sensor array 2 3 〇. It should also be noted that each of the lenses may be of a different type. For example, each substrate level can include a lens that is diffractive, refractive, Fresnel, or a combination thereof. It should be further noted that within the camera array, a lens member 220 can include one or more separate optical lens members that are axially arranged relative to one another. Finally, it should be noted that for most lens materials, it will be a thermally induced change in the refractive index of the material that must be corrected for good image quality. - The temperature normalization procedure will be described in more detail later. 2B is a cross-sectional view of a camera array assembly 25A in accordance with an embodiment. The camera assembly 250 includes a top lens wafer 262, a bottom lens wafer 268, a plurality of sensors and associated photosensitive member substrates 278 formed thereon, and spacers 258, 264, and 270. The camera array assembly (10) is packaged in a seal 254... an optical top spacer & 258 can be placed between the seal w and the top lens wafer 262 1 and it is constructed for the camera assembly 25〇18 201228382 not important. An optical member 288 is formed on the top lens wafer 262. Although the optical members 288 shown in Figure 2B are identical, it should be understood that different component types, sizes, and shapes can still be used. An intermediate spacer 264 is placed between the top lens wafer 262 and a bottom lens wafer 268. Another set of optical members 286 are formed on the bottom lens wafer 268. A bottom spacer 27 is placed between the bottom lens wafer 268 and the substrate 278. Straight through twin vias 274 are also provided to the path to transmit signals from the imagers. The top lens wafer 262 can be partially coated with a light blocking material 284 (discussed below) to block light. The portion of the top lens wafer 262 that is not coated with the light blocking material 284 acts as an aperture stop for light to pass through to the sinusoidal lens wafer 268 and the photosensitive members. Although only a single aperture bar is shown in the embodiment provided in FIG. 
2B, it should be understood that the additional aperture bar may be formed by an opaque layer disposed on any or all of the substrate faces of the camera assembly to improve stray light execution. A complete discussion of efficiency and reduced optical string a ° optical crosstalk suppression is provided below. In addition, although the above embodiments show the spacers 2 58 , 2 6 4 and 2 70 , the spacer function can also be modified by directly modifying the lens structures (or substrates) so that the lenses can be directly connected to each other. It was implemented directly. In such embodiments, the lens height can be extended' and the lens is directly bonded to the upper substrate, thereby eliminating the need for a spacer layer. In the embodiment of Figure 2B, a filter 282 is formed on the bottom lens wafer 268. Light carrying material 280 can also be applied to the bottom lens 268 to act as an optical isolator. A light-carrying material 280 can also be applied to the substrate 278 to protect the sensor electronics from incident light. Spacer 2 8 3 also 19 201228382 can be placed between the bottom lens wafer 268 and the substrate 278 and between the lens circles 262, 268. In many embodiments, the spacer 283 is similar to the spacers 264 and 270. In some embodiments, each spacer layer is configured using a - single board. Although not shown in FIG. 2B, many embodiments of the present invention also include spacers between each of the optical channels on top of the top lens wafer 262, which are similar or arranged in a single layer at the lens stack. A spacer 258 is shown at the edge of the array. As discussed further below, the spacers may be constructed of a light blocking material and/or coated with a light blocking material to isolate the optical channels formed by the wafer level optical instrument. Suitable light-blocking materials may comprise any opaque material, for example, metal materials such as titanium and chromium, or oxides of such materials as black chromium (chromium or chromium oxide) or black enamel, or like a black, for the purposes of this application. The matrix polymer (Purror Technologies' pSK2〇〇〇) is filled with a black particle-filled photoresist or the like. The bottom surface of the substrate covers a back redistribution layer ("RDL") and solder balls 276. In an embodiment, the camera array assembly 250 includes a 5x5 imager array. The camera array 25 has a width W of 7.2 mm and a length of 8.6 mm. Each imager in the camera array can have a width of 3 K4 mm. The total height t1 of the optical elements is approximately 1.26 mm and the total height t2 of the camera array assembly is less than 2 mm. Different lens designs may have different heights tl & t2 Optical crosstalk suppression As described above, the camera array assembly 250 is constructed of a plurality of imagers, as shown in Figures 2A and 2B, each And a corresponding optical path or channel to direct light from the scene through the top lens wafer 262, the intermediate spacer 264 'the bottom lens wafer 268, the bottom spacer 27 to 20 201228382 A plurality of photosensitive members of the sensor 24A disposed on the substrate 278 are formed. Light striking any particular imager comes only from the optical path or channel it specifies, which is important for the final image quality. When the light incident on the top of the imager is simultaneously received by the photosensitive member of another imager within the array: it can be considered to have an optical crosstalk. 
Any crosstalk from, for example, the diffracted optical channels and/or light scattering from the internal components of the camera can smash the 瑕疵 on the = image. In particular, crosstalk between optical channels means that the imager senses the source flux from the imager that is inconsistent with the reconstructed position of the image of the detector and the image location. This leads to imagery and the introduction of overlapping noise that cannot be distinguished from real imagery. Accordingly, all of the optical channels of the camera array should be optically isolated so that light from the lens or optical pass cannot pass from the optical channel to the other optical channel. In the embodiment illustrated in Figure 2C, an opaque spacer 28i or an opaque vertical wall 282 is disposed between each optical channel 284. Although the opaque spacer provides an optical crosstalk suppression stage, the opaque vertical wall system is preferred because the space between the substrate and the associated segments of the substrate itself are opaque. . The optical crosstalk suppression opaque vertical wall can be fabricated using any suitable technique that provides for the introduction of an opaque surface or material between the optical channels 284 of the camera array assembly 286. In an embodiment, the opaque vertical wall is formed by introducing all or a portion of the trench into the lens stack 288 of the camera array assembly 286. Preferably, the grooves that are completely through the stack of lenses are not cut to maintain the mechanical integrity of the camera array assembly. Such trenches may be derived, for example, by cutting into the front or back of the lens 21 201228382 array stack 286 using a wafer cutter (disc/blade), or by any suitable technique such as laser cutting techniques or water jet cutting techniques. people. Once the grooves are formed, they are filled with a light blocking material. Alternatively, the inner walls of the grooves may be coated with a -blocking material' and the remainder of the grooves may be coated with another material that exhibits low shrinkage characteristics. As described above, the light blocking material is optional; for example, a metal material, a metal oxide, black enamel or black particles like a black matrix polymer is filled with a photoresist or the like. In another embodiment, schematically illustrated in Figure 2D, optical crosstalk suppression is achieved by creating a virtual opaque wall formed by a series of stacked apertures. In this embodiment, the string apertures are formed on different substrate levels 290 of the camera array assembly 292 by coating the substrates with an opaque layer 294 provided with a narrow opening or aperture 296. If sufficient of these apertures are formed, the optical isolation provided by an opaque vertical wall can be simulated. In such systems, a vertical wall would be a mathematical limitation of stacking apertures on top of each other. Preferably, a plurality of apertures are provided as much as possible to create such virtual opaque walls in a manner that separates each other from ample space. For any camera array assembly, the number and configuration of opaque layers required to form such a virtual vertical wall can be determined by a ray tracing analysis. In a further embodiment as outlined in Figure 2E, optical crosstalk suppression is achieved using spacers 295 constructed of opaque materials. 
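The stacked-aperture scheme described above approximates an opaque vertical wall, and the text notes that the number and placement of the aperture layers can be verified by ray-tracing analysis. A heavily simplified 2-D version of such a check, with straight rays, no refraction, and assumed geometry (only the 1.26 mm optics height is borrowed from the dimensions quoted earlier), might look like this:

```python
import math

def reaches_neighbour(x0, angle_deg, aperture_layers, channel_half_width, sensor_depth):
    """True if a straight ray passes every aperture layer and lands outside its
    own channel at the sensor plane (i.e. would cause optical crosstalk).

    x0:              lateral entry position of the ray (0 = channel centre), mm.
    aperture_layers: (depth_mm, half_opening_mm) pairs, nearest layer first.
    All geometry values here are illustrative assumptions, not patent dimensions.
    """
    slope = math.tan(math.radians(angle_deg))
    for depth, half_opening in aperture_layers:
        if abs(x0 + slope * depth) > half_opening:
            return False  # absorbed by the opaque layer at this depth
    return abs(x0 + slope * sensor_depth) > channel_half_width

layers = [(0.2, 0.30), (0.6, 0.30), (1.0, 0.30)]        # three stacked apertures
print(reaches_neighbour(0.25, 30, layers, 0.35, 1.26))  # False: stopped at 0.2 mm
print(reaches_neighbour(0.25, 30, [], 0.35, 1.26))      # True: no apertures, crosstalk
```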
In the re-consistency example outlined in Figure 2F, the optical crosstalk suppression is obtained using a spacer 296 coated with an opaque coating 297. The embodiment shown in Figures 2E and 2F is shown in Figure 2D. Stack aperture 294 is similar to stacked aperture 294. In some embodiments, the optical crosstalk suppression is obtained without using a stacked aperture. In many embodiments, 22 201228382 any of a variety of light blocking materials can be used in the construction or coating of the spacer for optical isolation. Lens Characteristics Figs. 3A and 3B are diagrams showing a change in the xy plane as a function of the xy plane dimension. The ratio of the lens member 320 of Fig. 3B to the lens configuration of Fig. 3A is l/η. Note that it is important to keep the same aperture ratio during the measurement without changing the system. In the diameter of the lens member [Μ: the height of the lens member 32q is also smaller than the height of the lens member 32q by an factor of n. Therefore, by using a smaller mirror member (four) column, the height of the camera array assembly can be significantly reduced by the reduced resolution of the 6-camera array assembly. The low degree of difficulty can be broken to design with, for example, improvement. A smoothing lens with a dominant ray angle and reduced variability. "The preferred optics of the improved chromatic aberration Figure 3C illustrates the improvement of a main light, = CRA by reducing the thickness of the camera array assembly. The chief ray angle 1 covers the chief ray angle of a single lens of the entire camera array. The chief ray angle can be reduced by the distance between the early lens lenses, but this: : The camera array and the system are large, thus reducing the imaging in the 1st column of the chief ray angle of the optical execution array - it is pressed by two::::: : Design. The chief ray angle 2 is still the same as the main light of the conventional camera array: and the chief ray angle is not improved. However, by the distance between the imager and the lens, as shown in Fig. 3: The hall does not modify the angle 3 compared to the main ray xiao 1 or the main: 主 array component in the main ray first line angle 2 can be reduced, resulting in
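The lateral-colour discussion above explains that, because each channel images only a narrow band, each colour's slightly different magnification can be tolerated optically and then corrected computationally by mapping every colour plane back onto a common grid. A hedged sketch of that per-channel correction, using a pure rescaling about the optical centre with assumed calibration coefficients:

```python
import numpy as np

def rescale_about_center(img, magnification):
    """Resample one colour plane so its magnification matches a reference.

    The output pixel at distance r from the optical centre is fetched from
    distance r * magnification in the source image (nearest-neighbour for brevity).
    """
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.round(cy + (ys - cy) * magnification).astype(int), 0, h - 1)
    src_x = np.clip(np.round(cx + (xs - cx) * magnification).astype(int), 0, w - 1)
    return img[src_y, src_x]

# Assumed relative magnifications measured during calibration (green as reference).
relative_mag = {"red": 1.004, "green": 1.000, "blue": 0.997}

def register_colour_planes(planes):
    """planes: dict of colour name -> 2D array. Returns planes on a common scale."""
    return {name: rescale_about_center(img, relative_mag[name])
            for name, img in planes.items()}
```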
23 S 201228382 佳光學執行效率。如上所述,根據本發明的相機陣列具有 降低厚度需求,因此,該透鏡構件及該相機陣列間的距離 可被增加以改善該主光線角。接著,本降低的主光線角產 生一較低孔徑比及改進的調變轉移函數(MTF)。 尤其,相機設計所產生的議題之一係如何校正場曲 率。透過一透鏡所投射的影像不是平面,但具有一固有彎 曲表面。一種校正本場曲率的方式係將一厚的負透鏡構件 312接近或直接在該成像器表面314上定位。該負透鏡構件 使來自該影像的各種角度光束316變平坦,藉此對付該場 曲率問題。這類場平坦影像提供優良的影像執行效率,可 製造具有降低電晶體-電晶體邏輯電路需求的陣列相機,並 傳送非常同質的調變轉移函數β然而,本方法的一個問題 在於本場平坦方法本質上需要一高的主光線角。這個使該 技術不適用於多數相機;然而,本發明相機陣列可使用背 照式成像技術(B SI)。將該影像感測器定位在該基板後免除 該主光線角的需求,藉此可使用圖3D所示的負透鏡構件場 平坦方法。 s亥陣列相機的另一優勢關於色差。尤其,在—傳統多 色透鏡中,因為不同光波長至該透鏡的焦距係不同,故該 透鏡必須校正色差。因此,需要協調該些色波長中其中一 些的透鏡執行效率以取得可接受的整體色彩執行效率。藉 由製造每一個光學通道窄光譜帶,色差被降低及/或阻止,a 且每一個透鏡可被最佳化至—特定色波長。例如,接收可 見光或近紅外線光譜的成像器可具有為了本光譜帶所特定 24 201228382 最佳化的透鏡構件。對於偵測其它光譜的成像器而言,該 透鏡構件了被建構以具有例如曲率半徑的不同特性,如 此杈跨所有光波長的固定焦距被取得以接著使不同光譜 帶的聚焦平面係相同。橫跨不同光波長的聚焦平面的匹對 增加該成像器所捕捉影像的精準度並降低縱向色差。因為 每一個透鏡構件可被設計以導引一窄光譜帶,故缺乏色差 相伴意謂著該些透鏡構件承受較不嚴格的設計限制,但相 車乂於涵盍一廣大光譜的傳統透鏡構件卻產生較佳或等效的 執仃效率。尤其,不需要進行昂貴的色差平衡校正❶甚至, 簡單的透鏡大體上具有較佳調變轉移函數及較低孔徑比(較 同感光度)。應注意,雖然這些陣列相機所使用的透鏡相較 於傳統多色透鏡時具有小得多的色差,但仍設計每一個透 鏡聚焦於某波長頻寬。據此,在每一個實施例中,這些“單 色透鏡中的每·一個可藉由使用高及低阿貝數材料(不同光 4·色政)的結合而得最佳色彩校正。 不同波長的光於多色光學 ° 一透鏡的折射率係視穿 —透鏡會給予不同波長顏 色波長帶可具有較綠色稍 具有不同焦距(縱向色差)的 系統中所發生的不只是色差類型 透該透鏡的光波長而定。因此, 色不同的放大倍數。例如,該紅 小的放大倍數,且接著綠色可具有較藍色稍小的放大倍 數。若自這些不同光波長所取得的影像接著被重疊而未進 行校正,則該影像會因為該些不同色不會正確地重疊而失 去解析度。依據該材料特性’該色彩放大倍數的不同橫向 變形可被決定並校正。校正可藉由限制該些透鏡輪廓以使 5 25 201228382 每一種顏色具有相同放大倍數來完成,但是這樣降低可用 於透鏡製造的最大自由度,並降低最佳化調變轉移函數的 能力。據此,在一相機陣列實施例中,橫向變形係光學上 允許,並接著在計算成像後進行校正。該透鏡橫向顏色的 電子杈正貫際上可提供的系統執行效率的改善超越對於該 原始變形的簡單校正,此因這類校正直接改善該系統在多 色調變轉移函數方面的解析度。尤其,一透鏡中的橫向色 差可被視為該透鏡的色彩相關變形^藉由將一物體的所有 不同變形的單色影像映射回到相同矩形上,可在產生與該 單色者(不只是因為該個別色彩通道色彩模糊校正,也是因 為不同顏色的正確疊置之故)相同的多色調變轉移函數的全 彩影像中得到完美的重疊。 使用母一個透鏡被最佳化以配合一窄光譜帶來使用的 許多透鏡的又一優勢係在於沒有使用透鏡類型上的限制。 尤其’該陣列相機可使用於繞射、折射、菲涅爾透鏡或這 些透鏡類型的結合。繞射透鏡係吸引人的,因為它們可利 用一實質平坦光學構件來產生複合波前,且製造它們也相 當簡易之故。在傳統相機中,因為具有單一成像器機構且 該透鏡必須能夠有效地傳送一廣大光譜,故使用繞射透鏡 係不可行’且繞射透鏡在傳送窄光波長帶時係非常有效率 時’本最佳化範圍外的光波長的執行效率具有一陡續下 降。因為該目前相機的每一個陣列可被聚焦於^一窄光波長 上,故這些繞射透鏡的窄最佳化波長帶不是一限制因素。 較小透鏡構件的其它優勢尤其包含成本降低、材料量 26 201228382 降低及製造步驟減少。藉由提供在χ及y維(因而為Ik厚) 上的大小為l/η的的n2透鏡,用於製造該透鏡構件的晶圓 大小也可被降低。這個降低相當可觀的材料成本及數量。 進步,透鏡基板數量被降低,而致使製造步驟數降低及 伴隨的生產成本降低。提供至該些成像器的透鏡陣列所需 的配置精確度典型地不比一傳統成像器的例子更迫切,此 因根據本發明的相機陣列的像素大小實際上可與一傳統影 像感測器相同之故。此外,單色色差隨透鏡直徑而定。因 為陣列相機能夠使用較小透鏡,故現存任何色差係較小, 因而使用具有較簡單輪廓的透鏡係可行的。這個產生製造 品質較佳且成本較少同時存在的系統。較小尺寸透鏡也具 有:較小體積,其在製造期間產生較低下陷或收縮。收縮 對複製係有害的,此因它使想要的透鏡輪廓變形並導致對 於該製造者預先補償該預測的下陷水準以便校正該最終透 鏡外形的需求之故。本預先補償係難以控制。較低的下陷/ 收縮則不需具有這些嚴格的製造控制,又降低該些透鏡的 整體製造成本。 在一 把例中,該晶圓級光學製程包含··⑴在透鏡製 膜前,藉由電鍍透鏡構件攔將該些透鏡構件欄整合至該基 板上,及(u)蝕刻該基板中的孔洞並於該基板各處執行兩側 透鏡製膜技術。因為塑料及基板間不會引起指數不匹配, 故在該基板中的孔洞蝕刻係具優勢的,在本方式中,形成 所有透鏡構件(類似於塗黑透鏡邊緣)的天然攔的光吸收基 板可被使用。在一實施例中,濾片係該成像器的一部分。 27 201228382 在另一實施例中,濾片係一晶圓級光學子系統的一部分。 在一包含濾片的實施例中,因為當定位於離該成像感測器 表面一距離時’在那些濾片層中的小缺陷被平均分佈於所 有入射瞳位置上’因而較不可見,故將該濾片(不論為彩色 遽片陣列、紅外線濾片及/或可見光濾片)置入或接近該孔徑 欄表面而不在該成像感測器表面係較佳。 成像系統及處理程序 圖4係根據一實施例說明一成像系統4〇〇的功能性方 塊圖。該成像系統400除了其它元件外還可包含該相機陣 列4 1 0、一影像處理程序模組42〇及一控制器。該相機 陣列410包含如上參考至圖!和目2所詳述的二或更多成 像器。影I 412係由該相機陣列41〇中的二或更多成像器 所捕捉。 迓衩制器440係硬 子人肢、籾髖或其結合,川於 制該相機陣列41〇的錄操作參數。該控制器44〇接收 自一使用者或其它外部元件的輸人446並送出操作訊號4 來控制該相機陣列41〇。該控制器440也送出資訊444至 影像處理程序模組42G以協助該㈣像412的處理。 該影像處理程序模組42〇係硬體、初體、軟體或其 ===理自該相機陣列41G所接收的影像^影像: 像tnr處理例如下面參考至圖5料述的多⑷ 送或進—步處理。 ❹被送出以供顯示、儲存、彳 圖5根據一實施例說明該影像處理程序模組42… 28 201228382 能性方塊圖。該影像處理程序模組42〇除了其它構件外還 可包含上游程序處理模組5丨〇、影像像素相關性模組5丨4、 視差確認及測量模組5丨8、視差補償模組522、超解析度模 組526、位址轉換模組53〇、位址及相位移校準模組554及 下游色彩處理模組564。 該位址及相位移校準模組554係儲存裝置,用於儲存 在該製程或下一個重新校準程序中的相機陣列特徵化期間 所產生的校準資料。在一些實施例中,該校準資料可標示 3亥些成像器中的實體像素572的位址及影像的邏輯位址 546、548之間的映射。在其它實施例中,適合特定應用的 各式各樣校準資料可被運用於該位址及相位移校準模組 中〇 該位址轉換模組530依據該位址及相位移校準模組554 中所儲存的校準資料來執行正規化。尤其,該位址轉換模 、、且530轉換该影像中的個別像素的“實體,,位址成為該些成 像器t的個別像素的“邏輯,,位址548,或者反之亦然。為了 超解析度處理以產生增加解析度的影像,在該些個別成像 7中的相對應像素間的相位差需要解決。該超解析度程序 
σ假對於在5亥產生影像中的每一個像素而言,來自每一 成像器的輸入像素組係貫地映射,且每一個成像器所捕 捉的影像的相位移在該產生影像t的像素位置係已知。替 弋性地忒些相位移可在該超解析度程序前先被估測。該 位址轉換模組53G藉由轉換該些影像412中的實體位址成 為該產生影像中的邏輯位M 548來解決用於後續處理的這 g 29 201228382 類相位差。 該些成像器540所捕捉到的影像412係提供至該上游 程序處理模組51〇。該上游程序處理模組51〇可執行色彩平 面正規化、黑階計算及調整、固定雜關償、光學聊(點 擴散函數)解迴旋、雜訊減少、橫向色差校正及串音減少中 之一或更多。 在貫施例中,该上游程序處理模組也執行溫度正規 化二溫度正規化校正該些光學元件的折射率變化該些成 像态透過δ亥些光學件接收由於該相機使用期間的溫度變化 所產生的光。在-些實施例中,該溫度正規化程序涉及藉 由測量該相機障列的一些成像器中之一的暗電流或其心 的暗電流來決定該相機陣列溫度。使用本測量方式反射 率正規化係藉由自溫度校準資料中選取該正確點擴散函數 來執仃之。不同點擴散函數可在製造時該相機的溫度相關 折射率特徵化期間取得’並儲存於該成像系統以供該溫度 正規化程序使用。 在該上游程序處理模组51〇處理該影像後,影像像^ i Μ莫、’且5 1 4執行視差計算,其在所捕捉物體趨近該本 陣列時變得更加明顯。尤其’該影像像素相關性模组51 不同成像器所捕极影像的部分以進行該視差補償。^ 、,實施例中’該影像像素相關性模組514比較相鄰像素纪 ::值間的差值與臨界值,並在該差值超過該臨界值時, 疋-亥視差可此存在的旗標。該臨界值可隨該相機陣列的 呆作條件函數而動態地改變。進—步,該些鄰近區計算也 30 201228382 是適宜的,且可反應所選成像器的特定操作條件。 該影像接著係由該視差確認及測量模組518所處理以 偵測並計量該視差。在一實施例中,視差偵測係由運轉中 的像素相關性監視器所完成。本操作發生於遍及具有類似 總合時間條件的成像器各處的邏輯像素空間中。當該場景 係在實際無限空間下時,來自該些成像器的資料係高度相 關且只取決於以雜訊為主的變化。然而,當物體係足夠接 近該相機時,視差效應被引進而改變該些成像器間的相關 性因為s亥些成像器的空間佈局之故,該視差引發的變化 天生於所有成像器各處係一致的。在該測量精確度限制 内任何成像器對間的相關性差異指定任何其它成像器對 間的差異及遍及#它成像器各處的纟^冗餘藉由 對其它成像器對執行相同或類似計算而可得到高度精確視 差確認及測量。若視差存在於其它成像器冑中,則該視差 應在大略與考慮的成像器位置相同的場景實體位置處發 生°亥視差測里可藉由保持各種對方式測量的追蹤及搭配 該樣本資料求最小平方值(或類似統計值)來計算“實際,,視 差差,、H亥視差的其它方法可包含偵測並追蹤來自各 晝面的垂直及水平高頻影像構件。 。玄視差補饧杈組522處理包含足夠接近該相機陣列影 像以引發大於超解析度 差差異的物體的影像。 程序所需的相位移資訊精確度的視 §玄視差補償模組522使用在該視差 偵測及測量模組5 1 8中所產生的 以在該超解析度程序前,先進一 以掃瞄線為主的視差資訊 步調整實體像素位址及邏 31 201228382 輯像素位址間的映射。有兩例在本處理期間發生。在更普 遍例子中’當該些輸人像素的位置相對於其它成像器中才曰目 關影像上對應的像t已偏料,位土止及相位移言周整係需要 的。在本财,在執行超解析度程序前,不需對視差^進 -步處理。在較不普遍例子中,像素或像素群組係以曝露 該遮蔽組的這類方式來偏移。在本例令,該視差補償程序 產生交錯像素資料以㈣著該遮蔽組的像素不應被考慮於 該超解析度程序中。 在特殊成像器的視差變化已被精確地決定後,該視差 資訊524被送至該位址轉換模組53〇。該位址轉換模組53〇 使用該視差資訊524與來自該位址及相位移校準模組554 的杬準貝料558來決定應用至邏輯像素位址計算的適當X ί Υ偏移值。§亥位址轉換模組5 3 〇同時決定相對於該超解 析度耘序所產生影像428中的像素的特定成像器像素的相 關次像素偏移。該位址轉換模組53〇考慮該視差資訊524 並提供說明該視差的邏輯位址546。 在執行该視差補償後,該影像係由該超解析度模組$ 2 6 進行處理以自低解析度影像中得到高解析度合成影像 422,如下所詳述。該合成影像422接著被饋入該下游色彩 處理杈組564以執行下列操作中之一或更多:焦點復原' 白平衡、色彩校正、伽瑪校正、RGB至γυν校正、邊緣自 動銳化、對比增強及抑制。 s亥影像處理程序模組420可包含用於額外處理該影像 的元件。例如,該影像處理程序模組420可包含用於校正 32 201228382 由單像素缺陷或—像素缺陷群所引起的影像異常。該校 正模組可被具體實現於與該相機陣列相同的晶片上、成為 與該相機陣列分開的元件、或成為該超解析度模組似的 一部分。 超解析度處理 在一實施例中,該超解析度模’组526 II由處理該些成 像器所捕捉到的低解析度影像來產生較高解析度合成 象/ 5成衫像的整體影像品質係高於該些成像器中任 者個別捕捉的影像。換言之,個別成像器協同操作,每 一個使用匕們的⑥力來捕捉該光譜中的窄波部分以貢獻較 局品質影像而不進杆4t 个進仃-人取樣。與該些超解析度技術有關的 影像形成可被表示如下: ·尤+以,= 公式(2) /、中Wk代表將該高解析度場景(χ)(透過模糊、移動 及人取樣)貢獻至該些k成像器中的每—個所捕捉的低解析 度影像⑽中的每-個,且《“系該雜訊貢獻。 成像器架構 圖6A至圖6F根據本發明實施例說明透過超解析度程 序來得,高解析度影像的各種成像器架構。在圖6A至圖 6F R代表具有紅色渡片的成像器、“G”代表具有綠色渡 片的成像益、B”代表具有藍色濾片的成像器、“p,,代表具 有遍m個可見光譜及近紅外線光譜的靈敏度的多色成 33 201228382 像器 '且“i”代表具有近紅外線濾片的成像器。該多色成像 器可取樣來自該可見光譜全部及近紅外線區域(也就是自 650奈米至800奈米)的影像β在圖6A實施例中,該些成像 器的中間行列包含多色成像器《該相機陣列的其餘區域係 佈滿具有綠色濾片、藍色濾片及紅色濾片的成像器。圖6Α 實施例不包含只偵測近紅外線光譜的任何成像器。 圖6Β實施例具有類似傳統貝爾濾片映射的架構。本實 施例不包含任何多色成像器或近紅外線成像器。如上參考 至圖1所詳述地’圖6Β實施例不同於傳統貝爾濾片架構的 地方在於每一個彩色濾片被映射至每一個成像器以取代被 映射至一個別像素。 圖6C說明該些多色成像器形成一對稱棋盤圖案的實施 例。圖6D說明提供四個近紅外線成像器的實施例。圖6Ε 說明具有不規格映射的成像器實施例。圖6F說明5χ5感測 器陣列被組織成17個具有綠色濾片的成像器、四個具有紅 色濾片的成像器及四個具有藍色濾片的成像器的實施例。 該些感測器係對稱分佈於該成像陣列中心軸四周。如下進 一步所述地,以本方式分佈該些成像器阻止感測器所成像 的像素受到捕捉其它光波長的感測器所遮蔽。圖6Α至圖6F 實施例只是說明,各種其它成像器佈局也可被使用。 因為這些感測器可以低照明條件來捕捉高品質影像, 故多色成像β及近紅外線成像器的使用係具優勢。由該多 色成像益或該近紅外線成像器所捕捉的影像被使用於去除 一般彩色成像器所得影像中的雜訊。然而,如上所述,這 34 201228382 :夕色透鏡需要使用相關色彩校正技術來對抗試著捕捉所 =波長並將它傳送至相同聚焦平面的單一透鏡内固有的 、。任何傳統色彩校正技術可被運用於所提的陣列相機。 成像器佈局 #藉由集中多個低解析度影像來增加解析度的希望仰賴 者代表相同場景中稍微不同視角的不同低解析度影像。若 该些低解析度影像全以一像素的整數單位進行偏移,則每 I個影像主要包含相同資訊。因此,在該些低解析度影像 中沒有可使用於產生一高解析度影像的新資訊。在根據本 發明實施例的相機陣列中,該陣列中的成像器佈局可被預 口又iU工制以{吏一列或一行中的每一個成像器捕捉一影像, 其係相料它相鄰成像器所捕捉的影像進行一固定次像素 距離的偏移。理想地,每一個成像器所捕捉的影像係以提 供均勻取樣該場景或該光場的這類方式相較於#它成像器 進行空間偏移,且該取樣均勾度使得該些成像H中的每— f所捕捉的低解析度影像產生關於該取樣場景(光場)的非 冗餘資訊°關於該場景的這類非冗餘資訊可被運用於後續 
訊號處理程序以合成單一高解析度影像。 然而,二成像器所捕捉的影像間的次像素偏移不足以 確保取樣均勻度。二成像器的取樣或取樣多樣性的均勻度 係物體距離函數。一成(像器對的像素的取樣空間係示於 圖6G。第一組光線(61〇)映射至成像器a的像素,而第二組 光線(620)映射至成像器B的像素。概念上,來自一給予成 像器的二相鄰光線定義該物體空間中由那個成像器内的一23 S 201228382 Best optical performance. As described above, the camera array according to the present invention has a reduced thickness requirement, and therefore, the distance between the lens member and the camera array can be increased to improve the chief ray angle. Next, the reduced chief ray angle produces a lower aperture ratio and an improved modulation transfer function (MTF). In particular, one of the issues raised by camera design is how to correct field curvature. The image projected through a lens is not flat but has an inherently curved surface. One way to correct the curvature of the field is to position a thick negative lens member 312 proximally or directly on the imager surface 314. The negative lens member flattens the various angular beams 316 from the image, thereby coping with the field curvature problem. Such field flat images provide excellent image execution efficiency, can produce array cameras with reduced transistor-transistor logic circuit requirements, and deliver very homogeneous modulation transfer functions. However, one problem with this method is the nature of the field flat method. A high main ray angle is required. This makes this technique unsuitable for most cameras; however, the camera array of the present invention can use back-illuminated imaging technology (BSI). The position of the chief ray angle is eliminated after positioning the image sensor on the substrate, whereby the negative lens member field flatning method shown in Fig. 3D can be used. Another advantage of the s-ray array camera is the chromatic aberration. In particular, in a conventional multicolor lens, since the different wavelengths of light to the focal length of the lens are different, the lens must correct the chromatic aberration. Therefore, it is desirable to coordinate lens execution efficiencies of some of the color wavelengths to achieve acceptable overall color performance. By fabricating a narrow spectral band for each optical channel, the chromatic aberration is reduced and/or prevented, a and each lens can be optimized to a particular color wavelength. For example, an imager that receives a visible or near-infrared spectrum can have a lens member that is optimized for the specific band of the band 201222382. For imagers that detect other spectra, the lens members are constructed to have different characteristics such as radius of curvature, such that a fixed focal length across all wavelengths of light is taken to then make the focal planes of the different spectral bands the same. Pairs of focal planes across different wavelengths of light increase the accuracy of the image captured by the imager and reduce longitudinal chromatic aberration. Since each lens member can be designed to guide a narrow spectral band, the lack of chromatic aberration is accompanied by the less stringent design constraints of the lens members, but the conventional lens components that cover a vast spectrum are Produce better or equivalent enforcement efficiency. In particular, there is no need to perform expensive color difference balance correction. Even simple lenses generally have a better modulation transfer function and a lower aperture ratio (relative sensitivity). 
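To make the preceding point concrete, the sketch below estimates how far the focal length of a simple thin lens drifts across a wide spectral band versus across the narrow band handled by a single channel of the array. It is a minimal illustration only, assuming a thin-lens and Cauchy dispersion model; the coefficients and radii are illustrative values, not figures from this disclosure.

```python
import numpy as np

def refractive_index(wavelength_um, A=1.5046, B=0.00420):
    """Cauchy dispersion model n(lambda) = A + B / lambda^2 (illustrative BK7-like coefficients)."""
    return A + B / wavelength_um**2

def thin_lens_focal_length(wavelength_um, r1_mm=20.0, r2_mm=-20.0):
    """Lensmaker's equation for a thin lens: 1/f = (n - 1) * (1/R1 - 1/R2)."""
    n = refractive_index(wavelength_um)
    return 1.0 / ((n - 1.0) * (1.0 / r1_mm - 1.0 / r2_mm))

def focal_shift_over_band(band_um):
    f = np.array([thin_lens_focal_length(w) for w in band_um])
    return f.max() - f.min()

# Wide band that a conventional multicolor lens must handle (blue to near-IR).
wide = np.linspace(0.45, 0.80, 50)
# Narrow band handled by a single channel of the array (e.g. a green channel).
narrow = np.linspace(0.52, 0.56, 50)

print("focal shift over 450-800 nm: %.3f mm" % focal_shift_over_band(wide))
print("focal shift over 520-560 nm: %.3f mm" % focal_shift_over_band(narrow))
```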
It should be noted that although the lenses used in these array cameras have much smaller chromatic aberrations than conventional multicolor lenses, each lens is designed to focus on a certain wavelength bandwidth. Accordingly, in each of the embodiments, each of these "monochrome lenses can be optimally color corrected by using a combination of high and low Abbe number materials (different light 4 color). The light is in the multi-color optics. The refractive index of a lens is visible through the lens. The lens will give different wavelengths. The color wavelength band can have a greener slightly different focal length (longitudinal chromatic aberration). Depending on the wavelength of the light, therefore, the color is different in magnification. For example, the red is small, and then the green color may have a slightly smaller magnification than blue. If the images taken from these different wavelengths of light are then overlapped If the correction is made, the image will lose its resolution because the different colors will not overlap correctly. According to the material characteristics, different lateral deformations of the color magnification can be determined and corrected. The correction can be achieved by limiting the lens contours. So that each of the colors of 5 25 201228382 has the same magnification, but this reduces the maximum degree of freedom that can be used in lens manufacturing, and reduces the best The ability to modulate the transfer function. Accordingly, in a camera array embodiment, the lateral deformation is optically allowed, and then corrected after computational imaging. The lateral color of the lens is automatically provided by the system. The improvement in execution efficiency goes beyond the simple correction of the original deformation, which directly improves the resolution of the system in terms of the multi-tone transfer function. In particular, the lateral chromatic aberration in a lens can be regarded as the color correlation of the lens. Deformation ^ by mapping all differently deformed monochromatic images of an object back to the same rectangle, can be produced with the monochromator (not just because of the individual color channel color blur correction, but also because of the correct overlay of different colors) A perfect overlap in the full-color image of the same multi-tone transfer function. Another advantage of using many lenses that are optimized to fit a narrow spectrum is that no lens type is used. Limitations. In particular, the array camera can be used for diffraction, refraction, Fresnel lenses or a combination of these lens types. Lenses are attractive because they can produce a composite wavefront using a substantially flat optical member, and they are relatively simple to manufacture. In conventional cameras, because of the single imager mechanism and the lens must be able to transmit efficiently When a large spectrum is used, it is not feasible to use a diffractive lens system, and the diffraction lens is very efficient when transmitting a narrow optical wavelength band. The execution efficiency of the optical wavelength outside the optimized range has a steep drop. At present, each array of cameras can be focused on a narrow wavelength of light, so the narrow optimized wavelength band of these diffractive lenses is not a limiting factor. Other advantages of smaller lens components include, inter alia, cost reduction, material volume 26 201228382 The reduction and manufacturing steps are reduced. 
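The computational correction of lateral chromatic aberration described above amounts to mapping each monochromatic image back onto a common magnification before the channels are combined. The sketch below shows one way such a per-channel rescale about the optical centre could be applied; the magnification factors are hypothetical calibration values, not figures from this disclosure.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def rescale_about_center(channel, magnification):
    """Warp a single-channel image by a pure magnification about the optical centre.
    Output pixel p is sampled from input location center + (p - center) / magnification."""
    h, w = channel.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w]
    src_y = cy + (yy - cy) / magnification
    src_x = cx + (xx - cx) / magnification
    return map_coordinates(channel, [src_y, src_x], order=1, mode="nearest")

# Hypothetical calibration: red is magnified slightly less than green, blue slightly more.
magnification = {"red": 0.998, "green": 1.000, "blue": 1.002}

rng = np.random.default_rng(0)
raw = {name: rng.random((480, 640)).astype(np.float32) for name in magnification}

# Bring every channel onto the green channel's magnification before stacking them.
aligned = {name: rescale_about_center(img, magnification["green"] / magnification[name])
           for name, img in raw.items()}
rgb = np.stack([aligned["red"], aligned["green"], aligned["blue"]], axis=-1)
print(rgb.shape)
```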
By providing an n2 lens of size l/η on χ and y dimensions (hence the Ik thickness), the wafer size used to fabricate the lens member can also be reduced. This reduces the considerable cost and quantity of materials. Progressively, the number of lens substrates is reduced, resulting in a reduction in the number of manufacturing steps and a concomitant reduction in production costs. The configuration accuracy required to provide a lens array to such imagers is typically no more urgent than the example of a conventional imager, since the pixel size of the camera array according to the present invention can be substantially the same as that of a conventional image sensor. Therefore. In addition, the monochromatic chromatic aberration depends on the lens diameter. Since the array camera can use a smaller lens, any existing color difference system is small, and thus it is feasible to use a lens system having a simpler profile. This produces a system that is both manufactured at a higher quality and less costly. Smaller sized lenses also have a smaller volume that produces lower sag or shrinkage during manufacturing. Shrinkage is detrimental to the replication system because it deforms the desired lens profile and results in the need for the manufacturer to pre-compensate the predicted sink level in order to correct the final lens profile. This pre-compensation is difficult to control. Lower sag/shrinkage eliminates the need for these rigorous manufacturing controls and reduces the overall manufacturing cost of the lenses. In one example, the wafer level optical process includes (1) integrating the lens member columns onto the substrate by electroplating lens members prior to lens formation, and (u) etching the holes in the substrate. And the two-sided lens forming technology is performed throughout the substrate. Because the index mismatch does not occur between the plastic and the substrate, the hole etching in the substrate is advantageous. In this manner, the natural light absorbing substrate forming all the lens members (similar to the black lens edge) can be used. In an embodiment, the filter is part of the imager. 27 201228382 In another embodiment, the filter is part of a wafer level optical subsystem. In an embodiment comprising a filter, because small defects in those filter layers are evenly distributed over all incident pupil positions when positioned at a distance from the imaging sensor surface, they are less visible, so The filter (whether a color sputum array, an infrared filter, and/or a visible light filter) is placed in or near the surface of the aperture bar without being preferred on the imaging sensor surface. Imaging System and Processing Procedure Figure 4 is a functional block diagram illustrating an imaging system 4A in accordance with an embodiment. The imaging system 400 can include, among other components, the camera array 410, an image processing program module 42 and a controller. The camera array 410 includes the above reference to the figure! Two or more imagers detailed in item 2. Shadow I 412 is captured by two or more imagers in the camera array 41A. The controller 440 is a hard human limb, an ankle hip or a combination thereof, and is recorded in the camera array 41. The controller 44 receives an input 446 from a user or other external component and sends an operation signal 4 to control the camera array 41. The controller 440 also sends information 444 to the image processing program module 42G to assist in the processing of the (4) image 412. 
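The data flow of Figure 4 described above can be mirrored in a structural sketch: the controller turns user input into operation signals for the camera array and auxiliary information for the image processing module, which then processes the captured images. All class and function names below are invented for illustration, and the processing step is reduced to a placeholder.

```python
from dataclasses import dataclass, field
from typing import Dict, List

import numpy as np

@dataclass
class CameraArray:
    """Stands in for camera array 410: a grid of imagers that each capture one frame."""
    num_imagers: int = 25
    settings: Dict[str, float] = field(default_factory=dict)

    def apply_operation_signal(self, signal: Dict[str, float]) -> None:
        self.settings.update(signal)                     # e.g. per-imager exposure or gain

    def capture(self) -> List[np.ndarray]:
        rng = np.random.default_rng(0)
        return [rng.random((120, 160)) for _ in range(self.num_imagers)]   # images 412

@dataclass
class Controller:
    """Stands in for controller 440: turns user input 446 into signals 442 and info 444."""
    def handle_input(self, user_input: Dict[str, float]) -> Dict[str, Dict[str, float]]:
        return {"operation_signal": {"exposure_ms": user_input.get("exposure_ms", 10.0)},
                "processing_info": {"zoom_level": user_input.get("zoom_level", 1.0)}}

def process(images: List[np.ndarray], info: Dict[str, float]) -> np.ndarray:
    """Stands in for image processing module 420; a real pipeline would run
    normalization, parallax compensation and super-resolution here."""
    return np.mean(np.stack(images), axis=0)

camera_array, controller = CameraArray(), Controller()
plan = controller.handle_input({"exposure_ms": 8.0, "zoom_level": 2.0})
camera_array.apply_operation_signal(plan["operation_signal"])
output = process(camera_array.capture(), plan["processing_info"])
print(output.shape)
```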
The image processing program module 42 is a hardware, a primary body, a software body or a video image received by the camera array 41G. The image is processed by the tnr, for example, as described below with reference to FIG. Progressive processing. ❹ is sent for display, storage, 彳 FIG. 5 illustrates the image processing program module 42 according to an embodiment. 28 201228382 Capability block diagram. The image processing program module 42 may include an upstream program processing module 5, an image pixel correlation module 5丨4, a parallax confirmation and measurement module 5丨8, a parallax compensation module 522, and other components. The super-resolution module 526, the address conversion module 53A, the address and phase shift calibration module 554, and the downstream color processing module 564. The address and phase shift calibration module 554 is a storage device for storing calibration data generated during characterization of the camera array in the process or the next recalibration procedure. In some embodiments, the calibration data may indicate a mapping between the address of the physical pixel 572 in the imager and the logical address 546, 548 of the image. In other embodiments, a variety of calibration data suitable for a particular application can be applied to the address and phase shift calibration module. The address translation module 530 is based on the address and phase shift calibration module 554. The stored calibration data is used to perform normalization. In particular, the address translation mode, and 530 converts the "entities of the individual pixels in the image, the address becomes the "logic," address 548 of the individual pixels of the imager t, or vice versa. For super-resolution processing to produce an image with increased resolution, the phase difference between the corresponding pixels in the individual images 7 needs to be resolved. The super-resolution program σ false for each pixel in the image generated at 5 Hz, the input pixel groups from each imager are systematically mapped, and the phase shift of the image captured by each imager is generated The pixel position of the image t is known. These phase shifts can be estimated before the hyper-resolution program. The address translation module 53G resolves the g 29 201228382 type phase difference for subsequent processing by converting the physical address in the images 412 into the logical bit M 548 in the generated image. The images 412 captured by the imagers 540 are provided to the upstream program processing module 51A. The upstream program processing module 51 can perform one of color plane normalization, black level calculation and adjustment, fixed miscellaneous compensation, optical chat (point spread function) solution cyclotron, noise reduction, lateral chromatic aberration correction, and crosstalk reduction. Or more. In an embodiment, the upstream program processing module also performs temperature normalization and two-temperature normalization to correct refractive index changes of the optical elements. The image states are received through the optical components of the optical device due to temperature changes during use of the camera. The light produced. In some embodiments, the temperature normalization procedure involves determining the camera array temperature by measuring the dark current of one of the imagers of the camera barrier or the dark current of its heart. 
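A minimal sketch of the temperature normalization idea described here and in the sentence that follows: the array temperature is estimated from the dark current of one imager, and a temperature-specific point spread function is then selected from calibration data captured at manufacture. The calibration table, its values and the interpolation used below are assumptions made for illustration.

```python
import numpy as np

# Hypothetical calibration captured at manufacture: dark-current level vs. temperature,
# with one PSF kernel stored per calibration temperature.
CAL_TEMPS_C = np.array([0.0, 20.0, 40.0, 60.0])
CAL_DARK_LEVEL = np.array([2.0, 8.0, 30.0, 110.0])                 # mean dark signal in DN
CAL_PSFS = {t: np.full((5, 5), 1.0 / 25.0) for t in CAL_TEMPS_C}   # placeholder kernels

def estimate_temperature(dark_pixels: np.ndarray) -> float:
    """Invert the dark-current calibration curve with simple interpolation."""
    level = float(np.mean(dark_pixels))
    return float(np.interp(level, CAL_DARK_LEVEL, CAL_TEMPS_C))

def select_psf(temperature_c: float) -> np.ndarray:
    """Pick the stored PSF whose calibration temperature is closest to the estimate."""
    nearest = CAL_TEMPS_C[np.argmin(np.abs(CAL_TEMPS_C - temperature_c))]
    return CAL_PSFS[nearest]

dark_frame = np.random.default_rng(1).poisson(lam=25.0, size=(32, 32)).astype(float)
temp = estimate_temperature(dark_frame)
psf = select_psf(temp)
print(f"estimated temperature: {temp:.1f} C, PSF shape: {psf.shape}")
```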
Using this measurement method, the reflectance normalization is performed by selecting the correct point spread function from the temperature calibration data. Different point spread functions can be taken during the temperature-dependent index characterization of the camera at the time of manufacture and stored in the imaging system for use by the temperature normalization procedure. After the upstream program processing module 51 processes the image, the image image is read and the parallax calculation is performed, which becomes more apparent as the captured object approaches the array. In particular, the image pixel correlation module 51 captures portions of the image captured by different imagers to perform the parallax compensation. ^, in the embodiment, the image pixel correlation module 514 compares the difference between the adjacent pixel count:: value and the critical value, and when the difference exceeds the critical value, the 疋-hai parallax may exist. Flag. The threshold can be dynamically changed as a function of the camera array's stay condition. Further, these neighborhood calculations are also suitable for 2012 28382 and may reflect the specific operating conditions of the selected imager. The image is then processed by the parallax confirmation and measurement module 518 to detect and measure the parallax. In one embodiment, the parallax detection is performed by a running pixel correlation monitor. This operation occurs in a logical pixel space throughout the imager having similar aggregate time conditions. When the scene is in the actual infinite space, the data from the imagers is highly correlated and depends only on noise-based changes. However, when the object system is close enough to the camera, the parallax effect is introduced to change the correlation between the imagers. Because of the spatial layout of the imagers, the parallax-induced changes are inherent in all imagers. Consistent. The difference in correlation between any pair of imagers within the measurement accuracy limit specifies the difference between any other pair of imager and the ubiquity of it. The same or similar calculations are performed on other imager pairs. Highly accurate parallax confirmation and measurement are available. If the parallax exists in other imager frames, the parallax should occur at the same physical position of the scene as the position of the imager considered. In the case of the visual difference, the tracking of the various modes can be maintained and the sample data can be matched. The least squares value (or similar statistical value) to calculate "actual, parallax difference, other methods of H-heterodyne can include detecting and tracking vertical and horizontal high-frequency image components from each side." Group 522 processes an image containing an object that is sufficiently close to the camera array image to cause a difference greater than the difference in super-resolution differences. The phase shift information accuracy required by the program is used in the parallax detection and measurement mode. Before the super-resolution program, the advanced parallax information step based on the scan line adjusts the physical pixel address and the mapping between the logical blocks of the 201228382 series. There are two cases in the case of the super-resolution program. This process occurs. In the more general case, 'when the positions of the input pixels are compared with other imagers, the corresponding image t on the image has been biased. 
It is necessary to displace the whole week. In this fiscal case, there is no need to process the parallax before performing the super-resolution program. In the less common example, the pixel or pixel group is exposed to the masking group. Class-like offset. In this example, the disparity compensation program generates interlaced pixel data to (d) the pixels of the shadow group should not be considered in the hyper-resolution program. The parallax change in the special imager has been accurately After the decision, the disparity information 524 is sent to the address conversion module 53. The address conversion module 53 uses the disparity information 524 and the quasi-before material 558 from the address and phase shift calibration module 554. Determining the appropriate X Υ Υ offset value applied to the logical pixel address calculation. The HI address conversion module 5 3 〇 simultaneously determines a particular imager pixel relative to the pixel in the image 428 produced by the super-resolution sequence The associated sub-pixel offset. The address translation module 53 considers the disparity information 524 and provides a logical address 546 that describes the disparity. After performing the disparity compensation, the image is used by the super-resolution module $2 6 processing to self-low solution A high resolution composite image 422 is obtained in the image, as detailed below. The composite image 422 is then fed into the downstream color processing group 564 to perform one or more of the following operations: focus recovery 'white balance, color correction , gamma correction, RGB to γυν correction, edge auto-sharpening, contrast enhancement, and suppression. The image processing program module 420 can include components for additionally processing the image. For example, the image processing program module 420 can include For correcting 32 201228382 image anomalies caused by single pixel defects or - pixel defect groups. The correction module can be implemented on the same wafer as the camera array, become a separate component from the camera array, or become the A portion of the super-resolution module. Super-resolution processing In one embodiment, the super-resolution mode group 526 II generates a higher resolution composite image by processing the low-resolution images captured by the imagers. The overall image quality of the /5 percent shirt image is higher than the images captured by any of the imagers. In other words, the individual imagers work in concert, each using our 6 forces to capture the narrow-wave portion of the spectrum to contribute to the quality image without taking 4t into the human-sample. The image formation associated with these super-resolution techniques can be expressed as follows: • especially +, = formula (2) /, medium Wk represents the contribution of the high-resolution scene (χ) (through blur, motion, and human sampling) Each of the low resolution images (10) captured by each of the k imagers, and "" the noise contribution. Imager architecture Figures 6A-6F illustrate the use of super resolution in accordance with an embodiment of the present invention. Various imager architectures for high-resolution images, as shown in Figure 6A to Figure 6F, R represents an imager with a red cross, "G" represents an image with green flakes, and B" represents a blue filter. The imager, "p," represents a multicolor 33 with a sensitivity across m visible and near infrared spectra. The 201228382 imager and "i" represent an imager with a near infrared filter. 
The multicolor imager can Sampling images from all and near infrared regions of the visible spectrum (ie, from 650 nm to 800 nm) in the embodiment of Figure 6A, the middle rows of the imagers containing the multicolor imager "The rest of the camera array Regional department is full of The imager of the green filter, the blue filter and the red filter. Figure 6 实施 The embodiment does not include any imager that only detects the near-infrared spectrum. Figure 6 is an embodiment with an architecture similar to the traditional Bell filter mapping. Does not include any multi-color imager or near-infrared imager. As detailed above with reference to Figure 1, the Figure 6 embodiment differs from the conventional Bell filter architecture in that each color filter is mapped to each imager. The substitutions are mapped to a different pixel. Figure 6C illustrates an embodiment in which the multi-color imagers form a symmetric checkerboard pattern. Figure 6D illustrates an embodiment providing four near-infrared imagers. Figure 6A illustrates an imager with a non-specification map Embodiments Figure 6F illustrates an embodiment in which a 5χ5 sensor array is organized into 17 imagers with green filters, four imagers with red filters, and four imagers with blue filters. The sensors are symmetrically distributed around the central axis of the imaging array. As further described below, distributing the imagers in this manner prevents the pixels imaged by the sensor from being captured The light wavelength of the sensor is obscured. The embodiment of Figure 6A to Figure 6F is only illustrative, various other imager layouts can also be used. Because these sensors can capture high-quality images with low illumination conditions, multi-color imaging β and The use of a near-infrared imager is advantageous. The image captured by the multi-color imaging or the near-infrared imager is used to remove noise from the image obtained by a typical color imager. However, as described above, this 34 201228382 : The eclipse lens needs to use the associated color correction technique to counteract the inherent lens that is trying to capture the wavelength and transmit it to the same focal plane. Any conventional color correction technique can be applied to the proposed array camera. Layout # The desire to increase resolution by concentrating multiple low-resolution images represents different low-resolution images of slightly different perspectives in the same scene. If the low-resolution images are all shifted by an integer unit of one pixel, each of the images mainly contains the same information. Therefore, there is no new information that can be used to generate a high resolution image in these low resolution images. In a camera array according to an embodiment of the invention, the imager layout in the array can be pre-ported to capture an image of each imager in a column or row, which is adjacent to the image. The image captured by the device is offset by a fixed sub-pixel distance. Ideally, the image captured by each imager is spatially offset from the imager in such a manner as to provide a uniform sampling of the scene or the light field, and the sampling is such that the images are H The low resolution image captured by each of f produces non-redundant information about the sampled scene (light field). Such non-redundant information about the scene can be applied to subsequent signal processing programs to synthesize a single high resolution. image. 
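As a concrete illustration of how such non-redundant, sub-pixel-shifted captures can be combined, the sketch below implements the observation model of formula (2), y_k = W_k x + n_k, with W_k reduced to a shift, a blur and a down-sampling, and inverts it with a few iterations of iterative back-projection. The reconstruction algorithm, blur model and offsets are illustrative choices; the disclosure itself only states the observation model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift

FACTOR = 2        # each imager samples the scene at half the target resolution
BLUR_SIGMA = 1.0  # stand-in for the optical blur folded into W_k

def forward(x_hr, dy, dx):
    """Apply W_k to the high-resolution scene x: sub-pixel shift, blur, then down-sample."""
    shifted = shift(x_hr, (dy, dx), order=1, mode="nearest")
    return gaussian_filter(shifted, BLUR_SIGMA)[::FACTOR, ::FACTOR]

def back(residual_lr, dy, dx):
    """Rough transpose of W_k: replicate, blur, and shift back."""
    up = np.kron(residual_lr, np.ones((FACTOR, FACTOR)))
    return shift(gaussian_filter(up, BLUR_SIGMA), (-dy, -dx), order=1, mode="nearest")

def iterative_back_projection(lr_images, offsets, hr_shape, n_iter=25, step=0.25):
    x = np.kron(lr_images[0], np.ones((FACTOR, FACTOR)))    # initialise from one imager
    for _ in range(n_iter):
        update = np.zeros(hr_shape)
        for y_k, (dy, dx) in zip(lr_images, offsets):
            update += back(y_k - forward(x, dy, dx), dy, dx)   # y_k - W_k x
        x = x + step * update / len(lr_images)
    return x

# Four imagers of the same band, offset by one high-resolution pixel (half a pixel on
# their own grids), following the fixed sub-pixel layout described in this section.
offsets = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
rng = np.random.default_rng(0)
truth = gaussian_filter(rng.random((64, 64)), 1.5)
lr_images = [forward(truth, dy, dx) + 0.005 * rng.standard_normal((32, 32))
             for dy, dx in offsets]
estimate = iterative_back_projection(lr_images, offsets, truth.shape)
print("mean squared error vs. the original scene: %.6f" % np.mean((estimate - truth) ** 2))
```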
However, a sub-pixel offset between the images captured by two imagers is not by itself sufficient to ensure sampling uniformity. The sampling uniformity, or sampling diversity, of a pair of imagers is a function of object distance. The sampling space of the pixels of an imager pair is shown in Figure 6G. A first set of rays (610) maps to the pixels of imager A, while a second set of rays (620) maps to the pixels of imager B. Conceptually, two adjacent rays from a given imager define the portion of the object space that is sampled by a particular pixel within that imager.
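The dependence of sampling diversity on object distance illustrated by Figure 6G can be made numerical. The sketch below uses the standard pinhole disparity relation (an assumption; the text states only the qualitative dependence) with hypothetical focal length, baseline and pixel pitch, and flags object distances at which the disparity between an imager pair is nearly an integer number of pixels, i.e. distances at which the pair contributes no additional sampling diversity.

```python
import numpy as np

FOCAL_MM = 3.0        # hypothetical focal length of each lens stack
BASELINE_MM = 2.5     # hypothetical spacing between imager A and imager B
PIXEL_MM = 0.0022     # hypothetical 2.2 um pixel pitch

def disparity_pixels(object_distance_mm):
    """Pinhole-model disparity between the two imagers, in pixels."""
    return (FOCAL_MM * BASELINE_MM) / (object_distance_mm * PIXEL_MM)

def has_sampling_diversity(object_distance_mm, tolerance=0.05):
    """The pair adds new samples only when the disparity has a non-integer fractional part."""
    frac = disparity_pixels(object_distance_mm) % 1.0
    return min(frac, 1.0 - frac) > tolerance

for z_mm in (300.0, 341.0, 400.0, 682.0, 10_000.0):
    d = disparity_pixels(z_mm)
    print(f"z = {z_mm:8.0f} mm  disparity = {d:6.2f} px  diversity: {has_sampling_diversity(z_mm)}")
```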
S 35 201228382 特定像素所取樣的部分。在離該相機平面距離zl的地方, 具有足夠的取樣多樣性,因為成像器A的像素的光線係相 較於成像器B的像素的光線進行空間偏移之故。在該距離 降低時,有些特定距離(22、23、24),其中,在成像器八及 成像器B之間沒有取樣多樣性。在該二成像器間缺少取樣 多樣性相當簡單地隱含著相較於成像器B所捕捉的場景, 成像器B所捕捉的場景中不具有額外資訊。如下進一步所 述地,一陣列相機中的成像器數量增加可緩和成像器的樣 本空間對完全重疊的物體距離的影響。當一成像器對缺少 取樣多樣性時,該陣列中的其它成像器提供所需取樣多樣 性來取得解析度增加的成果。因此,一成像器系統運用一 2X2成像器陣列以取得超解析度的能力典型地係較根據本 發明實施例使用一較大相機陣列的相機系統更受到限制。 參考回圖2A-2D中所示的相機陣列結構,該晶圓級光 學儀器包含複數個透鏡構件,其中,每一個透鏡構件涵蓋 '•亥陣列的感測器中的一者。根據本發明實施例的相機陣列 的單一成像器中的像素實體佈局係示於圖6H。該成像器係 覆蓋著彩色濾片652和微透鏡654的像素650陣列。位在 些彩色濾片頂上的微透鏡被使用以將光聚焦在下面每一像 素的作用區上》該些微透鏡可被想成取樣由該主透鏡所取 樣的物體空間内的連續光場。鑑於該主透鏡取樣該場景轄 射光場’該些微透鏡取樣該感測器輻照光場。 與每一個成像器相關的主透鏡映射該物體空間各點至 s亥影像空間各點,使得該映射係映射函數(一對一函數且映 36 201228382 成函數)。每一個微透鏡取樣該感測器輻照光場的一有限範 圍。該感測器輻照光場係連續且為一映射函數映射自該物 體空間的結果。因此’ 5玄感測器輻照光場的一有限範圍的 微透鏡取樣也是物體空間内的場景輻射光場的相對應有限 範圍的取樣。 沿著該些成像器像素平原橫向移動該微透鏡約 δ可改變某一距離々的取樣物體空間約一相對應適當因子 δ。利用一 ηχη(η>2)陣列相機,我們可以選擇一底線微透鏡 偏移,其可由一底線成像器的主透鏡輪廓(例如’該主光線 角)所決定。對於與該底線成像器取樣相同波長的其它成像 器中的每-個Μ,該成像器中的每一個像素的微透鏡係 偏移一次像素量以取樣該場景輻射光場的不同部分。因 此,對於安排成一 ηχη格子的成像器組而言,用於以與在 格子位置扣η)處的底線成像器(1,丨)相同的波長進 行成像的成像器的次像素偏移係受(δχ,^)所主宰,其中, (,-1) η (7-1) η 像素大小< b 士><像素大小 像素大小像素大小 根據本發明實施例的許多相機陣列顯著地包含較红色 及藍色成像器更多的綠色成像器。例如,圖6f所示陣列相 機包含…固綠色成像器、4個紅色成像器及4個藍色成像 ^基於計算_次像素偏移目該綠色成像器可看待 成-腿格子。然而,基於計算該些次像素偏移目的,該 5 37 201228382 些紅色成像器及該些藍色成像器每—個可看待成—2χ2格S 35 201228382 Part of the sample taken by a particular pixel. At a distance z1 from the camera plane, there is sufficient sampling diversity because the light of the pixels of imager A is spatially offset from the light of the pixels of imager B. There are some specific distances (22, 23, 24) as the distance decreases, with no sample diversity between imager eight and imager B. The lack of sampling diversity between the two imagers is quite simple to imply that there is no additional information in the scene captured by Imager B compared to the scene captured by Imager B. As further described below, an increase in the number of imagers in an array camera mitigates the effect of the imager's sample space on the distance of completely overlapping objects. When an imager pair lacks sampling diversity, other imagers in the array provide the required sampling diversity to achieve increased resolution. Thus, the ability of an imager system to utilize a 2X2 imager array to achieve super-resolution is typically more limited than a camera system that uses a larger camera array in accordance with embodiments of the present invention. Referring back to the camera array structure illustrated in Figures 2A-2D, the wafer level optical instrument includes a plurality of lens members, wherein each lens member encompasses one of the sensors of the array. The pixel physical layout in a single imager of a camera array in accordance with an embodiment of the present invention is shown in Figure 6H. The imager is covered by an array of pixels 650 of color filter 652 and microlens 654. Microlenses positioned on top of the color filters are used to focus the light on the active area of each of the pixels below. The microlenses can be thought of as sampling a continuous light field within the object space taken by the main lens. In view of the fact that the main lens samples the scene illuminating the light field, the microlenses sample the sensor illuminating light field. The main lens associated with each imager maps points of the object space to points in the image space such that the mapping is a mapping function (a one-to-one function and a function). Each microlens samples a limited range of the irradiated light field of the sensor. 
The sensor illuminates the light field continuously and maps the results from the object space to a mapping function. Thus, a limited range of microlens sampling of the radiant field of the 5 sensible sensor is also a corresponding limited range of sampling of the scene radiant field within the object space. Moving the microlens approximately δ along the imager pixel plains can change the sampling object space of a certain distance 约 by a corresponding appropriate factor δ. Using an ηχη(η>2) array camera, we can choose a bottom line microlens offset that can be determined by the main lens profile of a bottom line imager (e.g., 'the chief ray angle'). For each of the other imagers that sample the same wavelength as the bottom line imager, the microlens of each pixel in the imager is offset by a single pixel amount to sample different portions of the scene radiation field. Therefore, for an imager group arranged in a ηχη lattice, the sub-pixel offset of the imager for imaging at the same wavelength as the bottom line imager (1, 丨) at the lattice position η) is subjected to ( χ χ, ^) is dominated, where (, -1) η (7-1) η pixel size < b 士 ><pixel size pixel size pixel size Many camera arrays according to embodiments of the present invention significantly contain More green imagers for red and blue imagers. For example, the array camera shown in Figure 6f includes a solid green imager, four red imagers, and four blue images. The green imager can be viewed as a leg-leg grid. However, based on the calculation of the sub-pixel shifting purposes, the red imagers and the blue imagers can be regarded as -2χ2 grids.
whose sub-pixel offsets are distributed uniformly so that the greatest sampling diversity is obtained. Constraining the microlens sub-pixel offsets as defined above ensures that the sampling diversity
得到最大的增加且可透過超解析度處理 最大的增加。雖不能湛足玆此眼在,丨相h —队心现爛砂,叩定在許多例子中運 用各式各樣的不同微透鏡偏移架構以在取樣多樣性上提供 至少一些增加並滿足一特定應用的需求。 在相機陣列中的成像器配置對稱性 。。將該些感光構件分成不同成像器的議題係由該些成像 盗的貫體隔離所引起的視差。藉㈣保該些成像器被對稱 放置至少一成像器可捕捉一前景物體邊緣四周的像 素。在本方式中,在一前景物體邊緣四周的像素可被集中 以增加解析度並避免任何遮蔽。在沒有對稱分佈中,在一 前景物體邊緣四周可讓例如一紅色相機的第一成像器看見 的一像素對於例如一藍色成像器的捕捉不同波長的第二成 像器係遮蔽的人據此,該像素的色彩資訊無法被精確地重 38 201228382 新建構。藉由對稱地分佈該些感測器,一前景物體會遮蔽 像素的可能性係顯著地降低。 在一簡單陣列中的紅色及藍色成像器的不對稱分佈所 引起的像素遮蔽係示於圖61。一對紅色成像器672係位在 該相機陣列670的左手邊上,且一對藍色成像器674係位 在該相機陣列的右手邊上。一前景物體676係存在,且該 些紅色成像器672能夠成像超過該前景物體的左手邊上的 前景物體的區域。然而,該前景物體遮蔽該些紅色成像器 而無法成像這些區域《因此,該陣列相機不能重新建構這 些區域的色彩資訊。 根據本發明一實施例包含一紅色及藍色成像器對稱分 佈的陣列係示於圖6J。該相機陣列780包含對稱地分佈於 該相機陣列中心軸四周的一對紅色成像器782及對稱地分 佈於該相機陣列中心軸四周的一對藍色成像器784。因為該 均勻分佈之故,一紅色成像器及一藍色成像器兩者可成像 超過該前景物體的左手邊上的前景物體786,且一紅色成像 器及一藍色成像器兩者可成像超過該前景物體的右手邊上 的前景物體。 圖6J中所示簡單實施例的對稱配置可被歸納至包含紅 色、綠色、藍色相機及/或額外多色或近紅外線相機的陣列 相機。藉由將不同類型成像器的每一個對稱地分佈於該相 機陣列的中心轴四周’前景物體所引進的視差效應可被顯 著地降低’且在其它方面所引進的色彩瑕疵被避開。 色彩取樣上的視差效應也可藉由使用多色成像器中的The biggest increase is achieved and the maximum increase can be handled by super resolution. Although it is not enough to be in the eye, the h phase h - the team is now rotten, and in many cases a variety of different microlens offset architectures are used to provide at least some increase in sampling diversity and satisfy one. The needs of a particular application. The imager configuration symmetry in the camera array. . The problem of dividing the photosensitive members into different imagers is the parallax caused by the isolation of the image-trapped bodies. By (4) the imagers are symmetrically placed with at least one imager to capture pixels around the edge of a foreground object. In this manner, pixels around the edge of a foreground object can be concentrated to increase resolution and avoid any shadowing. In the absence of a symmetrical distribution, a pixel visible by a first imager, such as a red camera, around a periphery of a foreground object, for a person such as a blue imager that captures a second imager of a different wavelength, accordingly, The color information of this pixel cannot be accurately weighted by 201228382. By symmetrically distributing the sensors, the likelihood that a foreground object will obscure the pixels is significantly reduced. The pixel masking caused by the asymmetric distribution of the red and blue imagers in a simple array is shown in FIG. A pair of red imagers 672 are tied to the left hand side of the camera array 670 and a pair of blue imagers 674 are tied to the right hand side of the camera array. A foreground object 676 is present and the red imagers 672 are capable of imaging an area of the foreground object on the left hand side of the foreground object. However, the foreground object obscures the red imagers and cannot image these areas. Therefore, the array camera cannot reconstruct the color information of these areas. An array comprising a symmetric distribution of red and blue imagers in accordance with an embodiment of the invention is shown in Figure 6J. The camera array 780 includes a pair of red imagers 782 symmetrically distributed about the central axis of the camera array and a pair of blue imagers 784 symmetrically distributed about the central axis of the camera array. Because of the uniform distribution, both a red imager and a blue imager can image foreground objects 786 on the left hand side of the foreground object, and both a red imager and a blue imager can be imaged more than The foreground object on the right hand side of the foreground object. 
The symmetrical configuration of the simple embodiment shown in Figure 6J can be generalized to array cameras comprising red, green and blue imagers and/or additional multi-color or near-infrared imagers. By distributing each of the different imager types symmetrically around the central axis of the camera array, the parallax effects introduced by foreground objects can be significantly reduced, and the color artifacts that would otherwise be introduced are avoided. The parallax effects on color sampling can also be reduced by using the parallax information from the multi-color imagers to improve the accuracy of color sampling from the color-filtered imagers.
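One way to state the symmetry requirement discussed above is that the map of imager types should be unchanged by a 180-degree rotation about the array centre, so that every imager of a given type has a counterpart on the opposite side of a foreground object. The sketch below checks a proposed filter layout for that property; the toy layouts are invented for illustration and do not reproduce the figures.

```python
def is_symmetric_about_center(layout):
    """Each filter type must map onto itself under 180-degree rotation about the array centre."""
    n = len(layout)
    for r in range(n):
        for c in range(n):
            if layout[r][c] != layout[n - 1 - r][n - 1 - c]:
                return False
    return True

# Asymmetric toy layout: the red imagers cluster on one side, the blue on the other,
# so a foreground object can occlude one color entirely on one side.
bad = ["RGB",
       "RGB",
       "GGG"]

# Symmetric toy layout: every red and blue imager has a mirrored counterpart.
good = ["RGB",
        "GGG",
        "BGR"]

print(is_symmetric_about_center(bad), is_symmetric_about_center(good))
```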
S 39 201228382 視差資訊來改善來自該色彩過濾成像器的色彩取樣精確度 而被降低。 使用近紅外線成像器來得到改善的高解析度影像 在一實施例中’近紅外線成像器被使用以決定相較於 可見光谱成像器的相對亮度差異。物體具有不同材料反射 能力導致由S玄可見光譜及該近紅外線光譜所捕捉影像上的 差異。在低照明條件下,該近紅外線成像器展現較高訊雜 比。因此’來自該近紅外線感測器的訊號可被使用以強化 該亮度影像。來自該近紅外線影像的細部轉移至該亮度影 像可在透過該超解析度程序來集中不同成像器的光譜影像 前先被執行《在本方式中,有關場景的邊緣資訊可被改善 以建構可有效地使用於該超解析度程序的邊緣維持影像。 使用近紅外線成像器的優勢在公式(2)中係顯而易見,其 中,對該雜訊(也就是n)評量的任何改善導致該原始高解析 度場景(X)的較佳評量。 高解析度影像的產生 圖7係根據一實施例說明自複數個成像器所捕捉的低 解析度影像中產生一高解析度影像的方法流程圖。首先, 亮度影像 '近紅外線影像及色度影像係由該相機陣列中的 成像器所捕捉。接著’對該些捕捉影像的正規化被執行於 步驟714。可以各種方式將該些影像正規化,包含正規化該 些影像的色彩平面、執行溫度補償及映射該些影像的實體 位址至该強化影像的邏輯位址,但不限於此。在其它實施 例中,各式各樣正規化程序適用於該些特定成像器及成像 201228382 應用。視差補償接著被執行於步驟720以解決因為該些成 像器間的空間隔離所致的成像器視野上的任何差異。超解 析度處理接著被執行於步驟724以得到超解像亮度影像、 超解像近紅外線影像及超解像色度影像。 接著’步驟728決定是否該照明條件係優於一預設參 數。若該照明條件係優於該參數’則該方法繼續進行正規 化與一超解像亮度影像有關的超解像近紅外線影像。一焦 點復原接著被執行於步驟742。在一實施例中,步驟742所 執行的焦點復原係使用P S F (點擴散函數)來去除每個色彩 通道上的模糊不清。接著,該超解析度係依據近紅外線影 像及該些亮度影像而於步驟746進行處理。一合成影像接 著被建構於步驟7 5 0。 若步驟728決定該照明條件並未優於該預設參數,則 該超解像近紅外線影像及亮度影像被對準於步驟734。接 著,該些超解像亮度影像係於步驟738使用該些近外線超 解像影像來去除雜^接著,該方法繼續執行焦點復原步 驟742並在該照明條件係優於該預設參數時重複相同步驟。 色彩平面正規化 遍及該成像平面各處的紅色、綠色、藍色成像器中每 :個的相對響應不@。該變異可以是包含該透鏡的光學對 準及非對稱感測器光路和幾伯_的耸少m^ 崎^工戍何的6午多因素的結果。對於給 予透鏡及成像器而古,兮織g丄l »,5亥變異可經由校準及正規化來補 償0沒有補償時,該變旦合?丨4 支異a引起例如色彩變暗的瑕疵。 根據本發明一實祐存丨田 &例用於正規化與典型地位在該相機 201228382 陣列中心的綠色成像器的底線成像器有關的成像器的方法 係參考至與底線綠色成像器有關的紅色成像器正規化來說 明於下。一類似方法可被使用以正規化與底線綠色成像器 有關的藍色成像器。在許多實施例中,該方法被施用於正 規化相機陣列内的每一個紅色及藍色成像器。 正規化表面可藉由先捕捉具有同樣反射係數的場景並 計算一色彩比表面以充當正規化基準而被校準。理想正規 化表面係均勻並可被描述為: 色彩比 G/R=G(i,j)/R(i,j) = K=Gcenter/Rcenter 其中,(i,j)描述該像素位置,K係一常數,且Gcen…、 Reenter描述在該中心位置的像素值。 該校準場景的輸出像素值内含該些理想像素值加上雜 訊加上黑階偏移,且可被描述如下: SR(i’j)=R(i,j) +雜訊 R(i,j)+黑階偏移 SG(i,j)=G(i,j) +雜訊 G(i,j)+黑階偏移 其中,SR和SG係每一個成像器的輸出像素值。 根據本發明-實施例用於校準該感測器的方法係示於 圖7A m 76〇包含自該些感測器像素值中移除(步 762)該黑階偏移,並低通_(步驟764)該些影像平面以降 低雜訊。該正規化平面被計算(步驟褐),且—些實施例被 42 201228382 . 計算如下: 正規化 R=G(i,j)/(R(i,j)x(Gcenter/Rcenter)) 〆、中’ Gcenter和Rcenter係在該中心位置的像素值。 接著計算該正規化平面,平均濾片可被施用(步騍 768),且该正規化R平面的值可被儲存(77〇)。 攜帶一感測器陣列中的每一個感測器的全部正規化資 料的成本可能相當高。因此,許多實施例使用空間填充曲 線來掃目田δ亥正規化R平面以形成一維陣列。該產生的—維 陣列可以各式各樣不同方式來建立模型,包含建立成具有 合適階層的多項式模型。在一些實施例中,該合適多項式 的多項 <被儲存(810)為參數以於校準期間使用纟重新建構 該二維正規化平面。根據本發明一些實施例的空間填充曲 線建構進一步被描述於下。 在一些實施例中,空間填充曲線被使用以形成描述正 規化平面的-維陣列。❹螺旋掃目结所建構的空間填充曲 線係示於圖7B。空間填充曲線谓可由該正規化平面78ι 的中心開始並向外橫越四邊方塊。該方塊的每一邊相較於 則方塊擴大一個像素,使得每一個像素會被正確地越過 人在所示貫施例中,標記為“X”的每一個位置782對應 至有效像素位置。該成像器可不具有方形幾何,如此,該 掃瞎路徑可橫越未佔用空間(如虛線所示)。對於橫越的每一 個位置而m係有效像素位置,新資料項被加至該一S 39 201228382 Parallax information is used to improve the color sampling accuracy from this color filter imager. Using a near infrared ray imager to obtain an improved high resolution image In one embodiment, a near infrared ray imager is used to determine the relative brightness difference compared to the visible spectrum imager. The ability of an object to have different material reflections results in differences in the images captured by the S-visible spectrum and the near-infrared spectrum. The near-infrared imager exhibits a higher signal-to-noise ratio under low lighting conditions. Therefore, a signal from the near-infrared sensor can be used to enhance the luminance image. The transfer of the detail from the near-infrared image to the luminance image can be performed before the spectral image of the different imagers is concentrated by the super-resolution program. In this mode, the edge information about the scene can be improved to construct effectively. The image is used to maintain the image at the edge of the super-resolution program. 
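A minimal sketch of transferring the low-noise structure of a near-IR capture into the visible luminance image, here with a joint bilateral filter whose range weights are driven by the near-IR guide. The specific filter choice and the parameter values are illustrative assumptions; the fusion discussion later in this document does mention bilateral filtering for detail transfer.

```python
import numpy as np

def joint_bilateral(luma, nir, radius=3, sigma_s=2.0, sigma_r=0.1):
    """Smooth `luma` with weights driven by spatial distance and by similarity in `nir`,
    so edges present in the near-IR guide are preserved while noise is averaged away."""
    h, w = luma.shape
    pad = radius
    luma_p = np.pad(luma, pad, mode="reflect")
    nir_p = np.pad(nir, pad, mode="reflect")
    out = np.zeros_like(luma)
    weight_sum = np.zeros_like(luma)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            spatial = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
            shifted_luma = luma_p[pad + dy:pad + dy + h, pad + dx:pad + dx + w]
            shifted_nir = nir_p[pad + dy:pad + dy + h, pad + dx:pad + dx + w]
            range_w = np.exp(-((shifted_nir - nir) ** 2) / (2 * sigma_r ** 2))
            wgt = spatial * range_w
            out += wgt * shifted_luma
            weight_sum += wgt
    return out / weight_sum

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0, 1, 64), (64, 1))          # synthetic scene
nir = clean + 0.01 * rng.standard_normal(clean.shape)    # low-noise near-IR capture
luma = clean + 0.10 * rng.standard_normal(clean.shape)   # noisier visible luminance
denoised = joint_bilateral(luma, nir)
print("noise before: %.4f  after: %.4f" % (np.std(luma - clean), np.std(denoised - clean)))
```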
The advantage of using a near-infrared imager is evident in equation (2), where any improvement in the evaluation of the noise (i.e., n) results in a better assessment of the original high-resolution scene (X). High Resolution Image Generation Figure 7 is a flow diagram illustrating a method for generating a high resolution image from a low resolution image captured by a plurality of imagers, in accordance with an embodiment. First, the luminance image 'near-infrared image and chrominance image are captured by the imager in the camera array. The normalization of the captured images is then performed in step 714. The images may be normalized in a variety of ways, including normalizing the color planes of the images, performing temperature compensation, and mapping physical addresses of the images to logical addresses of the enhanced image, but are not limited thereto. In other embodiments, a variety of normalization procedures are available for these particular imagers and imaging 201228382 applications. The disparity compensation is then performed at step 720 to account for any differences in the field of view of the imager due to spatial isolation between the inter-imagers. The super-resolution processing is then performed in step 724 to obtain a super-resolution luminance image, a super-resolution near-infrared image, and a super-resolution chrominance image. Next step 728 determines if the lighting condition is superior to a predetermined parameter. If the illumination condition is superior to the parameter' then the method continues to normalize the super-resolution near-infrared image associated with a super-resolution luminance image. A focus recovery is then performed in step 742. In one embodiment, the focus restoration performed in step 742 uses P S F (point spread function) to remove blurring on each color channel. Next, the super-resolution is processed in step 746 based on the near-infrared image and the brightness images. A composite image is then constructed in step 705. If step 728 determines that the illumination condition is not superior to the preset parameter, then the super-resolution near-infrared image and the luminance image are aligned to step 734. Then, the super-resolution luminance images are used in step 738 to remove the noise using the near-outline super-resolution images. The method continues to perform the focus restoration step 742 and repeats when the illumination condition is superior to the preset parameters. The same steps. Color plane normalization The relative response of each of the red, green, and blue imagers throughout the imaging plane is not @. The variation can be the result of the optical alignment of the lens and the optical path of the asymmetric sensor and the multiple factors of the six noon. For the lens and imager, the 丄 丄 丄 , , , , , , , , , , , , , , , , 可 可 可 可 可 可 校准 校准 校准 校准 校准 校准 校准 校准 校准 校准 校准 校准丨4 The difference a causes, for example, a darkened color. A method for normalizing an imager associated with a bottom line imager of a green imager typically located at the center of the camera 201228382 array is referenced to a red associated with a bottom line green imager in accordance with the present invention. The imager is normalized to illustrate the following. A similar approach can be used to normalize the blue imager associated with the bottom line green imager. In many embodiments, the method is applied to each of the red and blue imagers within the normalized camera array. 
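The branching flow of Figure 7 described above can be summarized in code. In the sketch below every numbered step is reduced to a named placeholder, so it only captures the control flow: normalize, compensate parallax and super-resolve each image set, then either normalize the near-IR result against the luminance result or use it to denoise the luminance, followed by focus recovery and synthesis. None of the placeholder implementations come from this disclosure.

```python
import numpy as np

def normalize(images):           return [img.astype(float) for img in images]   # step 714 placeholder
def parallax_compensate(images): return images                                  # step 720 placeholder
def super_resolve(images):       return np.mean(np.stack(images), axis=0)       # step 724 placeholder
def align(a, b):                 return a, b                                    # step 734 placeholder
def denoise_with_nir(luma, nir): return 0.5 * (luma + nir)                      # step 738 placeholder
def focus_recovery(img):         return img                                     # step 742 placeholder (PSF deblur)

def generate_high_resolution_image(luma_imgs, nir_imgs, chroma_imgs, illumination_ok):
    """Control flow of Figure 7; every numbered step above is a stand-in."""
    luma = super_resolve(parallax_compensate(normalize(luma_imgs)))              # steps 714-724
    nir = super_resolve(parallax_compensate(normalize(nir_imgs)))
    chroma = super_resolve(parallax_compensate(normalize(chroma_imgs)))
    if illumination_ok:                                                          # step 728
        nir = nir * (np.mean(luma) / max(np.mean(nir), 1e-6))                    # normalize NIR against luminance
    else:
        nir, luma = align(nir, luma)                                             # step 734
        luma = denoise_with_nir(luma, nir)                                       # step 738
    luma = focus_recovery(luma)                                                  # step 742
    return np.dstack([luma, chroma])                                             # steps 746-750, stand-in synthesis

rng = np.random.default_rng(0)
frames = lambda n: [rng.random((32, 32)) for _ in range(n)]
out = generate_high_resolution_image(frames(8), frames(9), frames(8), illumination_ok=True)
print(out.shape)
```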
The normalized surface can be calibrated by first capturing a scene with the same reflection coefficient and calculating a color ratio surface to serve as a normalization reference. The ideal normalized surface is uniform and can be described as: Color ratio G/R=G(i,j)/R(i,j) = K=Gcenter/Rcenter where (i,j) describes the pixel position, K A constant is specified, and Gcen..., Reenter describes the pixel value at the center position. The output pixel value of the calibration scene contains the ideal pixel values plus noise plus black level offset, and can be described as follows: SR(i'j)=R(i,j) +no noise R(i , j) + black level offset SG(i, j) = G(i, j) + noise G(i, j) + black level offset where SR and SG are the output pixel values of each imager. A method for calibrating the sensor in accordance with the present invention is shown in FIG. 7A, and includes removing (step 762) the black-order offset from the sensor pixel values and low-passing _ ( Step 764) the image planes to reduce noise. The normalization plane is calculated (step brown), and some embodiments are calculated as 42 201228382. The normalization is R=G(i,j)/(R(i,j)x(Gcenter/Rcenter)) 〆, In 'Gcenter and Rcenter are the pixel values at the center position. The normalization plane is then calculated, an average filter can be applied (step 768), and the value of the normalized R plane can be stored (77 〇). The cost of carrying all of the normalized data for each sensor in a sensor array can be quite high. Thus, many embodiments use spatial fill curves to scan the δ-hai normalized R-plane to form a one-dimensional array. The resulting-dimensional array can be modeled in a variety of different ways, including building a polynomial model with the appropriate hierarchy. In some embodiments, a plurality of < of the appropriate polynomials are stored (810) as parameters to reconstruct the two-dimensional normalization plane during use during calibration. Spatial fill curve construction in accordance with some embodiments of the present invention is further described below. In some embodiments, a space fill curve is used to form a -dimensional array that describes a normalized plane. The space filling curve constructed by the ❹ spiral sweep is shown in Fig. 7B. The space fill curve can be started from the center of the normalization plane 78ι and traverses the four squares outward. Each side of the square is enlarged by one pixel compared to the square, such that each pixel is correctly crossed. In each of the illustrated embodiments, each location 782 labeled "X" corresponds to an effective pixel location. The imager may not have a square geometry such that the broom path may traverse unoccupied space (as indicated by the dashed lines). For each position traversed and m is the effective pixel position, a new data item is added to the one
S 43 201228382 維資料陣列。否則,該橫越動作繼續,不將新值加入該資 料陣列。在許多實施例十,該一維資料陣列可使用6階多 項式來有效地估計’可使用該多項式的七個係數來代表 之。每一個紅色及藍色成像器典型地需給予那個校準資 料,將該些正規化平面表示為多項式係數代表對儲存需求 的顯著減少。在許多實施例中,較高或較低階多項式、其 匕函數及/或其它壓縮表示式被運用以根據特定應用的需求 來表示該正規化平面。 沿著每一邊的資料值展現固定幾何關係。該光學路徑 至該透鏡的焦點對於靠近該中心線的單元而言係短的。該 f本靈敏度可被認為在該校準表面中的一維中心且由低階 員式來估之。该靈敏度多項式不是被儲存為機器常數 就是對具有相同設計的所有裝置係共同的),就是連同該 掃猫多項式-起儲存以提供額外彈性。據此,本發明許多 =施例如下所述地依據該距離因素來調整該像素值。對於 =一邊掃,該些座標中的—者會為常數,也就是, 常數“y”為水平掃瞄且常數“x”為垂直掃目运。對於該邊掃瞄中 的每個像素而言,該靈敏度因素係朝向該常數“X”或“y” 距離進行調整β 、舉例來說,對於一水平掃晦,該基本值可依據與該中 心的距離丫來評量該靈敏度多項式而被建立^在許多實施 」中^ 多項式係四階多項式。然^其它多項式及/或其 匕函數可根據-特定應用的需求來運用。對於該掃猫路徑 中的每-個像素而|,以相同方式使用與該表面原點的距 201228382 離以自°亥夕項式中求取相對應靈敏度。該像素值乘上調整 因子並接著儲存於該掃瞄資料陣列中。該調整因子係以該 目則靈敏度值除以該基本值而算出。對於該垂直掃瞄而 言,類似方法可被施用。儘管本範例使用以多項式為主的 靈敏度σ周整,但其它靈敏度函數及/或調整也可根據本發明 各種實施例視特定應用的需求而被運用。 一旦取得用於成像器的校準資料時,該校準資料可被 使用於該成像器所捕捉的像素資訊正規化中。該方法典型 地涉及取出該儲存校準資料、自該捕捉影像中移除該黑階 偏移及將具有該正規化平面的結果值相乘。當該正規化平 面係以上面概述方式來表示成多項式時,肖多項式被使用 ^產生一維陣列,且該一維陣列的反向掃瞄被使用以形成 該二維正規化平面。在校準期間施用靈敏度調整的地方, 5周整因子被計# ’其係在該校準㈣期間所施用調整因子 的倒數,且該調整因子係在該反向掃瞒期間施用至 陣列内的#。當其它空間填充曲線、該產生的一維資料陣 列的表示式及/或靈敏度調整係執行於該校準程序士, 該正規化程序據此進行調整。 ’ $ 如同可輕易理解地, 化程序可被施用至該相機 一個。在許多實施例中, 器係於執行該校準時使用 成像器及/或多個綠色成像 及藍色成像器校準中。 根據本發明實施例的校準及正規 陣列的紅色及藍色成像器中的每 位在該相機陣列中心的綠色成像 在其它實施例中,不丄 益可被運用於該相機陣列的紅色 45 201228382 彩色影像與近紅外線影像的影像融合 互補式金屬氧化物半導體成像器的光譜響應典型地在 涵蓋650奈米至800奈米的近紅外線區域内係非常良好且 在800奈米至丨000奈米間係相當良好。因為近紅外線成像 器係相對地無雜訊,故該些近紅外線成像器雖沒有色度資 戒,但在本光譜區域内的資訊於低照明條件中係有用的。 因此,忒些近紅外線影像可被使用以去除該低照明條件下 的彩色影像雜訊》 在一實施例中,來自近紅外線成像器的影像係與來自 可見光成像器的另一影像融合。在進行融合前,—登錄被 執行於該近紅外線影像及該可見光影像之間以解決視角差 異。該登錄程序可被執行於離線的一次性處理步驟中。在 該登錄被執行後,該近紅外線影像的亮度資訊被内插至對 應至該可見光影像上的每一個格子點的格子點中。 在該近紅外線影像及該可見光影像間的像素相關性被 建立後,去除雜訊及細部轉移程序可被執行。該去除雜訊 程序允許訊號資訊自該近紅外線影像轉移至該可見光影像 以改善該融合影像的整體訊雜比。該細部轉移確保該近紅 外線影像及該可見光影像的邊緣被保存並強調以改善該融 合影像中的物體的整體可見度。 在一實施例中,近紅外線閃光燈可於該些近紅外線成 像器捕捉一影像期間充當一近紅外線光源。使用該近紅外 線閃光燈係有利的,除了其它理由外,還因為⑴可防止對 有興趣物體的惡劣照明,(ii)可保存該物體的背景顏色,及 46 201228382 (iii)可防止紅眼效應。 在-實施例中,只允許近紅外線通過的可見光渡片被 使用以進一步最佳化用於近紅外線成像的光學儀器。因為 該光濾、片在該近紅外線影像中產生較精準細部,故該可見 光濾片改善該近紅外線光學儀器轉移函數。接著,該些細 部可使用一雙兩側據片來轉移至該些可見光影像,如同例S 43 201228382 Dimensional data array. Otherwise, the traversal action continues without adding new values to the data array. In many embodiments, the one-dimensional data array can use a sixth-order polynomial to effectively estimate 'seven coefficients that can be used to represent the polynomial. Each red and blue imager typically needs to give that calibration data, and representing these normalized planes as polynomial coefficients represents a significant reduction in storage requirements. In many embodiments, higher or lower order polynomials, their 匕 functions, and/or other compressed representations are utilized to represent the normalization plane as needed for a particular application. A fixed geometric relationship is presented along the data values on each side. The optical path to the focus of the lens is short for cells near the centerline. This sensitivity can be considered as a one-dimensional center in the calibration surface and is estimated by the low-level formula. The sensitivity polynomial is not stored as a machine constant or is common to all devices having the same design, and is stored along with the sweeping polynomial to provide additional flexibility. Accordingly, many of the present invention adjust the pixel value in accordance with the distance factor as described below. For the = side sweep, the ones in the coordinates will be constant, that is, the constant "y" is the horizontal scan and the constant "x" is the vertical sweep. 
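The spiral space-filling scan and polynomial compression described above can be sketched as follows. The square-ring walk used here is one way to realize the outward spiral of Figure 7B, and the sixth-order fit (seven stored coefficients) follows the text; the synthetic normalization plane is invented for illustration.

```python
import numpy as np

def spiral_order(h, w):
    """Visit every (row, col) of an h x w grid exactly once, spiralling outward from the centre."""
    moves = [(0, 1), (1, 0), (0, -1), (-1, 0)]        # right, down, left, up
    r, c = h // 2, w // 2
    order, seen = [(r, c)], {(r, c)}
    run, direction = 1, 0
    while len(order) < h * w:
        for _ in range(2):                            # the run length grows every two legs
            dr, dc = moves[direction % 4]
            for _ in range(run):
                r, c = r + dr, c + dc
                if 0 <= r < h and 0 <= c < w and (r, c) not in seen:
                    seen.add((r, c))
                    order.append((r, c))
            direction += 1
        run += 1
    return order

# Synthetic normalization plane with a smooth radial profile (stand-in for the calibrated surface).
yy, xx = np.mgrid[0:99, 0:99]
plane = 1.0 + 0.2 * ((yy - 49.0) ** 2 + (xx - 49.0) ** 2) / (49.0 ** 2)

order = spiral_order(*plane.shape)
scan = np.array([plane[r, c] for r, c in order])      # one-dimensional space-filling scan
t = np.linspace(0.0, 1.0, scan.size)
coeffs = np.polyfit(t, scan, deg=6)                   # seven coefficients to store per imager
recon_scan = np.polyval(coeffs, t)

recon_plane = np.empty_like(plane)
for value, (r, c) in zip(recon_scan, order):          # reverse scan rebuilds the 2-D plane
    recon_plane[r, c] = value
print("worst-case reconstruction error: %.4f" % np.max(np.abs(recon_plane - plane)))
```

Most of the residual error comes from the variation within each square ring of the scan, which is what the distance-based sensitivity adjustment described in the surrounding text is intended to absorb.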
For each pixel in the side scan, the sensitivity factor is adjusted towards the constant "X" or "y" distance, for example, for a horizontal broom, the base value can be based on the center The distance is evaluated by evaluating the sensitivity polynomial ^ In many implementations ^ Polynomial is a fourth-order polynomial. However, other polynomials and/or their 匕 functions can be applied according to the needs of the particular application. For each pixel in the sweeping cat path, the distance from the origin of the surface is used in the same way as 201228382 to determine the corresponding sensitivity from the equation. The pixel value is multiplied by the adjustment factor and then stored in the scan data array. This adjustment factor is calculated by dividing the sensitivity value of the target by the basic value. For this vertical scan, a similar method can be applied. Although the present example uses polynomial-based sensitivity σ, other sensitivity functions and/or adjustments may be utilized in accordance with various embodiments of the present invention depending on the needs of the particular application. Once the calibration data for the imager is obtained, the calibration data can be used in the normalization of pixel information captured by the imager. The method typically involves taking the stored calibration data, removing the black level offset from the captured image, and multiplying the resulting value having the normalized plane. When the normalized plane is represented as a polynomial in the manner outlined above, the Xiao polynomial is used to generate a one-dimensional array, and a reverse scan of the one-dimensional array is used to form the two-dimensional normalized plane. Where the sensitivity adjustment is applied during calibration, the 5-week factor is counted as the reciprocal of the applied adjustment factor during the calibration (iv) and the adjustment factor is applied to # within the array during the reverse broom. When other spatial fill curves, representations and/or sensitivity adjustments of the resulting one-dimensional data array are performed on the calibration program, the normalization procedure is adjusted accordingly. As can be easily understood, the program can be applied to the camera one. In many embodiments, the device is used in performing image calibration and/or multiple green imaging and blue imager calibrations. Green imaging of each of the red and blue imagers of the calibration and regular arrays in accordance with embodiments of the present invention at the center of the camera array is not beneficial in other embodiments, and can be applied to the red color of the camera array 45 201228382 color Image fusion with near-infrared image The spectral response of a complementary metal oxide semiconductor imager is typically very good in the near-infrared region covering 650 nm to 800 nm and is between 800 nm and 丨000 nm. Quite good. Because near-infrared imagers are relatively free of noise, these near-infrared imagers have no chromaticity constraints, but the information in this spectral region is useful in low lighting conditions. Thus, some near-infrared images can be used to remove color image noise in the low illumination conditions. In one embodiment, the image from the near infrared imager is fused to another image from the visible light imager. Before the fusion, the registration is performed between the near-infrared image and the visible image to resolve the difference in viewing angle. 
The login procedure can be performed in an offline one-time processing step. After the registration is performed, the brightness information of the near-infrared image is interpolated into the grid points corresponding to each of the grid points on the visible light image. After the pixel correlation between the near-infrared image and the visible light image is established, the noise removal and detail transfer procedure can be performed. The noise removal program allows signal information to be transferred from the near infrared image to the visible light image to improve the overall signal to noise ratio of the fused image. The detail transfer ensures that the near infrared image and the edges of the visible image are preserved and emphasized to improve the overall visibility of the object in the blended image. In one embodiment, the near-infrared flash can act as a near-infrared source during the capture of an image by the near-infrared imager. The use of the near-infrared line flash is advantageous, among other reasons, because (1) prevents poor illumination of objects of interest, (ii) preserves the background color of the object, and 46 201228382 (iii) prevents red-eye effects. In an embodiment, visible light passages that only allow near infrared rays to pass are used to further optimize optical instruments for near infrared imaging. Since the optical filter and the sheet produce a more precise detail in the near-infrared image, the visible optical filter improves the transfer function of the near-infrared optical instrument. Then, the details can be transferred to the visible light images using a pair of two-sided sheets, as in the example
女由Eric P. Bennett等人於電腦圖形學會(ACM 公報)(2GG6年7月25日)的“多光譜視訊融合(施⑴〒。㈣ Vldeo Fus —”文章中所述的,在此將其全體—併整合參考 之。 藉由成像器的曝光不同來決定動態範圍 自動曝光(AE)演算法對於得到欲捕捉場景的適當曝光 ㈣要的。該自動曝光演算法的設計影響捕捉影像的動態 範圍。該自動曝光演算法決定讓該擷取影像“該相機陣 列j感光範圍料性區域㈣光值。因為在本線性區域中 可得到良好訊雜比’故該線性區域係較佳的。若該曝光太 少’則該圖像變得不夠飽滿,而若該曝光太多,則該圖像 變得過度飽滿。在傳統相機中,重複程序被執行以將測量 的圖像亮度及先前定義的亮度間的差異降低至低於臨界 值。本重複程序需要大量時間進行收斂,且有時產生無法 接受的快門延遲。 >在一實施例中,由複數個成像器所捕捉的影像的圖像 売度係各自進行測量。尤其,複數個成像器被設定 不同曝光影像以降低計算該適當曝光的時間。例如,在具 g 47 201228382 有5x5成像器的相機陣列中’其中,8個亮度成像器及9個 近紅外線成像器被提供,該些成像器中的每一個可被設定 具有不同曝光。該些近紅外線成像器被使用以捕捉該場景 的低光樣子’且該些亮度成像器被使用以捕捉該場景的高 照明樣子。這個產生總共17個可能曝光。若每一個成像器 的曝光係自一相鄰成像器偏移例如2因子,則可捕捉的最 大動態範圍為217或102分貝。本最大動態範圍係遠大於具 有8位元影像輸出的傳統相機中可得到的典型4 8分貝。 在每個瞬間,來自該多個成像器中的每一個成像器的 響應(曝光不足、過度曝光或最佳曝光)係依據下一個瞬間需 要多少曝光來分析。相較於一次只有曝光被測試的例子 中,同時詢問該可能|光範圍内的多個曝光的能力加速該 搜尋。藉由降低決定該適當曝光的處理時間,快門延遲及Female is described by Eric P. Bennett et al. in the Computer Graphics Society (ACM Bulletin) (July 25, 2G, 6th), "Multi-spectral video fusion (Shi (1) 〒. (4) Vldeo Fus -"), which is here The whole - and integrated reference. The dynamic range auto-exposure (AE) algorithm is determined by the different exposure of the imager to obtain the appropriate exposure (4) of the scene to be captured. The design of the automatic exposure algorithm affects the dynamic range of the captured image. The automatic exposure algorithm determines that the captured image "the camera array j photosensitive region (4) light value. Because a good signal-to-noise ratio is obtained in the linear region, the linear region is preferred. If the exposure is too small, the image becomes insufficiently full, and if the exposure is too much, the image becomes excessively full. In a conventional camera, the repeating program is executed to measure the brightness of the image and the previously defined brightness. The difference between the two decreases below the critical value. This repetitive procedure requires a lot of time to converge and sometimes produces an unacceptable shutter delay. > In one embodiment, by a plurality of imagers The image intensity of the captured image is measured separately. In particular, a plurality of imagers are set to different exposure images to reduce the time to calculate the appropriate exposure. For example, in a camera array having a 5x5 imager with g 47 201228382 8 brightness imagers and 9 near-infrared imagers are provided, each of the imagers being set to have different exposures. The near-infrared imagers are used to capture the low-light appearance of the scene' and These brightness imagers are used to capture the high illumination of the scene. This produces a total of 17 possible exposures. If the exposure of each imager is offset from an adjacent imager by, for example, 2 factors, the maximum dynamic range that can be captured It is 217 or 102 decibels. The maximum dynamic range is much larger than the typical 48 decibels available in conventional cameras with 8-bit image output. At each instant, the response from each of the multiple imagers (underexposed, overexposed, or optimal exposure) is based on how much exposure is needed in the next instant. Compared to an example where only exposure is tested The ability to simultaneously ask for multiple exposures within the possible range of light accelerates the search. By reducing the processing time that determines the appropriate exposure, the shutter delay and
在實施例中,藉由結合對每一個曝光的成像器響應In an embodiment, by combining the imager response to each exposure
該高動態範圍影 進行登錄以說明 該些成像器的視角差異。The high dynamic range shadow is logged in to account for the difference in viewing angles of the imagers.
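A simplified sketch of merging the responses of imagers set to different exposures (a factor of two apart, as described above) into one high dynamic range estimate. It assumes a linear sensor response and uses a plain per-pixel weighting; the cited approach of Debevec et al. also recovers the response curve, and registration between the imagers is assumed to have been done already.

```python
import numpy as np

FULL_SCALE = 4095.0                        # 12-bit sensor, placeholder
EXPOSURES_MS = [1.0, 2.0, 4.0, 8.0, 16.0]  # one imager per exposure, factor-of-two steps

def weight(pixel):
    """Trust mid-range pixels most; under- and over-exposed pixels get little weight."""
    return np.clip(1.0 - np.abs(pixel / FULL_SCALE - 0.5) * 2.0, 0.01, 1.0)

def merge_hdr(frames, exposures_ms):
    """Per-pixel weighted average of radiance estimates (pixel value / exposure time),
    assuming the registered frames are already on a common pixel grid."""
    num = np.zeros_like(frames[0], dtype=float)
    den = np.zeros_like(frames[0], dtype=float)
    for frame, t in zip(frames, exposures_ms):
        w = weight(frame)
        num += w * (frame / t)
        den += w
    return num / den

rng = np.random.default_rng(3)
radiance = np.exp(rng.uniform(np.log(1.0), np.log(3000.0), (64, 64)))   # wide-range scene
frames = [np.clip(radiance * t + rng.normal(0, 4, radiance.shape), 0, FULL_SCALE)
          for t in EXPOSURES_MS]
hdr = merge_hdr(frames, EXPOSURES_MS)
print("usable scene range: %.0f : 1" % (np.percentile(hdr, 99.5) / np.percentile(hdr, 0.5)))
```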
In one embodiment, at least one imager includes high dynamic range (HDR) pixels in order to capture high dynamic range images. HDR pixels capture a wider dynamic range than other pixels but, compared with the near-IR imagers, exhibit poor performance under low-light conditions. To improve this performance, the signals from the near-IR imagers may be used in combination with the signal from the high dynamic range imager to obtain better quality images under different lighting conditions.

In one embodiment, a high dynamic range image is obtained by processing images captured by multiple imagers, for example as disclosed by Paul Debevec et al. in "Recovering High Dynamic Range Radiance Maps from Photographs", Computer Graphics (ACM SIGGRAPH Proceedings) (August 16, 1997), which is incorporated by reference herein in its entirety. The ability of the imagers to capture multiple exposures simultaneously is advantageous because artifacts caused by the motion of objects in the scene can be reduced or eliminated.

Hyperspectral imaging using multiple imagers

In one embodiment, a multi-spectral image is produced by multiple imagers to assist in the segmentation or recognition of objects in the scene. Because the spectral reflectance coefficients of most real-world objects vary smoothly, the spectral reflectance coefficients may be estimated by capturing the scene in multiple spectral ranges using imagers with different color filters and analyzing the captured images using principal component analysis (PCA).

In one embodiment, half of the imagers in the camera array are dedicated to sampling the basic spectral ranges (red, green and blue) and the other half of the imagers are dedicated to sampling shifted basic spectral ranges (R', G' and B'). The shifted basic spectral ranges are offset from the basic spectral ranges by a certain wavelength (for example, 10 nm).

In one embodiment, pixel correlation and non-linear interpolation are performed to account for the sub-pixel-shifted views of the scene. The spectral reflectance coefficients of the scene are then synthesized using a set of orthogonal basis functions, as disclosed, for example, by S. Parkkinen, J. Hallikainen and T. Jaaskelainen in "Characteristic Spectra of Munsell Colors", J. Opt. Soc. Am. A 6:318 (August 1989), which is incorporated by reference herein in its entirety. The basis functions are eigenvectors derived by principal component analysis of a correlation matrix, and the correlation matrix is derived from a database storing the measured spectral reflectance coefficients of, for example, Munsell color chips (1257 chips in total) representing a wide range of real-world materials, so that the spectrum at each point in the scene can be reconstructed.

At first glance, capturing different spectral images of the scene through different imagers of the camera array appears to trade resolution for a higher dimensional spectral sampling. However, some of the lost resolution can be recovered. The multiple imagers sample the scene over different spectral ranges, with each sampling grid of each imager offset by a sub-pixel shift relative to the others. In one embodiment, no two sampling grids of the imagers overlap; that is, the superposition of all the sampling grids from all the imagers forms a dense, and possibly non-uniform, mosaic of sample points. Scattered-data interpolation may be used to determine the spectral density at each sample point of each spectral image in this non-uniform mosaic, as described, for example, by Shiaofen Fang et al. in "Volume Morphing Methods for Landmark Based 3D Image Deformation", Medical Imaging, SPIE vol. 2710, pp. 404-415, SPIE International Symposium, Newport Beach, California (February 1996), which is incorporated by reference herein in its entirety. In this way, some of the resolution lost in sampling the scene with different spectral filters can be recovered.
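The basis-function reconstruction just described can be illustrated with a small least-squares sketch. It is only a sketch: the wavelength grid, the random test data and the function names are our assumptions, and in practice the basis would come from a PCA of a measured reflectance database such as the Munsell set.

```python
import numpy as np

wavelengths = np.arange(400, 701, 10)   # assumed visible-range grid, 10 nm steps

def fit_reflectance(camera_responses, sensitivities, basis):
    """Estimate a smooth spectral reflectance curve from a few filtered samples.

    camera_responses : (num_channels,) signals from the R, G, B, R', G', B' imagers
    sensitivities    : (num_channels, num_wavelengths) spectral sensitivity per channel
    basis            : (num_basis, num_wavelengths) PCA basis of reflectance spectra

    Solves responses ~= sensitivities @ (basis.T @ coeffs) in the least-squares
    sense and returns the reconstructed reflectance basis.T @ coeffs.
    """
    A = sensitivities @ basis.T                      # (num_channels, num_basis)
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(camera_responses, float), rcond=None)
    return basis.T @ coeffs                          # (num_wavelengths,)

if __name__ == "__main__":
    rng = np.random.default_rng(0)                   # purely illustrative data
    sens = rng.random((6, wavelengths.size))
    basis = rng.random((8, wavelengths.size))
    responses = sens @ (basis.T @ rng.random(8))
    print(fit_reflectance(responses, sens, basis).shape)   # (31,)
```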
As noted above, image segmentation and object recognition are facilitated by determining the spectral reflectance coefficients of an object. This situation commonly arises in security applications in which a network of cameras is used to track an object as it moves from the operating zone of one camera to that of another. Each zone may have its own unique illumination conditions (fluorescent, incandescent, D65, etc.), causing the object to have a different color appearance in each image captured by a different camera. If the cameras capture these images in a hyperspectral mode, all of the images can be converted to the same illuminant, enhancing the performance of object recognition.

In one embodiment, a camera array with multiple imagers is used to provide medical diagnostic images. Full-spectral digitized images of diagnostic samples contribute to accurate diagnoses because doctors and medical personnel can place greater confidence in the resulting diagnosis. The imagers in such camera arrays may be provided with color filters to supply full spectral data. Such camera arrays may be installed on cell phones to capture and transmit diagnostic information to remote locations, as described, for example, by Andres W. Martinez et al. in "Simple Telemedicine for Developing Regions: Camera Phones and Paper-Based Microfluidic Devices for Real-Time, Off-Site Diagnosis", Analytical Chemistry (American Chemical Society) (April 11, 2008), which is incorporated by reference herein in its entirety. Further, a camera array including multiple imagers can provide images with a large depth of field to enhance the reliability of image capture of wounds, rashes and other symptoms.

In one embodiment, small imagers (including, for example, 20-500 pixels) with narrow spectral band-pass filters are used to produce a signature of the ambient and local light sources in the scene. By using these small imagers, the exposure and white balance characteristics can be determined more accurately and much more quickly. The spectral band-pass filters may be ordinary color filters or diffractive elements, with pass bands narrow enough that a sufficient number of such imagers in the camera array can cover the roughly 400 nm span of the visible spectrum. These imagers can be operated at very high frame rates, and the data obtained from them (which may or may not be used for its pictorial content) is processed to control the exposure and white balance information of the other, larger imagers in the same camera array.
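A rough sketch of how the dedicated narrow-band imagers could drive white balance follows. The band groupings, the gray-world assumption and the function name are our own, not the patent's; they simply illustrate deriving channel gains from a handful of narrow-band measurements.

```python
import numpy as np

def white_balance_gains(band_centers_nm, band_means):
    """Estimate R, G, B gains from narrow-band imager readings.

    band_centers_nm : center wavelength of each small imager's band-pass filter
    band_means      : mean signal measured by each small imager
    Assumes the bands jointly span the visible range; long, medium and short
    wavelength groups stand in for R, G and B.
    """
    c = np.asarray(band_centers_nm, dtype=np.float64)
    m = np.asarray(band_means, dtype=np.float64)
    r = m[c >= 580].mean()
    g = m[(c >= 500) & (c < 580)].mean()
    b = m[c < 500].mean()
    gray = (r + g + b) / 3.0          # gray-world reference level
    return gray / r, gray / g, gray / b
```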
Optical zoom implemented using multiple imagers

In one embodiment, a subset of the imagers in the camera array includes telephoto lenses. This subset of imagers may otherwise have the same imaging characteristics as the imagers with non-telephoto lenses. The images from this subset of imagers are combined and super-resolution processed to form a super-resolved telephoto image. In another embodiment, the camera array includes two or more subsets of imagers equipped with lenses of more than two different magnifications to provide more than two levels of zoom magnification.

These camera array embodiments obtain their final resolution by combining images through super-resolution. Taking as an example a 5x5 array of imagers that provides a 3X optical zoom feature, if 17 imagers are used to sample luma (G) and 8 imagers are used to sample chroma (R and B), the 17 luma imagers allow a resolution four times higher than that obtainable from any single imager in the set of 17. If the number of imagers is increased from 5x5 to 6x6, 11 additional imagers become available. The resolution of a conventional 8-megapixel image sensor fitted with a 3X zoom lens is matched when 8 of the additional 11 imagers are dedicated to luma (G) and the remaining 3 imagers are dedicated to chroma (R and B) and near-IR sampling at 3X zoom. This considerably reduces the ratio of chroma (and near-IR) samples to luma samples. The reduced chroma-to-luma sampling ratio is partly compensated by using the super-resolved 3X-zoom luma image as a prior for re-sampling the chroma (and near-IR) images at the higher resolution.

With a 6x6 array of imagers, a resolution equivalent to that of a conventional image sensor is achieved at 1X zoom. At 3X zoom, the imagers achieve approximately 60% of the resolution of a conventional image sensor equipped with a 3X zoom lens; that is, the luma resolution at 3X zoom is lower than that of the conventional image sensor with a 3X zoom. However, this reduced luma resolution is compensated by the fact that the optics of a conventional image sensor lose efficiency at 3X zoom because of crosstalk and optical aberrations.

Zoom operation achieved with multiple imagers has the following advantages. First, because the lens elements can be tailored to each particular focal length, the resulting zoom quality is considerably higher than that obtained with a conventional image sensor. In a conventional image sensor, optical aberrations and field curvature must be corrected across the entire operating range of the lens, which is considerably more difficult in a zoom lens with moving elements than in fixed lens elements whose aberrations only need to be corrected at a single focal length. In addition, the fixed lenses of the imagers have a fixed chief ray angle for a given height, which is not the case in a conventional image sensor with a moving zoom lens. Second, the imagers allow a zoom lens to be emulated without a significant increase in the optical track height. The reduced height allows thin modules to be configured, even for camera arrays with zoom capability.

According to one embodiment, the overhead required to support a given level of optical zoom within a camera array is listed in Table 2.

Table 2
| Number of imagers in the camera array | Luma imagers at 1X / 2X / 3X zoom | Chroma imagers at 1X / 2X / 3X zoom |
|---|---|---|
| 25 | 17 / 0 / 0 | 8 / 0 / 0 |
| 36 | 16 / 0 / 8 | 8 / 0 / 4 |
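As a rough check on the statement that 17 luma imagers allow roughly four times the single-imager resolution, the idealized linear resolution gain from combining N sub-pixel-shifted images scales like the square root of N. The sketch below only states that rule of thumb; it is not taken from the patent text.

```python
import math

def max_superresolution_factor(num_luma_imagers: int) -> float:
    """Idealized upper bound on the linear resolution gain from super-resolving
    N sub-pixel-shifted luma images: about sqrt(N)."""
    return math.sqrt(num_luma_imagers)

print(round(max_superresolution_factor(17), 2))   # ~4.12, roughly the 4x quoted above
```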
In one embodiment, the pixels in the captured images are mapped onto an output image whose size and resolution correspond to the amount of zoom desired, in order to provide a smooth zoom capability from the widest-angle view to the most telephoto view. Assuming that the higher-magnification lenses have the same center of view as the lower-magnification lenses, the available image information causes the central region of the image to have a higher effective resolution than the outer regions. With three or more different magnifications, nested regions of different resolution may be provided, with the resolution increasing toward the center.

The image with the greatest telephoto effect has a resolution determined by the super-resolving ability of the imagers equipped with the telephoto lenses. The image with the widest field of view can be formatted in at least one of two ways. First, the wide-angle image can be formatted as an image with uniform resolution, where the resolution is determined by the super-resolving ability of the set of imagers having the wider-angle lenses. Second, the wide-angle image can be formatted as a higher-resolution image in which the resolution of the central portion of the image is determined by the super-resolving ability of the set of imagers equipped with the telephoto lenses. In the lower-resolution regions, the information from the reduced number of pixels per image area is smoothly interpolated across the larger number of "digital" pixels. In such an image, the pixel information can be processed and interpolated so that the transition from the higher-resolution to the lower-resolution regions occurs smoothly.
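One way to picture the nested-resolution rendering is a per-pixel source decision: output pixels that fall inside the telephoto footprint for the requested zoom draw on the telephoto subset, and the rest draw on the wide-angle subset. The sketch below is illustrative only; the normalized coordinates and the assumption of a shared center of view are ours.

```python
def source_for_output_pixel(x_norm, y_norm, zoom, tele_magnification=3.0):
    """Decide which imager subset supplies an output pixel at a requested zoom,
    with coordinates normalized to [-1, 1] around the shared center of view."""
    # Fraction of the current output frame covered by the telephoto subset:
    # at 1X the telephoto data covers the central third, at 3X the whole frame.
    footprint = zoom / tele_magnification
    if abs(x_norm) <= footprint and abs(y_norm) <= footprint:
        return "telephoto"
    return "wide-angle"

print(source_for_output_pixel(0.1, 0.1, zoom=1.0))   # telephoto (center region)
print(source_for_output_pixel(0.8, 0.0, zoom=1.0))   # wide-angle (periphery)
```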
In one embodiment, zoom is implemented by inducing a barrel-like distortion into some or all of the lenses of the array so that a disproportionate number of pixels is dedicated to the central portion of each image. In this embodiment, every image must be processed to remove the barrel distortion. To generate a wide-angle image, the pixels close to the center are sub-sampled while the outer pixels are over-sampled. As zoom is applied, the pixels around the periphery of the imagers are progressively discarded and the sampling of pixels closer to the center of the imager is increased.

In one embodiment, texture-map ("mip map") filters are built to allow an image to be rendered at a zoom scale lying between the discrete zoom scales of the optical elements (for example, between the 1X and 3X zoom scales of the camera array). A mip map is a pre-computed, optimized set of images that accompanies a baseline image. A set of images associated with the 3X-zoom luma image can be produced by successively scaling the 3X baseline down toward 1X; each image in this set is a version of the baseline 3X-zoom image at a reduced level of detail. An image at a desired zoom level is rendered from the mip map by (i) taking the 1X-zoom image and computing the scene coverage for the desired zoom level (that is, which pixels of the baseline image need to be rendered at the required scale to produce the output image), (ii) determining, for each pixel in that coverage set, whether the pixel is available in the 3X-zoom luma image, (iii) if the pixel is available in the 3X-zoom luma image, selecting the closest mip-map images and interpolating (using smooth blending) the corresponding pixels from those two mip-map images to produce the output pixel, and (iv) if the pixel is not available in the 3X-zoom luma image, scaling up the corresponding pixel of the baseline 1X image to the desired scale to produce the output pixel. By using mip maps, a smooth zoom can be simulated at any point between two given discrete zoom levels (that is, between 1X and 3X).
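A compact sketch of the level-blending step (iii) is shown below. A simple box-filtered pyramid stands in for the pre-computed image set, and the helper names and nearest-neighbor upsampling are our choices rather than the patent's.

```python
import numpy as np

def build_mip_chain(image_3x, num_levels=4):
    """Pre-compute reduced-detail versions of the 3X (telephoto) luma image
    by repeated 2x2 box down-sampling."""
    chain = [np.asarray(image_3x, dtype=np.float64)]
    for _ in range(num_levels - 1):
        prev = chain[-1]
        h, w = (prev.shape[0] // 2) * 2, (prev.shape[1] // 2) * 2
        chain.append(prev[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3)))
    return chain

def sample_between_levels(chain, level):
    """Blend the two mip levels bracketing a fractional `level`
    (0 = full 3X detail), giving a smooth transition between discrete scales."""
    lo = int(np.floor(level))
    hi = min(lo + 1, len(chain) - 1)
    frac = level - lo
    fine = chain[lo]
    factor = 2 ** (hi - lo)
    coarse_up = np.kron(chain[hi], np.ones((factor, factor)))   # nearest-neighbor upsample
    h = min(fine.shape[0], coarse_up.shape[0])
    w = min(fine.shape[1], coarse_up.shape[1])
    return (1.0 - frac) * fine[:h, :w] + frac * coarse_up[:h, :w]
```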
In another embodiment, different fields of view (FOV) are obtained by electronically switching between different optical channels that have different sensor formats but a fixed effective focal length (EFL). In such embodiments, illustrated conceptually in FIG. 8A, the variable field of view is obtained by creating optical channels with the same effective focal length on the same substrate but with imagers of different formats (zoomable sensor arrays 800 and 802). With this structure, an arbitrary set of zoom magnifications can be produced by including image sensors with larger or smaller pixel counts. This technique is particularly simple to integrate into wafer-level optical array cameras, since the zoomable sensor arrays 800 and 802 can be fabricated directly on the same underlying camera array substrate without any further changes to the design of the array camera assembly itself.

In another embodiment, shown in FIG. 8B, different fields of view are obtained by building different effective focal lengths 805 into specific optical channels of the camera array 806 while keeping the imager size 808 fixed. Providing different effective focal lengths on the same substrate stack, that is, a stack with fixed substrate thicknesses and spacings, is more complicated, because the distance between the principal plane with its entrance pupil and the aperture stop 810 associated with the image sensor 814 needs to change in order to change the focal length of the optical channel. In the present embodiment, this is accomplished by introducing "virtual substrates" 816, 818 and 820 into the stack 806, so that each zoom channel 822, 824 and 826 has an associated aperture stop 828, 830 and 832 placed on a different substrate, or on a different surface of a substrate, allowing different effective focal lengths to be achieved. As shown, while the distribution and positioning of the lenses (834, 836 and 838) and the aperture stops (828, 830 and 832) on a particular substrate or substrate surface depend entirely on the desired effective focal length, in all cases the substrate thicknesses and spacings remain fixed. Alternatively, in such embodiments, each of the substrates may carry lens elements, but distributed differently so as to allow for the different effective focal lengths. Such structures may provide higher image quality, but at higher cost.

In yet another embodiment, shown in FIG. 8C, different fields of view can also be established in a manner similar to FIG. 8B, except that, rather than using "virtual" substrates, lens elements are present on all of the substrates within each optical channel; the lens elements have different prescriptions so as to provide the different effective focal lengths 805. Accordingly, any of a wide variety of configurations of the optics and of the sensor format and/or pixel size may be used with array cameras in accordance with embodiments of the invention to obtain different effective focal lengths.
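The relationship that all three FIG. 8 variants exploit, namely that the field of view is set jointly by sensor format and effective focal length, is simple geometry; the numbers below are illustrative only.

```python
import math

def field_of_view_deg(sensor_width_mm, effective_focal_length_mm):
    """Horizontal field of view of a fixed-focal-length channel. A larger sensor
    format (FIG. 8A) or a shorter effective focal length (FIG. 8B/8C) widens it."""
    return 2.0 * math.degrees(math.atan(sensor_width_mm / (2.0 * effective_focal_length_mm)))

print(round(field_of_view_deg(4.6, 3.0), 1))   # wide channel
print(round(field_of_view_deg(2.3, 3.0), 1))   # narrower channel, roughly a 2X "zoom"
```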
Capturing video images

In one embodiment, the camera array generates high frame-rate video sequences. The imagers in the camera array can operate independently to capture images. Compared with a conventional image sensor, the camera array can capture images at a frame rate up to N times higher (where N is the number of imagers). Further, the frame periods of the individual imagers may overlap to improve operation under low-light conditions. To increase resolution, a subset of the imagers may operate in a synchronized manner to produce higher-resolution images; in this case, the maximum frame rate is reduced by the number of imagers operating synchronously. The high video frame rate enables slow-motion video to be played back at a normal video rate.

In one example, two luma imagers (green imagers or near-IR imagers), two blue imagers and two green imagers are used to obtain high-resolution 1080p video. Using an arrangement of four luma imagers (two green imagers and two near-IR imagers, or three green imagers and one near-IR imager) together with one blue imager and one red imager, the chroma imagers can be upsampled to obtain 120 frames per second of 1080p video. For higher frame-rate imaging devices, the frame-rate values scale linearly. For standard-definition (480p) operation, a frame rate of 240 frames per second can be obtained with the same camera array.

Conventional imaging devices with a high-resolution image sensor (for example, 8 megapixels) use binning or skipping to capture lower-resolution video (for example, 1080p30, 720p30 and 480p30). In binning, rows and columns of the captured image are interpolated in the charge, voltage or pixel domain in order to obtain the target video resolution while reducing noise. In skipping, rows and columns are skipped in order to reduce the power consumption of the sensor. Both techniques degrade image quality.

In one embodiment, the imagers in the camera array are selectively activated to capture a video image. For example, 9 imagers (including one near-IR imager) may be used to obtain 1080p (1920x1080 pixels) video, 6 imagers (including one near-IR imager) may be used to obtain 720p (1280x720 pixels) video, and 4 imagers (including one near-IR imager) may be used to obtain 480p (720x480 pixels) video. Because there is an accurate one-to-one pixel correspondence between the imagers and the target video image, the resolution achieved is higher than with conventional methods. Further, significant power savings can be achieved because only a subset of the imagers is activated to capture the images; for example, a 60% reduction in power consumption is achievable at 1080p and an 80% reduction at 480p.

Using the near-IR imagers to capture video images is advantageous because the information from the near-IR imager can be used to denoise each video frame. In this way, camera arrays of the embodiments exhibit excellent low-light sensitivity and can operate in extremely low lighting conditions. In one embodiment, super-resolution processing is performed on the images from multiple imagers to obtain higher-resolution video images. The noise-reduction characteristics of the super-resolution process, together with fusion of the images from the near-IR imagers, produce very low-noise video.

In one embodiment, high dynamic range (HDR) video capture is enabled by activating additional imagers. For example, in a 5x5 camera array operating in 1080p video capture mode, only 9 of the cameras are active. A subset of the remaining 16 cameras can be over-exposed or under-exposed by two or four stops to obtain a video output with a very high dynamic range.
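A sketch of the timing idea behind the N-fold frame-rate increase follows; the numbers and names are illustrative, not from the patent.

```python
def staggered_trigger_offsets_ms(num_imagers, base_frame_rate_hz):
    """Evenly staggered start times that let N independently running imagers,
    each at `base_frame_rate_hz`, deliver an interleaved stream at roughly
    N times that rate."""
    period_ms = 1000.0 / base_frame_rate_hz
    return [i * period_ms / num_imagers for i in range(num_imagers)]

# Video-mode presets named in the text: number of active imagers per mode.
ACTIVE_IMAGERS = {"1080p": 9, "720p": 6, "480p": 4}

if __name__ == "__main__":
    print(staggered_trigger_offsets_ms(4, 30))   # [0.0, 8.33..., 16.66..., 25.0]
```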
Other applications of multiple imagers

In one embodiment, the multiple imagers are used to estimate the distance to an object in the scene. Since distance information is available to the camera array for every point in the image, along with the extent of an image element in the X and Y coordinates, the size of an image element can be determined. Further, the absolute size and shape of physical items can be measured without any other reference information. For example, a picture of a foot can be taken and the resulting information used to accurately estimate the size of an appropriate shoe.

In one embodiment, a reduction in depth of field is simulated in the images captured by the camera array using the distance information. Camera arrays in accordance with the invention produce images with a greatly increased depth of field; however, such a long depth of field may not be desirable in some applications. In such cases, a particular distance, or several distances, can be selected as the "best focus" distance(s) for the image and, based on the distance (Z) information derived from the parallax information, the pixels of the image can be blurred on a pixel-by-pixel basis using, for example, a simple Gaussian blur. In one embodiment, the depth map obtained from the camera array is utilized so that a tone-mapping algorithm uses the depth information when performing the mapping, guiding the tone levels so as to emphasize or exaggerate the perception of depth.
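Both steps above, recovering distance from parallax and then blurring away from a chosen focus plane, can be sketched briefly. This is a coarse stand-in (grayscale images, a banded blur instead of a true per-pixel Gaussian, SciPy for the filtering), and the constants and names are assumptions rather than the patent's method.

```python
import numpy as np
from scipy.ndimage import gaussian_filter   # assumes SciPy is available

def depth_from_disparity(disparity_px, focal_length_px, baseline_mm):
    """Distance (mm) to a scene point from the disparity (pixels) measured between
    two imagers separated by `baseline_mm`: Z = f * B / d."""
    d = np.maximum(np.asarray(disparity_px, dtype=np.float64), 1e-6)
    return focal_length_px * baseline_mm / d

def simulate_shallow_depth_of_field(image, depth_mm, focus_mm, strength=0.02):
    """Blur a grayscale all-in-focus image more strongly the further each pixel's
    depth lies from the chosen 'best focus' plane (banded approximation)."""
    image = np.asarray(image, dtype=np.float64)
    sigma_map = strength * np.abs(np.asarray(depth_mm, dtype=np.float64) - focus_mm)
    out = image.copy()
    for sigma in np.linspace(0.0, sigma_map.max(), 4)[1:]:
        mask = sigma_map >= sigma
        out[mask] = gaussian_filter(image, sigma=sigma)[mask]
    return out
```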
In one embodiment, apertures of different sizes are provided to obtain aperture diversity. The aperture size has a direct relationship to the depth of field. In miniature cameras, however, the aperture is generally made as large as possible to allow more light to reach the camera array. Different imagers may receive light through apertures of different sizes. For imagers intended to produce a large depth of field, the aperture can be reduced, while other imagers may have large apertures to maximize the light received. By fusing the images from sensors behind apertures of different sizes, an image with a large depth of field can be obtained without sacrificing image quality.

In one embodiment, a camera array in accordance with the invention refocuses based on images captured from shifted viewpoints. Unlike a conventional plenoptic camera, the images obtained from the camera array of the invention do not suffer a large loss of resolution. Compared with the plenoptic camera, however, the camera array of the invention produces a sparse set of data points for refocusing. To overcome this sparseness, interpolation can be performed to refocus the data derived from the sparse data points.

In one embodiment, each imager in the camera array has a different center of view. That is, the optics of each imager are designed and arranged so that the fields of view of the imagers overlap slightly but, for the most part, constitute distinct tiles of a larger overall field of view. The images from each of these tiles are stitched together panoramically to render a single high-resolution image.

In one embodiment, camera arrays may be formed on separate substrates and mounted on the same motherboard with spatial separation between them. The lens elements on each imager may be arranged so that a corner of the field of view slightly encompasses a line perpendicular to the substrate. Thus, if four imagers are mounted on the motherboard with each imager rotated 90 degrees with respect to another, the fields of view form four slightly overlapping tiles. This allows a single wafer-level optics lens and imager chip design to be used to capture different tiles of a panoramic image.

In one embodiment, one or more sets of imagers are arranged to capture images that are stitched to produce a panoramic image with overlapping fields of view, while another imager or set of imagers has a field of view that encompasses the resulting tiled image. This embodiment provides different effective resolutions for imagers with different characteristics. For example, it may be desirable to have more luma resolution than chroma resolution. Thus, several sets of imagers can detect luma using their panoramically stitched fields of view, while fewer imagers can be used to detect chroma with a field of view that encompasses the stitched fields of view of the luma imagers.
In one embodiment, a camera array with multiple imagers is mounted on a flexible motherboard such that the motherboard can be bent by hand to change the aspect ratio of the image. For example, a set of imagers can be mounted in a horizontal line on a flexible motherboard so that, in the motherboard's rest state, the fields of view of all of the imagers are approximately the same. With four imagers, this yields an imager with twice the resolution of each individual imager, so that the detail resolvable in the combined image is half the size of the detail resolvable by an individual imager. If the motherboard is bent so that it forms part of a vertical cylinder, the imagers point outward. With a partial bend, the width of the combined image doubles while the resolvable detail is reduced, because each point in the combined image now lies within the field of view of two imagers rather than four. At the maximum bend, the combined image is four times as wide and the detail resolvable in the combined image is further reduced.
Offline reconstruction and processing

The images processed by the imaging system 400 can be previewed before, or at the same time as, the image data is stored on a storage medium such as a flash memory device or a hard disk. In one embodiment, the image or video data includes the captured light-field data set and other useful image information originally captured by the camera array. Other conventional file formats may also be used. The stored images or video can be played back or transmitted to other devices over various wired or wireless communication methods.

In one embodiment, tools are provided to users via a remote server. The remote server can serve both as a repository for the images or video and as an offline processing engine. In addition, an applet embedded as part of a photo-sharing website such as Flickr, Picasaweb or Facebook can allow images to be manipulated interactively, either individually or collaboratively. Further, software plug-ins for image editing programs can be provided to process images generated by the imaging device 400 on computing devices such as desktop and laptop computers.

The various modules described herein may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer-readable storage medium such as, but not limited to, any type of disk including floppy disks, optical discs, CD-ROMs and magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, application-specific integrated circuits (ASICs), or any type of media suitable for storing electronic instructions, each coupled to a computer system bus. Furthermore, the computers referred to in this specification may include a single processor or may be architectures employing multiple processors for increased computing capability.
Although specific embodiments of, and applications for, the invention have been shown and described herein, it is to be understood that the invention is not limited to the precise construction and components disclosed herein, and that various modifications, changes and variations may be made in the arrangement, operation and details of the methods and apparatus of the invention without departing from the spirit and scope of the invention as defined in the appended claims.
[BRIEF DESCRIPTION OF THE DRAWINGS]

FIG. 1 is a plan view of a camera array with a plurality of imagers, according to one embodiment.
FIG. 2A is a perspective view of a camera array with lens elements, according to one embodiment.
FIG. 2B is a cross-sectional view of a camera array, according to one embodiment.
FIG. 2C is a cross-sectional view of a camera array with crosstalk suppression, according to one embodiment.
FIG. 2D is a cross-sectional view of a camera array with crosstalk suppression, according to a second embodiment.
FIG. 2E is a cross-sectional view of a camera array incorporating opaque spacers to provide optical crosstalk suppression, according to a further embodiment.
FIG. 2F is a cross-sectional view of a camera array incorporating spacers coated with an opaque material to provide crosstalk suppression, according to another embodiment.
FIGS. 3A and 3B are cross-sectional views illustrating how the height of the lens elements changes with imager size, according to one embodiment.
FIG. 3C is a diagram illustrating how the chief ray angle changes with the size of the lens elements.
FIG. 3D is a cross-sectional view of a camera array with field flattening, according to one embodiment.
FIG. 4 is a functional block diagram of an imaging device, according to one embodiment.
FIG. 5 is a functional block diagram of an image processing pipeline module, according to one embodiment.
FIGS. 6A through 6F are plan views of camera arrays with different layouts of heterogeneous imagers, according to embodiments.
FIG. 6G is a diagram conceptually illustrating how sampling diversity depends on object distance.
FIG. 6H is a cross-sectional view of pixels of an imager, according to an embodiment of the invention.
FIG. 6I is a diagram conceptually illustrating the occlusion zones created when red and blue imagers are distributed asymmetrically around the center of a camera array.
FIG. 6J is a diagram conceptually illustrating how the occlusion zones shown in FIG. 6I are eliminated by distributing the red and blue imagers symmetrically around the center of a camera array.
FIG. 7 is a flow chart illustrating a process for generating an enhanced image from lower-resolution images captured by a plurality of imagers, according to an embodiment of the invention.
FIG. 7A is a flow chart illustrating a process for constructing a normalized plane during calibration, according to an embodiment of the invention.
FIG. 7B conceptually illustrates the construction of a normalized plane during calibration, in accordance with the embodiment of the invention shown in FIG. 7A.
FIG. 8A is a cross-sectional view of a camera array with optical zoom, according to one embodiment.
FIG. 8B is a cross-sectional view of a camera array with optical zoom, according to a second embodiment.
FIG. 8C is a cross-sectional view of a camera array including imagers with different fields of view, according to a further embodiment.

[DESCRIPTION OF MAIN ELEMENT SYMBOLS]

100 camera array; 200 camera array assembly; 210 wafer-level optics; 221 lens elements; 231 sensor array; 240 imagers; 250 camera array assembly; 254 seal; 258 top spacer; 262 top lens wafer; 264 middle spacer; 268 bottom lens wafer; 270 bottom spacer; 274 through-silicon vias; 276 solder balls; 278 substrate; 280 light-blocking material; 281 opaque spacers; 282 filter; 283 spacer; 284 light-blocking material; 286 optical element; 288 optical element; 290 substrate level; 292 camera array assembly; 294 opaque layer; 295 spacer; 296 aperture; 297 opaque coating; 310 lens element; 312 negative lens element; 314 imager surface; 316 light beam; 320 lens element; 400 imaging system; 410 camera array; 412 images; 420 image processing pipeline module; 422 synthesized image; 428 generated image; 422 processed image; 440 controller; 442 operation signals; 444 information; 446 input; 510 upstream pipeline processing module; 514 image pixel correlation module; 518 parallax confirmation and measurement module; 522 parallax compensation module; 524 parallax information; 526 super-resolution module; 530 address conversion module; 540 imagers; 546, 548 logical addresses; 554 address and phase offset calibration module; 558 calibration data; 564 downstream color processing module; 572 physical pixels; 610, 620 light ray groups; 650 pixel array; 652 color filter; 654 micro lens; 670 camera array; 672 red imager; 674 blue imager; 676 background object; 780 camera array; 782 red imager; 784 blue imager; 786 background object; 800, 802 zoomable sensor arrays; 805 effective focal length; 806 camera array; 808 imager size; 810 aperture stop; 814 image sensor; 816, 818, 820 virtual substrates; 822, 824, 826 zoom channels; 828, 830, 832 aperture stops; 834, 836, 838 lenses; 710-750 steps; 760-768 steps; B imager with blue filter; G imager with green filter; L diameter; R imager with red filter; S width; W width; t height; t1 total height; t2 total height; CRA1, CRA2, CRA3 chief ray angles; z1-z4, zk distances; 1A-NM imagers
Claims (1)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| TW099147177A TWI535292B (en) | 2010-12-31 | 2010-12-31 | Capturing and processing of images using monolithic camera array with heterogeneous imagers |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| TW099147177A TWI535292B (en) | 2010-12-31 | 2010-12-31 | Capturing and processing of images using monolithic camera array with heterogeneous imagers |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| TW201228382A true TW201228382A (en) | 2012-07-01 |
| TWI535292B TWI535292B (en) | 2016-05-21 |
Family ID=46933612
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| TW099147177A TWI535292B (en) | 2010-12-31 | 2010-12-31 | Capturing and processing of images using monolithic camera array with heterogeneous imagers |
Country Status (1)
| Country | Link |
|---|---|
| TW (1) | TWI535292B (en) |
Cited By (40)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN103926629A (en) * | 2013-01-11 | 2014-07-16 | Pixart Imaging Inc. | Optical device, photosensitive element using microlens and manufacturing method thereof |
| TWI456985B (en) * | 2012-10-17 | 2014-10-11 | Vivotek Inc | A multiple camera system and method therefore |
| TWI502212B (en) * | 2013-01-11 | 2015-10-01 | Pixart Imaging Inc | Optical device, photosensitive element using microlens and manufacturing method thereof |
| US10430682B2 (en) | 2011-09-28 | 2019-10-01 | Fotonation Limited | Systems and methods for decoding image files containing depth maps stored as metadata |
| US10455218B2 (en) | 2013-03-15 | 2019-10-22 | Fotonation Limited | Systems and methods for estimating depth using stereo array cameras |
| US10547772B2 (en) | 2013-03-14 | 2020-01-28 | Fotonation Limited | Systems and methods for reducing motion blur in images or video in ultra low light with array cameras |
| US10638099B2 (en) | 2013-03-15 | 2020-04-28 | Fotonation Limited | Extended color processing on pelican array cameras |
| US10674138B2 (en) | 2013-03-15 | 2020-06-02 | Fotonation Limited | Autofocus system for a conventional camera that uses depth information from an array camera |
| US10694114B2 (en) | 2008-05-20 | 2020-06-23 | Fotonation Limited | Capturing and processing of images including occlusions focused on an image sensor by a lens stack array |
| US10708492B2 (en) | 2013-11-26 | 2020-07-07 | Fotonation Limited | Array camera configurations incorporating constituent array cameras and constituent cameras |
| US10735635B2 (en) | 2009-11-20 | 2020-08-04 | Fotonation Limited | Capturing and processing of images captured by camera arrays incorporating cameras with telephoto and conventional lenses to generate depth maps |
| US10767981B2 (en) | 2013-11-18 | 2020-09-08 | Fotonation Limited | Systems and methods for estimating depth from projected texture using camera arrays |
| US10839485B2 (en) | 2010-12-14 | 2020-11-17 | Fotonation Limited | Systems and methods for synthesizing high resolution images using images captured by an array of independently controllable imagers |
| TWI710803B (en) * | 2018-09-07 | 2020-11-21 | 美商半導體組件工業公司 | Image sensors with multipart diffractive lenses |
| US10909707B2 (en) | 2012-08-21 | 2021-02-02 | Fotonation Limited | System and methods for measuring depth using an array of independently controllable cameras |
| US10944961B2 (en) | 2014-09-29 | 2021-03-09 | Fotonation Limited | Systems and methods for dynamic calibration of array cameras |
| US10958892B2 (en) | 2013-03-10 | 2021-03-23 | Fotonation Limited | System and methods for calibration of an array camera |
| TWI723529B (en) * | 2018-09-12 | 2021-04-01 | 耐能智慧股份有限公司 | Face recognition module and face recognition method |
| US11022725B2 (en) | 2012-06-30 | 2021-06-01 | Fotonation Limited | Systems and methods for manufacturing camera modules using active alignment of lens stack arrays and sensors |
| US11270110B2 (en) | 2019-09-17 | 2022-03-08 | Boston Polarimetrics, Inc. | Systems and methods for surface modeling using polarization cues |
| US11290658B1 (en) | 2021-04-15 | 2022-03-29 | Boston Polarimetrics, Inc. | Systems and methods for camera exposure control |
| US11302012B2 (en) | 2019-11-30 | 2022-04-12 | Boston Polarimetrics, Inc. | Systems and methods for transparent object segmentation using polarization cues |
| WO2022115829A1 (en) * | 2020-11-26 | 2022-06-02 | Schott Corporation | Light isolating arrays |
| US11525906B2 (en) | 2019-10-07 | 2022-12-13 | Intrinsic Innovation Llc | Systems and methods for augmentation of sensor systems and imaging systems with polarization |
| US11568526B2 (en) | 2020-09-04 | 2023-01-31 | Altek Semiconductor Corp. | Dual sensor imaging system and imaging method thereof |
| US11580667B2 (en) | 2020-01-29 | 2023-02-14 | Intrinsic Innovation Llc | Systems and methods for characterizing object pose detection and measurement systems |
| US11689813B2 (en) | 2021-07-01 | 2023-06-27 | Intrinsic Innovation Llc | Systems and methods for high dynamic range imaging using crossed polarizers |
| CN116569085A (en) * | 2020-11-26 | 2023-08-08 | Schott Corporation | Optical isolation array |
| US11792538B2 (en) | 2008-05-20 | 2023-10-17 | Adeia Imaging Llc | Capturing and processing of images including occlusions focused on an image sensor by a lens stack array |
| US11797863B2 (en) | 2020-01-30 | 2023-10-24 | Intrinsic Innovation Llc | Systems and methods for synthesizing data for training statistical models on different imaging modalities including polarized images |
| US11954886B2 (en) | 2021-04-15 | 2024-04-09 | Intrinsic Innovation Llc | Systems and methods for six-degree of freedom pose estimation of deformable objects |
| US11953700B2 (en) | 2020-05-27 | 2024-04-09 | Intrinsic Innovation Llc | Multi-aperture polarization optical systems using beam splitters |
| US12020455B2 (en) | 2021-03-10 | 2024-06-25 | Intrinsic Innovation Llc | Systems and methods for high dynamic range image reconstruction |
| US12067746B2 (en) | 2021-05-07 | 2024-08-20 | Intrinsic Innovation Llc | Systems and methods for using computer vision to pick up small objects |
| US12069227B2 (en) | 2021-03-10 | 2024-08-20 | Intrinsic Innovation Llc | Multi-modal and multi-spectral stereo camera arrays |
| US12172310B2 (en) | 2021-06-29 | 2024-12-24 | Intrinsic Innovation Llc | Systems and methods for picking objects using 3-D geometry and segmentation |
| US12175741B2 (en) | 2021-06-22 | 2024-12-24 | Intrinsic Innovation Llc | Systems and methods for a vision guided end effector |
| TWI871562B (en) * | 2022-12-28 | 2025-02-01 | Acer Incorporated | Image processing device and intelligent synthesizing method for person and scenes using the same |
| US12293535B2 (en) | 2021-08-03 | 2025-05-06 | Intrinsic Innovation Llc | Systems and methods for training pose estimators in computer vision |
| US12340538B2 (en) | 2021-06-25 | 2025-06-24 | Intrinsic Innovation Llc | Systems and methods for generating and using visual datasets for training computer vision models |
Families Citing this family (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| TWI704502B (en) * | 2018-06-08 | 2020-09-11 | 晟風科技股份有限公司 | Thermal imager with temperature compensation function for distance and its temperature compensation method |
| US20250107268A1 (en) * | 2023-09-22 | 2025-03-27 | Taiwan Semiconductor Manufacturing Company, Ltd. | Lenses and methods of manufacturing the same |
- 2010-12-31: TW application TW099147177A filed, granted as patent TWI535292B (en), status active
Cited By (67)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US12041360B2 (en) | 2008-05-20 | 2024-07-16 | Adeia Imaging Llc | Capturing and processing of images including occlusions focused on an image sensor by a lens stack array |
| US11792538B2 (en) | 2008-05-20 | 2023-10-17 | Adeia Imaging Llc | Capturing and processing of images including occlusions focused on an image sensor by a lens stack array |
| US12022207B2 (en) | 2008-05-20 | 2024-06-25 | Adeia Imaging Llc | Capturing and processing of images including occlusions focused on an image sensor by a lens stack array |
| US10694114B2 (en) | 2008-05-20 | 2020-06-23 | Fotonation Limited | Capturing and processing of images including occlusions focused on an image sensor by a lens stack array |
| US11412158B2 (en) | 2008-05-20 | 2022-08-09 | Fotonation Limited | Capturing and processing of images including occlusions focused on an image sensor by a lens stack array |
| US10735635B2 (en) | 2009-11-20 | 2020-08-04 | Fotonation Limited | Capturing and processing of images captured by camera arrays incorporating cameras with telephoto and conventional lenses to generate depth maps |
| US11423513B2 (en) | 2010-12-14 | 2022-08-23 | Fotonation Limited | Systems and methods for synthesizing high resolution images using images captured by an array of independently controllable imagers |
| US12243190B2 (en) | 2010-12-14 | 2025-03-04 | Adeia Imaging Llc | Systems and methods for synthesizing high resolution images using images captured by an array of independently controllable imagers |
| US11875475B2 (en) | 2010-12-14 | 2024-01-16 | Adeia Imaging Llc | Systems and methods for synthesizing high resolution images using images captured by an array of independently controllable imagers |
| US10839485B2 (en) | 2010-12-14 | 2020-11-17 | Fotonation Limited | Systems and methods for synthesizing high resolution images using images captured by an array of independently controllable imagers |
| US12052409B2 (en) | 2011-09-28 | 2024-07-30 | Adela Imaging LLC | Systems and methods for encoding image files containing depth maps stored as metadata |
| US10430682B2 (en) | 2011-09-28 | 2019-10-01 | Fotonation Limited | Systems and methods for decoding image files containing depth maps stored as metadata |
| US10984276B2 (en) | 2011-09-28 | 2021-04-20 | Fotonation Limited | Systems and methods for encoding image files containing depth maps stored as metadata |
| US11729365B2 (en) | 2011-09-28 | 2023-08-15 | Adela Imaging LLC | Systems and methods for encoding image files containing depth maps stored as metadata |
| US11022725B2 (en) | 2012-06-30 | 2021-06-01 | Fotonation Limited | Systems and methods for manufacturing camera modules using active alignment of lens stack arrays and sensors |
| US12437432B2 (en) | 2012-08-21 | 2025-10-07 | Adeia Imaging Llc | Systems and methods for estimating depth and visibility from a reference viewpoint for pixels in a set of images captured from different viewpoints |
| US10909707B2 (en) | 2012-08-21 | 2021-02-02 | Fotonation Limited | System and methods for measuring depth using an array of independently controllable cameras |
| US12002233B2 (en) | 2012-08-21 | 2024-06-04 | Adeia Imaging Llc | Systems and methods for estimating depth and visibility from a reference viewpoint for pixels in a set of images captured from different viewpoints |
| TWI456985B (en) * | 2012-10-17 | 2014-10-11 | Vivotek Inc | A multiple camera system and method therefore |
| CN103926629A (en) * | 2013-01-11 | 2014-07-16 | 原相科技股份有限公司 | Optical device, photosensitive element using microlens and manufacturing method thereof |
| US9989640B2 (en) | 2013-01-11 | 2018-06-05 | Pixart Imaging Inc. | Optical apparatus, light sensitive device with micro-lens and manufacturing method thereof |
| TWI502212B (en) * | 2013-01-11 | 2015-10-01 | Pixart Imaging Inc | Optical device, photosensitive element using microlens and manufacturing method thereof |
| US11985293B2 (en) | 2013-03-10 | 2024-05-14 | Adeia Imaging Llc | System and methods for calibration of an array camera |
| US11272161B2 (en) | 2013-03-10 | 2022-03-08 | Fotonation Limited | System and methods for calibration of an array camera |
| US12549701B2 (en) | 2013-03-10 | 2026-02-10 | Adeia Imaging Llc | System and methods for calibration of an array camera |
| US10958892B2 (en) | 2013-03-10 | 2021-03-23 | Fotonation Limited | System and methods for calibration of an array camera |
| US11570423B2 (en) | 2013-03-10 | 2023-01-31 | Adeia Imaging Llc | System and methods for calibration of an array camera |
| US10547772B2 (en) | 2013-03-14 | 2020-01-28 | Fotonation Limited | Systems and methods for reducing motion blur in images or video in ultra low light with array cameras |
| US10455218B2 (en) | 2013-03-15 | 2019-10-22 | Fotonation Limited | Systems and methods for estimating depth using stereo array cameras |
| US10638099B2 (en) | 2013-03-15 | 2020-04-28 | Fotonation Limited | Extended color processing on pelican array cameras |
| US10674138B2 (en) | 2013-03-15 | 2020-06-02 | Fotonation Limited | Autofocus system for a conventional camera that uses depth information from an array camera |
| US10767981B2 (en) | 2013-11-18 | 2020-09-08 | Fotonation Limited | Systems and methods for estimating depth from projected texture using camera arrays |
| US11486698B2 (en) | 2013-11-18 | 2022-11-01 | Fotonation Limited | Systems and methods for estimating depth from projected texture using camera arrays |
| US10708492B2 (en) | 2013-11-26 | 2020-07-07 | Fotonation Limited | Array camera configurations incorporating constituent array cameras and constituent cameras |
| US10944961B2 (en) | 2014-09-29 | 2021-03-09 | Fotonation Limited | Systems and methods for dynamic calibration of array cameras |
| US11546576B2 (en) | 2014-09-29 | 2023-01-03 | Adeia Imaging Llc | Systems and methods for dynamic calibration of array cameras |
| US12501023B2 (en) | 2014-09-29 | 2025-12-16 | Adeia Imaging Llc | Systems and methods for dynamic calibration of array cameras |
| TWI710803B (en) * | 2018-09-07 | 2020-11-21 | Semiconductor Components Industries, Llc | Image sensors with multipart diffractive lenses |
| US10957730B2 (en) | 2018-09-07 | 2021-03-23 | Semiconductor Components Industries, Llc | Image sensors with multipart diffractive lenses |
| TWI749896B (en) * | 2018-09-07 | 2021-12-11 | Semiconductor Components Industries, Llc | Image sensors with multipart diffractive lenses |
| TWI723529B (en) * | 2018-09-12 | 2021-04-01 | Kneron, Inc. | Face recognition module and face recognition method |
| US11270110B2 (en) | 2019-09-17 | 2022-03-08 | Boston Polarimetrics, Inc. | Systems and methods for surface modeling using polarization cues |
| US11699273B2 (en) | 2019-09-17 | 2023-07-11 | Intrinsic Innovation Llc | Systems and methods for surface modeling using polarization cues |
| US11525906B2 (en) | 2019-10-07 | 2022-12-13 | Intrinsic Innovation Llc | Systems and methods for augmentation of sensor systems and imaging systems with polarization |
| US12099148B2 (en) | 2019-10-07 | 2024-09-24 | Intrinsic Innovation Llc | Systems and methods for surface normals sensing with polarization |
| US11982775B2 (en) | 2019-10-07 | 2024-05-14 | Intrinsic Innovation Llc | Systems and methods for augmentation of sensor systems and imaging systems with polarization |
| US11842495B2 (en) | 2019-11-30 | 2023-12-12 | Intrinsic Innovation Llc | Systems and methods for transparent object segmentation using polarization cues |
| US12380568B2 (en) | 2019-11-30 | 2025-08-05 | Intrinsic Innovation Llc | Systems and methods for transparent object segmentation using polarization cues |
| US11302012B2 (en) | 2019-11-30 | 2022-04-12 | Boston Polarimetrics, Inc. | Systems and methods for transparent object segmentation using polarization cues |
| US11580667B2 (en) | 2020-01-29 | 2023-02-14 | Intrinsic Innovation Llc | Systems and methods for characterizing object pose detection and measurement systems |
| US11797863B2 (en) | 2020-01-30 | 2023-10-24 | Intrinsic Innovation Llc | Systems and methods for synthesizing data for training statistical models on different imaging modalities including polarized images |
| US11953700B2 (en) | 2020-05-27 | 2024-04-09 | Intrinsic Innovation Llc | Multi-aperture polarization optical systems using beam splitters |
| US11568526B2 (en) | 2020-09-04 | 2023-01-31 | Altek Semiconductor Corp. | Dual sensor imaging system and imaging method thereof |
| WO2022115829A1 (en) * | 2020-11-26 | 2022-06-02 | Schott Corporation | Light isolating arrays |
| CN116569085A (en) * | 2020-11-26 | 2023-08-08 | Schott Corporation | Optical isolation array |
| US12020455B2 (en) | 2021-03-10 | 2024-06-25 | Intrinsic Innovation Llc | Systems and methods for high dynamic range image reconstruction |
| US12069227B2 (en) | 2021-03-10 | 2024-08-20 | Intrinsic Innovation Llc | Multi-modal and multi-spectral stereo camera arrays |
| US11954886B2 (en) | 2021-04-15 | 2024-04-09 | Intrinsic Innovation Llc | Systems and methods for six-degree of freedom pose estimation of deformable objects |
| US11290658B1 (en) | 2021-04-15 | 2022-03-29 | Boston Polarimetrics, Inc. | Systems and methods for camera exposure control |
| US11683594B2 (en) | 2021-04-15 | 2023-06-20 | Intrinsic Innovation Llc | Systems and methods for camera exposure control |
| US12067746B2 (en) | 2021-05-07 | 2024-08-20 | Intrinsic Innovation Llc | Systems and methods for using computer vision to pick up small objects |
| US12175741B2 (en) | 2021-06-22 | 2024-12-24 | Intrinsic Innovation Llc | Systems and methods for a vision guided end effector |
| US12340538B2 (en) | 2021-06-25 | 2025-06-24 | Intrinsic Innovation Llc | Systems and methods for generating and using visual datasets for training computer vision models |
| US12172310B2 (en) | 2021-06-29 | 2024-12-24 | Intrinsic Innovation Llc | Systems and methods for picking objects using 3-D geometry and segmentation |
| US11689813B2 (en) | 2021-07-01 | 2023-06-27 | Intrinsic Innovation Llc | Systems and methods for high dynamic range imaging using crossed polarizers |
| US12293535B2 (en) | 2021-08-03 | 2025-05-06 | Intrinsic Innovation Llc | Systems and methods for training pose estimators in computer vision |
| TWI871562B (en) * | 2022-12-28 | 2025-02-01 | Acer Incorporated | Image processing device and intelligent synthesizing method for person and scenes using the same |
Also Published As
| Publication number | Publication date |
|---|---|
| TWI535292B (en) | 2016-05-21 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| TWI535292B (en) | 2016-05-21 | Capturing and processing of images using monolithic camera array with heterogeneous imagers |
| US12041360B2 (en) | | Capturing and processing of images including occlusions focused on an image sensor by a lens stack array |
| US10735635B2 (en) | | Capturing and processing of images captured by camera arrays incorporating cameras with telephoto and conventional lenses to generate depth maps |
| US11792538B2 (en) | | Capturing and processing of images including occlusions focused on an image sensor by a lens stack array |
| KR20170051526A (en) | | Capturing and processing of images using monolithic camera array with heterogeneous imagers |