
HK1178296A - Systems, methods, and media for recording an image using an optical diffuser - Google Patents


Info

Publication number
HK1178296A
HK1178296A HK13105207.3A HK13105207A HK1178296A HK 1178296 A HK1178296 A HK 1178296A HK 13105207 A HK13105207 A HK 13105207A HK 1178296 A HK1178296 A HK 1178296A
Authority
HK
Hong Kong
Prior art keywords
diffuser
image
scene
lens
sensor
Prior art date
Application number
HK13105207.3A
Other languages
Chinese (zh)
Other versions
HK1178296B (en)
Inventor
S.K. Nayar
O. Cossairt
Changyin Zhou
Original Assignee
The Trustees of Columbia University in the City of New York
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by The Trustees of Columbia University in the City of New York
Publication of HK1178296A publication Critical patent/HK1178296A/en
Publication of HK1178296B publication Critical patent/HK1178296B/en

Description

Systems, methods, and media for recording images using an optical diffuser
Cross Reference to Related Applications
This application claims the benefit of U.S. provisional patent application No. 61/297,667, filed January 22, 2010, the entire contents of which are incorporated herein by reference.
Technical Field
The subject matter of the present disclosure relates to systems, methods, and media for recording images using an optical diffuser.
Background
For conventional cameras, there is a fundamental tradeoff between depth of field (DOF) and noise. Typically, a camera has a single focal plane, and objects that deviate from this plane are blurred due to defocus. The amount of defocus blur depends on the aperture size and the distance from the focal plane. To reduce defocus blur and improve DOF, the aperture size must be reduced, which also reduces the signal strength of the recorded image. In many cases it is desirable to have as large a DOF as possible so that all details in the scene are preserved. This is the case, for example, in machine vision applications such as object detection and recognition, where all objects of interest should be in focus. However, reducing the lens aperture is not always an option, especially in low light conditions, because the resulting loss of signal increases the relative noise, which in turn degrades the recorded image.
Disclosure of Invention
Systems, methods, and media for recording an image of a scene are provided. According to some embodiments, there is provided a system for recording an image of a scene, the system comprising: a diffuser diffusing light representing the scene and having a scattering function independent of aperture coordinates; a sensor that receives diffused light representing a scene and generates data representing an image; and a hardware processor that deblurs the image using a point spread function.
According to some embodiments, there is provided a method for recording an image of a scene, the method comprising: diffusing light representing the scene using a diffuser having a scattering function independent of aperture coordinates; receiving diffused light representing a scene and generating data representing an image; and deblurring the image using a point spread function.
Drawings
FIG. 1 is a diagram of a mechanism for recording an image, according to some embodiments.
FIG. 2 is a combination of two images, one without a diffuser (a) and one with a diffuser (b), according to some embodiments.
FIG. 3 is a diagram of a lens and a sensor according to some embodiments.
FIG. 4 is a diagram of a lens, diffuser, and sensor according to some embodiments.
FIG. 5 is a diagram illustrating a light field on a sensor according to some embodiments.
FIG. 6 is a diagram of light and scattering of light according to some embodiments.
Fig. 7 is a combination of a pair of plots of a point spread function and a modulation transfer function according to some embodiments.
Fig. 8 is an illustration of an optical system including a wedge (a) and a randomly varying surface (b), according to some embodiments.
Fig. 9 is a combination of diagrams of diffuser distribution (a), diffuser height map (b), diffuser scattering pdf (c), and diffuser (d), according to some embodiments.
Detailed Description
Systems, methods, and media for recording an image using an optical diffuser are provided.
Turning to fig. 1, a diagram of an image recording mechanism 102 (e.g., a camera, a camcorder, a mobile phone with a camera, and/or any other suitable image recording mechanism) that is used to capture an image including three objects A 104, B 106, and C 108 is shown. It can be seen that the objects are at different depths relative to the mechanism 102. Due to the depth of field limitations of the mechanism 102, objects A 104 and C 108 may be out of focus when the mechanism 102 is focused on object B 106. For example, the objects may be the toys shown in fig. 2. As shown in fig. 2 (a), when the camera is focused on the object in the middle (which may correspond to object B 106 in fig. 1), the other objects may be out of focus. However, by using the mechanisms described herein, images of these objects can be recorded so that they all appear to be in focus, as shown in fig. 2 (b). This may be referred to as the mechanism 102 having an extended depth of field.
According to some embodiments, extended depth of field may be achieved by incorporating a diffuser 110 or 112 into the image recording mechanism 102. Recording an image using a diffuser in a pupil plane of an image recording mechanism may be referred to as diffusion coding. Such a diffuser may be located at any suitable point in the image recording mechanism. For example, diffuser 110 may be positioned between the light sources (e.g., objects 104, 106, and 108) and lens 114 (e.g., as a lens attachment), diffuser 112 may be positioned between lens 114 and sensor 116 (e.g., as part of the lens or of the camera body), and so on.
The diffusion coded image may then be detected by the sensor 116 and subsequently provided to a hardware processor 118 (incorporated into the mechanism 102) and/or a hardware processor 120 (external to the mechanism 102) for subsequent processing. The processing may include deblurring the sensed image using a PSF that matches the PSF of the diffuser. Any other suitable processing may additionally or alternatively be used. After such processing, the extended depth of field image may be presented on the display 124 (internal to the mechanism 102) and/or the display 122 (external to the mechanism 102).
To illustrate how such an image may be recorded using a diffuser, the optical components of certain embodiments will now be described.
As shown in fig. 3, a light field L(u, x) can be used to represent the four-dimensional set of rays propagating from a lens with an Effective Focal Length (EFL) f to the sensor. The vector u = (u, v) can be used to represent coordinates in a u-v plane coincident with the exit pupil of the lens, and the vector x = (x, y) can be used to represent coordinates in an x-y plane coincident with the sensor. The irradiance E(x) observed on the sensor can be defined as the light field integrated over all ray angles:
E(x) = ∫ L(u, x) du,     (1)
where the integration is over Ω_u, the domain of u. For scenes with smooth depth variations, the captured image E(x) can, locally, be modeled as a convolution between a depth-dependent PSF kernel P(x) and the all-in-focus image I(x).
As described further below, according to some embodiments, the camera PSF can be shaped so that a single PSF can be used to deblur the captured image E(x) and recover the image I(x). The depth dependence of the camera PSF can be analyzed by considering the image produced by a point source of unit energy. For example, consider a point source, as shown in FIG. 3, that is brought into focus at a distance d0 from the aperture of the lens. Assuming a rectangular aperture of width A, the light field generated by this point can be expressed as:
L_δ(u, x) = (1/A²) Π(u/A) δ(x − s0 u),     (2)
where s0 = (d0 − f)/d0 is the defocus slope in light field space and Π is a box function. The image of this point is the camera PSF at depth d0: a box-shaped PSF with defocus blur width s0 A:
P(x) = (1/(s0 A)²) Π(x/(s0 A)).
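As an illustrative numerical sketch (not part of the disclosed embodiments; the defocus slope s0, aperture width A, and unit pixel pitch below are assumed values), the box-shaped defocus PSF and the convolution image model can be simulated in one dimension:

```python
import numpy as np

def box_psf(s0, A, n=101):
    """Unit-energy, box-shaped defocus PSF of width s0*A sampled on n
    pixels (pixel pitch 1); mirrors the box normalization above."""
    x = np.arange(n) - n // 2
    psf = (np.abs(x) <= s0 * A / 2).astype(float)
    return psf / psf.sum()

def capture(scene, psf):
    """Model the captured image as the convolution of the
    all-in-focus scene with the depth-dependent PSF kernel."""
    return np.convolve(scene, psf, mode="same")

scene = np.zeros(64)
scene[32] = 1.0                                # a unit-energy point source
img = capture(scene, box_psf(s0=0.5, A=20))    # blur width s0*A = 10 pixels
```

Energy is conserved, and the point spreads over s0·A + 1 samples (11 pixels here), consistent with the box-shaped PSF description.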
the effect of a generic kernel D applied to the light field L, which represents the effect of a diffuser placed in the aperture of the camera lens, can then be analyzed. The kernel can generate a new filtered light fieldFrom the filtered light fieldThe altered PSF can be derived
Wherein the content of the first and second substances,is thatThe domain of (2). This method allows to express a large class of operations applied to the light field. For example, consider the following form of nucleus:
note that D is here taken to beIn the domain in the form of separable convolution kernels with limited support. The geometric meaning of the kernel can be illustrated as shown in fig. 4. As shown, each ray in the light field is blurred, such that each ray does not pass through the sensor at a single location, but instead contributes to a square of width w. To understand the effect of the diffuser, the image captured without the diffuser and the image captured with the diffuser may be combinedA comparison is made. For this diffuser kernel, substituting equation 7 into equations 5 and 6 yields:
where ⊛ denotes convolution. The modified PSF is the camera PSF blurred by a box function; the effect of the diffuser is therefore to blur the image that would have been captured in its absence. However, the diffuser given by the kernel in equation 7 is not useful for extending depth of field, because it neither improves the depth independence of the camera PSF nor preserves its high frequencies.
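In one dimension, the statement that the altered PSF is the camera PSF convolved with the scattering function can be checked directly: convolving two unit-energy boxes conserves energy, and the support widths add (all widths below are assumed, illustrative values):

```python
import numpy as np

def box(width, n=201):
    """Unit-energy 1D box function sampled on n pixels (pixel pitch 1)."""
    x = np.arange(n) - n // 2
    b = (np.abs(x) <= width / 2).astype(float)
    return b / b.sum()

camera_psf = box(10)   # defocus box PSF of width s0*A = 10
scatter = box(6)       # box scattering function of width w = 6

# Modified PSF in the style of equations 5-7: the camera PSF
# blurred by the scattering function.
modified_psf = np.convolve(camera_psf, scatter, mode="same")
```

The result is a unit-energy kernel whose support, (10 + 1) + (6 + 1) − 1 = 17 samples, is the sum of the two widths.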
In general, the kernel of any diffuser placed in the aperture can be represented as:
D(u, x, u′, x′) = δ(u − u′) k(x − x′),
where k is called the scattering function. The diffuser thus has no effect in the u domain, but acts as a convolution in the x domain. For the diffuser given by equation 7, the scattering function is the two-dimensional box function:
k(x) = (1/w²) Π(x/w).
by converting rectangular coordinates (u, v, x, y) into polar coordinates (ρ, Φ, r, θ) using the relationship u ═ ρ cos Φ, v ═ ρ sin Φ, x ═ rcos θ, and y ═ rsin θ, a polar coordinate system in which ρ, r ∈ (— ∞, and θ, Φ ∈ (0, pi) and a circular aperture with a diameter a can be considered. In this system, the representation is located at a distance d0The light field per energy point source at can be written as:
which is independent of θ and φ because the source is isotropic. Note that unit energy is easily verified by integrating L_δ(ρ, r) over the polar coordinates. Comparing the point-source light field parameterizations in equations 2 and 10, a segment of L_δ(u, x) represents a single ray, whereas a segment of L_δ(ρ, r) represents a 2D set of rays. In the radially symmetric parameterization, a segment of the light field represents the conical surface connecting a circle of radius ρ in the aperture plane to a circle of radius r on the sensor (see fig. 5).
A radially symmetric diffuser produces an effect distinct from that of the diffuser given in equation 7. When a radially symmetric diffuser is introduced, neither the diffuser nor the lens deflects light tangentially, so the diffuser kernel and the altered light field can be represented using the reduced coordinates (ρ, r). Equations 5 and 6 then become:
and the general form of the diffuser kernel becomes:
D(ρ, r, ρ′, r′) = δ(ρ − ρ′) k(r − r′).     (13)
the same box-like scattering function as used for the diffuser kernel in equation 7 can be used for equation 13:
however, the physical interpretation of this diffuser is different from the previous diffuser. With the previous diffuser, each light ray in the light field is scattered so that it extends across a square on the sensor. However, the effect of the scattering function in equation 14 is shown in FIG. 6. As shown, without the diffuser, light from a circle of width d ρ and radius ρ in the aperture plane is projected onto a circle of width dr and radius r on the sensor. The effect of the scattering function in equation 14 is to spread the light incident on the sensor so that it instead produces a circular ring of width w.
As illustrated by volume 602 in fig. 6, in polar coordinates a ray corresponds to a small annular section propagating from the aperture plane to the sensor plane. The effect of the diffuser, which scatters this light along a radial line of width w, is illustrated by volume 604.
A box-shaped scattering function is used here for analytical convenience, but a Gaussian scattering function (e.g., as shown in fig. 9 (c)) performs better for extended-DOF imaging. The light field of the point source filtered by the diffuser kernel, and the resulting PSF, can be shown to be:
the analytical solution for this PSF is a piecewise function due to contributions from terms in parentheses, which are convolutions between two rectangular functions (rect functions), one weighted with | r |. Note that as the scattering width w decreases to zero, the first rectangle (in combination with 1/w) approaches the delta function (delta function) and the result is a pill-box (pillbox-shaped) defocused PSF. Further, note that if a different diffuser with a different scattering function is used, the first rectangle is simply replaced by the new scattering function. However, the convolution term is much less important than the 1/| r | term, the effect of which is dominant, resulting in a PSF that can be strongly depth independent while still maintaining a strong peak and keeping high frequencies.
As shown in fig. 6, light incident on a small annular region of width δr and radius r is emitted from a ring in the aperture, and its energy is proportional to ρ, or equivalently to r/s0. This explains the |r| multiplier inside the term in parentheses of equation 16. That term, shown on the right-hand side of FIG. 6, represents the volume within a circle of the pillbox defocus PSF being spread uniformly by the diffuser along a radial line of width w. The 1/|r| term in equation 16 reflects the fact that the closer to the center of the PSF light is scattered, the greater its energy density.
Fig. 7 shows pairs of plots of the PSF 702 and the Modulation Transfer Function (MTF) 704 for a camera with (714, 716, 718, 720, 722, 724, 726, and 728) and without (715, 717, 719, 721, 723, 725, 727, and 729) the diffuser given by equation 16. The defocus blur diameter s0 A varies between 0 pixels 706, 25 pixels 708, 50 pixels 710, and 100 pixels 712. The scattering function used is Gaussian rather than the box function of equation 14, and the diffuser parameter w (the variance of the Gaussian) is chosen such that w = 100 pixels. Note that with the diffuser in place there is little change in the PSF or MTF with depth. The introduction of the diffuser also eliminates zero crossings in the MTF. For smaller defocus values, the diffuser suppresses high frequencies in the MTF; however, because the diffuser MTF does not vary significantly with depth, these high frequencies can be recovered by deconvolution.
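The MTF behavior can be illustrated numerically. Below, a defocused box PSF yields a sinc-like MTF with near-zero minima, while a strongly peaked PSF with a 1/|r|-style profile over the same support (a crude stand-in for the diffusion-coded PSF, not the exact form of equation 16) keeps its MTF bounded away from zero, so the attenuated frequencies remain recoverable by deconvolution:

```python
import numpy as np

def mtf(psf):
    """Modulation transfer function: magnitude of the PSF's spectrum."""
    return np.abs(np.fft.rfft(psf))

n = 256
x = np.arange(n) - n // 2

# Defocused box PSF with blur width 50 pixels.
box_psf = (np.abs(x) <= 25).astype(float)
box_psf /= box_psf.sum()

# Peaked, 1/|r|-like PSF over the same support (illustrative only).
peaked_psf = np.where(np.abs(x) <= 25, 1.0 / (np.abs(x) + 1.0), 0.0)
peaked_psf /= peaked_psf.sum()
```

The box MTF dips nearly to zero at sinc nulls, where information is lost; the peaked MTF stays above a positive floor at all frequencies.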
According to certain embodiments, diffusers of the "kinoform" type (as described in Caulfield, H.J., "Kinoform diffusers," In SPIE Proceedings Series, vol. 25, p. 111, 1971, the entire contents of which are incorporated herein by reference) may be used, wherein the scattering effect is caused entirely by roughness variations on the surface. Such a diffuser can be considered a random phase screen and, according to statistical optics, for a camera with effective focal length f and center wavelength λ, placing such a screen in the aperture results in the following:
where φu and φv are the derivatives of the phase shift induced by the surface, and p_{φu,φv} is the joint probability of these derivatives. The consequence of equation 18 is that a diffuser can be realized by creating an optical element with thickness t(u, v) whose surface gradient is sampled from a probability distribution matching the desired PSF. Intuitively, the formula can be understood as follows: p_{φu,φv} represents the portion of the surface t(u, v) having slope (φu, φv). For small angles, all rays incident on that part of the surface are deflected at the same angle, since the slope is constant over that region. Thus, the quantity p_{φu,φv} also reflects the portion of light deflected with slope (φu, φv).
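The construction implied by equation 18 can be sketched in one dimension: sample surface slopes from the desired scattering distribution, then integrate them into a thickness profile. The Gaussian slope distribution, element length, and sample spacing below are assumed, illustrative values:

```python
import numpy as np

rng = np.random.default_rng(0)

sigma = 0.02        # target standard deviation of surface slopes
n, dx = 1000, 1e-3  # number of samples and sample spacing (mm)

# Draw slopes from the desired distribution and integrate them to get
# a 1D kinoform thickness profile t(u).
slopes = rng.normal(0.0, sigma, n)
t = np.concatenate(([0.0], np.cumsum(slopes) * dx))

# The realized slope distribution matches the target, so (for small
# angles) the fraction of light deflected at each angle matches it too.
realized = np.diff(t) / dx
```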
A kinoform diffuser has a randomly varying surface whose slopes follow a general probability distribution, as shown in fig. 8 (b). The kinoform diffuser can be considered a generalized phase plate. For example, a conventional deterministic phase plate of thickness t(u), as shown in FIG. 8 (a), can also be considered to have slopes drawn from a probability function p(φu) that is a delta function. The result of placing such a phase plate in the pupil plane of the camera is to shift the PSF, which can be regarded as the convolution of p(φu) with the PSF.
To implement the diffuser defined in equation 14, the diffuser surface can be implemented as a sequence of quadratic elements whose diameters and sags are drawn from a random distribution, as described in Sales, T.R.M., "Structured microlens arrays for beam shaping," Optical Engineering 42, 11, pp. 3084-3085, 2003 (the entire contents of which are incorporated herein by reference). The scattering function of the diffuser can be designed to be roughly Gaussian with a variance of 0.5mm (corresponding to w = 1mm in equation 16), as shown in fig. 9 (c). To create a radially symmetric diffuser, a one-dimensional random profile can be created and a polar transformation then applied to produce a two-dimensional surface (see, e.g., figs. 9 (a) and 9 (b)).
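The one-dimensional-profile-plus-polar-transformation step can be sketched as follows (illustrative sizes and units; np.interp resamples the radial profile at each pixel's distance from the center, clamping beyond the last sample):

```python
import numpy as np

def radial_surface(profile_1d, size):
    """Revolve a 1D radial height profile into a radially symmetric
    2D height map (the polar-transformation step)."""
    y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
    r = np.sqrt(x * x + y * y)    # radius of each pixel from the center
    return np.interp(r, np.arange(len(profile_1d)), profile_1d)

rng = np.random.default_rng(1)
profile = np.cumsum(rng.normal(0.0, 1e-3, 64))  # random 1D radial profile
height_map = radial_surface(profile, size=127)
```

The resulting height map is invariant under flips and transposition, as a radially symmetric surface must be, and its center takes the profile's on-axis value.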
In some embodiments, the diffuser may be fabricated using laser etching.
In some embodiments, the maximum height of the diffuser surface may be 3 μm, and the diffuser may be fabricated using laser machining techniques with a minimum spot size of about 10 μm. To ensure that each quadratic element in the diffuser is manufactured with high precision, the minimum diameter of a single element may be chosen to be 200 μm, resulting in a diffuser with 42 distinct annular sections.
Any suitable hardware may be used to implement mechanism 102 according to some embodiments. For example, a Canon EOS 450D sensor from Canon U.S.A., Inc. may be used as sensor 116; a 22mm-diameter diffuser, laser-etched into a sheet of suitable optical glass by RPC Photonics of Rochester, N.Y. (e.g., as shown in fig. 9 (d)), may be used as diffuser 110 or 112; and a 50mm f/1.8 lens from Canon U.S.A., Inc. may be used as lens 114. As another example, lens 114 may have any focal length and may be constructed from refractive optics, reflective optics, or both. For example, a Meade LX200 telescope with a 3048mm focal length may be used in some embodiments.
According to some embodiments, any suitable processing may be performed to deblur an image incident on the camera sensor after passing through the lens and diffuser (in either order). For example, Wiener deconvolution with the PSF at the center depth may be used to deblur the sensed image. Any suitable additional or alternative processing of the image may be used. For example, additional deblurring of diffusion-coded images may be performed using the BM3D deblurring algorithm described in Dabov, K., Foi, A., Katkovnik, V., and Egiazarian, K., "Image restoration by sparse 3D transform-domain collaborative filtering," In SPIE Conference Series, vol. 6812, 681207, 2008 (the entire contents of which are incorporated herein by reference). In some embodiments, the BM3D deblurring algorithm enforces a piecewise-smoothness prior that suppresses the noise amplified by the deblurring process.
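The Wiener deconvolution step can be sketched in one dimension as follows; this is a generic, illustrative implementation (circular convolution and a constant assumed signal-to-noise ratio), not the exact pipeline of any particular embodiment:

```python
import numpy as np

def wiener_deblur(image, psf, snr=100.0):
    """Deblur a 1D image with a known PSF via Wiener deconvolution,
    using a constant noise-to-signal ratio 1/snr."""
    n = len(image)
    H = np.fft.fft(psf, n)                           # PSF spectrum
    G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)    # Wiener filter
    return np.real(np.fft.ifft(np.fft.fft(image) * G))

# Blur a point source with a small box PSF, then deblur it.
psf = np.zeros(64)
psf[:5] = 0.2                                        # unit-energy box PSF
scene = np.zeros(64)
scene[20] = 1.0
blurred = np.real(np.fft.ifft(np.fft.fft(scene) * np.fft.fft(psf)))
restored = wiener_deblur(blurred, psf)
```

The restored image concentrates its energy back at the original point location, with the 1/snr term preventing division by near-zero spectral values.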
Any suitable hardware processor may be used to deblur images captured by the sensor, such as a microprocessor, a digital signal processor, a special-purpose computer (which may include a microprocessor, digital signal processor, controller, memory, communication interface, display controller, input device, etc.), an appropriately programmed general-purpose computer (which may likewise include a microprocessor, digital signal processor, controller, memory, communication interface, display controller, input device, etc.), a server, or a programmable gate array. Any suitable hardware may be used to transfer the image from the sensor to the processor, and any suitable display, storage device, or printer may be used to display, store, or print the deblurred image.
In certain embodiments, any suitable computer-readable medium may be used to store instructions for performing the processes described herein. For example, in certain embodiments, computer-readable media may be transitory or non-transitory. For example, non-transitory computer-readable media may include magnetic media (such as hard disks, floppy disks, etc.), optical media (such as compact discs, digital video discs, Blu-ray discs, etc.), semiconductor media (such as flash memory, electrically programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), etc.), any suitable media that are not fleeting or devoid of any semblance of permanence during transmission, and/or any suitable tangible media. As another example, transitory computer-readable media may include signals on networks, in wires, conductors, optical fibers, or circuits, any suitable media that are fleeting and devoid of any semblance of permanence during transmission, and/or any suitable intangible media.
While the present invention has been described and illustrated in the foregoing exemplary embodiments, it is understood that the present disclosure has been made only by way of example, and that numerous changes in the details of implementation of the invention may be made without departing from the spirit and scope of the invention, which is limited only by the claims which follow. The features of the disclosed embodiments may be combined and rearranged in various ways.

Claims (18)

1. A system for recording an image of a scene, comprising:
a diffuser diffusing light representing the scene and having a scattering function independent of aperture coordinates;
a sensor that receives diffused light representing a scene and generates data representing an image; and
a hardware processor that deblurs the image using a point spread function.
2. The system of claim 1, wherein the diffuser has a random one-dimensional radial distribution.
3. The system of claim 1, wherein the diffuser is made by laser etching.
4. The system of claim 1, wherein the diffuser is radially symmetric.
5. The system of claim 1, wherein the diffuser is kinoform.
6. The system of claim 1, wherein the scattering function of the diffuser is substantially gaussian.
7. The system of claim 1, further comprising a lens that passes light representing the scene before the light is incident on the diffuser.
8. The system of claim 1, further comprising a lens disposed between the diffuser and the sensor.
9. The system of claim 1, further comprising a display that displays the deblurred image.
10. A method for recording an image of a scene, the method comprising:
diffusing light representing the scene using a diffuser having a scattering function independent of aperture coordinates;
receiving diffused light representing a scene and generating data representing an image; and
the image is deblurred using a point spread function.
11. The method of claim 10, wherein said diffuser has a random one-dimensional radial distribution.
12. The method of claim 10, wherein said diffuser is made by laser etching.
13. The method of claim 10, wherein said diffuser is radially symmetric.
14. The method of claim 10, wherein said diffuser is kinoform.
15. The method of claim 10, wherein the scattering function of the diffuser is substantially gaussian.
16. The method of claim 10, further comprising positioning a lens such that light representing the scene passes through the lens before being incident on the diffuser.
17. The method of claim 10, further comprising placing a lens between the diffuser and the sensor.
18. The method of claim 10, further comprising displaying the deblurred image.
HK13105207.3A 2010-01-22 2011-01-24 Systems, methods, and media for recording an image using an optical diffuser HK1178296B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US61/297,667 2010-01-22

Publications (2)

Publication Number Publication Date
HK1178296A (en) 2013-09-06
HK1178296B (en) 2018-08-03
