WO2012144162A1 - Three-dimensional imaging device, light transmission unit, image processing device, and program - Google Patents
Three-dimensional imaging device, light transmission unit, image processing device, and program
- Publication number
- WO2012144162A1 (PCT/JP2012/002517)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- light
- imaging device
- depth
- image processing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C11/00—Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
- G01C11/04—Interpretation of pictures
- G01C11/30—Interpretation of pictures by triangulation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/207—Image signal generators using stereoscopic image cameras using a single 2D image sensor
- H04N13/214—Image signal generators using stereoscopic image cameras using a single 2D image sensor using spectral multiplexing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/271—Image signal generators wherein the generated image signals comprise depth maps or disparity maps
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03B—APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
- G03B35/00—Stereoscopic photography
- G03B35/08—Stereoscopic photography by simultaneous recording
- G03B35/10—Stereoscopic photography by simultaneous recording having single camera with stereoscopic-base-defining system
Definitions
- This application relates to a monocular three-dimensional imaging technique for creating a parallax image using one optical system and one imaging device.
- In recent years, digital cameras and digital movie cameras using solid-state image sensors such as CCD and CMOS sensors (hereinafter sometimes referred to as "image sensors") have shown remarkable gains in functionality and performance.
- Due to advances in semiconductor manufacturing technology, the pixel structure in solid-state image sensors has been miniaturized, allowing higher integration of the pixels and drive circuits. As a result, in just a few years the pixel count of image sensors has increased significantly, from about one million pixels to over ten million pixels, and the quality of captured images has improved dramatically.
- Meanwhile, thin liquid crystal and plasma displays enable high-resolution, high-contrast display without taking up space, and high display performance has been realized.
- This trend toward higher video quality is now spreading from two-dimensional images to three-dimensional images.
- Although polarized glasses are required, high-quality three-dimensional display devices are being developed.
- In the field of three-dimensional imaging, a typical method with a simple configuration acquires a right-eye image and a left-eye image using an imaging system that includes two cameras.
- In such a two-lens imaging method, however, the use of two cameras makes the imaging apparatus large and costly. Therefore, methods that acquire a plurality of images having parallax (hereinafter sometimes referred to as "multi-viewpoint images") using a single camera (monocular imaging methods) have been studied.
- Patent Document 1 discloses a system using two polarizing plates whose transmission axes are orthogonal to each other and a rotating polarizing filter.
- FIG. 14 is a schematic diagram illustrating a configuration of an imaging system according to the method.
- The imaging system includes a polarizing plate 11 whose polarization axis is at 0 degrees, a polarizing plate 12 whose polarization axis is at 90 degrees, a reflecting mirror 13, a half mirror 14, a circular polarizing filter 15, a driving device 16 that rotates the circular polarizing filter 15, an optical lens 3, and an imaging device 9 that acquires the image formed by the optical lens.
- the half mirror 14 reflects the light transmitted through the polarizing plate 11 and reflected by the reflecting mirror 13, and transmits the light transmitted through the polarizing plate 12.
- The light transmitted through the polarizing plates 11 and 12, which are disposed at positions separated from each other, enters the imaging device 9 via the half mirror 14, the circular polarizing filter 15, and the optical lens 3, and an image is acquired.
- The imaging principle of this method is that, by rotating the circular polarizing filter 15, the light incident on each of the two polarizing plates 11 and 12 is captured at different timings, thereby acquiring two images having parallax.
- Patent Document 2 discloses a method for simultaneously acquiring two images with parallax without using mechanical drive.
- Without using a mechanical drive unit, the imaging apparatus collects light incident from two incident areas with a reflecting mirror and receives it with an imaging element in which two types of polarizing filters are alternately arranged, thereby acquiring two images having parallax.
- FIG. 15 is a schematic diagram showing a configuration of an imaging system in this method.
- This imaging system includes two polarizing plates 11 and 12 whose transmission axes are orthogonal to each other, a reflecting mirror 13, an optical lens 3, and an imaging element 2.
- the imaging device 2 includes a plurality of pixels 10 and polarizing filters 17 and 18 arranged in a one-to-one correspondence with the pixels on the imaging surface.
- the polarizing filters 17 and 18 are alternately arranged on all pixels.
- the directions of the transmission axes of the polarizing filters 17 and 18 coincide with the directions of the transmission axes of the polarizing plates 11 and 12, respectively.
- incident light passes through the polarizing plates 11 and 12, is reflected by the reflecting mirror 13, passes through the optical lens 3, and enters the imaging surface of the imaging device 1.
- The light that passes through the polarizing plates 11 and 12 and enters the image sensor 1 is transmitted through the polarizing filters 17 and 18 and photoelectrically converted by the pixels facing them.
- If the images formed by the light incident on the image sensor 1 through the polarizing plates 11 and 12 are called the right-eye image and the left-eye image, respectively, then the right-eye image and the left-eye image are obtained from the pixel groups facing the polarizing filters 17 and 18, respectively.
- Patent Document 3 discloses a technique that can acquire both a plurality of images having parallax and a normal image with a single image sensor, addressing the reduction in the amount of light received by the image sensor. In this technique, some of the components are mechanically swapped depending on whether the two parallax images or the normal image is being acquired, so that both can be obtained with one image sensor.
- The arrangement of two polarizing filters on the optical path when acquiring the two parallax images is the same as in the technique disclosed in Patent Document 2.
- When acquiring a normal image, the polarizing filter is mechanically removed from the optical path. By incorporating such a mechanism, it is possible to obtain both a plurality of images having parallax and a normal image with a high light utilization rate.
- FIG. 16 is a diagram schematically illustrating an imaging system disclosed in Patent Document 4.
- the imaging system includes a lens 3, a lens diaphragm 19, a light flux limiting plate 20 on which two color filters 20a and 20b having different transmission wavelength ranges are arranged, and a photosensitive film 21.
- the color filters 20a and 20b are filters that transmit, for example, red and blue light, respectively.
- the incident light passes through the lens 3, the lens diaphragm 19, and the light beam limiting plate 20, and forms an image on the photosensitive film.
- a magenta color image is formed on the photosensitive film by the light transmitted through each of these two color filters.
- Since the positions of the color filters 20a and 20b are different, parallax occurs in the image formed on the photosensitive film.
- If a print is made from the photosensitive film and viewed through glasses with red and blue films attached for the right eye and the left eye, respectively, an image with a sense of depth can be seen.
- In this way, a multi-viewpoint image can be created using two color filters.
- FIG. 17 is a diagram schematically showing a light flux limiting plate in this technique.
- A light flux limiting plate 22 is used in which an R region 22R that transmits red light, a G region 22G that transmits green light, and a B region 22B that transmits blue light are provided on a plane perpendicular to the optical axis of the imaging optical system.
- the light transmitted through these areas is received by a color imaging device having R pixels for red, G pixels for green, and B pixels for blue, whereby an image of light transmitted through each area is acquired.
- Patent Document 6 discloses a technique for acquiring a plurality of images having parallax using the same configuration as Patent Document 5.
- FIG. 18 is a diagram schematically showing the light flux limiting plate disclosed in Patent Document 6. With this technique as well, images with parallax can be created by transmitting incident light through the R region 23R, the G region 23G, and the B region 23B provided on the light flux limiting plate 23.
- Patent Document 7 discloses a technique for generating a plurality of images having parallax by using a pair of filters of different colors arranged symmetrically with respect to the optical axis.
- Using a red filter and a blue filter as the pair of filters, the R pixels, which detect red light, observe the light transmitted through the red filter, while the B pixels, which detect blue light, receive the light transmitted through the blue filter.
- Since the positions of the red filter and the blue filter are different, the incident direction of the light received by the R pixels differs from that of the light received by the B pixels.
- the image observed at the R pixel and the image observed at the B pixel are images having different viewpoints.
- the amount of parallax is calculated by obtaining corresponding points for each pixel from these images.
- the distance from the camera to the subject is obtained from the calculated amount of parallax and the focal length information of the camera.
- Patent Document 8 discloses a technique for obtaining distance information of a subject from two images acquired using a diaphragm to which two color filters having different aperture sizes (for example, red and blue) are attached, or a diaphragm in which two color filters of different colors are attached at positions symmetric with respect to the optical axis. In this technique, light transmitted through the red and blue color filters, whose apertures differ in size, exhibits a different degree of blur for each color. For this reason, the two images corresponding to the red and blue color filters blur to different degrees depending on the distance of the subject. By obtaining corresponding points from these images and comparing the degree of blur, distance information from the camera to the subject can be obtained.
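The blur-comparison step just described can be sketched as follows. This is a hypothetical illustration, not code from any of the cited documents: it uses the variance of a discrete Laplacian as the sharpness measure, which is an assumed (though common) choice, and the mapping from the resulting ratio to a distance would require per-camera calibration.

```python
import numpy as np

def blur_metric(channel: np.ndarray) -> float:
    """Estimate blur as the variance of a discrete Laplacian.

    A sharper image has stronger high-frequency content, so a
    larger Laplacian variance indicates less blur.
    """
    lap = (
        -4 * channel[1:-1, 1:-1]
        + channel[:-2, 1:-1] + channel[2:, 1:-1]
        + channel[1:-1, :-2] + channel[1:-1, 2:]
    )
    return float(lap.var())

def blur_ratio(red: np.ndarray, blue: np.ndarray) -> float:
    """Compare the blur of the red- and blue-channel images.

    In a scheme like Patent Document 8's, the two channels pass
    through apertures of different sizes, so this ratio varies with
    subject distance and can be mapped to depth via calibration.
    """
    return blur_metric(red) / blur_metric(blue)
```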
- the two images corresponding to the red and blue color filters are images having parallax.
- As described above, images with parallax can be generated by arranging RGB color filters on a light flux limiting plate; however, the amount of incident light is reduced.
- Patent Document 9 discloses a technique that, by using a diaphragm in which RGB color filters are arranged, can obtain both a plurality of images having parallax and a normal image with no light-quantity problem.
- In this technique, only light that has passed through the RGB color filters is received when the diaphragm is closed, and the RGB color filter region is removed from the optical path when the diaphragm is opened, so that all incident light can be received.
- As a result, images with parallax can be obtained when the diaphragm is closed, and a normal image with a high light utilization rate can be obtained when the diaphragm is opened.
- With the known techniques described above, a multi-viewpoint image can be generated, but because a polarizing plate or color filters are used, the amount of light incident on the image sensor is reduced. Securing the amount of incident light requires a mechanism that removes the polarizing element or the color filter region from the optical path. Without such a mechanism, the prior art cannot simultaneously obtain a multi-viewpoint image and an image with a high light utilization rate.
- On the other hand, when obtaining depth information from multi-viewpoint images, the depth is usually estimated by extracting feature portions from each image and performing matching between those feature portions.
- In another known method, depth information is estimated by calculating pixel shifts based on a linear color model in an RGB color space.
- Embodiments of the present invention provide an imaging technique that can simultaneously acquire an image with high light utilization and depth information without using a conventional method for estimating depth information.
- In one general aspect, a three-dimensional imaging device includes: a light transmission unit including a transmission region in which spectral transmittance characteristics change along a first direction; an image sensor that receives the light transmitted through the light transmission unit and outputs a photoelectric conversion signal corresponding to the received light; an imaging unit that forms an image on the imaging surface of the image sensor; and an image processing unit that extracts a contour, in the first direction, of a subject included in an image generated based on the photoelectric conversion signal output from the image sensor, and estimates information on the depth of the subject based on the lightness or hue pattern of the background in the vicinity of the extracted contour.
- According to an embodiment of the present invention, information about the depth of a subject is converted into brightness or color information on the image, so depth information can be calculated from the image.
- depth information and a high-sensitivity image can be acquired simultaneously by increasing the transmittance of a region other than the transmission region of the light transmission unit.
- FIG. 1 is a block diagram illustrating the overall configuration of an imaging apparatus according to exemplary Embodiment 1.
- FIG. 2 is a schematic diagram illustrating the configuration of the light-transmitting plate, the optical lens, and the image sensor in exemplary Embodiment 1.
- FIG. 3 is a front view of the light-transmitting plate in exemplary Embodiment 1.
- FIG. 4 is a diagram showing the basic color configuration of the imaging section of the solid-state image sensor in exemplary Embodiment 1.
- FIG. 5 is a diagram schematically showing the imaging situation in exemplary Embodiment 1.
- FIG. 6 is a flowchart showing the flow of image processing in exemplary Embodiment 1.
- FIG. 7 is a diagram showing the pixel signals of the right imaging region in exemplary Embodiment 1.
- A diagram showing the rotation operation of the light-transmitting plate.
- A front view of the light-transmitting plate in exemplary Embodiment 2, and a transmission characteristic diagram of its band-shaped color filter.
- FIG. 14 is a configuration diagram of the imaging system in Patent Document 1.
- FIG. 15 is a configuration diagram of the imaging system in Patent Document 2.
- FIG. 16 is a configuration diagram of the imaging system in Patent Document 4.
- FIG. 17 is an external view of the light flux limiting plate in Patent Document 5.
- FIG. 18 is an external view of the light flux limiting plate in Patent Document 6.
- a three-dimensional imaging apparatus includes a light transmission unit having a transmission region in which spectral transmittance characteristics change along a first direction, and light transmitted through the light transmission unit.
- An image sensor that is arranged to receive and outputs a photoelectric conversion signal corresponding to the received light; an imaging unit that forms an image on an imaging surface of the image sensor; and the photoelectric conversion signal output from the image sensor
- An image processing unit that extracts a contour in the first direction of a subject included in an image generated based on the information, and estimates information on the depth of the subject based on a background brightness or hue pattern in the vicinity of the extracted contour And.
- the transmission region has three or more types of transmission wavelength regions along the first direction.
- the three-dimensional imaging device includes, in one aspect, a rotation drive unit that rotates the transmission region on a plane perpendicular to the optical axis.
- the image processing unit extracts the contour of the subject by comparing a plurality of images acquired in different rotation states.
- In one aspect, the image processing unit extracts the contour of the subject based on the difference between a first image acquired when the transmission region is in a first state and a second image acquired when the transmission region is in a second state rotated 180 degrees from the first state.
- In one aspect, the spectral transmittance characteristics of the transmission region change continuously and periodically along the first direction.
- In one aspect, the transmission region has six types of regions whose transmission wavelength ranges are blue, cyan, green, yellow, red, and magenta, arranged along the first direction.
- In one aspect, the image processing unit estimates the depth of the subject based on information indicating the relationship between the depth of the subject and the lightness or hue pattern at the pixels surrounding the contour.
- a portion other than the transmission region in the light transmission portion is transparent.
- In one aspect, the image processing unit generates a depth image based on the information indicating the estimated depth.
- In one aspect, the image processing unit generates a color image based on the photoelectric conversion signal output from the image sensor.
- the light transmission unit according to an aspect of the present invention can be used in the three-dimensional imaging device according to any one of items (1) to (11).
- the image processing apparatus can be used for the three-dimensional imaging apparatus according to any one of items (1) to (11).
- The image processing apparatus includes an image processing unit that extracts a contour, in the first direction, of a subject included in an image generated based on the photoelectric conversion signal output from the image sensor, and estimates information on the depth of the subject based on the lightness or hue pattern of the background in the vicinity of the extracted contour.
- The image processing program according to an aspect of the present invention can be used with the three-dimensional imaging device according to any one of items (1) to (11).
- The program causes a computer to execute a step of extracting a contour, in the first direction, of a subject included in an image generated based on the photoelectric conversion signal output from the image sensor, and a step of estimating information on the depth of the subject based on the lightness or hue pattern of the background in the vicinity of the extracted contour.
- FIG. 1 is a block diagram illustrating the overall configuration of the imaging apparatus according to the present embodiment.
- The imaging apparatus according to the present embodiment is a digital electronic camera, and includes an imaging unit 100 and a signal processing unit 200 that generates a signal indicating an image (image signal) based on the signal generated by the imaging unit 100.
- The imaging unit 100 includes: a color solid-state image sensor 2a (hereinafter simply referred to as the "image sensor") including a plurality of photosensitive cells (pixels) arranged on an imaging surface; a light-transmitting plate (light transmission unit) 1 on which a band-shaped color filter whose spectral transmittance characteristics change along a specific direction is disposed and which transmits incident light; an optical lens 3 for forming an image on the imaging surface of the image sensor 2a; and an infrared cut filter 4.
- The imaging unit 100 further includes a signal generating/receiving unit 5 that generates a basic signal for driving the image sensor 2a, receives the output signal from the image sensor 2a, and sends it to the signal processing unit 200, and an element driving unit 6 that drives the image sensor 2a based on the basic signal generated by the signal generating/receiving unit 5.
- the color solid-state imaging device 2a is typically a CCD or CMOS sensor, and is manufactured by a known semiconductor manufacturing technique.
- the signal generation / reception unit 5 and the element driving unit 6 are composed of an LSI such as a CCD driver, for example.
- the “spectral transmittance characteristic” means the wavelength dependency of the transmittance in the wavelength range of visible light.
- The signal processing unit 200 includes an image processing unit 7 that processes the signal output from the imaging unit 100 to generate a color image and subject depth information, a memory 30 that stores various data used for generating the image signal, and an interface (IF) unit 8 that transmits the generated image signal and depth information to the outside.
- the image processing unit 7 can be suitably realized by a combination of hardware such as a known digital signal processor (DSP) and software that executes image processing including image signal generation processing.
- the image processing unit 7 generates a color image, extracts a contour (edge) included in the image, and calculates depth information from color information in the vicinity of the contour.
- the image processing unit 7 also converts the depth information into a luminance signal, and generates a black and white image indicating the depth distribution.
- the memory 30 is configured by a DRAM or the like.
- The memory 30 records the signal obtained from the imaging unit 100 and temporarily stores the image data generated by the image processing unit 7 and the compressed image data. These image data are then sent to a recording medium (not shown) or the like.
- The imaging apparatus may include known components such as an electronic shutter, a viewfinder, a power source (battery), and a flashlight, but their description is omitted because it is not necessary for understanding the present embodiment.
- FIG. 2 is a diagram schematically showing an arrangement relationship of the light-transmitting plate 1, the lens 3, and the image pickup device 2a in the image pickup unit 100.
- the lens 3 may be a lens unit composed of a plurality of lens groups, but is illustrated as a single lens in FIG. 2 for simplicity.
- the translucent plate 1 has a band-like color filter (transmission region) 1a whose spectral transmittance characteristics change along the horizontal direction, and transmits incident light.
- the lens 3 is a known lens, collects the light transmitted through the translucent plate 1, and forms an image on the imaging surface 2b of the imaging element 2a.
- the “horizontal direction” means the x direction shown in the referenced drawings and does not necessarily coincide with a direction parallel to the ground surface.
- FIG. 3 is a front view of the translucent plate 1 in the present embodiment.
- the shape of the translucent plate 1 in the present embodiment is a circle like the lens 3.
- the translucent plate 1 has a band-like color filter 1a at the center thereof, and the other region 1b is transparent.
- The transmission wavelength range of this band-shaped color filter 1a changes gradually from left to right in the drawing: red (R), yellow (Ye), green (G), cyan (Cy), blue (B), and magenta (Mg).
- The transmittances of the R, G, and B filter portions are substantially equal, and the transmittance of each of the Ye, Cy, and Mg filter portions is assumed to be about twice that of each of the R, G, and B filter portions.
- FIG. 4 shows a part of the plurality of photosensitive cells 50 arranged in a matrix on the imaging surface 2b of the imaging device 2a.
- Each photosensitive cell 50 is typically a photodiode, and outputs an electrical signal (hereinafter referred to as “photoelectric conversion signal” or “pixel signal”) corresponding to the amount of received light by photoelectric conversion.
- a color filter is disposed on the light incident side facing each photosensitive cell 50.
- The color filter arrangement in this embodiment is a horizontal stripe arrangement with a basic unit of 3 rows and 1 column: the first row is a red element (R), the second row is a green element (G), and the third row is a blue element (B).
- the color filter of each element is manufactured using a known pigment or the like.
- the red color filter selectively transmits light in the red wavelength range
- the green color filter selectively transmits light in the green wavelength range
- the blue color filter selectively transmits light in the blue wavelength range.
- Light incident on the imaging apparatus during exposure passes through the light-transmitting plate 1, the lens 3, and the infrared cut filter 4, forms an image on the imaging surface 2b of the image sensor 2a, and is photoelectrically converted by each photosensitive cell 50.
- the photoelectric conversion signal output by each photosensitive cell 50 is sent to the signal processing unit 200 via the signal generation / reception unit 5.
- the image processing unit 7 in the signal processing unit 200 performs colorization of the image and calculation of depth information based on the transmitted signal.
- The depth information is converted into a luminance signal according to the depth amount and output as a monochrome image.
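The conversion of depth amounts into a luminance signal can be sketched as below. The linear 8-bit mapping is an assumed choice for illustration; the patent does not specify the exact mapping.

```python
import numpy as np

def depth_to_grayscale(depth: np.ndarray) -> np.ndarray:
    """Map a depth map (arbitrary units) to an 8-bit luminance image.

    Larger depth values become brighter pixels; whether near or far
    should appear bright is a design choice.
    """
    d = depth.astype(np.float64)
    span = d.max() - d.min()
    if span == 0:
        # Flat depth map: return an all-black image.
        return np.zeros(d.shape, dtype=np.uint8)
    return np.round((d - d.min()) / span * 255).astype(np.uint8)
```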
- FIG. 5 is a diagram schematically showing a situation when the background 32 and the foreground subject 31 are imaged through the light transmitting plate 1.
- When imaging is performed in the situation shown in the drawing, the image sensor 2a outputs a photoelectric conversion signal based on the light reflected by the foreground subject 31 and the background 32.
- the photoelectric conversion signal is sent from the image sensor 2 a to the image processing unit 7 via the signal generation / reception unit 5.
- The image processing unit 7 performs the following two processes using the transmitted photoelectric conversion signal and outputs two images as the respective processing results. As the first process, predetermined colorization processing is performed and a color image is generated.
- As the second process, contour extraction is performed on the image, the depth is estimated based on the coloring in the horizontal direction in the vicinity of the contour, and a black-and-white image whose luminance values represent the depth amount (hereinafter referred to as the "depth image") is generated.
- FIG. 6 is a flowchart showing the flow of image generation processing in the present embodiment.
- First, the image processing unit 7 generates a color image based on the photoelectric conversion signal generated by the image sensor 2a.
- In step S11, the horizontal contour of the subject included in the generated color image is extracted.
- In step S12, the background color near the contour is detected.
- In step S13, the hue pattern of the pixels near the contour is detected.
- In step S14, the depth of the subject is calculated from the hue pattern of the pixels near the contour.
- Finally, a depth image is generated based on the calculated depth amount. Details of each step are described below.
- a color image is generated from the output of the image sensor 2a.
- The image sensor 2a has an RGB horizontal stripe arrangement, so RGB signals (color signals) can be obtained directly with the 3 pixels of each 3-row, 1-column unit, and a color image is generated based on these color signals.
- Moreover, since the horizontal stripe color arrangement shown in FIG. 4 is used, a color image with high resolution in the horizontal direction can be created.
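The direct readout of color signals from the 3-row, 1-column stripe units can be sketched as follows. The function name and the raw-frame layout (rows cycling R, G, B from the top, as in FIG. 4) are assumptions for illustration.

```python
import numpy as np

def demosaic_stripes(raw: np.ndarray) -> np.ndarray:
    """Convert a raw mosaic with an R/G/B horizontal-stripe pattern
    (rows 0, 3, 6, ... are R; rows 1, 4, 7, ... are G; rows 2, 5,
    8, ... are B) into an (H/3) x W x 3 color image.

    Each 3-row-by-1-column unit yields one full RGB pixel, which is
    why horizontal resolution is preserved at full sensor width.
    """
    h, w = raw.shape
    assert h % 3 == 0, "height must be a multiple of 3"
    r = raw[0::3, :]
    g = raw[1::3, :]
    b = raw[2::3, :]
    return np.stack([r, g, b], axis=-1)
```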
- Since the band-shaped color filter 1a is disposed at the center of the light-transmitting plate 1, part of the light is absorbed by the band-shaped color filter 1a and lost, but the remaining light is photoelectrically converted without loss.
- Moreover, the color scheme of the band-shaped color filter 1a uses not only the primary colors (R, G, B) but also the complementary colors (Ye, Cy, Mg), so its light transmittance is higher than that of a configuration using only primary-color filters.
- In addition, the image of the subject is basically not tinted by the band-shaped color filter 1a. For these reasons, by making the band-shaped color filter 1a sufficiently small, a color image with no problems in sensitivity or color characteristics can be generated.
- the image processing unit 7 extracts a contour in the horizontal direction of an image (captured image) acquired by imaging.
- In the present embodiment, the contour is extracted using the result of the color image generation process. Specifically, the color components are first removed from the color image obtained by the color image generation process described above to create a monochrome image. Next, in the monochrome image, the signal difference between each pair of horizontally adjacent pixels is computed, and wherever the difference value is at or above a preset level, that location is taken as a horizontal contour of the image.
- the image processing unit 7 extracts a contour in the horizontal direction by performing such signal difference processing on the entire image.
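The horizontal difference processing just described can be sketched as follows. The threshold corresponds to the "preset level" and is an assumed parameter, and channel averaging is an assumed way of removing the color components.

```python
import numpy as np

def to_monochrome(color: np.ndarray) -> np.ndarray:
    """Remove color components by averaging the RGB channels."""
    return color.mean(axis=-1)

def horizontal_contours(mono: np.ndarray, threshold: float) -> np.ndarray:
    """Mark horizontal contour positions in a monochrome image.

    Returns a boolean map in which True at (y, x) means the absolute
    difference between horizontally adjacent pixels x and x+1 meets
    or exceeds the preset threshold.
    """
    diff = np.abs(np.diff(mono.astype(np.float64), axis=1))
    return diff >= threshold
```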
- step S12 the background color near the extracted contour is detected.
- the region in which the color is detected is the left side region of the contour (corresponding to the left imaging region 33 shown in FIG. 5) in the contour in the left half of the image, and the right region of the contour in the contour in the right half of the image ( Corresponding to the right imaging region 34 shown in FIG.
- The specific widths (numbers of pixels in the horizontal direction) of the left region 33 and the right region 34 are set in advance to widths corresponding to an assumed depth range of the imaging scene.
- the color of the right imaging region 34 in FIG. 5 is white
- The area ratio of the band-shaped color filter 1a of the light-transmitting plate 1 shown in FIG. 3 to the other, transparent region 1b is 1:k.
- The areas on the image corresponding to the left imaging region 33 and the right imaging region 34 are affected by the band-shaped color filter 1a, and their color changes continuously in the horizontal direction.
- Light incident on the image sensor 2a from the left end portion of the right imaging region 34 in FIG. 5 passes only through the right end (Mg region) of the band-shaped color filter 1a, so the pixel corresponding to that portion is colored Mg.
- the portion on the image corresponding to that portion is not colored.
- FIG. 7 is a diagram schematically showing the signal levels of six pixels in the horizontal direction on the image corresponding to the right imaging region 34. As illustrated in FIG. 7, the signal level increases from the pixel corresponding to the left end of the right imaging region 34 toward the pixel corresponding to the right end.
- The signals of the seven pixels arranged in the horizontal direction are denoted S(i), S(i+1), S(i+2), ..., S(i+6), and the contour is assumed to lie at the boundary between S(i) and S(i+1).
- The pixel signal S(i+1) corresponding to the left end of the right imaging region 34 is expressed as Mg + k(1)(R+G+B).
- S(i+2) is expressed as (Mg+B) + k(2)(R+G+B).
- S(i+3) is expressed as (Mg+B+Cy) + k(3)(R+G+B).
- S(i+4) is expressed as (Mg+B+Cy+G) + k(4)(R+G+B).
- S(i+5) is expressed as (Mg+B+Cy+G+Ye) + k(5)(R+G+B).
- The pixel signal S(i+6) at the right end of the right imaging region 34 is expressed as (Mg+B+Cy+G+Ye+R) + k(6)(R+G+B).
- The pixel unit here is assumed to be the set of 3 pixels in 3 rows and 1 column shown in FIG. 4, and these 3 pixels are collectively treated as 1 pixel.
- the background color can be determined to be Cy.
- a database that associates the RGB ratio with the corresponding background color is prepared in advance and recorded in the memory 30.
- the image processing unit 7 refers to the database, and detects the color in the background region near the contour from the RGB ratio of the pixels near the contour on the image.
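The database lookup in step S12 might be sketched as follows; the database contents, color names, and tolerance are illustrative assumptions, not the patent's actual data.

```python
# Hypothetical database mapping an (R, G, B) ratio, normalized to sum 1,
# to a background color name; prepared in advance (cf. memory 30).
BACKGROUND_DB = {
    (1 / 3, 1 / 3, 1 / 3): "white",
    (0.0, 0.5, 0.5): "Cy",
    (0.5, 0.5, 0.0): "Ye",
}

def detect_background_color(r, g, b, tolerance=0.05):
    """Normalize the pixel's RGB values and return the database entry
    whose ratio is closest, within the given tolerance."""
    total = r + g + b
    ratio = (r / total, g / total, b / total)
    best, best_dist = None, tolerance * 3
    for key, name in BACKGROUND_DB.items():
        dist = sum(abs(a - c) for a, c in zip(ratio, key))
        if dist < best_dist:
            best, best_dist = name, dist
    return best

color = detect_background_color(120, 120, 120)
```

A pixel with equal R, G, B values matches the "white" entry; a pixel with no red component and equal green and blue matches "Cy", in line with the Cy example in the text.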
- In step S13, a hue pattern in the region around the contour is detected. Specifically, after the color component signal detected in step S12 is removed from each pixel signal in the region around the contour, the color components are obtained by taking differences between adjacent pixels. For example, in the example shown in FIG. 7, since the color of the right imaging region 34 is white, the white component k(j)(R+G+B) is removed from each pixel signal S(i+1), S(i+2), .... The pixel signals S(i+1) to S(i+6) in the right imaging region 34 are then expressed by Equations 1 to 6 below, respectively.
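Step S13 can be illustrated symbolically (a sketch, not the patent's implementation): once the white component is removed, each pixel signal accumulates one more filter color than its left neighbor, so pairwise differences isolate the individual colors, as in Equations 7 to 11.

```python
# Pixel signals after the white component k(j)(R+G+B) is removed
# (Equations 1 to 6), represented symbolically as sets of filter colors.
signals = {
    1: {"Mg"},
    2: {"Mg", "B"},
    3: {"Mg", "B", "Cy"},
    4: {"Mg", "B", "Cy", "G"},
    5: {"Mg", "B", "Cy", "G", "Ye"},
    6: {"Mg", "B", "Cy", "G", "Ye", "R"},
}

# The difference between neighboring pixels isolates one filter color,
# e.g. D12 = S(i+2) - S(i+1) = B (Equation 7).
diffs = {j: signals[j + 1] - signals[j] for j in range(1, 6)}
```

Each difference contains exactly one color, and the sequence of those colors is the hue pattern analyzed in step S14.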
- In step S14, the characteristics of the color pattern around the contour are examined, and the depth corresponding to those characteristics is calculated.
- Here, the depth means the distance between the subject 31 and the background 32 shown in FIG. 5. If the depth distance is short, the range of the hue pattern is narrow; conversely, if the depth distance is long, the range of the hue pattern is wide. That is, there is a correlation between the depth information of the subject 31 and the hue pattern near the contour.
- data indicating the correlation is prepared in advance and recorded in the memory 30.
- The image processing unit 7 refers to the correlation data and calculates the depth from the hue pattern near the contour.
- Suppose the distance between the foreground subject 31 and the background 32 is halved relative to the state in which the hue pattern of the right imaging region 34 is expressed by Equations 1 and 7 to 11, that is, six colors are obtained over six pixels. Then, as shown in FIG. 8, three colors are obtained over three pixels.
- the image processing unit 7 in the present embodiment obtains depth information using this correlation.
- the depth information is not absolute distance information but relative information.
- In step S15, the depth information obtained in step S14 is converted into brightness information (a monochrome image). Specifically, the image processing unit 7 scans the captured image horizontally and calculates the depth every time a contour is detected. The calculated depth values are accumulated to find the maximum depth value. The brightness of each pixel is then determined with the maximum depth value assigned the maximum luminance value. For example, when converting to an 8-bit monochrome image, the maximum depth value is mapped to the luminance value 255.
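The conversion in step S15 can be sketched as follows; the depth values and the use of simple rounding are illustrative assumptions.

```python
def depth_to_monochrome(depths):
    """Map relative depth values to 8-bit brightness, with the maximum
    depth found in the image assigned the maximum luminance value 255."""
    max_depth = max(depths)
    if max_depth == 0:
        return [0 for _ in depths]
    return [round(255 * d / max_depth) for d in depths]

levels = depth_to_monochrome([0.0, 1.0, 2.0, 4.0])
```

Because the depths are relative, only the ratios matter: the deepest pixel becomes 255 and the rest scale linearly toward 0.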
- As described above, according to the present embodiment, by arranging the light-transmitting plate 1 having the band-shaped color filter 1a in the color imaging system, both a color image with little light loss and a depth image of the subject can be obtained.
- In particular, an excellent effect is obtained in that the relative depth between subjects can be calculated by extracting a contour from the obtained color image and examining the coloring hue in the region around the contour. Once relative depth information between subjects is available, subject depth information relative to the position of the imaging device can also be obtained by calculation using that information.
- Each filter portion of the band-shaped color filter 1a is configured to transmit only light in its corresponding specific wavelength region and not to transmit other light. Further, the transmittances of the R, G, and B filter portions are assumed to be substantially equal, and the transmittances of the Ye, Cy, and Mg filter portions are assumed to be approximately twice those of the R, G, and B filter portions. However, the band-shaped color filter 1a does not have to satisfy these conditions strictly. Ideally the above conditions should be satisfied, but even if the characteristics of the band-shaped color filter 1a deviate from the ideal characteristics, there is no problem as long as the above signal processing is corrected to compensate for the deviation.
- the color arrangement of the band-like color filter 1a in the translucent plate 1 is red (R), yellow (Ye), green (G), cyan (Cy), blue (B), and magenta (Mg).
- the present invention is not limited to this, and other color arrangements may be used.
- The types of color arrangement of the band-shaped color filter 1a are not limited to those described above. However, from the viewpoint of increasing the depth calculation accuracy, it is preferable that there be three or more types in the color arrangement. Further, from the viewpoint of obtaining an image with no problems as a color image, it is preferable that the result of adding all the colors of the band-shaped color filter 1a be white, as in the present embodiment. Even if a band-shaped color filter 1a whose summed colors are merely close to white is used, depth information can be calculated without problems in the color image.
- The basic color configuration of the color image pickup device 2a is an RGB horizontal stripe arrangement, but the present invention is not limited to this, and other basic color arrangements of three or more colors may be used.
- Although the depth calculation accuracy is lower than with the horizontal stripe arrangement, there is no problem even if a Bayer arrangement comprising one red element, one blue element, and two green elements is used.
- Although the condensing lens 3 and the light-transmitting plate 1 are assumed to be circular, the effect is unchanged even if they have other shapes, such as a square shape.
- the color filter 1a in the translucent plate 1 is formed in a band shape
- the present invention is not limited to this, and there is no problem even if the color filter is arranged on the entire surface of the translucent plate 1.
- For example, if a filter is used whose hue (transmission wavelength range) changes in the rotation direction and whose color density (transmittance) changes in the radial direction, like the Munsell hue circle, the depth can be calculated not only in the horizontal direction but in all directions.
- Alternatively, a filter whose transmission wavelength region or transmittance changes concentrically may be used. In either of these filters, it is preferable that the transmittance characteristics be designed so that the sum of the transmitted light is close to white light.
- the lens 3 may be arranged farther from the imaging element 2a than the translucent plate 1 as long as an image can be formed on the imaging surface 2b.
- the lens 3 and the translucent plate 1 may be configured integrally.
- The image processing unit 7 in the present embodiment generates a color image and a depth image at the same time, but it may generate only a depth image without generating a color image. Further, only depth information may be generated by the above processing, without generating a depth image. Furthermore, the image processing according to the present embodiment may be executed by another device independent of the imaging device. For example, the same effect as in the present embodiment can also be obtained by reading a signal acquired by an imaging apparatus having the imaging unit 100 of the present embodiment into another apparatus (image processing apparatus), and having a computer incorporated in that image processing apparatus execute a program defining the above signal calculation processing.
- FIG. 9 is a block diagram showing the overall configuration of the imaging apparatus of the present embodiment.
- the imaging apparatus according to the present embodiment includes a rotation drive unit 40 as a rotation mechanism.
- the rotation driving unit 40 has a motor for rotating the light transmissive plate 1, and rotates the light transmissive plate 1 based on a command from the element driving unit 6.
- a specific method for rotating the light transmitting plate 1 can be realized by attaching a belt to the light transmitting plate 1 and rotating the belt with a motor as described in Non-Patent Document 1, for example.
- The imaging apparatus captures an image once in a state (a) in which the band-shaped color filter 1a of the light-transmitting plate 1 is kept horizontal, and then captures another image in a state (b) in which the light-transmitting plate 1 has been rotated by 180 degrees. Difference processing is performed on the two captured images, and the portions that differ are extracted as image data from the difference result. Thereby, the areas on the image corresponding to the left imaging region 33 and the right imaging region 34 can be specified.
- With only a single captured image, the left imaging region 33 and the right imaging region 34 cannot be clearly specified, but in the present embodiment these regions can be clearly specified. This is because the signal level of each pixel corresponding to the left imaging region 33 and the right imaging region 34 changes between the two exposures due to the change in the color arrangement of the band-shaped color filter 1a. Therefore, the left imaging region 33 and the right imaging region 34 can be clearly identified from the difference between the two images.
- the processing after specifying the left imaging region 33 and the right imaging region 34 is the same as in the first embodiment. Thus, since the left imaging area 33 and the right imaging area 34 can be clearly identified, the accuracy of depth calculation can be improved.
- As described above, in the present embodiment, the light-transmitting plate 1 is rotated 180 degrees and imaging is performed twice, before and after the rotation, whereby the left imaging region 33 and the right imaging region 34 can be clearly identified from the difference between the two captured images. As a result, the depth calculation accuracy can be further improved.
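The two-exposure difference step can be sketched as follows; the array values and threshold are illustrative, and a real implementation would operate on the full sensor output.

```python
import numpy as np

def changed_regions(img_a, img_b, threshold=8):
    """Pixels whose level changes between the two exposures (filter at 0
    degrees and at 180 degrees) are the ones affected by the band-shaped
    color filter; they mark the left and right imaging regions."""
    diff = np.abs(img_a.astype(np.int32) - img_b.astype(np.int32))
    return diff >= threshold

# Toy exposures: only the middle column is colored by the filter,
# so only it changes when the plate is rotated.
a = np.array([[100, 120, 100],
              [100, 130, 100]])
b = np.array([[100, 140, 100],
              [100, 110, 100]])
mask = changed_regions(a, b, threshold=10)
```

Pixels unaffected by the filter produce (near-)zero differences and are excluded, which is what lets the two regions be identified unambiguously.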
- the imaging is performed twice by rotating the translucent plate 1 by 180 degrees by the rotation driving unit 40, but the rotation angle and the number of imaging may be changed.
- the light transmitting plate 1 may be rotated by 90 degrees, and four pairs of multiple viewpoint images may be generated by performing imaging four times.
- the arrangement of the band-shaped color filter 1a of the light transmitting plate 1 is the same as that of the first embodiment, but the method of changing the spectral transmittance characteristics of the band-shaped color filter 1a is different.
- the strip-shaped color filter 1a in the present embodiment is gray in color, that is, does not have wavelength selectivity, and its transmittance periodically changes along the horizontal direction (x direction). Since the color filter 1a is gray, in this embodiment, the depth is calculated using the luminance signal of the color image instead of the RGB pixel signals.
- In the following, the points that differ from Embodiment 1 are described, and description of matters common to both is omitted.
- FIG. 11 shows a front view of the translucent plate 1 in the present embodiment.
- The band-shaped color filter 1a does not have wavelength selectivity, but its transmittance changes like a cos function, with two locations each where the transmittance is high and where it is low.
- FIG. 12 shows a graph of the transmittance.
- the horizontal axis represents the coordinate x on the color filter 1a
- the vertical axis represents the transmittance Tr.
- Tr = (1/2)cosX + 1/2
- The luminance values of the areas on the image corresponding to the left imaging region 33 and the right imaging region 34 follow the integral, in the horizontal direction, of the transmittance of the band-shaped color filter 1a.
- FIG. 13A shows a graph of the integral value ⁇ Tr represented by Expression 13.
- Since integrating the cos function merely yields a sin function, the waveform of the periodic first term does not change before and after the integration. Therefore, the image processing unit 7 in the present embodiment removes the signal corresponding to the linear-function term shown in Expression 13 from the pixel signals corresponding to the left imaging region 33 and the right imaging region 34, and then analyzes the waveform of the signal of the multiple pixels arranged in the horizontal direction.
- FIG. 13B shows a graph of the function obtained by removing the linear-function term from ΣTr represented by Expression 13. The image processing unit 7 detects a waveform similar to the periodic function shown in FIG. 13B.
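The removal of the linear term can be illustrated numerically; the sample points below are illustrative.

```python
import math

def integrated_transmittance(x):
    # Expression 13: the integral of Tr = (1/2)cosX + 1/2
    # is (1/2)sinX + (1/2)X.
    return 0.5 * math.sin(x) + 0.5 * x

def periodic_part(x):
    # Remove the linear-function term (1/2)X; what remains is the
    # sin waveform that the image processing analyzes.
    return integrated_transmittance(x) - 0.5 * x

xs = [i * math.pi / 4 for i in range(9)]
wave = [periodic_part(x) for x in xs]
```

After subtraction, the residual signal is pure (1/2)sinX, so its phase and period can be compared against the stored depth-versus-waveform data.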
- the image processing unit 7 obtains the depth from the waveform of the periodic function based on the image data by referring to the information.
- As described above, in the present embodiment, the light-transmitting plate 1 is used that has a band-shaped color filter 1a whose color is gray and whose transmittance changes periodically along the horizontal direction.
- As a result, the waveform of a periodic function appears in the luminance values of the areas on the image corresponding to the left imaging region 33 and the right imaging region 34, and the depth can be calculated from that waveform.
- information related to the depth of the subject is estimated based on the background brightness pattern in the vicinity of the contour.
- the color of the band-shaped color filter 1a of the translucent plate is gray.
- However, the present invention is not limited to this; any color may be used as long as the transmittance changes periodically.
- the band-shaped color filter 1a may have wavelength selectivity. Even when the spectral transmittance characteristic pattern of the band-shaped color filter 1a is different from the above, depth information can be obtained by executing signal processing according to the spectral transmittance characteristic pattern.
- the image sensor 2a may be a monochrome image sensor instead of a color image sensor. Further, in the present embodiment, as in the second embodiment, the contour may be extracted by rotating the light transmitting plate 1.
- the three-dimensional imaging device is effective for all cameras using a solid-state imaging device.
- it can be used for consumer cameras such as digital cameras and digital video cameras, and industrial solid-state surveillance cameras.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- Remote Sensing (AREA)
- Radar, Positioning & Navigation (AREA)
- General Physics & Mathematics (AREA)
- Spectroscopy & Molecular Physics (AREA)
- Studio Devices (AREA)
- Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
- Stereoscopic And Panoramic Photography (AREA)
- Length Measuring Devices By Optical Means (AREA)
- Measurement Of Optical Distance (AREA)
Abstract
Description
First, a three-dimensional imaging device (hereinafter simply referred to as an "imaging device") according to the first embodiment of the present invention will be described. FIG. 1 is a block diagram showing the overall configuration of the imaging device in the present embodiment. The imaging device of the present embodiment is a digital electronic camera, and includes an imaging unit 100 and a signal processing unit 200 that generates a signal representing an image (image signal) based on the signal generated by the imaging unit 100.
(Equation 1) S(i+1) = Mg
(Equation 2) S(i+2) = Mg + B
(Equation 3) S(i+3) = Mg + B + Cy
(Equation 4) S(i+4) = Mg + B + Cy + G
(Equation 5) S(i+5) = Mg + B + Cy + G + Ye
(Equation 6) S(i+6) = Mg + B + Cy + G + Ye + R
(Equation 7) D12 = S(i+2) - S(i+1) = B
(Equation 8) D23 = S(i+3) - S(i+2) = Cy
(Equation 9) D34 = S(i+4) - S(i+3) = G
(Equation 10) D45 = S(i+5) - S(i+4) = Ye
(Equation 11) D56 = S(i+6) - S(i+5) = R
Next, a second embodiment of the present invention will be described. In this embodiment, a rotation mechanism is incorporated into the light-transmitting plate 1, and the rotation mechanism rotates the light-transmitting plate 1 on a plane perpendicular to the optical axis, whereby imaging is performed twice in succession. The other components are the same as in Embodiment 1, so duplicate description is omitted.
Next, a third embodiment of the present invention will be described. In this embodiment, the arrangement of the band-shaped color filter 1a of the light-transmitting plate 1 is the same as in Embodiment 1, but the way the spectral transmittance characteristic of the band-shaped color filter 1a changes is different. The band-shaped color filter 1a in this embodiment is gray in color, that is, has no wavelength selectivity, and its transmittance changes periodically along the horizontal direction (x direction). Since the color filter 1a is gray, in this embodiment the depth is calculated using the luminance signal of the color image rather than the individual RGB pixel signals. In the following, points different from Embodiment 1 are described, and description of common matters is omitted.
(Equation 12) Tr = (1/2)cosX + 1/2
(Equation 13) ΣTr = (1/2)sinX + (1/2)X
1a band-shaped color filter
1b transparent portion
2 solid-state image sensor
2a color solid-state image sensor
2b imaging surface
3 optical lens
4 infrared cut filter
5 signal generation/reception unit
6 element driving unit
7 image processing unit
8 image interface unit
9 imaging device
10 pixel
11 polarizing plate with 0-degree polarization
12 polarizing plate with 90-degree polarization
13 reflecting mirror
14 half mirror
15 circular polarizing filter
16 driving device for rotating the polarizing filter
17, 18 polarizing filter
19 lens diaphragm
20, 22, 23 light beam limiting plate
20a color filter transmitting red-based light
20b color filter transmitting blue-based light
21 photosensitive film
22R, 23R R-light transmitting region of the light beam limiting plate
22G, 23G G-light transmitting region of the light beam limiting plate
22B, 23B B-light transmitting region of the light beam limiting plate
30 memory
31 foreground subject
32 background
33 left imaging region
34 right imaging region
40 rotation driving unit
50 pixel
Claims (14)
- A three-dimensional imaging device comprising:
a light-transmitting section having a transmissive region whose spectral transmittance characteristic changes along a first direction;
an imaging element arranged to receive light transmitted through the light-transmitting section, the imaging element outputting a photoelectric conversion signal corresponding to the received light;
an imaging section that forms an image on an imaging surface of the imaging element; and
an image processing section that extracts a contour, in the first direction, of a subject included in an image generated based on the photoelectric conversion signal output from the imaging element, and estimates information on the depth of the subject based on a brightness or hue pattern of the background in the vicinity of the extracted contour.
- The three-dimensional imaging device according to claim 1, wherein the transmission wavelength range of the transmissive region changes among three or more types along the first direction.
- The three-dimensional imaging device according to claim 1 or 2, wherein the transmissive region is designed so that, when achromatic light passes through the transmissive region, the sum of the transmitted light is achromatic light.
- The three-dimensional imaging device according to any one of claims 1 to 3, comprising a rotation driving unit that rotates the transmissive region on a plane perpendicular to the optical axis,
wherein the image processing section extracts the contour of the subject by comparing a plurality of images acquired in different rotation states.
- The three-dimensional imaging device according to claim 4, wherein the image processing section extracts the contour of the subject based on a difference between a first image acquired when the transmissive region is in a first state and a second image acquired when the transmissive region is in a second state rotated by 180 degrees from the first state.
- The three-dimensional imaging device according to any one of claims 1 to 5, wherein the spectral transmittance characteristic of the transmissive region changes continuously and periodically along the first direction.
- The three-dimensional imaging device according to any one of claims 1 to 6, wherein the transmissive region has six types of regions, arranged along the first direction, that respectively transmit light in the blue, cyan, green, yellow, red, and magenta wavelength ranges.
- The three-dimensional imaging device according to any one of claims 1 to 7, wherein the image processing section estimates the depth of the subject based on preset information indicating the relationship between the depth of the subject and the brightness or hue pattern of pixels around the contour.
- The three-dimensional imaging device according to any one of claims 1 to 8, wherein the portion of the light-transmitting section other than the transmissive region is transparent.
- The three-dimensional imaging device according to any one of claims 1 to 9, wherein the image processing section generates, based on information indicating the estimated depth, a depth image whose pixel values represent the amount of depth.
- The three-dimensional imaging device according to any one of claims 1 to 10, wherein the image processing section generates a color image based on the photoelectric conversion signal output from the imaging element.
- A light-transmitting section used in the three-dimensional imaging device according to any one of claims 1 to 11.
- An image processing apparatus used with the three-dimensional imaging device according to any one of claims 1 to 11, comprising:
an image processing section that extracts a contour, in the first direction, of a subject included in an image generated based on the photoelectric conversion signal output from the imaging element, and estimates information on the depth of the subject based on a brightness or hue pattern of the background in the vicinity of the extracted contour.
- An image processing program used with the three-dimensional imaging device according to any one of claims 1 to 11, the program causing a computer to execute
a step of extracting a contour, in the first direction, of a subject included in an image generated based on the photoelectric conversion signal output from the imaging element, and estimating information on the depth of the subject based on a brightness or hue pattern of the background in the vicinity of the extracted contour.
Priority Applications (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2013510864A JP5927570B2 (ja) | 2011-04-22 | 2012-04-11 | 3次元撮像装置、光透過部、画像処理装置、およびプログラム |
| CN201280001375.9A CN102918355B (zh) | 2011-04-22 | 2012-04-11 | 三维摄像装置、图像处理装置 |
| US13/807,059 US9544570B2 (en) | 2011-04-22 | 2012-04-11 | Three-dimensional image pickup apparatus, light-transparent unit, image processing apparatus, and program |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2011096333 | 2011-04-22 | ||
| JP2011-096333 | 2011-04-22 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2012144162A1 true WO2012144162A1 (ja) | 2012-10-26 |
Family
ID=47041296
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2012/002517 Ceased WO2012144162A1 (ja) | 2011-04-22 | 2012-04-11 | 3次元撮像装置、光透過部、画像処理装置、およびプログラム |
Country Status (4)
| Country | Link |
|---|---|
| US (1) | US9544570B2 (ja) |
| JP (1) | JP5927570B2 (ja) |
| CN (1) | CN102918355B (ja) |
| WO (1) | WO2012144162A1 (ja) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPWO2022244176A1 (ja) * | 2021-05-20 | 2022-11-24 |
Families Citing this family (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP6173065B2 (ja) * | 2013-06-21 | 2017-08-02 | オリンパス株式会社 | 撮像装置、画像処理装置、撮像方法及び画像処理方法 |
| KR102112298B1 (ko) * | 2013-09-30 | 2020-05-18 | 삼성전자주식회사 | 컬러 영상 및 깊이 영상을 생성하는 방법 및 장치 |
| US20150229910A1 (en) * | 2014-02-07 | 2015-08-13 | National University Of Singapore | Method and apparatus for stereoscopic imaging |
| US9491442B2 (en) | 2014-04-28 | 2016-11-08 | Samsung Electronics Co., Ltd. | Image processing device and mobile computing device having the same |
| US10416511B2 (en) * | 2016-08-31 | 2019-09-17 | Panasonic Liquid Crystal Display Co., Ltd. | Liquid crystal display device |
| CN113867076B (zh) * | 2021-11-08 | 2023-04-28 | Oppo广东移动通信有限公司 | 切趾滤镜及其制备方法、及相关产品 |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPH02171740A (ja) * | 1988-12-26 | 1990-07-03 | Minolta Camera Co Ltd | ステレオ写真用カメラ |
| JP2001074432A (ja) * | 1999-09-08 | 2001-03-23 | Fuji Xerox Co Ltd | 画像撮像装置 |
| JP2009276294A (ja) * | 2008-05-16 | 2009-11-26 | Toshiba Corp | 画像処理方法 |
| WO2011142062A1 (ja) * | 2010-05-11 | 2011-11-17 | パナソニック株式会社 | 3次元撮像装置 |
| WO2011151948A1 (ja) * | 2010-06-02 | 2011-12-08 | パナソニック株式会社 | 3次元撮像装置 |
Family Cites Families (34)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPH0748878B2 (ja) | 1986-03-19 | 1995-05-24 | 日本放送協会 | 立体画像撮像表示システム |
| JPS62291292A (ja) | 1986-06-11 | 1987-12-18 | Sharp Corp | 撮像装置 |
| JPS62292292A (ja) * | 1986-06-12 | 1987-12-18 | Yamazaki Mazak Corp | レ−ザ加工機 |
| JPH02171737A (ja) | 1988-12-26 | 1990-07-03 | Minolta Camera Co Ltd | ステレオカメラ用撮影レンズ |
| JPH1198418A (ja) * | 1997-09-24 | 1999-04-09 | Toyota Central Res & Dev Lab Inc | 撮像装置 |
| DE69921240T2 (de) * | 1998-07-09 | 2006-02-02 | Matsushita Electric Industrial Co., Ltd., Kadoma | Vorrichtung zur Herstellung eines Stereoskopischen Bildes |
| US6807295B1 (en) | 1999-06-29 | 2004-10-19 | Fuji Photo Film Co., Ltd. | Stereoscopic imaging apparatus and method |
| JP3863319B2 (ja) | 1999-06-29 | 2006-12-27 | 富士フイルムホールディングス株式会社 | 視差画像撮像装置及びカメラ |
| KR100379763B1 (ko) * | 2001-04-14 | 2003-04-10 | 강승연 | 입체안경 없이 볼 수 있는 입체영상촬영장치 |
| JP2002344999A (ja) | 2001-05-21 | 2002-11-29 | Asahi Optical Co Ltd | ステレオ画像撮像装置 |
| JP3869702B2 (ja) | 2001-10-30 | 2007-01-17 | ペンタックス株式会社 | ステレオ画像撮像装置 |
| JP2005037378A (ja) * | 2003-06-30 | 2005-02-10 | Sanyo Electric Co Ltd | 奥行計測方法と奥行計測装置 |
| WO2005013623A1 (en) * | 2003-08-05 | 2005-02-10 | Koninklijke Philips Electronics N.V. | Multi-view image generation |
| EP1714251A1 (en) * | 2004-02-03 | 2006-10-25 | Koninklijke Philips Electronics N.V. | Creating a depth map |
| KR100660519B1 (ko) * | 2005-12-12 | 2006-12-22 | (주)토핀스 | 3차원영상촬영대가 구비된 실험물 촬영장치 |
| US7742128B2 (en) | 2006-11-22 | 2010-06-22 | Canon Kabushiki Kaisha | Hybrid color display apparatus having large pixel and small pixel display modes |
| JP5408863B2 (ja) * | 2006-11-22 | 2014-02-05 | キヤノン株式会社 | 表示装置 |
| CN101720480B (zh) * | 2007-07-03 | 2012-07-18 | 皇家飞利浦电子股份有限公司 | 计算深度图 |
| KR101327794B1 (ko) * | 2007-10-23 | 2013-11-11 | 삼성전자주식회사 | 깊이 정보 획득 방법 및 장치 |
| JP5099704B2 (ja) | 2008-08-06 | 2012-12-19 | 独立行政法人産業技術総合研究所 | 高さを測定する方法及び高さ測定装置 |
| GB2463480A (en) | 2008-09-12 | 2010-03-17 | Sharp Kk | Camera Having Large Depth of Field |
| KR101497503B1 (ko) * | 2008-09-25 | 2015-03-04 | 삼성전자주식회사 | 2차원 영상의 3차원 영상 변환을 위한 깊이 맵 생성 방법 및 장치 |
| US8538135B2 (en) * | 2009-12-09 | 2013-09-17 | Deluxe 3D Llc | Pulling keys from color segmented images |
| US8638329B2 (en) * | 2009-12-09 | 2014-01-28 | Deluxe 3D Llc | Auto-stereoscopic interpolation |
| US8514269B2 (en) * | 2010-03-26 | 2013-08-20 | Microsoft Corporation | De-aliasing depth images |
| KR101690297B1 (ko) * | 2010-04-12 | 2016-12-28 | 삼성디스플레이 주식회사 | 영상 변환 장치 및 이를 포함하는 입체 영상 표시 장치 |
| CA2797302C (en) * | 2010-04-28 | 2019-01-15 | Ryerson University | System and methods for intraoperative guidance feedback |
| JP5197683B2 (ja) * | 2010-06-30 | 2013-05-15 | 株式会社東芝 | 奥行き信号生成装置及び方法 |
| JP5406151B2 (ja) * | 2010-09-24 | 2014-02-05 | パナソニック株式会社 | 3次元撮像装置 |
| JP5406163B2 (ja) * | 2010-10-21 | 2014-02-05 | パナソニック株式会社 | 3次元撮像装置および画像処理装置 |
| US20150029312A1 (en) * | 2012-02-21 | 2015-01-29 | Chung-Ang University Industry-Academy Cooperation Foundation | Apparatus and method for detecting object automatically and estimating depth information of image captured by imaging device having multiple color-filter aperture |
| US10387960B2 (en) * | 2012-05-24 | 2019-08-20 | State Farm Mutual Automobile Insurance Company | System and method for real-time accident documentation and claim submission |
| US9030470B2 (en) * | 2012-08-14 | 2015-05-12 | Hong Kong Applied Science and Technology Research Institute Company Limited | Method and system for rapid three-dimensional shape measurement |
| US8891905B2 (en) * | 2012-12-19 | 2014-11-18 | Hong Kong Applied Science And Technology Research Institute Co., Ltd. | Boundary-based high resolution depth mapping |
-
2012
- 2012-04-11 CN CN201280001375.9A patent/CN102918355B/zh active Active
- 2012-04-11 JP JP2013510864A patent/JP5927570B2/ja active Active
- 2012-04-11 WO PCT/JP2012/002517 patent/WO2012144162A1/ja not_active Ceased
- 2012-04-11 US US13/807,059 patent/US9544570B2/en active Active
Patent Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPH02171740A (ja) * | 1988-12-26 | 1990-07-03 | Minolta Camera Co Ltd | ステレオ写真用カメラ |
| JP2001074432A (ja) * | 1999-09-08 | 2001-03-23 | Fuji Xerox Co Ltd | 画像撮像装置 |
| JP2009276294A (ja) * | 2008-05-16 | 2009-11-26 | Toshiba Corp | 画像処理方法 |
| WO2011142062A1 (ja) * | 2010-05-11 | 2011-11-17 | パナソニック株式会社 | 3次元撮像装置 |
| WO2011151948A1 (ja) * | 2010-06-02 | 2011-12-08 | パナソニック株式会社 | 3次元撮像装置 |
Cited By (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPWO2022244176A1 (ja) * | 2021-05-20 | 2022-11-24 | ||
| WO2022244176A1 (ja) * | 2021-05-20 | 2022-11-24 | 日本電気株式会社 | 欠品検出装置、欠品検出方法、及びプログラム |
| JP7544269B2 (ja) | 2021-05-20 | 2024-09-03 | 日本電気株式会社 | 欠品検出装置、欠品検出方法、及びプログラム |
Also Published As
| Publication number | Publication date |
|---|---|
| US20130107009A1 (en) | 2013-05-02 |
| CN102918355A (zh) | 2013-02-06 |
| CN102918355B (zh) | 2017-05-31 |
| JP5927570B2 (ja) | 2016-06-01 |
| US9544570B2 (en) | 2017-01-10 |
| JPWO2012144162A1 (ja) | 2014-07-28 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| JP5421365B2 (ja) | 3次元撮像装置 | |
| JP5227368B2 (ja) | 3次元撮像装置 | |
| JP5879549B2 (ja) | ライトフィールド撮像装置、および画像処理装置 | |
| JP5923754B2 (ja) | 3次元撮像装置 | |
| JP5927570B2 (ja) | 3次元撮像装置、光透過部、画像処理装置、およびプログラム | |
| US20170264811A1 (en) | Phase detection autofocus using opposing filter masks | |
| JP5406151B2 (ja) | 3次元撮像装置 | |
| US20120112037A1 (en) | Three-dimensional imaging device | |
| CN105872525A (zh) | 图像处理装置和图像处理方法 | |
| JP5186517B2 (ja) | 撮像装置 | |
| JP5995084B2 (ja) | 3次元撮像装置、撮像素子、光透過部、および画像処理装置 | |
| WO2013136809A1 (ja) | 画像処理装置、撮像装置および画像処理プログラム | |
| JP5507362B2 (ja) | 3次元撮像装置および光透過板 | |
| JP5406163B2 (ja) | 3次元撮像装置および画像処理装置 | |
| JP2019075703A (ja) | 偏光撮像装置 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| WWE | Wipo information: entry into national phase |
Ref document number: 201280001375.9 Country of ref document: CN |
|
| ENP | Entry into the national phase |
Ref document number: 2013510864 Country of ref document: JP Kind code of ref document: A |
|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 12773977 Country of ref document: EP Kind code of ref document: A1 |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 13807059 Country of ref document: US |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| 122 | Ep: pct application non-entry in european phase |
Ref document number: 12773977 Country of ref document: EP Kind code of ref document: A1 |