HK1208295B - Image processing apparatus, image processing method and image pickup apparatus having the image processing apparatus - Google Patents
- Publication number: HK1208295B
- Application number: HK15108853.2A
- Authority: HK (Hong Kong)
Description
Technical Field
The present invention relates to an image processing apparatus, an image processing method, a program, and an image pickup apparatus having the image processing apparatus, and more particularly to an image processing apparatus, an image processing method, and a program for processing a parallax image or image data for refocusing, and an image pickup apparatus having the image processing apparatus.
Background
An image pickup apparatus has been proposed that divides the exit pupil of an image pickup lens into a plurality of areas and can simultaneously capture a plurality of parallax images corresponding to the divided pupil areas.
PTL 1 discloses an image pickup apparatus having a two-dimensional image pickup element in which one microlens corresponds to one pixel and each pixel is formed of a plurality of divided photoelectric conversion units. The divided photoelectric conversion units receive light from different partial pupil areas of the exit pupil of the image pickup lens through the single microlens, thereby performing pupil division. From the image signals generated by photoelectrically converting the object light received by each of the divided photoelectric conversion units, a plurality of parallax images corresponding to the divided partial pupil areas can be formed. PTL 2 discloses a technique in which a captured image is generated by adding, pixel by pixel, all the image signals output from the divided photoelectric conversion units.
The plurality of captured parallax images correspond to Light Field (LF) data serving as spatial distribution information and angular distribution information of light intensity. NPL 1 discloses a refocus technique of changing the focus position of a captured image after image capturing by forming an image on a virtual focus plane different from the image capturing plane using acquired LF data.
List of documents
Patent document
PTL 1: U.S. Pat. No. 4,410,804
PTL 2: Japanese Unexamined Patent Publication No. 2001-083407
Non-patent document
NPL 1: Stanford Tech Report CTSR 2005-02, 1 (2005)
Disclosure of Invention
Technical problem
However, there is a problem: since LF data is composed of a plurality of parallax images and contains, for each pixel, angular distribution information in addition to the spatial distribution information of light intensity, its data amount is large.
The present invention has been made in view of the above-mentioned problems, and an object of the present invention is to provide an image processing apparatus and method that can hold necessary information and suppress the data amount of LF data.
Solution to the problem
According to the present invention, there is provided an image processing apparatus for processing image pickup signals of sub-pixels acquired by an image pickup element configured by arranging a plurality of pixels, each of which is composed of a plurality of sub-pixels for receiving light passing through different partial pupil areas of a focusing optical system, and for generating image data of a captured image picked up by the image pickup element, the image processing apparatus comprising: a region setting unit configured to set at least one region on the captured image; an addition processing unit configured to perform addition processing on the image pickup signals of the sub-pixels of the set region; and an image processing unit configured to obtain focus information based on the image pickup signals of the sub-pixels of the region set by the region setting unit, control the region setting unit to set a first region and a second region different from the first region based on the obtained focus information, and control the addition processing unit to perform addition processing of different degrees on the image pickup signals of the sub-pixels of the set first region and second region, thereby generating first image data and second image data as the image data of the captured image.
Advantageous effects of the invention
According to the image processing apparatus of the present invention, it is possible to suppress the data amount of LF data while maintaining necessary information.
Other features of the present invention will become apparent from the following description of exemplary embodiments with reference to the attached drawings.
Drawings
Fig. 1 is a block diagram of an image capturing apparatus to which image processing according to an embodiment of the present invention is applied.
Fig. 2 is a diagram schematically showing a pixel array in the image pickup element according to the embodiment of the present invention.
Figs. 3A and 3B are diagrams illustrating a pixel structure of an image pickup element according to an embodiment of the present invention.
Figs. 4A, 4B, 4C, and 4D are diagrams showing a pixel structure optically substantially equivalent to that of the image pickup element according to the embodiment of the present invention.
Fig. 5 is a diagram conceptually showing a relationship between pixels and pupil division in the image pickup element according to the embodiment of the present invention.
Fig. 6 is a diagram conceptually showing a relationship between the image pickup element and pupil division according to the embodiment of the present invention.
Fig. 7 is a diagram conceptually showing an amount of image shift and an amount of defocus between parallax images in the image pickup element according to the embodiment of the present invention.
Fig. 8 is a diagram conceptually showing a relationship between sub-pixels and angle information that can be acquired in the image pickup element according to the embodiment of the present invention.
Figs. 9A and 9B are diagrams conceptually illustrating a refocus process in the image pickup element according to the embodiment of the present invention.
Fig. 10 is a diagram showing a flowchart of an image processing operation according to the first embodiment of the present invention.
Fig. 11 is a diagram schematically showing the flow of the image processing operation according to the first embodiment of the present invention.
Fig. 12 is a diagram showing a readout circuit of an image pickup element according to an embodiment of the present invention.
Fig. 13 is a diagram showing a structure of recording data according to the first embodiment of the present invention.
Fig. 14 is a diagram showing another arrangement example of the first region and the second region in the first embodiment of the present invention.
Fig. 15 is a diagram showing a flowchart of an image processing operation according to the second embodiment of the present invention.
Fig. 16 is a diagram schematically showing the flow of the image processing operation according to the second embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention are described in detail below based on the drawings.
Example 1
Fig. 1 is a block diagram showing the configuration of a camera serving as an image pickup apparatus to which an image processing apparatus and method according to the present invention are applied. The image pickup apparatus has an image pickup element that can acquire LF data as described later.
In the figure, the first lens group 101 is disposed at the front end of the focusing optical system and is held so as to be movable back and forth in the optical axis direction. The aperture shutter 102 adjusts the light amount at the time of image capturing by varying its aperture diameter, and also functions as a shutter for adjusting the exposure time when capturing a still image. The aperture shutter 102 and the second lens group 103 move back and forth together in the optical axis direction and, interlocked with the back-and-forth movement of the first lens group 101, provide a magnification-varying (zoom) function.
The third lens group 105 performs focus adjustment by moving backward/forward in the optical axis direction. The optical low-pass filter 106 is an optical element for reducing false colors and moire patterns of a captured image. The image pickup element 107 is constituted by a two-dimensional CMOS photosensor and peripheral circuits, and is arranged on a focusing plane of the focusing optical system.
To perform the magnification-varying operation, the zoom actuator 111 rotates a cam barrel (not shown) and drives the first lens group 101 and the third lens group 105 forward and backward in the optical axis direction. To adjust the image capturing light amount, the aperture shutter actuator 112 controls the aperture diameter of the aperture shutter 102 and also controls the exposure time when capturing a still image. For focus adjustment, the focus actuator 114 drives the third lens group 105 back and forth in the optical axis direction.
Although a flash illumination apparatus using a xenon tube is preferable as the electronic flash 115 for illuminating the object at the time of image capturing, an illumination apparatus having a continuous light emitting LED may be used. The AF auxiliary light unit 116 projects an image of a mask having a predetermined opening pattern to the field of view through a light projection lens, thereby improving focus detection capability for a dark object or a low-contrast object.
The CPU 121 provided in the camera performs various types of control over the camera body. The CPU 121 has an arithmetic operation unit, a ROM, a RAM, an A/D converter, a D/A converter, a communication interface circuit, and the like. The CPU 121 loads and executes a predetermined program stored in the ROM, thereby driving and controlling the various circuits provided in the camera and realizing a series of operations such as AF, image capturing, image processing, and recording. The CPU 121 is therefore a unit constituting the image processing apparatus according to the present invention, and performs image processing control in the image processing apparatus.
The electronic flash control unit 122 controls on/off of the electronic flash 115 in synchronization with the image capturing operation. The auxiliary light driving unit 123 controls on/off of the AF auxiliary light unit 116 in synchronization with the focus detection operation. The image pickup element driving unit 124 controls the image pickup operation of the image pickup element 107, A/D-converts the acquired image pickup signal, and sends it to the CPU 121. The image processing unit 125 performs processing such as gamma conversion, color interpolation, and JPEG compression on the image signal acquired by the image pickup element 107.
The focus driving unit 126 drives the focus actuator 114 based on the focus detection result, and drives the third lens group 105 forward and backward in the optical axis direction to perform focus adjustment. The aperture shutter driving unit 128 drives the aperture shutter actuator 112 and controls the opening of the aperture shutter 102. The zoom driving unit 129 drives the zoom actuator 111 according to a zoom operation by the photographer.
The display unit 131 such as an LCD displays information about an image capturing mode of the camera, a preview image before image capturing, an image for confirmation after image capturing, a focus state display image at the time of focus detection, and the like. The operation switch group 132 is constituted by a power switch, a release (image pickup trigger) switch, a zoom operation switch, an image pickup mode selection switch, and the like. The captured image is recorded in the detachable flash memory 133 in a predetermined recording format by the recording unit 134.
Next, an image pickup element 107 provided for the image pickup apparatus is explained.
Fig. 2 schematically illustrates a pixel array and a sub-pixel array of the image pickup element 107 according to the present embodiment.
Fig. 2 shows the pixel array of the two-dimensional CMOS sensor (image pickup element) according to the present embodiment in a range of 4 rows × 4 columns. Since each pixel of the image pickup element according to the present embodiment is divided into 4 × 4 sub-pixels, the figure corresponds to a range of 16 rows × 16 columns when the sub-pixel array is considered. The actual image pickup element is not limited to the pixel array (4 rows × 4 columns; sub-pixel array of 16 rows × 16 columns) shown in fig. 2; many pixels are arranged on the light receiving plane so that an object image can be captured. In the present embodiment, the pixel period ΔX is 9.2 μm and the number of effective pixels NLF is 10.14 megapixels (2600 rows in the vertical direction × 3900 columns in the lateral direction), while the sub-pixel period Δx is 2.3 μm and the number of effective sub-pixels N is 162 megapixels (10400 rows in the vertical direction × 15600 columns in the lateral direction).
In the present embodiment, in the pixel group 200 (2 rows × 2 columns) shown in fig. 2, the pixel 200R having the spectral sensitivity of R (red) is arranged at the upper left position, the pixels 200G having the spectral sensitivity of G (green) are arranged at the upper right and lower left positions, and the pixel 200B having the spectral sensitivity of B (blue) is arranged at the lower right position. Each pixel is composed of Nθ × Nθ (4 rows × 4 columns) sub-pixels 201 to 216.
Fig. 3A illustrates a plan view of one pixel 200G of the image pickup element illustrated in fig. 2, viewed from the light receiving side (+z side) of the image pickup element. Fig. 3B shows a cross-sectional view taken along line 3B-3B in fig. 3A, viewed from the -y side.
As shown in figs. 3A and 3B, in the pixel 200G of the present embodiment, a microlens 350 for condensing incident light onto the light receiving plane of the pixel is formed, and photoelectric conversion units 301 to 316 are formed by dividing the pixel into Nθ (four) regions in the x direction and Nθ (four) regions in the y direction. The photoelectric conversion units 301 to 316 correspond to the sub-pixels 201 to 216, respectively.
Each of the photoelectric conversion units 301 to 316 may be a pin structure photodiode in which an intrinsic layer is interposed between a p-type layer and an n-type layer, or a pn junction photodiode in which an intrinsic layer is omitted as necessary.
In each pixel, a color filter 360 is formed between the microlens 350 and the photoelectric conversion units 301 to 316. The spectral transmittance of the color filter may be changed for each sub-pixel, or the color filter may be omitted as needed.
Light incident on the pixel 200G shown in fig. 3A and 3B is condensed by the microlens 350, subjected to color separation by the color filter 360, and thereafter, received by the photoelectric conversion units 301 to 316.
In the photoelectric conversion units 301 to 316, electrons and holes are generated by pair generation (pair production) according to the light receiving amount, and are separated by a depletion layer. Thereafter, electrons of negative charge are accumulated in an n-type layer (not shown), and on the other hand, holes are discharged to the outside of the image pickup element through a p-type layer connected to a constant voltage source (not shown).
In the present embodiment, as shown in fig. 3B, capacitor sections (FD: floating diffusion) 320 and transfer gates 330 are formed on the left and right sides of every two photoelectric conversion units. The wiring layer 340 is formed on the microlens 350 side of the capacitor sections (FD) 320.
The electrons accumulated in the n-type layers (not shown) of the photoelectric conversion units 301 to 316 are transferred to the capacitor section (FD)320 through the transfer gate 330, converted into a voltage signal, and output as a photoelectric conversion signal.
In this embodiment, as shown in figs. 3A and 3B, the microlens 350 is formed of Nθ × Nθ (4 × 4) sub-microlenses whose optical axes (vertices) are offset in different directions, adjacent sub-microlenses being in line contact with each other. The alternate long and short dash lines in fig. 3B indicate the optical axes (vertices) of the sub-microlenses. The photoelectric conversion unit is divided into Nθ × Nθ (4 × 4) regions, and the capacitor sections (FD) 320 are arranged adjacent to the photoelectric conversion units 301 to 316, in regions on which the light condensed by the microlens is not incident. Furthermore, the wiring layer 340, which also serves as a light shielding layer, is formed between the microlens 350 and the FD 320.
Fig. 4A and 4B show schematic cross-sectional and plan views of a pixel of the present embodiment constituted by the microlens 350 including a plurality of sub-microlenses whose respective optical axes (apexes) are deviated and the plurality of divided photoelectric conversion units 301 to 316 shown in fig. 3A and 3B. Fig. 4C and 4D show schematic cross-sectional views and schematic plan views of a pixel structure which is substantially optically equivalent to the structure of the pixel in the present embodiment. If the pixel structure of the present embodiment shown in fig. 4A and 4B is reconstructed in such a manner that the optical axes (vertices) of all the sub-microlenses constituting the microlens overlap each other, the pixel structure becomes optically substantially equivalent to the pixel structure shown in fig. 4C and 4D. It is possible to optically suppress the influence of the insulating region between the photoelectric conversion units 306, 307, 310, and 311 in the central portion of the pixel, and the influence of the FD 320 and the region of the wiring layer 340 that also serves as a light-shielding layer.
Fig. 5 conceptually shows the optical correspondence between the photoelectric conversion units and the pupil division for the pixel structure shown in figs. 4C and 4D, which is optically substantially equivalent to the pixel structure of the present embodiment with the insulating region and the wiring layer in the central portion of the pixel eliminated. The figure shows the cross section taken along line 4C-4C of that pixel structure, viewed from the +y side, together with the exit pupil plane of the focusing optical system. In fig. 5, to match the coordinate axes of the exit pupil plane, the x-axis and y-axis of the cross-sectional view are inverted compared with those in figs. 3A, 3B, and 4A to 4D.
The image pickup element is arranged in the vicinity of the focal plane of the image pickup lens (focusing optical system). A light flux from an object passes through the exit pupil 400 of the focusing optical system and enters each pixel. The partial pupil areas 501 to 516 are substantially conjugate, via the microlens, with the Nθ × Nθ (4 × 4) divided photoelectric conversion units 301 to 316 (sub-pixels 201 to 216), and each represents the partial pupil area from which the corresponding photoelectric conversion unit (sub-pixel) can receive light. The pupil area 500 is the pupil area over the entire pixel 200G from which light can be received when all of the Nθ × Nθ (4 × 4) divided photoelectric conversion units 301 to 316 (sub-pixels 201 to 216) are combined.
Since the pupil distance is several tens of millimeters while the diameter of the microlens 350 is several micrometers, the aperture value of the microlens 350 is several tens of thousands, and diffraction blur at the level of several tens of millimeters occurs. Therefore, the image on the light receiving plane of the photoelectric conversion unit does not become a sharply defined pupil area or partial pupil area, but becomes a pupil intensity distribution (a distribution of the light receiving ratio over the incident angle).
Fig. 6 conceptually illustrates the correspondence between the image pickup element and the pupil division in the present embodiment. Light beams that have passed through the different partial pupil areas 501 to 516 enter the pixels of the image pickup element at different angles and are received by the Nθ × Nθ (4 × 4) divided photoelectric conversion units 301 to 316 (sub-pixels 201 to 216). LF data showing the spatial distribution and the angular distribution of light intensity can thus be acquired by the image pickup element of the present embodiment. That is, the LF data in the present embodiment is acquired by an image pickup element formed by arranging a plurality of pixels, each of which is provided with a plurality of sub-pixels for receiving light beams passing through different partial pupil areas of the focusing optical system.
From the LF data, a parallax image corresponding to a specific one of the partial pupil areas 501 to 516 of the focusing optical system is acquired by selecting the signal of a specific sub-pixel from the sub-pixels 201 to 216 of each pixel. For example, by selecting the signal of the sub-pixel 209 (photoelectric conversion unit 309) for each pixel, a parallax image corresponding to the partial pupil area 509 of the focusing optical system, with a resolution equal to the effective pixel count, can be acquired. The same applies to the other sub-pixels. Therefore, in the present embodiment, a plurality of parallax images (the pupil division number Np = Nθ × Nθ of them), one for each different partial pupil area, are acquired by the image pickup element configured by arranging a plurality of pixels, each of which is composed of a plurality of sub-pixels for receiving light beams passing through different partial pupil areas of the focusing optical system.
By adding the signals of all the sub-pixels 201 to 216 for each pixel, a captured image having a resolution of the effective number of pixels can be generated.
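As an illustration of these two operations, selecting one sub-pixel per pixel to form a parallax image and adding all sub-pixels to form the captured image can be sketched as follows. This is a minimal sketch under assumed array shapes and names, not the patented implementation:

```python
import numpy as np

# Assumed layout: LF data as (rows, cols, N_theta, N_theta),
# i.e. one 4x4 block of sub-pixel signals per pixel.
lf = np.random.rand(2600, 3900, 4, 4)  # stand-in for real LF data

# Parallax image for one partial pupil area: pick the same
# sub-pixel (here index (2, 0), standing in for sub-pixel 209)
# from every pixel; resolution equals the effective pixel count.
parallax_209 = lf[:, :, 2, 0]

# Captured image: add the signals of all 16 sub-pixels per pixel.
captured = lf.sum(axis=(2, 3))
```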
Next, a relationship between the defocus amount and the image shift amount of LF data acquired from the image pickup element according to the present embodiment is described.
Fig. 7 is a diagram conceptually showing the relationship between the amount of image shift and the amount of defocus between parallax images. The image pickup element (not shown) according to the present embodiment is arranged on the image pickup plane 800, and the exit pupil 400 of the focusing optical system is divided into the Np (16) partial pupil areas 501 to 516 in the same manner as in figs. 5 and 6.
The defocus amount d is defined such that its magnitude |d| is the distance from the imaging position of the object to the image pickup plane, its sign is negative (d < 0) in the front-focus state in which the imaging position of the object is on the object side of the image pickup plane, and positive (d > 0) in the rear-focus state in which the imaging position is on the opposite side of the image pickup plane. In the in-focus state, in which the imaging position of the object is on the image pickup plane, d = 0. Fig. 7 shows an example in which the object 701 is in the in-focus state (d = 0) and an example in which the object 702 is in the front-focus state (d < 0). The front-focus state (d < 0) and the rear-focus state (d > 0) are collectively referred to as the defocus state (|d| > 0).
In the front-focus state (d < 0), among the light fluxes from the object 702, the light flux that has passed through the partial pupil area 509 (similarly for 501 to 516) converges once, then spreads over a width Γ09 (Γ01 to Γ16) around the center-of-gravity position G09 (G01 to G16) of the light flux, and becomes a blurred image on the image pickup plane 800. The blurred image is received by the sub-pixels 209 (201 to 216) constituting each pixel arranged in the image pickup element, and a parallax image is generated. Therefore, in the parallax image generated from the signals of the sub-pixels 209 (201 to 216), the object 702 is recorded at the center-of-gravity position G09 (G01 to G16) as an object image blurred with the blur width Γ09 (Γ01 to Γ16). The blur width Γ09 (Γ01 to Γ16) of the object image increases roughly in proportion to the magnitude |d| of the defocus amount d. Similarly, the magnitude |p| of the image shift amount p of the object images between the parallax images (the difference between the center-of-gravity positions of the light fluxes, e.g., G09 - G12) also increases roughly in proportion to the magnitude |d| of the defocus amount d. In the rear-focus state (d > 0), the same phenomenon occurs, although the image shift direction of the object images between the parallax images is opposite to that in the front-focus state. In the in-focus state (d = 0), the center-of-gravity positions of the object images between the parallax images coincide (p = 0), and no image shift occurs.
Therefore, in the LF data of the present invention, in association with an increase in the magnitude of the defocus amount of the LF data, the magnitude of the image shift amount between the plurality of parallax images for each different partial pupil area generated from the LF data increases.
In the present embodiment, focus detection by the image pickup plane phase difference method is performed by calculating the amount of image shift between parallax images with a correlation arithmetic operation, using the relationship that the magnitude of the image shift amount between parallax images increases with the magnitude of the defocus amount of the LF data. This calculation may be performed by the image processing unit 125 under the control of the CPU 121, but focus detection may also be performed by a phase difference focus detection apparatus configured separately from the image pickup element, as necessary. Focus detection by the contrast method can also be performed using the LF data.
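The correlation arithmetic operation is not spelled out here; a minimal sketch of one common variant (a sum-of-absolute-differences search over candidate shifts between two 1-D parallax signals, with assumed function and parameter names) is:

```python
import numpy as np

def image_shift(sig_a, sig_b, max_shift=16):
    """Estimate the image shift p between two 1-D parallax signals
    by searching for the offset with minimum sum of absolute
    differences (SAD). Requires len(sig) > 2 * max_shift."""
    best_p, best_cost = 0, np.inf
    for p in range(-max_shift, max_shift + 1):
        a = sig_a[max_shift + p : len(sig_a) - max_shift + p]
        b = sig_b[max_shift : len(sig_b) - max_shift]
        cost = np.abs(a - b).sum()
        if cost < best_cost:
            best_p, best_cost = p, cost
    return best_p

# The defocus amount then follows from d ≈ K * p, where the
# conversion factor K depends on the pupil-division geometry.
```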
Next, a description is given of a refocus process using the above-described LF data acquired from the image pickup element and a refocus possible range in this case.
Fig. 8 conceptually shows the relationship between the sub-pixels and the angle information that can be acquired in the present embodiment. Assuming that the sub-pixel period is Δx and the number of divisions of sub-pixels per pixel is Np = Nθ × Nθ, the pixel period is ΔX = NθΔx, as shown in the figure. Assuming that the angular resolution is Δθ and the angle subtending the exit pupil of the focusing optical system is θ, Δθ = θ/Nθ. Under the paraxial approximation, with the aperture value of the focusing optical system equal to F, the relational expression F ≈ 1/θ substantially holds. The sub-pixels 212 to 209 divide, among themselves, all the light beams incident on the pixel in the range from incident angle θ0 to incident angle θ3; a light beam whose incident angle spans the width of the angular resolution Δθ is incident on each sub-pixel.
Fig. 9A shows a conceptual diagram of the refocus processing in the present embodiment.
In fig. 9A, each pixel Xi (i = 0 to NLF - 1) of the image pickup element arranged on the image pickup plane is schematically illustrated by a line segment. The i-th pixel Xi receives, at each sub-pixel, a light beam incident at an angle θa (a = 0 to Nθ - 1). The received sub-pixel signal is denoted Li,a (a = 0 to Nθ - 1). With the image pickup element of the present embodiment, LF data serving as information on the spatial distribution and the angular distribution of light intensity can be acquired, and the LF data is composed of a plurality of parallax images, one for each different partial pupil area.
After image capturing, an image on a virtual focus plane different from the image pickup plane on which the image pickup element is arranged can be generated from the acquired sub-pixel signals Li,a of the LF data (refocus processing). All the sub-pixel signals Li,a are translated along their respective angles θa from the image pickup plane to the virtual focus plane, assigned to the virtual pixels on the virtual focus plane, and subjected to weighted addition, whereby a refocused image on the virtual focus plane can be generated. The coefficients used in the weighted addition are all positive and their sum equals 1.
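A minimal shift-and-add sketch of this refocus processing, in one dimension and with assumed names (integer shifts stand in for the parallel translation along each angle θa), is:

```python
import numpy as np

def refocus_1d(sub_signals, shifts, weights=None):
    """Shift-and-add refocusing along one dimension (a sketch).

    sub_signals: (N_theta, N_pixels) array of sub-pixel signals L[i, a]
    shifts:      per-sub-pixel translation in pixels for the chosen
                 virtual focus plane (proportional to tan(theta_a))
    """
    n_sub, n_pix = sub_signals.shape
    if weights is None:                      # all positive, summing to 1
        weights = np.full(n_sub, 1.0 / n_sub)
    out = np.zeros(n_pix)
    for a in range(n_sub):
        out += weights[a] * np.roll(sub_signals[a], int(shifts[a]))
    return out
```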
Fig. 9B shows a conceptual diagram of the refocusing possible range in the present embodiment. Assuming that δ represents the allowable circle of confusion and the aperture value of the focusing optical system is F, the depth of field at aperture value F is ±Fδ. On the other hand, the effective aperture value F09 (F01 to F16) of a narrow partial pupil area after the (Nθ × Nθ) division is F09 = NθF, which is darker. The effective depth of field of each parallax image is therefore Nθ times deeper, ±NθFδ, and the in-focus range is widened by a factor of Nθ. Within the effective depth of field ±NθFδ, an object image in focus is acquired for each parallax image, so that after image capturing the focus position can be readjusted (refocused) by the refocus processing described above, which translates each parallax image along its angle θa. Outside the effective depth of field ±NθFδ, only blurred object images are acquired, and the focus position cannot be readjusted (refocused).
Therefore, the defocus amount d from the image pickup plane for which the focus position can be readjusted (refocused) after image capturing is limited, and the refocus possible range of the defocus amount d is substantially the range shown by the following formula (1):
|d| ≤ NθFδ ... (1)
Here, the allowable circle of confusion δ is specified by, for example, δ = 2ΔX (the reciprocal of the Nyquist frequency 1/(2ΔX) of the pixel period ΔX).
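To put rough numbers on formula (1) (the aperture value here is an assumption for illustration; the patent does not fix one): with Nθ = 4, F = 2.8, and δ = 2ΔX = 2 × 9.2 μm = 18.4 μm, the refocusable range would be |d| ≤ 4 × 2.8 × 18.4 μm ≈ 0.21 mm on either side of the image pickup plane.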
Next, an image processing method for holding the information necessary for refocusing from the LF data acquired from the image pickup element of the present embodiment and for generating compressed LF data with a suppressed data amount will be described with reference to figs. 10 to 14. The operations described with reference to fig. 10 are performed by the CPU 121 (program) and the image processing unit 125 serving as the image processing unit of the present invention. Although the addition of signal data (addition processing unit) is also performed by the image processing unit 125, if this operation is at least partially performed by addition readout in the image pickup element 107, the processing load can be reduced (shared).
Fig. 10 is a diagram showing a flowchart of the image processing operation of the present embodiment. Fig. 11 is a diagram schematically showing the flow of the image processing operation of the present embodiment using a captured image. In fig. 11, (A) to (F) correspond to steps (A) to (F) in fig. 10, respectively, and the figure schematically shows, by way of example, regions set on a display screen that undergo different image processing. Fig. 12 shows the structure of the pixel circuit of the image pickup element according to the present embodiment, a structure in which the photoelectric conversion signals of the sub-pixels in a pixel can be read out with or without addition. Although fig. 12 shows the sub-pixel readout circuit of only one pixel for convenience of explanation, in reality the same circuit structure is provided for all the pixels arranged on the light receiving plane and used for capturing an image. The image processing of the present embodiment will be described below with reference to the reference labels A to F shown in figs. 10 and 11.
In step (A), as shown by 1120 in fig. 11, a focus detection region 1100 to be focused on is set in the image capturing region (setting unit).
Next, the LF data of the focus detection region 1100 is read out, focus detection by the image pickup plane phase difference method is performed, and the defocus amount d1100 of the focus detection region 1100 is calculated. According to the calculated defocus amount d1100, the lens is driven so that the object image in the focus detection region 1100 is in the in-focus state. The focus detection of the focus detection region 1100 may instead be performed by a phase difference focus detection apparatus configured separately from the image pickup element. Focus detection by the contrast method, which performs focus detection according to the contrast evaluation values of a plurality of refocused images generated using the LF data, may also be performed.
In step (B), as illustrated by 1130 in fig. 11, a first region 1101a including the focus detection region 1100 is set. The LF data of the set first region 1101a is read out, focus detection by the phase difference method is performed for each of a plurality of partial areas of the first region 1101a, and a defocus map (focus information) is calculated. Based on the calculated defocus map, the first region 1101a is set again so as to include the areas where the magnitude of the defocus amount d is not more than a predetermined value (|d| ≤ d0). After the first region 1101a is set again, the region excluding the first region 1101a from the entire region of the image pickup element is set as the second region 1102a. It is desirable to set the predetermined value d0 to d0 = NθFδ according to the refocusing possible range shown by formula (1), so that the first region 1101a includes the refocusable areas. However, the reference plane for the defocus amount d may be changed from the image pickup plane as necessary.
It can also be configured in the following way: as necessary, the second region 1102a excluding the focus detection region 1100 is set, the defocus map is calculated, and based on the calculated defocus map, the second region is set again and then the first region is set. That is, the second region 1102a is set again so as to include the areas where the magnitude of the defocus amount d is greater than the predetermined value d0 (|d| > d0), and the region excluding the second region 1102a from the entire region of the image pickup element is set as the first region 1101a.
Therefore, in the present embodiment, the first region 1101a and the second region 1102a are selected according to the defocus amount of the LF data.
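A minimal sketch of this region selection from a defocus map (the names and the mask representation are assumptions) is:

```python
import numpy as np

def set_regions(defocus_map, d0):
    """Split the frame into the first and second regions from a
    defocus map (a sketch of step (B)). Returns a boolean mask:
    True  -> first region  (|d| <= d0, keep full LF data),
    False -> second region (add all sub-pixels per pixel)."""
    return np.abs(defocus_map) <= d0

# d0 would typically be the refocusable limit N_theta * F * delta
# from formula (1).
```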
In a case where processing speed takes priority, such as image capturing of a moving object, the calculation of the defocus map and the resetting of the first region 1101a described above may be omitted as necessary.
It can also be configured in the following way: the setting of only the focus detection area 1100 is performed in step (a), and after the first area 1101a and the second area 1102a are set in step (B), the lens is driven so that the object image in the focus detection area 1100 is in a focused state.
In step (C), as shown in 1160 in fig. 11, image data corresponding to the first region is generated. First, LF data of the first region 1101a is read out as signal data of the sub-pixels 201 to 216 for each pixel. The readout signal data is used as it is as image data corresponding to the first area in the present embodiment.
The readout operation control for reading out the signal data of each sub-pixel of each pixel of the image pickup element in the present embodiment will now be described with reference to fig. 12. In the figure, P01 to P16 denote the photodiodes (PDs) corresponding to the sub-pixels 201 to 216 (photoelectric conversion units 301 to 316), and VDD and VSS denote the power supply voltages. The vertical scanning circuit 1202 supplies the transfer gate voltages for transferring the photoelectric conversion signals from the PDs to the FDs, the reset gate voltages applied to the reset lines 1208 and 1209 for resetting the PDs, the row selection voltages applied to the row selection lines 1210 and 1211, and the load voltage. The vertical output lines 1204 and 1205 are provided so that the photoelectric conversion signals transferred from the PDs to the FDs can be read out to the horizontal output line 1207 by the horizontal scanning circuit 1201.
First, in order to reset the PD of each sub-pixel, the vertical scanning circuit 1202 simultaneously turns on the transfer gate voltages and the reset gate voltages of all rows. Thereafter, the vertical scanning circuit 1202 simultaneously turns off the transfer gate voltages and the reset gate voltages, and the accumulation operation begins. During the accumulation operation, the PDs (P01 to P16) of the sub-pixels accumulate signal charges in their n-type layers according to the amount of received light.
Accumulation is performed for a predetermined time in synchronization with the mechanical shutter. Thereafter, the vertical scanning circuit 1202 first turns on and then turns off the corresponding transfer gate voltages, thereby transferring the signal charges of the sub-pixels P01, P03, P09, and P11 to their respective FDs.
Then, when the vertical scanning circuit 1202 turns on the row selection voltage of the corresponding row and the horizontal scanning circuit 1201 sequentially selects the vertical output lines 1204 and 1205, the signal charges of P01 and P03 are sequentially read out to the horizontal output line 1207 through the CDS (correlated double sampling) circuit 1203. Thereafter, the vertical scanning circuit 1202 returns the row selection voltage to off. Similarly, by turning on the row selection voltage of the other row and sequentially selecting the vertical output lines 1204 and 1205, the signal charges of P09 and P11 are sequentially read out to the horizontal output line 1207 through the CDS circuit 1203, and the row selection voltage is returned to off.
Then, the corresponding transfer gate voltages are turned on and off again, transferring the signal charges of the sub-pixels P05, P07, P13, and P15 to their respective FDs. The readout operation of these signal charges is the same as that of the sub-pixels P01, P03, P09, and P11.
Then, the corresponding transfer gate voltages are turned on and off again, transferring the signal charges of the sub-pixels P02, P04, P10, and P12 to their respective FDs. The readout operation of these signal charges is the same as that of the sub-pixels P01, P03, P09, and P11.
Finally, the corresponding transfer gate voltages are turned on and off again, transferring the signal charges of the sub-pixels P06, P08, P14, and P16 to their respective FDs. The readout operation of these signal charges is the same as that of the sub-pixels P01, P03, P09, and P11. By these readout operations, the photoelectric conversion signals of all the sub-pixels of one pixel can be read out individually.
By performing this readout operation on each pixel in the first region 1101a, the signal data of the sub-pixels 201 to 216 are read out individually, and non-added LF data is generated as the image data of the first region 1101a.
In step (D), as shown in 1140 in fig. 11, image data corresponding to the second region is generated. From the LF data of the second region 1102a, captured image data is generated by adding, in the addition unit of the image processing unit 125, the read-out signal data of all the sub-pixels 201 to 216 of each pixel. The CPU 121 synthesizes the LF data corresponding to the first region and the image data in which the signal data of all the sub-pixels corresponding to the second region are added, and stores the result in a recording medium as the image data of the final captured image in a predetermined recording format, for example as shown in fig. 13 described later.
In the present embodiment, in the first region, signals holding the LF data of each sub-pixel (non-addition) are generated from the LF data serving as the image pickup signals. On the other hand, in the second region, compressed LF data is generated as pixel signal data in which part or all of the sub-pixel LF data is added. As described above, the degree of addition of the LF data in the first region is smaller than that in the second region. That is, part of the image pickup signals of the sub-pixels acquired from the image pickup element is added in such a manner that the number of image pickup signals of sub-pixels per unit pixel in the image data of the captured image corresponding to the first region is larger than the number of image pickup signals of sub-pixels per unit pixel in the image data of the captured image corresponding to the second region, and the image data of the captured image is then generated.
That is, in the present embodiment, the degree of addition of the LF data in the first region of the captured image and the degree of addition of the LF data in the second region of the captured image are made different, the first image data and the second image data are generated, and the third image data of the final captured image is acquired.
The suppression effect of the data amount according to the present embodiment is explained.
Since LF data has angular distribution data in addition to spatial distribution data of light intensity for each of the NLF effective pixels, it is composed of sub-pixel data of the pupil division number Np = Nθ × Nθ for each pixel. Therefore, the number of effective sub-pixels is N = Nθ × Nθ × NLF, and the data amount N = Nθ × Nθ × NLF is significantly increased.
However, the defocus amount d of the image data that can be refocused after image capturing is limited to a value within the refocusing possible range shown by the formula (1). Therefore, in the area outside the refocusing possible range shown by the formula (1), even if the data amount increases and the LF data is acquired, the image data cannot be refocused.
Therefore, in the present embodiment, in the first region including the region where the defocus amount d is substantially within the refocus possible range, the acquired LF data is held as it is. In another second region, in order to suppress the amount of data, compressed LF data is generated in which LF data of sub-pixels are added for each pixel, thereby forming image sensing data. The LF data of the first region and the LF data of the second region are synthesized, thereby acquiring image data of a final captured image. Therefore, the amount of data can be suppressed while maintaining information necessary for refocusing.
For example, in the case of an image pickup element in which the number of effective pixels is 10.14 megapixels and the pupil division number is Np = 4 × 4 = 16, the number of effective sub-pixels of the LF data is approximately 162 megapixels, a data amount 16 times as large as the number of effective pixels. When 25% of the entire area of the image pickup element is set as the first region and 75% as the second region, and the present embodiment is applied, the data amount can be suppressed to about 30% of the originally acquired LF data while the information necessary for refocusing is maintained.
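The 30% figure follows directly from the areas and the degrees of addition; a quick check (a sketch, with the fractions taken from the example above):

```python
N_PUPIL = 16          # sub-pixels per pixel (4 x 4 pupil division)
first_frac = 0.25     # fraction of the frame kept as raw LF data
second_frac = 0.75    # fraction reduced to one added signal per pixel

signals_per_pixel = first_frac * N_PUPIL + second_frac * 1
print(signals_per_pixel / N_PUPIL)   # ~0.30, i.e. about 30% of raw LF data
```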
In the present embodiment, the LF data corresponding to the first region is held as it is without adding sub-pixels, all sub-pixels are added for each unit pixel in the LF data corresponding to the second region, and the final captured image data is generated from these two sets of data. However, the present invention is not limited to this; as long as the addition is performed in such a manner that the number of image pickup signals of sub-pixels per unit pixel for the image data corresponding to the first region is larger than that for the image data corresponding to the second region, the number of sub-pixels to be added (the number of signals retained per unit pixel) and the composition are not limited.
Fig. 13 shows the structure of recording data in the case where final image data of a captured image according to the present embodiment is stored in a recording apparatus or the like.
In the example of fig. 13, the degree of addition of the sub-pixels differs for each region as described above, and the storage areas for the image data of the captured image from the image pickup element 107 differ accordingly. That is, the recording area of the final image data of the captured image is constituted by a set of recording areas, one recording the image data of each region of the captured image. The number 1301 of the set regions is stored in the recording area of the final image data of the captured image. The recording area corresponding to the image data of each region is composed of: information 1302 and 1305 on the set regions (1101a, 1102a) (coordinate information showing the range of each region in the final captured image), the numbers 1303 and 1306 of sub-pixels (pupil division number) in these regions, and image data 1304 and 1307 sized according to the numbers of sub-pixels in these regions. Other constituent elements, such as the set focus detection region, may also be included as necessary. The generation and recording of the recording data in the present embodiment are performed to a recording medium such as the flash memory 133 by the operations of the image processing unit 125 and the recording unit 134 under the control of the CPU 121 of the image pickup apparatus in fig. 1.
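One way to picture this recording format is as a small container structure; the following is a sketch under assumed type names, mirroring the numbered elements 1301 to 1307, not a normative file layout:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class RegionRecord:
    region_info: Tuple[int, int, int, int]  # 1302/1305: coordinate range
    n_sub_pixels: int                       # 1303/1306: pupil division kept
    image_data: bytes                       # 1304/1307: data sized accordingly

@dataclass
class CapturedImageRecord:
    n_regions: int                          # 1301: number of set regions
    regions: List[RegionRecord] = field(default_factory=list)
```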
Next, image processing for generating refocused image data according to the compressed LF data in the present embodiment will be described using fig. 11. Also, the image processing is performed by the operation of the image processing unit 125 under the control of the CPU 121.
In step (E), as shown by 1170 in fig. 11, a predetermined refocusing amount is set for each region, and refocused image data corresponding to each region is generated. In the present embodiment, since all the sub-pixels in the second region are added, no refocus processing is performed there, and a refocused image according to the predetermined refocusing amount of the first region is generated from the LF data of the first region acquired in step (C).
In step (F), as shown in 1150 in fig. 11, final refocused image data of the entire image is generated. In the present embodiment, a final refocused image corresponding to the entire captured image is generated by combining the refocused image of the first region generated in step (E) and the image data of the second region generated in step (D).
As shown in fig. 14, the region setting for the LF data is not limited to the first region 1401b and the second region 1402b; third regions 1403b and 1404b, in which the degree of addition of the sub-pixels differs from that in the first and second regions, may also be set. In the third regions 1403b and 1404b in fig. 14, (2 × 2)-divided LF data is generated, constituted by the following signals: an addition signal A11 of the four sub-pixels P01, P02, P05, and P06; an addition signal A12 of the four sub-pixels P03, P04, P07, and P08; an addition signal A21 of the four sub-pixels P09, P10, P13, and P14; and an addition signal A22 of the four sub-pixels P11, P12, P15, and P16. The third region need not be a single region, and may be constituted by a plurality of regions, such as the third regions 1403b and 1404b in fig. 14. The same applies to the first region and the second region.
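A minimal sketch of this intermediate degree of addition, reducing one pixel's 4 × 4 sub-pixel block to the four signals A11, A12, A21, A22 (array layout assumed row-major, sub-pixel 201 at the top left):

```python
import numpy as np

def add_2x2(block):
    """Reduce a (4, 4) sub-pixel block to (2, 2) added signals.
    Output [0, 0] = P01 + P02 + P05 + P06 (= A11), and so on."""
    b = block.reshape(2, 2, 2, 2)   # group rows and columns in pairs
    return b.sum(axis=(1, 3))       # each output sums four sub-pixels
```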
In the image processing unit 125, any one of the first image data, the second image data, and the third image data, or a plurality of combinations thereof, may be subjected to dark correction, shading correction, demosaic correction, and the like.
Based on the third image data generated by the above-described image processing operation, an image is displayed by the display unit 131.
The present embodiment is an example of an image pickup apparatus having an image processing unit for performing the above-described image processing operation, and is likewise an example of a display apparatus having such an image processing unit.
With the above structure, necessary information is maintained, and the data amount of LF data can be significantly suppressed.
Example 2
Next, a second embodiment of the present invention will be described. The present embodiment is an embodiment for realizing suppression of the data amount in the case of generating parallax image data from LF data.
By using fig. 15 and 16, an image processing operation for holding information necessary for generating a plurality of parallax images for LF data according to the present embodiment and generating compressed LF data with a suppressed data amount is described.
Fig. 15 is a diagram showing a flowchart of the image processing operation of the present embodiment, in the same manner as fig. 10 of the first embodiment. Fig. 16 is a diagram schematically showing the flow of the image processing operation of the present embodiment using a captured image; it schematically shows regions set on a display screen and gives an example in which they become the targets of different image processing. In these figures, the same parts as those in figs. 10 and 11 are denoted by the same reference numerals. The operations described with reference to figs. 15 and 16 are also performed by the CPU 121 (program) and the image processing unit 125. Although the addition of signal data is also performed by the image processing unit 125, if the addition readout of the image pickup signals is at least partially performed in the image pickup element 107, the processing load can be reduced (shared).
Step (a) is the same as step (a) in fig. 10.
In step (B1), as shown at 1610 in fig. 16, a first region 1602c excluding the focus detection region 1100 is set. The LF data of the set first region 1602c is read out, focus detection by the phase difference method is performed for each of a plurality of partial areas in the first region 1602c, and an image shift amount map is calculated. Based on the calculated image shift amount map, the first region 1602c is set again so as to include the areas where the magnitude of the image shift amount p is not less than a predetermined value p0 (|p| ≥ p0). After the first region 1602c is set again, the region excluding the first region 1602c from the entire region of the image pickup element is set as the second region 1601c.
It can also be configured in the following way: as necessary, the second region 1601c including the focus detection region 1100 is set, the image shift amount map is calculated, and based on the calculated image shift amount map, the second region is set again and then the first region is set. That is, the second region 1601c is set again so as to include the areas where the magnitude of the image shift amount p is smaller than the predetermined value p0 (|p| < p0), and the region excluding the second region 1601c from the entire region of the image pickup element is set as the first region 1602c.
Therefore, in the second embodiment, the first region and the second region are selected in accordance with the image shift amount of the LF data.
In a case where processing speed takes priority, such as image capturing of a moving object, the calculation of the image shift amount map and the resetting of the first region 1602c described above may be omitted as necessary.
In step (C1), as shown at 1620 in fig. 16, the signal data of the sub-pixels 201 to 216 are individually read out as the LF data of the first region 1602c. The operation for individually reading out the signal data of the sub-pixels of each pixel in the present embodiment is the same as that in the first embodiment.
For each pixel of the first region 1602c, signal data of the sub-pixels 201 to 216 is read out, and image data holding LF data is generated.
In step (D1), as shown by 1630 in fig. 16, for each pixel of the second region 1601c, LF data of all the sub-pixels 201 to 216 are added. The addition readout operation of the LF data of the sub-pixels in each pixel of the second region 1601c in the present embodiment is the same as that of the first embodiment.
For each pixel of the second region 1601c, the LF data of all the sub-pixels 201 to 216 are added, and captured image data is generated.
In the present embodiment, in the first region, signals holding the LF data as it is (non-addition) are generated from the LF data serving as the image pickup signals. On the other hand, in the second region, compressed LF data is generated as pixel signal data in which the LF data of all the sub-pixels is added. As described above, the degree of addition of the LF data in the second region is greater than that in the first region.
When a plurality of parallax images (viewpoint images) is generated from LF data, the parallax is not significant in areas where the magnitude of the image shift amount p is small, so the same captured image can be shared among the plurality of parallax images in such areas.
Therefore, in the present embodiment, in the second region, which consists of the areas of the captured image where the magnitude of the image shift amount p between sub-pixels corresponding to different partial pupil areas is smaller than the predetermined value p0, the LF data of the sub-pixels of each pixel is added in order to suppress the data amount, and compressed LF data is generated. In the other, first region, the acquired LF data is held as it is and image pickup data is generated. Therefore, with the structure of the present embodiment, the information necessary for generating a plurality of parallax images is retained, and the data amount can be suppressed.
An image processing operation for generating a plurality of parallax images from compressed LF data in the present embodiment is described below by using fig. 16.
In steps (E1) to (E16), as shown at 1631 to 1646 in fig. 16, parallax images of the first region are generated from the LF data of the first region acquired in step (C1) by selecting the signal of a specific sub-pixel among the sub-pixels 201 to 216 of each pixel. A plurality of parallax images is then generated by synthesizing each generated parallax image of the first region with the captured image of the second region generated in step (D1).
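A minimal sketch of this synthesis for one viewpoint (the array shapes, names, and brightness normalization are assumptions; a single sub-pixel signal is roughly 1/Np of the all-added captured image, so the second-region image is rescaled here):

```python
import numpy as np

def synthesize_parallax(lf_first, captured, first_mask, sub=(2, 0)):
    """Build one full-frame parallax image (embodiment 2 sketch).

    lf_first:   (H, W, 4, 4) LF data, valid inside the first region
    captured:   (H, W) all-sub-pixels-added image of the whole frame
    first_mask: (H, W) bool, True inside the first region
    sub:        sub-pixel index selecting the viewpoint
    """
    n_pupil = lf_first.shape[2] * lf_first.shape[3]
    out = np.asarray(captured, dtype=float) / n_pupil  # match sub-pixel level
    out[first_mask] = lf_first[first_mask, sub[0], sub[1]]
    return out
```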
Since the recording and display of the third image data are the same as those in the first embodiment, a description thereof is omitted here.
With the above-described structure, even in the present embodiment, necessary information is maintained, and the data amount of LF data can be significantly suppressed.
The functions of the processes shown in fig. 10 and 15 of the above-described embodiment are realized by the following methods: a program for realizing the functions of these processes is read out from the memory, and the CPU 121 executes the program.
The present invention is not limited to the above-described structure, and all or part of the functions in the processes shown in fig. 10 and 15 may be realized by dedicated hardware. The memory may be constituted by a magneto-optical disk device, a nonvolatile memory such as a flash memory, a read-only recording medium such as a CD-ROM, or a volatile memory other than a RAM. The memory may also be constituted by a computer-readable writable recording medium combining them.
These processes can be performed by the following methods: a program for realizing the functions of the processing shown in fig. 10 and 15 is recorded in a computer-readable recording medium, and the program recorded in the recording medium is read out and stored in a computer system and executed. It is assumed that the "computer system" described herein includes an OS and hardware such as peripheral devices and the like. Specifically, the present invention includes the following cases: first, a program read out from a storage medium is written into a memory provided for a function expansion board inserted into a computer or a function expansion unit connected to the computer, a CPU or the like provided for the function expansion board or the function expansion unit executes part or all of actual processing based on instructions of the program, and the functions of the above-described embodiments are realized by these processes.
The "computer-readable recording medium" means a portable medium such as a flexible disk, a magneto-optical disk, a ROM, or a CD-ROM, or a storage device such as a hard disk built in a computer system or the like. Further, it is assumed that the "computer-readable recording medium" indicates a medium that holds the program for a predetermined time. For example, it may be a volatile memory (RAM) in a computer system serving as a server or a client in the case where the program is transmitted through a network such as the internet or a communication line such as a telephone line.
The program may be transmitted from a computer system that stores it in a storage device or the like to another computer system through a transmission medium, or by a transmission wave in a transmission medium. The "transmission medium" here means a medium having an information transmission function, such as a network (communication network) like the Internet or a communication line such as a telephone line.
The program may realize only a part of the above-described functions. It may also be a so-called difference file (difference program) that realizes the above-described functions in combination with a program already recorded in the computer system.
A program product such as a computer-readable recording medium in which the program is recorded can also be used as an embodiment of the present invention. The above-described program, recording medium, transmission medium, and program product are also included in the scope of the present invention.
Although the embodiments of the present invention have been described in detail above with reference to the drawings, the specific structure is not limited to these embodiments, and design changes and the like within a range not departing from the gist of the present invention are also included.
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
This application claims priority from Japanese Patent Application No. 2012-208871, filed September 21, 2012, which is hereby incorporated by reference herein in its entirety.
Claims (16)
1. An image processing apparatus for processing image pickup signals of sub-pixels acquired by an image pickup element configured by arranging a plurality of unit pixels, each of which is composed of a plurality of sub-pixels for receiving light passing through different partial pupil areas of a focusing optical system, and generating image data of an image captured by the image pickup element, the image processing apparatus characterized by comprising:
a setting unit configured to set a first region and a second region different from the first region on the captured image;
an image processing unit configured to add a part of the image pickup signals of the sub-pixels acquired from the image pickup element in such a manner that the number of image pickup signals of sub-pixels per unit pixel in the image data of the image pickup image corresponding to the first region is larger than the number of image pickup signals of sub-pixels per unit pixel in the image data of the image pickup image corresponding to the second region, and to generate the image data of the image pickup image; and
a recording unit configured to record image data of the photographic image in a recording medium in a predetermined recording format.
2. The image processing apparatus according to claim 1, further comprising an acquisition unit configured to acquire focus information of the captured image,
wherein the setting unit sets the first area and the second area based on the focus information acquired by the acquisition unit.
3. The apparatus according to claim 1, further comprising an acquisition unit configured to acquire an image shift amount between sub-pixels of the photographic image corresponding to different partial pupil areas,
wherein the setting unit sets the first region and the second region based on the image shift amount acquired by the acquisition unit.
4. The image processing apparatus according to claim 2, wherein the focus information is a magnitude of a defocus amount of a predetermined area on the photographic image.
5. The image processing apparatus according to claim 1, wherein the setting unit sets the first region as a region whose defocus amount is smaller than a predetermined value, and sets the second region as a region whose defocus amount is larger than the predetermined value.
6. The apparatus according to claim 3, wherein said setting unit sets the first region as a region whose image shift amount is larger than a predetermined value, and sets the second region as a region whose image shift amount is smaller than the predetermined value.
7. The apparatus according to claim 1, further comprising a refocus image generation unit configured to read out image data of the captured image from the recording medium, add image pickup signals of subpixels corresponding to the first area in the image data of the captured image, generate refocus image data corresponding to the first area, synthesize the generated refocus image data and image data of the captured image corresponding to the second area, and generate a refocus image of the captured image.
8. The image processing apparatus according to claim 1, wherein the image processing unit reads out image data of the photographic image from the recording medium, generates parallax image data from image data of the photographic image corresponding to the first region, synthesizes each generated parallax image data with image data of the photographic image corresponding to the second region, and generates a plurality of parallax images of the photographic image.
9. The image processing apparatus according to claim 1, wherein the image processing unit controls the setting unit, sets a focus detection region in the captured image, and sets the first region to include the focus detection region.
10. An image pickup apparatus, comprising:
an image pickup element configured by arranging a plurality of unit pixels, each of which is composed of a plurality of sub-pixels for receiving light passing through different partial pupil areas of the focusing optical system, and for generating image data,
characterized by further comprising:
a setting unit configured to set a first region and a second region different from the first region on the image data generated by the image pickup element;
an image processing unit configured to add a part of the image pickup signals of the sub-pixels acquired from the image pickup element in such a manner that the number of image pickup signals of sub-pixels per unit pixel in the image data of the image pickup image corresponding to the first region is larger than the number of image pickup signals of sub-pixels per unit pixel in the image data of the image pickup image corresponding to the second region, and to generate the image data of the image pickup image; and
a recording unit configured to record image data of the photographic image in a recording medium in a predetermined recording format.
11. The image capturing apparatus according to claim 10, further comprising a display unit and a refocus image generation unit, wherein the refocus image generation unit is configured to read out image data of the captured image from the recording medium, add image capture signals of sub-pixels corresponding to the first area in the image data of the captured image, generate refocus image data corresponding to the first area, synthesize the generated refocus image data and image data of the captured image corresponding to the second area, and generate a refocus image of the captured image, wherein the display unit displays the refocus image.
12. The image capturing apparatus according to claim 10, further comprising a display unit, wherein the image processing unit reads out image data of the captured image from the recording medium, generates parallax image data from the image data of the captured image corresponding to the first region, synthesizes each generated parallax image data with the image data of the captured image corresponding to the second region, and generates a plurality of parallax images of the captured image, and the display unit displays the plurality of parallax images.
13. An image pickup apparatus, comprising:
an image pickup element configured by arranging a plurality of unit pixels, wherein each of the plurality of unit pixels is composed of a plurality of sub-pixels for receiving light passing through different partial pupil areas of the focusing optical system,
characterized by further comprising:
the image processing apparatus according to claim 2; and
a calculation unit configured to calculate the focus information based on image pickup signals of sub-pixels of a predetermined region of the picked-up image.
14. The image capturing apparatus according to claim 13, wherein the calculation unit calculates an image shift amount between parallax images of the predetermined region of the captured image by a correlation arithmetic operation, thereby calculating the focus information.
15. The image capturing apparatus according to claim 13, wherein the calculation unit calculates the focus information based on contrast evaluation values of a plurality of refocused images generated from image capturing signals of subpixels of the captured image.
16. An image processing method for processing image pickup signals of sub-pixels acquired by an image pickup element configured by arranging a plurality of unit pixels, each of which is composed of a plurality of sub-pixels for receiving light passing through different partial pupil areas of a focusing optical system, and generating image data of an image captured by the image pickup element, the image processing method comprising the steps of:
an acquisition step of acquiring focus information of the photographic image,
characterized by further comprising:
a setting step of setting a first region and a second region different from the first region on the photographic image based on the focus information acquired in the acquisition step;
an image processing step of adding a part of the image pickup signals of the sub-pixels acquired from the image pickup element in such a manner that the number of image pickup signals of sub-pixels per unit pixel in the image data of the image pickup image corresponding to the first region is larger than the number of image pickup signals of sub-pixels per unit pixel in the image data of the image pickup image corresponding to the second region, and generating the image data of the image pickup image; and
a recording step of recording image data of the photographic image in a recording medium in a predetermined recording format.
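As a rough, hedged illustration of the shift-and-add refocusing recited in claim 7, the sketch below translates each sub-pixel image according to a chosen virtual focus plane and averages the results over full LF data. The per-sub-pixel unit offsets and the refocus coefficient `a` are assumptions for illustration; an actual implementation would also handle the mixed first/second-region data and image borders rather than wrapping them.

```python
import numpy as np

def refocus(lf: np.ndarray, offsets: np.ndarray, a: float) -> np.ndarray:
    """lf: (H, W, S) LF data; offsets: (S, 2) unit (dy, dx) shifts, one per
    sub-pixel, determined by the partial pupil area that sub-pixel sees."""
    h, w, s = lf.shape
    out = np.zeros((h, w))
    for k in range(s):
        dy, dx = np.round(a * offsets[k]).astype(int)
        # np.roll keeps the sketch short; it wraps at the borders, which
        # real code would avoid by padding or interpolating
        out += np.roll(lf[:, :, k], (dy, dx), axis=(0, 1))
    return out / s
```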
Applications Claiming Priority (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2012-208871 | 2012-09-21 | ||
| JP2012208871A JP6071374B2 (en) | 2012-09-21 | 2012-09-21 | Image processing apparatus, image processing method and program, and imaging apparatus including image processing apparatus |
| PCT/JP2013/075216 WO2014046152A1 (en) | 2012-09-21 | 2013-09-11 | Image processing apparatus, image processing method, program, and image pickup apparatus having the image processing apparatus |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| HK1208295A1 HK1208295A1 (en) | 2016-02-26 |
| HK1208295B true HK1208295B (en) | 2018-06-22 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN104662887B (en) | Image processing equipment, image processing method and the picture pick-up device with the image processing equipment | |
| US8964079B2 (en) | Image sensor and image capturing apparatus | |
| CN104041006B (en) | Image generating method and image forming apparatus | |
| JP5552214B2 (en) | Focus detection device | |
| JP5746496B2 (en) | Imaging device | |
| US9426349B2 (en) | Image processing apparatus, image processing method, image pickup apparatus, and display device | |
| KR101950689B1 (en) | Image processing apparatus, image capturing apparatus, image processing method, program, and storage medium | |
| JP2016111678A (en) | Image sensor, image capturing apparatus, focus detection apparatus, image processing apparatus, and control method thereof | |
| JP2017158018A (en) | Image processing apparatus, control method therefor, and imaging apparatus | |
| JP6254843B2 (en) | Image processing apparatus and control method thereof | |
| JP7150785B2 (en) | Image processing device, imaging device, image processing method, and storage medium | |
| US11122196B2 (en) | Image processing apparatus | |
| JP6600217B2 (en) | Image processing apparatus, image processing method, imaging apparatus, and control method thereof | |
| JP6748529B2 (en) | Imaging device and imaging device | |
| HK1208295B (en) | Image processing apparatus, image processing method and image pickup apparatus having the image processing apparatus | |
| JP2018019348A (en) | Imaging device, image processing method, image processing system, and image processing program | |
| JP2019092215A (en) | Image processing apparatus, imaging apparatus, image processing method, program, and storage medium | |
| JP2016184922A (en) | Image processing apparatus, image processing method, imaging apparatus, and display device |