WO2012011484A1 - Image capture device - Google Patents
- Publication number
- WO2012011484A1 (PCT/JP2011/066413)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- value
- pixel
- image
- unit
- pixel value
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/667—Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/40—Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled
- H04N25/42—Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled by switching between different modes of operation using different resolutions or aspect ratios, e.g. switching between interlaced and non-interlaced mode
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/48—Increasing resolution by shifting the sensor relative to the scene
Definitions
- the present invention relates to an imaging device and the like.
- Some modern digital cameras and video cameras can switch between a still image shooting mode and a movie shooting mode. For example, some cameras can shoot a still image at a resolution higher than that of the moving image when the user operates a button during moving image shooting.
- However, moving image shooting is interrupted when a high-resolution still image is shot.
- If the moving image is instead shot at a resolution equivalent to that of a still image so as not to interrupt the moving image shooting, the frame rate of the moving image is lowered.
- the present inventor is considering generating a high-resolution still image from a low-resolution moving image by using a method of addition reading. Specifically, at the time of moving image shooting, the pixel values of a plurality of pixels are weighted and added and read out from the image sensor, and a high resolution image is restored from the pixel values obtained by the weighted addition.
- Patent Document 1 discloses a technique for mechanically shifting pixels of an optical system to perform moving image shooting and acquiring a high-definition image from the moving image.
- Patent Document 2 discloses a technique for performing exposure control according to a live view display gain.
- Some aspects of the present invention can provide an imaging device or the like that enables simple exposure control.
- One embodiment of the present invention relates to an imaging apparatus including an imaging element that captures a subject image, a reading control unit that performs weighted addition of pixel values of a plurality of pixels of the imaging element and reads the result as an added pixel value, a coefficient setting unit that sets a weighting coefficient used in the weighted addition, and an exposure control information output unit that outputs exposure control information for performing exposure control of the imaging unit based on the weighting coefficient.
- a weighting coefficient is set, weighted addition is performed using the weighting coefficient, an added pixel value is read, and exposure control information is output based on the weighting coefficient.
- the coefficient setting unit sets a first weighting coefficient in the first imaging mode and a second weighting coefficient in the second imaging mode, and the exposure control information output unit may obtain the ratio of the sum of the first weighting coefficients to the sum of the second weighting coefficients as a weighting coefficient ratio and output the exposure control information using a photometric evaluation value based on the weighting coefficient ratio.
- the exposure control information is output using the photometric evaluation value based on the weighting coefficient ratio between the first imaging mode and the second imaging mode, so that exposure control information for performing exposure control of the imaging unit can be output.
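The weighting coefficient ratio described above can be sketched as follows. This is an illustrative Python sketch, not part of the disclosure: the function name, the parameter value r = 2, and the example added pixel value are assumptions chosen so that the numbers match the 576 and 1024 example values used later in the text.

```python
def weight_ratio(first_weights, second_weights):
    """Ratio of the sum of the first-mode weights to the sum of the second-mode weights."""
    return sum(first_weights) / sum(second_weights)

r = 2.0  # hypothetical value of the coefficient parameter r
normal = [1.0, 1.0, 1.0, 1.0]                  # first imaging mode: equal weights
fused = [1.0, 1.0 / r, 1.0 / r, 1.0 / r ** 2]  # second imaging mode: unequal weights

ratio = weight_ratio(normal, fused)  # 4 / 2.25 = 16/9, approximately 1.78

# A fused-mode added pixel value is scaled by the ratio before photometry,
# so one exposure program diagram can serve both modes.
added_pixel_value = 576.0
photometric_value = added_pixel_value * ratio  # approximately 1024
```

With r = 2 the sum of the fused-mode weights is 2.25, so the ratio is 16/9, which matches the gain of about 1.78 discussed later in the text.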
- the coefficient setting unit may set, as the first weighting coefficient, a coefficient having the same value for each pixel to be weighted and added in the first imaging mode, and may set a second weighting coefficient different from the first weighting coefficient in the second imaging mode; the exposure control information output unit may then obtain a photometric evaluation value from the added pixel value in the first imaging mode and output the exposure control information using the obtained photometric evaluation value.
- that is, in the first imaging mode a photometric evaluation value is obtained from the added pixel value not multiplied by the weighting coefficient ratio, and in the second imaging mode a photometric evaluation value is obtained from the pixel value obtained by multiplying the added pixel value by the weighting coefficient ratio.
- a display control unit that adjusts the luminance of the display image based on the weighting coefficient and performs control to display the adjusted display image may be included.
- the coefficient setting unit sets a first weighting coefficient in the first imaging mode and a second weighting coefficient in the second imaging mode, and the display control unit may obtain the ratio of the sum of the first weighting coefficients to the sum of the second weighting coefficients as a weighting coefficient ratio and adjust the luminance of the display image based on the weighting coefficient ratio.
- the brightness of the display image can be adjusted based on the weighting coefficient ratio between the first imaging mode and the second imaging mode.
- the coefficient setting unit may set, as the first weighting coefficient, a coefficient having the same value for each pixel to be weighted and added in the first imaging mode, and may set a second weighting coefficient different from the first weighting coefficient in the second imaging mode. The display control unit may then perform control to display the display image based on the added pixel value weighted and added with the first weighting coefficient in the first imaging mode, and to display the display image based on the added pixel value weighted and added with the second weighting coefficient and multiplied by the weighting coefficient ratio in the second imaging mode.
- in the first imaging mode, a display image based on the added pixel value not multiplied by the weighting coefficient ratio is displayed, and in the second imaging mode, a display image based on the added pixel value multiplied by the weighting coefficient ratio is displayed. The display image can thereby be shown with the same brightness in the first shooting mode and the second shooting mode.
- the imaging apparatus may include a storage unit that stores an image based on the added pixel value as a low-resolution frame image, an estimation calculation unit that estimates the pixel value of each pixel included in the light receiving unit based on a plurality of low-resolution frame images stored in the storage unit, and an image output unit that outputs a high-resolution frame image having a higher resolution than the low-resolution frame image based on the pixel values estimated by the estimation calculation unit.
- the readout control unit sets a light receiving unit, which is the unit for obtaining an added pixel value, for each group of a plurality of pixels of the imaging element, performs weighted addition of the pixel values of the plurality of pixels included in the light receiving unit, and reads out added pixel values while sequentially shifting the light receiving unit by pixels so that successive positions overlap; the estimation calculation unit may estimate the pixel value of each pixel included in the light receiving unit based on the plurality of added pixel values obtained by sequentially shifting the light receiving unit.
- added pixel values are acquired while the light receiving unit is sequentially shifted by pixels with overlap, and low-resolution frame images based on the added pixel values are acquired. Pixel values are then estimated based on the plurality of low-resolution frame images, and a high-resolution frame image is output based on the estimated pixel values. A high-resolution still image can thereby be obtained from a moving image by simple processing.
- the light receiving unit is sequentially set, by the pixel shift, to a first position and to a second position next to the first position, such that the light receiving unit at the first position and the light receiving unit at the second position overlap. The estimation calculation unit obtains the difference value between the added pixel values at the first and second positions, defines a first intermediate pixel value as the light receiving value of the first light receiving region obtained by excluding the overlapping region from the light receiving unit at the first position, and a second intermediate pixel value as the light receiving value of the second light receiving region obtained by excluding the overlapping region from the light receiving unit at the second position, expresses a relational expression between the intermediate pixel values using the difference value, estimates the first and second intermediate pixel values using the relational expression, and may obtain the pixel value of each pixel included in the light receiving unit using the estimated first intermediate pixel value.
- when successive intermediate pixel values including the first and second intermediate pixel values are taken as an intermediate pixel value pattern, the estimation calculation unit expresses a relational expression between the intermediate pixel values included in the pattern using the added pixel values at the first and second positions; when successive added pixel values including the added pixel values at the first and second positions are taken as an added pixel value pattern, the estimation calculation unit compares the intermediate pixel value pattern expressed by the relational expression with the added pixel value pattern to evaluate their similarity, and may determine the intermediate pixel values included in the intermediate pixel value pattern so that the similarity is the highest.
- the intermediate pixel value can be estimated based on a plurality of added pixel values acquired by pixel shifting while superimposing the light receiving units.
- the estimation calculation unit obtains an evaluation function representing the error between the intermediate pixel value pattern, expressed by the relational expression between the intermediate pixel values, and the added pixel value pattern, and may determine the intermediate pixel values included in the intermediate pixel value pattern so that the value of the evaluation function is minimized.
- the intermediate pixel values can thereby be determined so that the similarity between the intermediate pixel value pattern and the added pixel value pattern is the highest.
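The chain of relational expressions and the evaluation-function minimization above can be illustrated with a one-dimensional sketch. This is not the disclosed implementation: the two-pixel light receiving unit, the scene values, the candidate scan, and in particular the form of the evaluation function (comparing each intermediate value against half of the added value it contributes to) are all assumptions made for illustration.

```python
def unfold(a, p0):
    """Chain of relational expressions: with overlapped sums a[i] = p[i] + p[i+1],
    fixing the first intermediate value p0 determines all the rest via
    p[i+1] = a[i] - p[i]."""
    p = [float(p0)]
    for ai in a:
        p.append(ai - p[-1])
    return p

def evaluation(a, p0):
    """Assumed error form: compare each intermediate value with half of the
    added value it contributes to (the 'flat' reference pattern)."""
    p = unfold(a, p0)
    return sum((p[i] - a[i] / 2.0) ** 2 for i in range(len(a)))

true_pixels = [10, 30, 20, 40, 25]                            # hypothetical scene
a = [true_pixels[i] + true_pixels[i + 1] for i in range(4)]   # overlapped 2-pixel sums

# Scan candidate initial values and keep the one minimizing the evaluation function.
best_p0 = min(range(0, 51), key=lambda c: evaluation(a, c))
estimate = unfold(a, best_p0)
# By construction, every estimate reproduces the observed added values exactly;
# the evaluation function selects which consistent solution to report.
```

The point of the sketch is structural: the unknown initial intermediate value parameterizes a family of solutions that all reproduce the added pixel values, and the evaluation function picks the member of that family most similar to the added pixel value pattern.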
- FIG. 1 is a comparative example of this embodiment.
- 2A and 2B are explanatory diagrams of a weighted addition method.
- FIG. 3A is an explanatory diagram of a range of added pixel values.
- FIG. 3B is an example of a program diagram for exposure control.
- FIG. 4 is an explanatory diagram of exposure control according to the present embodiment.
- FIG. 5 is a configuration example of the imaging apparatus according to the present embodiment.
- FIG. 6 is a flowchart of processing performed by the present embodiment.
- FIG. 7 is a flowchart of a modification.
- FIG. 8 is a flowchart of a second modification.
- FIG. 9 is a detailed explanatory diagram of the shooting mode.
- FIG. 10 is an explanatory diagram of addition readout control when performing pixel shift.
- FIG. 11 is an explanatory diagram of addition readout control when pixel shift is performed without weighting.
- FIG. 12 is an explanatory diagram of addition readout control when performing pixel shift with weighting.
- FIG. 13 is an explanatory diagram of addition readout control when pixel shift is not performed.
- FIG. 14A is an explanatory diagram of addition readout control without weighting when pixel shift is not performed.
- FIG. 14B is an explanatory diagram of addition readout control with weighting when pixel shift is not performed.
- FIG. 15 is an explanatory diagram of weighting coefficients.
- FIG. 16A is an explanatory diagram of an added pixel value and an estimated pixel value.
- FIG. 16B illustrates the intermediate pixel value.
- FIG. 17 is an explanatory diagram of an intermediate pixel value estimation method.
- FIG. 18 is an explanatory diagram of an intermediate pixel value estimation method.
- FIG. 19 is an explanatory diagram of an intermediate pixel value estimation method.
- FIG. 1 shows a comparative example of this embodiment.
- the imaging apparatus starts moving image shooting, the imaging unit 100 captures an image, and the moving image signal processing circuit 116 processes the image to acquire moving image data.
- the still image switch 109 is turned on during moving image capturing, the moving image capturing is temporarily stopped, the image capturing unit 100 captures an image, and the still image signal processing circuit 117 processes the image to acquire still image data.
- if a 12-megapixel high-pixel-count sensor can be driven at a high speed of 60 fps (frames per second)
- a 12-megapixel moving image can be captured, and one of them can be acquired as a still image.
- a high-resolution still image can be acquired without interrupting moving image shooting (without loss of frames).
- recording a 12 megapixel moving image causes an increase in storage capacity, resulting in a decrease in recording time.
- Patent Document 1 discloses a shift unit that shifts the incident position of the optical image on the image sensor using a camera shake control signal from an optical camera shake control circuit and a pixel shift control signal from a pixel shift control circuit, together with a method of shifting pixels and obtaining a high-resolution still image from the pixel-shifted image.
- weighted addition is performed in reading out pixel values to further improve the reproducibility of high frequency components.
- the range (signal level) of the pixel value obtained by addition readout differs depending on the mode, which affects the automatic exposure control and the brightness of the live view display; this poses a challenge.
- FIG. 2A and FIG. 2B schematically show a weighted addition method performed by this embodiment.
- in the fused moving image mode (for example, the moving image / still image fusion moving image 1 mode shown in FIG. 9)
- FIG. 3B shows an example of a program diagram for exposure control.
- a photometric evaluation value is assumed to be a 4-pixel addition value, and an exposure time program diagram will be described as an example.
- when the 4-pixel addition value is 1024, the exposure time is controlled to be T1, and when the 4-pixel addition value is 576, the exposure time is controlled to be T2.
- a program diagram is required for each mode, and the control becomes complicated.
- the 4-pixel addition value in the fusion moving image mode is multiplied by a gain of about 1.78, and exposure control is performed using the gained-up 4-pixel addition value.
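The gain-up idea can be sketched numerically. This is an illustrative sketch, not the disclosed control: the exposure times T1 and T2, the threshold placed between the two example values 576 and 1024, and the parameter value r = 2 are all assumptions.

```python
# Toy program diagram shared by both modes (hypothetical times, seconds).
T1, T2 = 1.0 / 60, 1.0 / 30

def exposure_time(photometric_value):
    # Toy single-threshold diagram; a real program diagram has many points.
    # The threshold 1000 sits between the example values 576 and 1024.
    return T1 if photometric_value >= 1000 else T2

# Normal moving image mode: the 4-pixel addition value is used directly.
t_normal = exposure_time(1024.0)

# Fused moving image mode: gain up by 4 / (1 + 1/r + 1/r + 1/r**2),
# about 1.78 for r = 2, then reuse the same program diagram.
r = 2.0
gain = 4.0 / (1.0 + 1.0 / r + 1.0 / r + 1.0 / r ** 2)
t_fused = exposure_time(576.0 * gain)  # 576 * 1.78 lands near 1024
```

After the gain-up, the fused-mode value falls on the same part of the diagram as the normal-mode value, so a single program diagram suffices instead of one per mode.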
- in the fusion moving image mode, a video whose 4-pixel addition values have been gained up is displayed in the live view.
- the exposure control may be performed not only by controlling the exposure time but also by controlling the aperture value.
- Patent Document 2 discloses a technique that enables a photographer to display an image displayed in a live view with a desired brightness, and also allows an image in a more preferable exposure state to be captured.
- however, this technique concerns only the live-view display brightness and the exposure state of the captured image; it makes no mention of performing exposure control or display control according to a weighting coefficient.
- FIG. 5 shows a configuration example of the imaging device of the present embodiment that performs gain control and exposure control by increasing the 4-pixel added value according to the weighting coefficient.
- the imaging apparatus includes an imaging unit 100, an A / D conversion unit 104, a user I / F unit 106, a control unit 113, and an imaging control unit 118.
- a fusion video is a moving image from which a high-resolution still image can be generated. For example, it is acquired by the pixel shift described later with reference to FIG. 10 and the like, and a still image can be obtained by the estimation method described later.
- the imaging control unit 118 includes an aperture control unit 120 and an imaging element control unit 119.
- the imaging control unit 118 drives and controls the imaging unit 100.
- the image sensor control unit 119 includes a read control unit 160 that controls the image sensor 103 and controls reading of pixel values, and an exposure control unit 161 that controls exposure time.
- the imaging unit 100 is an optical system for performing imaging, and includes an imaging lens 101, an aperture 102, an imaging element 103 such as a CMOS sensor, and a shutter (not shown).
- the aperture control unit 120 drives the aperture 102 and the shutter, whereby the operations of the aperture 102 and the shutter are performed.
- the A / D conversion unit 104 converts an analog signal obtained by imaging by the imaging unit 100 into digital data.
- the system controller 105 controls each part of the imaging apparatus (system).
- the system controller 105 includes a coefficient setting unit 130 that sets a weighting coefficient used for weighted addition.
- the user I / F unit 106 includes a mode switch 107 for setting a shooting mode by the user, a moving image switch 108 for instructing start / stop of moving image recording, and a still image switch 109 for instructing still image recording.
- the user I / F unit 106 includes a touch panel, operation buttons, and the like.
- the external memory 110 records captured video data and still image data.
- the display device 111 is, for example, a liquid crystal display device, and performs live view display and display of reproduced moving images and still images.
- the recording medium 112 is a medium for recording image data.
- the display device 111 and the recording medium 112 may be incorporated in the imaging device, or may be an external device that can be attached and detached by a USB or the like.
- the control unit 113 (signal processing system) includes the system controller 105, a compression / decompression circuit 121 (compression / decompression unit), a recording medium I/F circuit 126 (recording medium I/F unit), an exposure control information output unit 140 (AE processing system), a still image processing unit 141 (signal processing system), and a moving image processing unit 142 (signal processing system).
- the control unit 113 performs processing of a captured image and control of each component.
- the moving image processing unit 142 processes the moving image data from the A / D conversion unit 104.
- the moving image processing unit 142 includes an electronic image stabilization circuit 114, a line memory 115, and a moving image signal processing circuit 116.
- the electronic image stabilization circuit 114 is an image stabilization circuit that electronically corrects camera shake by image processing.
- the line memory 115 holds image data for one line so that the electronic image stabilization circuit 114 performs a camera shake correction process of less than one pixel.
- the moving image signal processing circuit 116 performs processing such as luminance signal conversion and color difference signal conversion on the image data from the electronic image stabilization circuit 114.
- the still image processing unit 141 processes still image data from the A / D conversion unit 104.
- the still image processing unit 141 includes a still image signal processing circuit 117, a high resolution processing circuit 127 (estimation calculation unit), and a frame memory 128 (storage unit).
- the high resolution processing circuit 127 performs high resolution processing for resolving a moving image and estimating a still image.
- the frame memory 128 holds a frame image in order to perform resolution enhancement processing (estimation processing) by the high resolution processing circuit 127.
- the still image signal processing circuit 117 performs image processing on a still image that has been subjected to high resolution processing and image processing on a still image that has been shot in the normal still image mode. For example, the still image signal processing circuit 117 performs processing such as luminance signal conversion and color difference signal conversion on still image data.
- the exposure control information output unit 140 outputs AE control information (exposure control information) for the imaging control unit 118 to perform AE control (exposure control, AE: Auto Exposure).
- the exposure control information output unit 140 includes an AE processing circuit 122 and an AE gain setting circuit 123.
- the AE processing circuit 122 obtains an AE evaluation value from the digital image data from the A / D conversion unit 104.
- the system controller 105 controls the imaging control unit 118 based on this AE evaluation value, the aperture control unit 120 sets the aperture 102, and the imaging element control unit 119 sets the accumulation time of the imaging element 103. In this way, the exposure is controlled to be appropriate.
- the AE gain setting circuit 123 sets an AE gain for adjusting the difference in the image signal range for AE processing in the normal moving image mode and the fused moving image mode.
- the AE processing circuit 122 obtains an AE evaluation value from the digital image data multiplied by the AE gain.
- the display control unit 150 performs control to display a display image on the display device 111, and includes a display device control circuit 124 and a display gain setting circuit 125. In the following, the operation of the display control unit 150 when used as a monitor during recording will be described as an example.
- the digital image data output from the A / D conversion circuit 104 is input to the display device control circuit 124 via the moving image signal processing circuit 116 and the system controller 105.
- the display device control circuit 124 performs control to send a display image with an appropriate signal level to the display device 111.
- the display gain setting circuit 125 sets a display gain for adjusting the difference in the range of the display image signal in the normal moving image mode and the fused moving image mode.
- the display device control circuit 124 performs control to display digital image data multiplied by the display gain.
- the compression / decompression circuit 121 compresses still image data generated by the still image signal processing circuit 117, compresses moving image data generated by the moving image signal processing circuit 116, and compresses the compressed image data. Perform decompression processing. For example, the compression / decompression circuit 121 compresses image data into a JPEG image or compresses moving image data into an MPEG image.
- the recording medium I / F circuit 126 controls reading and writing with respect to the recording medium 112.
- the system controller 105 performs read and write access to the external memory 110.
- AE Control and Display Control performed by the imaging apparatus will be described in detail.
- a problem of AE processing and display processing caused by the weighting coefficient described in FIG. 3B and the like will be described with a specific example, and then a flowchart of processing performed by the present embodiment will be described.
- a calculation example in the case where the pixel value GR11 is calculated from the 4-pixel addition value gr11 described later with reference to FIG. 10 and the AE gain is set using the GR11 will be described.
- the weighting coefficients W1, W2, W3, and W4 are expressed by the following equation (1), where r is a real number with r ≥ 1:
- W1 = 1, W2 = 1/r, W3 = 1/r, W4 = 1/r² (1)
- GR11, GR13, GR31, and GR33 are the pixel values that are weighted and added.
- the 4-pixel addition value gr11 is expressed by the following expression (2):
- gr11 = W1·GR11 + W2·GR13 + W3·GR31 + W4·GR33 (2)
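Equations (1) and (2) can be checked with a short numerical sketch. The value r = 2 and the four pixel values are assumptions chosen for illustration only; the weight pattern matches the sum 1 + 1/r + 1/r + 1/r² used in expression (7) later in the text.

```python
r = 2.0  # assumed value of the real parameter r for illustration
W1, W2, W3, W4 = 1.0, 1.0 / r, 1.0 / r, 1.0 / r ** 2  # equation (1)

# Hypothetical same-colour pixel values at the four sites being added.
GR11, GR13, GR31, GR33 = 100.0, 120.0, 110.0, 130.0

# Equation (2): weighted 4-pixel addition value read out from the sensor.
gr11 = W1 * GR11 + W2 * GR13 + W3 * GR31 + W4 * GR33
# 100 + 60 + 55 + 32.5 = 247.5
```

Because the weights sum to 2.25 rather than 4, the fused-mode value gr11 sits in a smaller signal range than an unweighted 4-pixel sum of the same pixels, which is exactly the range difference the AE gain and display gain compensate for.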
- a value different from the AE gain when weighted addition is not performed is set as the AE gain when weighted addition is performed.
- the display gain is set according to the weighting coefficient.
- FIG. 6 shows a flowchart of processing performed by the present embodiment.
- a mode is selected (step S1).
- the aperture is controlled (step S2), and the exposure time of the image sensor is controlled (step S3).
- pixel readout control is set to no weighting and no pixel shift (step S4), and 4-pixel addition readout is performed (step S5).
- AE processing is performed without multiplying the 4-pixel addition value by the AE gain (step S6), an AE evaluation value is obtained (step S7), and the aperture value and exposure time are set based on the AE evaluation value (steps S2 and S3).
- the live view display is controlled without applying the display gain (step S8), and the live view image is displayed (step S9).
- control is performed to record the captured moving image (step S10), and recording is performed on the recording medium (step S11).
- when the fused video mode is selected in step S1, the aperture is controlled (step S12), and the exposure time of the image sensor is controlled (step S13).
- pixel readout control is set to weighted (step S14), and a weighting coefficient is set (step S15).
- four pixels are added and read (step S16), the sum of the weighting coefficients is calculated (step S17), and the AE gain and the display gain are set (step S18).
- the 4-pixel addition value is multiplied by the AE gain (step S19), AE processing is performed using the resulting image (step S20), an AE evaluation value is obtained (step S21), and the aperture value and exposure time are set based on the AE evaluation value (steps S12 and S13). Further, the 4-pixel addition value is multiplied by the display gain (step S22), display control of the live view image is performed (step S23), and the live view image is displayed (step S24). Further, control is performed to record the captured moving image (step S25), and recording is performed on the recording medium (step S26). The recorded moving image is not multiplied by the gain.
- the AE process is, for example, a process of setting an area for performing photometric evaluation on a captured image, or a process of setting an aperture value setting characteristic and an exposure time setting characteristic (program diagram) with respect to the exposure amount.
- the AE evaluation value is a photometric evaluation value obtained based on the pixel value in the area set in the captured image.
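A minimal sketch of the AE evaluation step described above follows. The function name, the metering window, and the flat test frame are hypothetical; only the idea that the photometric evaluation value is computed from the (gain-adjusted) pixel values inside an area set on the captured image comes from the text.

```python
def ae_evaluation(frame, x0, y0, w, h, ae_gain=1.0):
    """Mean of the gain-adjusted added pixel values inside the metering area."""
    total = 0.0
    for y in range(y0, y0 + h):
        for x in range(x0, x0 + w):
            total += frame[y][x] * ae_gain
    return total / (w * h)

# Flat 8x8 test frame of 4-pixel addition values.
frame = [[500.0] * 8 for _ in range(8)]

val_normal = ae_evaluation(frame, 2, 2, 4, 4)                    # normal mode, gain 1
val_fused = ae_evaluation(frame, 2, 2, 4, 4, ae_gain=16.0 / 9.0) # fused mode, gain about 1.78
```

With the AE gain applied in the fused mode, the same frame yields a photometric evaluation value on the normal-mode scale, so a single set of exposure-setting characteristics can consume it.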
- FIG. 7 shows a flowchart of a modified example of processing performed by the present embodiment. This modification is an example in which processing independent of the mode is made common. As shown in FIG. 7, when this process is started, the aperture is controlled (step S50), the exposure time of the image sensor is controlled (step S51), and the mode is selected (step S52).
- when the normal moving image mode is selected, pixel readout control is set (step S53), and 4-pixel addition readout is performed (step S54). Next, AE control (steps S61, S62, S50, S51), display control (steps S64, S65), and recording control (steps S66, S67) are performed.
- when the fused video mode is selected in step S52, pixel readout control is set (step S55), and a weighting coefficient is set (step S56). Next, four pixels are added and read (step S57), the sum of the weighting coefficients is calculated (step S58), and the AE gain and the display gain are set (step S59). Next, the 4-pixel addition value is multiplied by the AE gain (step S60), and AE control is performed using the resulting image (steps S61, S62, S50, S51). Further, the 4-pixel addition value is multiplied by the display gain (step S63), and display control of the live view image is performed (steps S64 and S65). Further, control for recording the moving image is performed (steps S66 and S67).
- FIG. 8 shows a flowchart of a second modification of the process performed by this embodiment.
- This modification is an example in which the weighting coefficient setting process is shared.
- the aperture is controlled (step S100)
- the exposure time of the image sensor and the like are controlled (step S101)
- the readout control is set to weighted addition, and the presence / absence of pixel shift is set according to the mode (step S102).
- the weighting coefficient is set to the value shown in the above formula (1) (step S103).
- the pixel values are weighted (step S104), and four pixels are added and read (step S105).
- a sum of weighting coefficients is calculated (step S106), and an AE gain and a display gain are set (step S107).
- in the normal moving image mode, the gain is 1, and in the fused moving image mode, the gain is the value represented by the following expression (7): 4 / (1 + 1/r + 1/r + 1/r²) (7)
- the 4-pixel addition value is multiplied by the AE gain (step S108), and AE control is performed (steps S109, S110, S100, and S101). Further, the 4-pixel addition value is multiplied by the display gain (step S111), and display control is performed (steps S112 and S113). Further, control for recording a moving image (steps S114 and S115) is performed.
- the imaging apparatus includes the imaging element 103 that captures a subject image, a readout control unit 160 that performs weighted addition of the pixel values of a plurality of pixels of the imaging element 103 and reads the result as an added pixel value, a coefficient setting unit 130 that sets the weighting coefficient used in the weighted addition, and an exposure control information output unit 140 that outputs exposure control information for performing exposure control of the imaging unit 100 based on the weighting coefficient.
- the exposure control information is, for example, a photometric evaluation value, or information indicating an aperture value and an exposure time obtained from the photometric evaluation value. Exposure control is performed by setting the aperture and exposure time based on these pieces of information.
- the imaging unit 100 may be configured integrally with the imaging device such as a compact camera.
- the imaging unit 100 may be configured such that the imaging element 103 is integrated with the imaging device (body), and the interchangeable lens including the diaphragm 102 and the optical system 101 is configured separately.
- the coefficient setting unit 130 sets the first weighting coefficient in the first imaging mode (for example, the normal moving image mode).
- the coefficient setting unit 130 sets a second weighting coefficient in the second imaging mode (for example, the fused moving image mode).
- The exposure control information output unit 140 obtains the ratio of the sum of the first weighting coefficients to the sum of the second weighting coefficients as the weighting coefficient ratio.
- the exposure control information output unit 140 outputs exposure control information using a photometric evaluation value based on the weighting coefficient ratio.
- the exposure control information for performing the exposure control of the imaging unit 100 can be output by outputting the exposure control information using the photometric evaluation value based on the weighting coefficient ratio.
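The steps above can be sketched as follows (a hedged illustration; the helper names and sample numbers are hypothetical, with the fused-mode weights assumed to follow the 1, 1/r, 1/r, 1/r² pattern implied by expression (7)):

```python
def weighting_coefficient_ratio(first_weights, second_weights):
    # Ratio of the sum of the first-mode weighting coefficients to the sum
    # of the second-mode weighting coefficients.
    return sum(first_weights) / sum(second_weights)

# Hypothetical example: normal mode uses equal weights, fused mode uses
# weights 1, 1/r, 1/r, 1/r**2 with r = 2.
ratio = weighting_coefficient_ratio([1, 1, 1, 1], [1, 0.5, 0.5, 0.25])

# A photometric evaluation value can then be taken from the added pixel
# value multiplied by this ratio, aligning the two modes' signal ranges.
added_pixel_value = 120.0            # arbitrary added pixel value
photometric_value = added_pixel_value * ratio
```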
- The exposure control information output unit 140 obtains a photometric evaluation value from the added pixel value (such as gr11 described later in FIG. 10). Specifically, in the first imaging mode the photometric evaluation value is obtained from the added pixel value that is not multiplied by the weighting coefficient ratio, and in the second imaging mode the photometric evaluation value is obtained from the pixel value obtained by multiplying the added pixel value by the weighting coefficient ratio.
- the imaging apparatus includes a display control unit 150 that adjusts the luminance of the display image based on the weighting coefficient and performs control to display the adjusted display image.
- the display control unit 150 obtains the ratio of the sum of the first weighting coefficients and the sum of the second weighting coefficients as the weighting coefficient ratio. Then, the display control unit 150 adjusts the luminance of the display image based on the weighting coefficient ratio.
- the brightness of the display image can be adjusted based on the weighting coefficient ratio between the first imaging mode and the second imaging mode.
- the live view display can be adjusted and displayed with the same brightness in the first shooting mode and the second shooting mode.
- Normal Moving Image Mode: The shooting modes of the present embodiment will be described in detail with reference to FIG. First, the normal moving image mode will be described. This mode captures only a moving image, without still image shooting in the middle, and addition readout is performed without weighting and without pixel shift.
- the system controller 105 controls the imaging unit 100, the aperture control unit 120, the imaging element control unit 119, and the control unit 113 in accordance with the mode setting content shown in FIG.
- the system controller 105 performs various settings according to instructions from the mode switch 107.
- the pixel addition signal without weighting and without superposition shift is read from the image sensor 103.
- the addition reading is performed by a method described later with reference to FIG.
- The image used for AE control and the image used for display are images that are not gained up, and are the same as the images recorded on the recording medium 112.
- the subject image formed on the image sensor 103 is converted into an electrical signal and sequentially read out.
- the read image is converted into a digital image by the A / D conversion unit 104 and then input to the electronic image stabilization circuit 114.
- the image subjected to the image stabilization process is processed by the moving image signal processing circuit 116 to generate a luminance color difference signal.
- the image data from the moving image signal processing circuit 116 is held in the external memory 110.
- the image data held by the external memory 110 is output to the compression / decompression circuit 121 via the system controller 105 and converted into a format such as MPEG4 or Motion-JPEG.
- the converted image data is stored again in the external memory 110 via the system controller 105.
- the compressed image data recorded in the external memory 110 is output to the recording medium I / F circuit 126 via the system controller 105 and recorded on the recording medium 112. It is also possible to record on the recording medium 112 without performing compression processing.
- the image converted into a digital image by the A / D conversion unit 104 is input to the AE processing circuit 122.
- An image from the AE processing circuit 122 is output to the system controller 105, and an AE evaluation value is obtained.
- the aperture control unit 120 controls the aperture 102 using the AE evaluation value, and the image sensor control unit 119 performs accumulation time control of the image sensor 103 using the AE evaluation value. By these controls, AE control is performed so that an appropriate exposure value is obtained.
- the AE gain setting circuit 123 sets an AE gain (for example, 1) in the normal moving image mode, or does not set an AE gain in the normal moving image mode.
- the image converted into a digital image by the A / D conversion unit 104 is input to the AE processing circuit 122. Based on the image from the AE processing circuit 122, the system controller 105 obtains an evaluation value for displaying on the display device 111 with appropriate brightness. The display control unit 150 performs display control based on the evaluation value.
- the display gain setting circuit 125 sets a display gain (for example, 1) in the normal moving image mode, or does not set a display gain in the normal moving image mode.
- Still Image Mode: This mode is a mode for shooting only a still image, in which all pixels are read without performing addition readout.
- the system controller 105 controls the imaging unit 100, the aperture control unit 120, the imaging element control unit 119, and the control unit 113 in accordance with the setting contents shown in FIG. In this mode, the estimation calculation process by the high resolution processing circuit 127 is not performed.
- the system controller 105 performs various settings according to instructions from the mode switch 107.
- The image used for AE control and the image used for display are images that are not gained up, and are the same as the images recorded on the recording medium 112.
- the subject image formed on the image sensor 103 is converted into an electrical signal and sequentially read out.
- the read image is converted into a digital image by the A / D conversion unit 104 and then input to the high resolution processing circuit 127.
- the high resolution processing circuit 127 is set to OFF (non-operating state).
- the signal from the high resolution processing circuit 127 is processed by the still image signal processing circuit 117 to generate a luminance color difference signal.
- the image data from the still image signal processing circuit 117 is held in the external memory 110.
- the image data held by the external memory 110 is output to the compression / decompression circuit 121 via the system controller 105 and converted into a format such as RAW or JPEG.
- the converted image data is stored again in the external memory 110 via the system controller 105.
- the compressed image data recorded in the external memory 110 is output to the recording medium I / F circuit 126 via the system controller 105 and recorded on the recording medium 112. It is also possible to record on the recording medium 112 without performing compression processing.
- the AE control is the same control as the AE control in the normal moving image mode.
- the display control is the same control as the display control in the normal moving image mode because the live view image is acquired and displayed by the same readout control as in the normal moving image mode.
- Movie Still Image Fusion Movie 1 Mode: This mode is one of the fused moving image modes, in which a fused moving image for acquiring a still image is recorded as a moving image, and no still image is estimated. In this mode, pixel shift readout is performed (see FIG. 10, described later). Note that it is possible to estimate and acquire a high-resolution still image from the fused moving image shot in this mode after shooting ends.
- the system controller 105 controls the imaging unit 100, the aperture control unit 120, the imaging element control unit 119, and the control unit 113 in accordance with the mode setting content shown in FIG.
- the system controller 105 performs various settings according to instructions from the mode switch 107.
- a weighted pixel addition signal with a superimposed shift is read out.
- the addition reading is performed by a method described later with reference to FIG.
- the weighting of the pixel value is realized by a method of changing the gain for each pixel. Specifically, when each pixel has an A / D conversion circuit, weighting is performed during A / D conversion.
- the pixel readout circuit may be weighted in an analog manner by giving a gain, or may be weighted by digital processing after A / D conversion.
- the subject image formed on the image sensor 103 is converted into an electrical signal and sequentially read out.
- the read image is converted into a digital image by the A / D conversion unit 104 and then input to the electronic image stabilization circuit 114.
- the image subjected to the image stabilization process is processed by the moving image signal processing circuit 116 to generate a luminance color difference signal.
- the image data from the moving image signal processing circuit 116 is held in the external memory 110.
- the image data held by the external memory 110 is output to the compression / decompression circuit 121 via the system controller 105 and converted into a format such as MPEG4 or Motion-JPEG.
- the converted image data is stored again in the external memory 110 via the system controller 105.
- the compressed image data recorded in the external memory 110 is output to the recording medium I / F circuit 126 via the system controller 105 and recorded on the recording medium 112. It is also possible to record on the recording medium 112 without performing compression processing.
- the image converted into a digital image by the A / D conversion unit 104 is input to the AE processing circuit 122.
- An image from the AE processing circuit 122 is output to the system controller 105, and an AE evaluation value is obtained.
- the aperture control unit 120 controls the aperture 102 using the AE evaluation value, and the image sensor control unit 119 performs accumulation time control of the image sensor 103 using the AE evaluation value. By these controls, AE control is performed so that an appropriate exposure value is obtained.
- The AE gain setting circuit 123 sets the AE gain. That is, the AE image in the normal moving image mode is a pixel addition signal without weighting and without a superimposition shift, whereas the AE image in the movie still image fusion movie 1 mode is a pixel addition signal with weighting and with a superimposition shift. Therefore, as described above, the range (value) of the signal after addition differs between the weighted addition signal and the unweighted addition signal.
- the AE gain setting circuit 123 sets the AE gain (for example, 1.78), thereby aligning the pixel addition signal ranges in the normal moving image mode and the moving image still image fusion moving image 1 mode.
- the image converted into a digital image by the A / D conversion unit 104 is input to the AE processing circuit 122.
- A coefficient having the same value as the coefficient input to the AE gain setting circuit 123 (set by the coefficient setting unit 130 in the system controller 105), or a coefficient suited to the display device 111, is input to the display gain setting circuit 125.
- the display gain setting circuit 125 sets the display gain to match the characteristics of the display device 111 using the same value as the coefficient input to the AE gain setting circuit 123.
- the display device control circuit 124 performs control to display a moving image on the display device 111 with appropriate brightness by performing display control using the display gain.
- the display gain setting circuit 125 sets the display gain. That is, as described above, the added signal range differs between the weighted addition signal and the unweighted addition signal.
- the display gain setting circuit 125 sets the display gain (for example, 1.78), thereby aligning the pixel addition signal ranges in the normal moving image mode and the moving image still image fusion moving image 1 mode.
- Movie Still Image Fusion Movie 2 Mode: Next, the movie still image fusion movie 2 mode shown in FIG. 9 will be described.
- This mode is one of the fused moving image modes, in which the fused moving image is recorded as a moving image and no still image is estimated. In this mode, pixel shift readout is not performed (see FIG. 13, described later). Note that it is possible to estimate and acquire a high-resolution still image from the fused moving image shot in this mode after shooting ends.
- description of operations similar to those described in the moving image still image fusion moving image 1 mode will be omitted as appropriate.
- the system controller 105 controls the imaging unit 100, the aperture control unit 120, the imaging element control unit 119, and the control unit 113 in accordance with the mode setting content shown in FIG.
- the system controller 105 performs various settings according to instructions from the mode switch 107.
- a weighted pixel addition signal without a superimposed shift is read out.
- the addition reading is performed by a method described later with reference to FIG.
- the weighted addition is realized in the same manner as the method described in the moving image still image fusion moving image 1 mode.
- AE control such as AE gain setting and display control such as display gain setting are performed in the same manner as described in the video still image fusion video 1 mode.
- Movie Still Image Fusion Still Image 1 Mode: This mode is one of the fused moving image modes, in which a fused moving image is shot by pixel shift readout, and a high-resolution still image is acquired from the fused moving image.
- Description of operations similar to those described for the movie still image fusion movie 1 mode will be omitted as appropriate.
- high-resolution still image processing is performed by the high-resolution processing circuit 127. That is, the high resolution processing circuit 127 is turned on (operating state) by the control signal from the system controller 105.
- the image from the A / D conversion unit 104 is temporarily stored in the frame memory 128 and subjected to high resolution processing by the high resolution processing circuit 127.
- the process of estimating a still image is performed by the method described later with reference to FIGS. Alternatively, the process may be performed by other methods such as a known super-resolution process.
- the image from the high resolution processing circuit 127 is processed by the still image signal processing circuit 117 to generate a luminance color difference signal.
- the image data from the still image signal processing circuit 117 is held in the external memory 110.
- the image data held by the external memory 110 is output to the compression / decompression circuit 121 via the system controller 105 and converted into a format such as RAW or JPEG.
- the converted image data is stored again in the external memory 110 via the system controller 105.
- the compressed image data recorded in the external memory 110 is output to the recording medium I / F circuit 126 via the system controller 105 and recorded on the recording medium 112. It is also possible to record on the recording medium 112 without performing compression processing.
- AE control such as AE gain setting and display control such as display gain setting are performed in the same manner as described in the video still image fusion video 1 mode.
- Movie Still Image Fusion Still Image 2 Mode: This mode is one of the fused moving image modes, in which a fused moving image is shot without pixel shift and a high-resolution still image is acquired from the fused moving image.
- Description of operations similar to those described for the movie still image fusion movie 2 mode will be omitted as appropriate.
- high-resolution still image processing is performed by the high-resolution processing circuit 127. That is, the high resolution processing circuit 127 is turned on (operating state) by the control signal from the system controller 105.
- the image from the A / D conversion unit 104 is temporarily stored in the frame memory 128 and subjected to high resolution processing by the high resolution processing circuit 127.
- the process of estimating a still image is performed by the method described later with reference to FIGS.
- the process may be performed by other methods such as a known super-resolution process.
- a pixel value corresponding to the pixel shift is obtained by performing interpolation processing on the 4-pixel addition value captured in each frame.
- Alternatively, the interpolation processing may be omitted, and high resolution processing may be performed by a technique such as edge enhancement.
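A possible shape of the interpolation mentioned above, assuming simple averaging of the four surrounding non-shifted addition values (the averaging scheme is an illustrative assumption; this excerpt does not specify the interpolation method):

```python
def interpolate_shifted_values(a00, a02, a20, a22):
    # Approximate the addition values a pixel-shifted readout would have
    # produced, from four surrounding non-shifted 4-pixel addition values.
    a01 = (a00 + a02) / 2.0              # horizontal half-step position
    a10 = (a00 + a20) / 2.0              # vertical half-step position
    a11 = (a00 + a02 + a20 + a22) / 4.0  # diagonal half-step position
    return a10, a01, a11
```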
- the image from the high resolution processing circuit 127 is processed by the still image signal processing circuit 117 to generate a luminance color difference signal.
- AE control such as AE gain setting and display control such as display gain setting are performed in the same manner as described in the video still image fusion video 1 mode.
- The term "frame" used in the following description refers to, for example, the timing at which one image is captured by the image sensor, or the timing at which one image is processed in image processing.
- one image in the image data is also referred to as a frame as appropriate.
- FIG. 10 shows an explanatory diagram in the case where pixel shift is performed at one pixel pitch in each frame. This read control is performed in the above-described moving image still image fusion moving image 1 mode and moving image still image fusion still image 1 mode.
- The image sensor is a Bayer array color image sensor, and the 4-pixel addition values shown in the following equation (8) are read.
- W1, W2, W3 and W4 are weighting coefficients shown in the above equation (1).
- GRij and GBij (i and j are natural numbers) represent green pixel values
- Rij represents a red pixel value
- Bij represents a blue pixel value.
- grij and gbij represent green 4-pixel addition values,
- rij represents a red 4-pixel addition value
- bij represents a blue 4-pixel addition value.
- gr11 = W1 * GR11 + W2 * GR13 + W3 * GR31 + W4 * GR33
- r12 = W1 * R12 + W2 * R14 + W3 * R32 + W4 * R34
- b21 = W1 * B21 + W2 * B23 + W3 * B41 + W4 * B43
- gb22 = W1 * GB22 + W2 * GB24 + W3 * GB42 + W4 * GB44 (8)
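Equation (8) can be sketched in Python as follows (synthetic data; the helper name is hypothetical, and same-color Bayer neighbors are taken two pixels apart, as in the equation):

```python
import numpy as np

def weighted_4pixel_addition(raw, y, x, weights):
    # Weighted addition of four same-color Bayer samples located at
    # (y, x), (y, x+2), (y+2, x), (y+2, x+2), as in equation (8).
    w1, w2, w3, w4 = weights
    return (w1 * raw[y, x] + w2 * raw[y, x + 2]
            + w3 * raw[y + 2, x] + w4 * raw[y + 2, x + 2])

raw = np.arange(36, dtype=float).reshape(6, 6)  # synthetic 6x6 raw frame
# gr11-style value from the top-left GR site, with weights W1..W4 = 1, 1/2, 1/2, 1/4
gr11 = weighted_4pixel_addition(raw, 0, 0, (1.0, 0.5, 0.5, 0.25))
```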
- FIG. 13 is an explanatory diagram showing the addition read control when pixel shift is not performed. This reading control is performed in the above-described normal moving image mode, moving image still image fusion moving image 2 mode, and moving image still image fusion still image 2 mode.
- r12 = W1 * R12 + W2 * R14 + W3 * R32 + W4 * R34
- r14 = W1 * R16 + W2 * R18 + W3 * R36 + W4 * R38
- r32 = W1 * R52 + W2 * R54 + W3 * R72 + W4 * R74
- r34 = W1 * R56 + W2 * R58 + W3 * R76 + W4 * R78 (13)
- b21 = W1 * B21 + W2 * B23 + W3 * B41 + W4 * B43
- b23 = W1 * B25 + W2 * B27 + W3 * B45 + W4 * B47
- b41 = W1 * B61 + W2 * B63 + W3 * B81 + W4 * B83
- b43 = W1 * B65 + W2 * B67 + W3 * B85 + W4 * B87 (14)
- gb22 = W1 * GB22 + W2 * GB24 + W3 * GB42 + W4 * GB44
- gb24 = W1 * GB26 + W2 * GB28 + W3 * GB46 + W4 * GB48
- gb42 = W1 * GB62 + W2 * GB64 + W3 * GB82 + W4 * GB84
- gb44 = W1 * GB66 + W2 * GB68 + W3 * GB86 + W4 * GB88 (15)
- the light receiving unit (pixel group) used in the following description represents an area on the image sensor including a plurality of pixels to be added and read, and pixel values of a plurality of pixels included in the light receiving unit are weighted and added. Thus, the added pixel value is acquired.
- a direction along one axis is referred to as a horizontal direction
- a direction along the other axis is referred to as a vertical direction.
- the horizontal direction is the horizontal scanning direction in the imaging operation.
- the direction along one of the two orthogonal axes is referred to as a horizontal direction
- the direction along the other axis is referred to as a vertical direction.
- The weighting coefficients for addition readout are c1, c2, c3, and c4.
- c1 = 1.
- The weighting coefficients follow the ratio relationship shown in the following equation (16) (r is a real number greater than 1).
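Equation (16) itself is not reproduced in this excerpt, but the expression (7) gain 4 / (1 + 1/r + 1/r + 1/r²) implies the ratio c1 : c2 : c3 : c4 = 1 : 1/r : 1/r : 1/r². A minimal Python sketch under that assumption:

```python
def addition_weights(r):
    # Assumed ratio rule (inferred from the expression (7) gain; equation (16)
    # itself is not reproduced here): c1 = 1, c2 = c3 = 1/r, c4 = 1/r**2.
    return (1.0, 1.0 / r, 1.0 / r, 1.0 / (r * r))

c1, c2, c3, c4 = addition_weights(2.0)  # (1.0, 0.5, 0.5, 0.25)
```

With r = 2 the weights sum to 2.25, so the expression (7) gain 4 / 2.25 ≈ 1.78 matches the example AE/display gain given elsewhere in the text.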
- FIG. 16A is an explanatory diagram of light reception units.
- v ij is an estimated pixel value estimated from the added pixel value, and is a pixel value corresponding to each pixel of the image sensor.
- the light reception unit is set for every four pixels of v ij, and the 4-pixel addition value a ij is acquired by reading from each light reception unit.
- Adjacent light receiving units have overlapping regions. For example, a00 and a10 overlap in v10 and v11.
- 4-pixel addition values a 00 , a 10 , a 01 , and a 11 are read in frames fn to fn + 3, respectively.
- In the case where pixel shift is not performed, the 4-pixel addition values a00, a20, ... are read, and the 4-pixel addition values a10, a01, and a11 are obtained by interpolation from the surrounding 4-pixel addition values a00, a20, ....
- FIG. 16B illustrates an intermediate pixel value (intermediate estimated pixel value).
- For the intermediate pixel values bij, the resolution is first increased in the horizontal direction, and the estimated pixel values vij are then determined by increasing the resolution of bij in the vertical direction.
- the intermediate pixel value b ij corresponds to v ij and v i (j + 1) .
- Intermediate pixel values bij adjacent to each other in the vertical direction have overlapping regions. For example, b00 and b01 overlap in v01.
- b ij may be obtained by increasing the resolution in the vertical direction
- v ij may be obtained by increasing the resolution in the horizontal direction.
- the weighted pixel addition values are set to a 00 , a 10 , and a 20 in the order of shift.
- a00 = c1 * v00 + c2 * v01 + c3 * v10 + c4 * v11
- a10 = c1 * v10 + c2 * v11 + c3 * v20 + c4 * v21 (17)
- b 00 , b 10 , and b 20 are defined as shown in the following expression (19), and the above expression (17) is substituted.
- a pattern ⁇ a 00 , a 10 ⁇ based on sampling pixel values detected by weighted superimposition shift sampling is compared with a pattern based on intermediate pixel values ⁇ b 00 , b 10 , b 20 ⁇ . Then, an unknown number b 00 that minimizes the error E is derived and set as the intermediate pixel value b 00 .
- the evaluation function Ej shown in the following equation (24) is obtained. Then, the similarity between the pattern ⁇ a 00 , a 10 ⁇ and the intermediate estimated pixel value ⁇ b 00 , b 10 , b 20 ⁇ is evaluated using this evaluation function Ej.
- The processing for estimating vij from bij is carried out in the same manner as the method of estimating bij from aij described above. That is, the relational expression of v00, v01, v02 is obtained using the difference value of b00, b01, with v00 as an unknown. Next, an error evaluation function of {v00, v01, v02} and {b00, b01} is obtained, the v00 that minimizes the evaluation function is found, and the obtained v00 is substituted into the relational expression. Then, v01 and v02 are obtained.
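The b00 search described above can be illustrated with a simplified, unweighted (c1 = c2 = c3 = c4 = 1) one-dimensional sketch. The exact evaluation function Ej of equation (24) is not reproduced in this excerpt, so a least-squares stand-in is used; the function name and error form are assumptions:

```python
import numpy as np

def estimate_b00(a00, a10, step=0.01):
    # With unit weights, a00 = b00 + b10 and a10 = b10 + b20, so a trial
    # b00 determines b10 and b20. b00 is scanned over a grid, and the
    # candidate minimizing a simple pattern-error stand-in is kept.
    best_err, best = None, None
    for b00 in np.arange(0.0, a00 + step, step):
        b10 = a00 - b00
        b20 = a10 - b10
        # Error stand-in: each intermediate value should be close to half
        # of the addition value(s) its pixels contribute to.
        err = ((b00 - a00 / 2.0) ** 2
               + (b10 - (a00 + a10) / 4.0) ** 2
               + (b20 - a10 / 2.0) ** 2)
        if best_err is None or err < best_err:
            best_err, best = err, (b00, b10, b20)
    return best

# For a flat scene (a00 = a10 = 2), the estimate settles near b00 = b10 = b20 = 1.
b00, b10, b20 = estimate_b00(2.0, 2.0)
```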
- the light receiving unit is set for each of the plurality of pixels of the image sensor, and the pixel values of the plurality of pixels included in the light receiving unit are weighted and added to read as an added pixel value (light receiving value).
- the low resolution frame image is obtained.
- the acquired low-resolution frame image is stored, and the pixel value of each pixel included in the light receiving unit is estimated based on the plurality of stored low-resolution frame images.
- a high-resolution frame image having a higher resolution than the low-resolution frame image is output.
- the low-resolution frame image is acquired by reading the added pixel value while sequentially shifting the pixels while superimposing the light receiving units.
- the pixel value of each pixel included in the light reception unit is estimated based on a plurality of added pixel values obtained by sequentially shifting the light reception unit.
- the light receiving unit is set for every four pixels.
- The added pixel values a00, a20, and so on are read by addition, and a low-resolution frame image based on a00, a20, etc. is acquired. Subsequently, a low-resolution frame image based on a10, a30, etc., one based on a11, a31, etc., and one based on a01, a21, etc. are acquired in sequence.
- the light receiving units for acquiring a 00 , a 10 , a 11 , and a 01 are shifted by one pixel and overlapped by two pixels.
- the estimated pixel value v ij is estimated by the high resolution processing circuit 127 (estimation calculation unit).
- The still image signal processing circuit 117 (image output unit) processes vij and outputs a high-resolution image corresponding to the resolution of the image sensor.
- the estimation process can be simplified using the above-described estimation of the intermediate pixel value.
- Since a high-resolution still image can be generated at any timing of the low-resolution moving image, the user can easily obtain a high-resolution still image of the decisive moment.
- Since a low-resolution moving image is captured at the time of shooting, it is possible to capture at a high frame rate and to acquire a high-resolution still image as necessary.
- The light receiving unit is sequentially set to a first position (a00), a second position (a10), and so on. These light receiving units overlap in a region including v10 and v11. Then, as described above with reference to FIG. 17, the difference value δi0 of the added pixel values obtained from these light receiving units is obtained.
- The first intermediate pixel value b00 is the light reception value of the first light receiving region v00, v01 obtained by removing the overlapping region v10, v11 from the light receiving unit a00.
- The second intermediate pixel value b20 is the light reception value of the second light receiving region v20, v21 obtained by removing the overlapping region v10, v11 from the light receiving unit a10. Then, as shown in the above equation (22), a relational expression between b00 and b20 is expressed using the difference value δi0.
- the first and second intermediate pixel values b 00 and b 20 are estimated using the relational expression, and the pixel value of each pixel of the light receiving unit is obtained using the estimated first intermediate pixel value b 00 .
- Successive intermediate pixel values {b00, b10, b20} including the intermediate pixel values b00 and b20 are set as an intermediate pixel value pattern.
- The relational expression between the intermediate pixel values is expressed using the added pixel values a00 and a10.
- Successive added pixel values {a00, a10} including the added pixel values a00 and a10 are set as an added pixel value pattern.
- The similarity between the intermediate pixel value pattern and the added pixel value pattern is evaluated, and based on the evaluation result, the intermediate pixel values b00, b10, and b20 are determined so that the similarity becomes highest.
- the intermediate pixel value can be estimated based on a plurality of added pixel values acquired by pixel shifting while superimposing the light receiving units.
- An evaluation function Ej is obtained that represents the error between the intermediate pixel value pattern {b00, b10, b20}, expressed by the relational expression between the intermediate pixel values, and the added pixel value pattern {a00, a10}.
- The intermediate pixel values b00, b10, and b20 are then determined so that the value of the evaluation function Ej is minimized.
- the value of the intermediate pixel value can be estimated by expressing the error by the evaluation function and obtaining the intermediate pixel value corresponding to the minimum value of the evaluation function.
- the initial value of the intermediate pixel estimation can be set with a simple process by obtaining the unknown using the least square method.
- In the present embodiment, each part of the exposure control information output unit 140 and the display control unit 150 is configured by hardware.
- However, the present invention is not limited to this; the processing of each unit may instead be realized as software, with a CPU executing a program.
- the CPU executes, for example, the processing of the flowcharts shown in FIGS.
- each unit constituting the still image processing unit 141 is configured by hardware.
- the present invention is not limited to this.
- For example, a program that realizes the processing performed by each unit of the still image processing unit 141 may be executed by the CPU of a known computer system such as a personal computer, so that the processing performed by each unit of the still image processing unit 141 is implemented as software.
- imaging unit 101 imaging lens, 102 aperture, 103 imaging device, 104 A / D converter, 105 system controller, 106 User I / F section, 107 Mode switch, 108 Movie switch, 109 still image switch, 110 external memory, 111 display device, 112 recording medium, 113 control unit, 114 electronic image stabilization circuit, 115 line memory, 116 video signal processing circuit, 117 still image signal processing circuit, 118 imaging control unit, 119 Image sensor control unit, 120 Aperture control unit, 121 Compression / decompression circuit, 122 AE processing circuit, 123 AE gain setting circuit, 124 display device control circuit, 125 display gain setting circuit, 126 recording medium I / F circuit, 127 high resolution processing circuit, 128 frame memory, 130 coefficient setting unit, 140 exposure control information output unit, 141 still image processing unit, 142 video processing unit, 150 display control unit, 160 readout control unit, 161 exposure control unit, a ij pixel addition value, b ij intermediate pixel value, ⁇ i 0 difference value, E
Abstract
Description
The present invention relates to an imaging device and the like.
Some recent digital cameras and video cameras can switch between a still image shooting mode and a moving image shooting mode. For example, some allow the user to shoot a still image with a higher resolution than the moving image by operating a button during moving image shooting.
However, with this method, moving image shooting is interrupted when a high-resolution still image is shot. Alternatively, if the moving image is shot at a resolution equivalent to a still image so as not to interrupt moving image shooting, the frame rate of the moving image drops. Thus, there is the problem of achieving both high-resolution still image shooting and high-frame-rate moving image shooting.
To solve this problem, the present inventor considers generating a high-resolution still image from a low-resolution moving image by using an addition readout technique. Specifically, at the time of moving image shooting, the pixel values of a plurality of pixels are weighted, added, and read out from the image sensor, and a high-resolution image is restored from the pixel values obtained by the weighted addition.
However, this technique has the problem that the range of the pixel values read out by addition differs when the weighting coefficients differ. For example, in automatic exposure control, different pixel values are read out for the same exposure amount depending on the weighting coefficients, so different control is required for each set of weighting coefficients. This complicates automatic exposure control.
Note that Patent Document 1 discloses a technique in which a movie is captured while the optical system is mechanically pixel-shifted and a high-definition image is acquired from that movie, and Patent Document 2 discloses a technique in which exposure control is performed according to a live view display gain.
According to some aspects of the present invention, an imaging device or the like that enables simple exposure control can be provided.
One aspect of the present invention relates to an imaging device including: an imaging element that captures a subject image; a readout control unit that performs weighted addition of pixel values of a plurality of pixels of the imaging element and reads out the result as an added pixel value; a coefficient setting unit that sets weighting coefficients used in the weighted addition; and an exposure control information output unit that outputs, based on the weighting coefficients, exposure control information for performing exposure control of an imaging unit.
According to this aspect of the invention, weighting coefficients are set, an added pixel value obtained by weighted addition using those coefficients is read out, and exposure control information is output based on the weighting coefficients. This enables simple exposure control and the like.
In one aspect of the invention, the coefficient setting unit may set first weighting coefficients in a first imaging mode and second weighting coefficients in a second imaging mode, and the exposure control information output unit may obtain the ratio of the sum of the first weighting coefficients to the sum of the second weighting coefficients as a weighting coefficient ratio, and output the exposure control information using a photometric evaluation value based on the weighting coefficient ratio.
In this way, the exposure control information is output using a photometric evaluation value based on the weighting coefficient ratio between the first and second imaging modes, so that exposure control information for controlling the exposure of the imaging unit can be output.
In one aspect of the invention, the coefficient setting unit may set, in the first imaging mode, coefficients of the same value for each pixel subjected to the weighted addition as the first weighting coefficients, and set, in the second imaging mode, second weighting coefficients that differ from the first weighting coefficients. The exposure control information output unit may, in the first imaging mode, obtain a photometric evaluation value from the added pixel value and output the exposure control information using the obtained photometric evaluation value, and may, in the second imaging mode, obtain the photometric evaluation value from a pixel value obtained by multiplying the added pixel value by the weighting coefficient ratio and output the exposure control information using the obtained photometric evaluation value.
In this way, in the first imaging mode the photometric evaluation value is obtained from an added pixel value that is not multiplied by the weighting coefficient ratio, while in the second imaging mode it is obtained from a pixel value produced by multiplying the added pixel value by the weighting coefficient ratio. Exposure control can therefore be shared between the first and second imaging modes, which simplifies it.
One aspect of the invention may further include a display control unit that adjusts the luminance of a display image based on the weighting coefficients and performs control to display the adjusted display image.
In this way, by adjusting the luminance of the display image based on the weighting coefficients, display control with appropriate brightness that does not depend on the weighting coefficients becomes possible.
In one aspect of the invention, the coefficient setting unit may set first weighting coefficients in the first imaging mode and second weighting coefficients in the second imaging mode, and the display control unit may obtain the ratio of the sum of the first weighting coefficients to the sum of the second weighting coefficients as a weighting coefficient ratio and adjust the luminance of the display image based on the weighting coefficient ratio.
In this way, the luminance of the display image can be adjusted based on the weighting coefficient ratio between the first and second imaging modes.
In one aspect of the invention, the coefficient setting unit may set, in the first imaging mode, coefficients of the same value for each pixel subjected to the weighted addition as the first weighting coefficients, and set, in the second imaging mode, second weighting coefficients that differ from the first weighting coefficients. The display control unit may, in the first imaging mode, perform control to display the display image based on the added pixel value obtained by weighted addition using the first weighting coefficients, and may, in the second imaging mode, multiply the added pixel value obtained by weighted addition using the second weighting coefficients by the weighting coefficient ratio and perform control to display the display image based on the multiplied added pixel value.
In this way, in the first imaging mode the display image is displayed based on an added pixel value not multiplied by the weighting coefficient ratio, while in the second imaging mode it is displayed based on an added pixel value multiplied by the weighting coefficient ratio. The display image can therefore be adjusted to the same brightness in the first and second imaging modes.
In one aspect of the invention, the imaging device may further include: a storage unit that stores images based on the added pixel values as low-resolution frame images; an estimation calculation unit that estimates the pixel value of each pixel included in a light-receiving unit based on a plurality of low-resolution frame images stored in the storage unit; and an image output unit that outputs, based on the pixel values estimated by the estimation calculation unit, a high-resolution frame image having a higher resolution than the low-resolution frame images. The readout control unit may set a light-receiving unit, which is the unit in which an added pixel value is acquired, for each group of pixels of the imaging element, perform weighted addition of the pixel values of the pixels included in the light-receiving unit, and read out the added pixel values while sequentially pixel-shifting the light-receiving unit so that successive positions overlap. The estimation calculation unit may then estimate the pixel value of each pixel included in the light-receiving unit based on the plurality of added pixel values obtained by sequentially pixel-shifting the light-receiving unit.
In this way, added pixel values are acquired while the light-receiving unit is sequentially pixel-shifted with overlap, and a low-resolution frame image based on those added pixel values is acquired. Pixel values are then estimated from a plurality of low-resolution frame images, and a high-resolution frame image is output based on the estimated pixel values. This makes it possible to obtain a high-resolution still image from a movie with simple processing.
In one aspect of the invention, the light-receiving unit may be set by the pixel shift sequentially to a first position and to a second position following the first position. When the light-receiving unit at the first position and the light-receiving unit at the second position overlap, the estimation calculation unit may obtain the difference between the added pixel values at the first and second positions; express, using that difference, a relational expression between a first intermediate pixel value, which is the light-reception value of a first light-receiving region obtained by excluding the overlapping region from the light-receiving unit at the first position, and a second intermediate pixel value, which is the light-reception value of a second light-receiving region obtained by excluding the overlapping region from the light-receiving unit at the second position; estimate the first and second intermediate pixel values using the relational expression; and obtain the pixel value of each pixel included in the light-receiving unit using the estimated first intermediate pixel value.
In this way, intermediate pixel values can be estimated from the added pixel values read out while the light-receiving unit is sequentially pixel-shifted with overlap, and the final estimated pixel values can be obtained from the estimated intermediate pixel values. This simplifies the pixel value estimation for the high-resolution frame image.
In one aspect of the invention, when successive intermediate pixel values including the first and second intermediate pixel values are taken as an intermediate pixel value pattern, the estimation calculation unit may express the relational expressions between the intermediate pixel values included in the intermediate pixel value pattern using the added pixel values at the first and second positions; when successive added pixel values including the added pixel values at the first and second positions are taken as an added pixel value pattern, compare the intermediate pixel value pattern expressed by those relational expressions with the added pixel value pattern and evaluate their similarity; and, based on the similarity evaluation result, determine the intermediate pixel values included in the intermediate pixel value pattern so that the similarity becomes highest.
In this way, the intermediate pixel values can be estimated based on the plurality of added pixel values acquired by pixel-shifting the light-receiving unit with overlap.
In one aspect of the invention, the estimation calculation unit may obtain an evaluation function representing the error between the added pixel value pattern and the intermediate pixel value pattern expressed by the relational expressions between the intermediate pixel values, and determine the intermediate pixel values included in the intermediate pixel value pattern so that the value of the evaluation function is minimized.
In this way, by determining the intermediate pixel values so that the value of the evaluation function is minimized, they can be determined so that the similarity between the intermediate pixel value pattern and the added pixel value pattern becomes highest.
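As a drastically simplified, hypothetical illustration of this estimation scheme, consider a 1-D case in which each added value sums two adjacent intermediate values, a[k] = b[k] + b[k+1], so that fixing b[0] determines the whole pattern recursively. A candidate b[0] is then chosen by minimizing an evaluation function; the function used below (squared deviation of each intermediate value from half the corresponding added value) is one plausible error measure chosen for illustration, not necessarily the function used by the invention, and all names are illustrative:

```python
# 1-D toy: each added value sums two adjacent intermediate values,
# a[k] = b[k] + b[k+1], so b[k+1] = a[k] - b[k] once b[0] is fixed.
def estimate_intermediate(added, candidates):
    """Pick the intermediate pattern whose evaluation-function value is smallest."""
    best, best_err = None, float("inf")
    for b0 in candidates:
        b = [b0]
        for a in added:
            b.append(a - b[-1])                 # relational expression
        # Evaluation function: squared deviation from half the added value
        # (one plausible error measure, chosen here for illustration).
        err = sum((a / 2 - bk) ** 2 for a, bk in zip(added, b))
        if err < best_err:
            best, best_err = b, err
    return best

pattern = estimate_intermediate([40, 50, 60], candidates=range(0, 51))
```

With the added values [40, 50, 60], the search returns the pattern [18, 22, 28, 32], which is fully consistent with every added value (18+22 = 40, 22+28 = 50, 28+32 = 60).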
Preferred embodiments of the present invention will now be described in detail. Note that the embodiments described below do not unduly limit the content of the present invention set forth in the claims, and not all of the configurations described in the embodiments are necessarily indispensable as means for solving the problem addressed by the invention.
1. Technique of the Present Embodiment
First, the problem of capturing a still image during movie recording is described, and then the technique by which the present embodiment acquires a high-resolution still image from a low-resolution movie is described.
FIG. 1 shows a comparative example for the present embodiment. In this imaging device, when the movie switch 108 is turned on, movie recording starts: the imaging unit 100 captures images, and the movie signal processing circuit 116 processes them to obtain movie data. When the still image switch 109 is turned on during movie recording, movie recording is temporarily stopped, the imaging unit 100 captures an image, and the still image signal processing circuit 117 processes it to obtain still image data.
For example, if a high-pixel-count 12-megapixel sensor could be driven at a high speed of 60 fps (frames per second), a 12-megapixel movie could be captured and any one of its frames acquired as a still image. In that case, a high-resolution still image could be acquired without interrupting movie recording (without losing frames). However, no sensor capable of such driving currently exists. Moreover, recording a 12-megapixel movie increases the required storage capacity and consequently reduces the available recording time. Thus there is a problem that it is difficult to achieve both a high-frame-rate movie and high-resolution still images.
To address this problem, the above-mentioned Patent Document 1 discloses a technique in which pixel shifting is performed by shift means that displaces the position at which the optical image is incident on the imaging element, using a shake control signal from an optical shake control circuit and a pixel-shift control signal from a pixel-shift control circuit, and a high-resolution still image is obtained from the pixel-shifted images.
However, because this technique performs the pixel shift mechanically or optically, the shift amount must be controlled in the X-Y directions. The control is therefore difficult, and an effect on the movie readout time is also conceivable, so the technique is not practical. In addition, the control may become complicated in order to reconcile camera-shake control with pixel-shift control. Furthermore, with this technique the only opportunity to obtain a high-resolution still image during movie recording is when the shutter is pressed, which limits the photo opportunities.
Therefore, in the present embodiment, no mechanical shift is performed; instead, the pixel values of four pixels are added and read out, and a movie is captured while the position of those four pixels is pixel-shifted in each frame. A high-resolution still image is then estimated afterwards from the movie obtained with this pixel shift, which enables high-frame-rate movie recording and acquisition of a high-resolution still image at an arbitrary timing. The still image is estimated by, for example, the technique described later with reference to FIGS. 15 to 19.
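To make the acquisition scheme concrete, the following toy sketch computes the weighted added values of a 2×2 receiving unit at four shift positions, one per frame, over a small 4×4 image. All names, the image data, and the shift order are illustrative and not the patent's actual sensor layout; the weights follow the fusion-mode example, while plain addition would use all ones:

```python
def addition_readout(img, weights, dx, dy):
    """Weighted 2x2 sums of `img` starting at offset (dx, dy), stride 2."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(dy, h - 1, 2):
        row = []
        for x in range(dx, w - 1, 2):
            s = sum(weights[j][i] * img[y + j][x + i]
                    for j in range(2) for i in range(2))
            row.append(s)
        out.append(row)
    return out

img = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12],
       [13, 14, 15, 16]]
shifts = [(0, 0), (1, 0), (1, 1), (0, 1)]  # one shift position per frame
frames = [addition_readout(img, [[1, 0.5], [0.5, 0.25]], dx, dy)
          for dx, dy in shifts]
```

Because successive positions overlap by one pixel, the four low-resolution frames together sample every sub-pixel phase of the receiving unit.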
In the present embodiment, weighted addition is also performed when the pixel values are read out, which further improves the reproducibility of high-frequency components. However, when different weighting coefficients are used depending on the mode, the range (signal level) of the pixel values read out by addition differs between modes, which affects automatic exposure control and the brightness of the live view display.
Next, this problem caused by the weighted addition is described concretely, and then the exposure control and display control techniques performed by the present embodiment are described.
FIGS. 2(A) and 2(B) schematically show the weighted addition technique performed by the present embodiment. As shown in FIG. 2(A), in the normal movie mode the same weight is applied to all four pixels, and the four-pixel added value a = 1·v1 + 1·v2 + 1·v3 + 1·v4 is acquired. In this case, the sum of the weighting coefficients (first weighting coefficients) is 1 + 1 + 1 + 1 = 4. In the fusion movie mode (for example, the movie/still fusion movie 1 mode shown in FIG. 9), on the other hand, the four pixels are not all weighted equally; for example, the four-pixel added value a = 1·v1 + 1/2·v2 + 1/2·v3 + 1/4·v4 is acquired. In this case, the sum of the weighting coefficients (second weighting coefficients) is 1 + 1/2 + 1/2 + 1/4 = 2.25.
As shown in FIG. 3(A), suppose for example that each of the pixel values v1 to v4 has a range of 256 levels. Then, as indicated by A1, the range of the four-pixel added value a in the normal movie mode is 256 × 4 = 1024 levels, whereas in the fusion movie mode it is 256 × 2.25 = 576 levels. Thus, when the weighting coefficients differ, the range of the four-pixel added value differs.
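The range comparison can be written out as a one-line check (a minimal sketch; the names are illustrative):

```python
def added_value_range(weights, pixel_levels=256):
    """Range of the weighted four-pixel added value when each pixel
    spans `pixel_levels` levels (the FIG. 3(A) example)."""
    return pixel_levels * sum(weights)

normal = added_value_range([1, 1, 1, 1])        # normal movie mode: 1024 levels
fusion = added_value_range([1, 1/2, 1/2, 1/4])  # fusion movie mode: 576 levels
```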
FIG. 3(B) shows an example of a program diagram for exposure control. In this example, for simplicity, the photometric evaluation value is taken to be the four-pixel added value, and an exposure-time program diagram is used as the example. As shown in FIG. 3(B), when the four-pixel added value is 1024, the exposure time is controlled to T1, and when it is 576, the exposure time is controlled to T2. Thus, if the four-pixel added value of each mode is used as it is and exposure is controlled with the same program diagram, different exposure times result even for the same exposure amount. Conversely, to obtain the same exposure time, a separate program diagram would be needed for each mode, which complicates the control.
Likewise, in live view display of the movie, if the four-pixel added value of each mode is displayed as it is, the display brightness differs because of the difference in the range of the four-pixel added value. That is, in the fusion movie mode the display becomes darker than in the normal movie mode by a factor of 4/2.25 = 1.78, or about 5 dB.
Therefore, in the present embodiment, as shown in FIG. 4, the four-pixel added value in the fusion movie mode is gained up by a factor of 1.78, and exposure control is performed using the gained-up four-pixel added value. In the fusion movie mode, the movie based on the gained-up four-pixel added values is also displayed in live view. Adaptively increasing the gain according to the weighting coefficients in this way simplifies exposure control and enables the movie to be displayed at an appropriate brightness. Note that exposure control may be performed not only by controlling the exposure time but also by controlling the aperture value.
Note that the above-mentioned Patent Document 2 discloses a technique that allows the photographer to display the live view image at a desired brightness while also making it possible to capture an image in a more favorable exposure state. However, that technique says nothing about a configuration for capturing both the image displayed in live view and an image in a more favorable exposure state; that is, it does not mention performing exposure control or display control according to weighting coefficients.
2. Imaging Device
FIG. 5 shows a configuration example of the imaging device of the present embodiment, which gains up the four-pixel added value according to the weighting coefficients and performs exposure control and display control. This imaging device includes an imaging unit 100, an A/D conversion unit 104, a user I/F unit 106, a control unit 113, and an imaging control unit 118.
In the following, the processing that enables a still image to be acquired simultaneously with movie recording is called "movie/still fusion", the mode in which this processing is performed is called the "fusion movie mode", and a movie suited to this processing is called a "fusion movie". A fusion movie is an image from which a high-resolution still image can be generated; for example, it is acquired by the pixel shift described later with reference to FIG. 10 and elsewhere, and a still image can be obtained from it by the estimation technique described later with reference to FIGS. 15 to 19 and elsewhere.
The imaging control unit 118 includes an aperture control unit 120 and an imaging element control unit 119, and drives and controls the imaging unit 100. The imaging element control unit 119 controls the imaging element 103 and includes a readout control unit 160 that controls the readout of pixel values and an exposure control unit 161 that controls the exposure time.
The imaging unit 100 (optical system) is an optical system for capturing images, and includes an imaging lens 101, an aperture 102, an imaging element 103 such as a CMOS sensor, and a shutter (not shown). In response to a command from the system controller 105, the aperture control unit 120 drives the aperture 102 and the shutter, thereby operating them.
The A/D conversion unit 104 converts the analog signal obtained by imaging with the imaging unit 100 into digital data.
The system controller 105 controls each part of the imaging device (system). The system controller 105 includes a coefficient setting unit 130 that sets the weighting coefficients used in the weighted addition.
The user I/F unit 106 (switch unit) includes a mode switch 107 with which the user sets the imaging mode, a movie switch 108 for instructing the start and stop of movie recording, and a still image switch 109 for instructing still image recording. The user I/F unit 106 is configured by, for example, a touch panel, operation buttons, and the like.
The external memory 110 records captured movie data and still image data. The display device 111 is, for example, a liquid crystal display device, and performs live view display and display of played-back movies and still images. The recording medium 112 is a medium for recording image data. The display device 111 and the recording medium 112 may be built into the imaging device, or may be external devices that can be attached and detached via USB or the like.
The control unit 113 (signal processing system) includes the system controller 105, a compression/expansion circuit 121 (compression/expansion unit), a recording medium I/F circuit 126 (recording medium I/F unit), an exposure control information output unit 140 (AE processing system), a still image processing unit 141 (signal processing system), a movie processing unit 142 (signal processing system), and a display control unit 150 (display processing system). The control unit 113 processes captured images and controls each component.
The movie processing unit 142 processes the movie data from the A/D conversion unit 104, and includes an electronic image stabilization circuit 114, a line memory 115, and a movie signal processing circuit 116.
The electronic image stabilization circuit 114 is a stabilization circuit that electronically corrects camera shake by image processing. The line memory 115 holds one line of image data so that the electronic image stabilization circuit 114 can perform sub-pixel camera-shake correction. The movie signal processing circuit 116 performs processing such as luminance signal conversion and color difference signal conversion on the image data from the electronic image stabilization circuit 114.
The still image processing unit 141 processes the still image data from the A/D conversion unit 104, and includes a still image signal processing circuit 117, a high-resolution processing circuit 127 (estimation calculation unit), and a frame memory 128 (storage unit).
The high-resolution processing circuit 127 performs resolution enhancement processing that increases the resolution of the movie to estimate a still image. The frame memory 128 holds frame images for the resolution enhancement (estimation) processing performed by the high-resolution processing circuit 127. The still image signal processing circuit 117 performs image processing on the resolution-enhanced still image and on still images captured in the normal still image mode; for example, it performs processing such as luminance signal conversion and color difference signal conversion on the still image data.
The exposure control information output unit 140 outputs AE control information (exposure control information) with which the imaging control unit 118 performs AE control (exposure control; AE: Auto Exposure). The exposure control information output unit 140 includes an AE processing circuit 122 and an AE gain setting circuit 123.

The AE processing circuit 122 obtains an AE evaluation value from the digital image data from the A/D conversion unit 104. The system controller 105 controls the imaging control unit 118 based on this AE evaluation value: the aperture control unit 120 sets the aperture 102, and the imaging element control unit 119 sets the accumulation time of the imaging element 103. In this way, the exposure is controlled so as to be appropriate. The AE gain setting circuit 123 sets an AE gain for adjusting the difference between the ranges of the image signals used for AE processing in the normal movie mode and the fusion movie mode. The AE processing circuit 122 obtains the AE evaluation value from the digital image data after it has been multiplied by the AE gain.
The display control unit 150 performs control to display a display image on the display device 111, and includes a display device control circuit 124 and a display gain setting circuit 125. In the following, the operation of the display control unit 150 when it is used as a monitor during recording is described as an example.
The digital image data output from the A/D conversion unit 104 is input to the display device control circuit 124 via the movie signal processing circuit 116 and the system controller 105. The display device control circuit 124 performs control to send a display image of an appropriate signal level to the display device 111. The display gain setting circuit 125 sets a display gain for adjusting the difference between the ranges of the display image signals in the normal movie mode and the fusion movie mode. The display device control circuit 124 performs control to display the digital image data after it has been multiplied by the display gain.
The compression/expansion circuit 121 compresses the still image data generated by the still image signal processing circuit 117, compresses the movie data generated by the movie signal processing circuit 116, and expands compressed image data. For example, the compression/expansion circuit 121 compresses image data in the JPEG format and movie data in the MPEG format.
The recording medium I/F circuit 126 controls reading from and writing to the recording medium 112. The system controller 105 performs read and write access to the external memory 110.
3. AE Control and Display Control
Next, the AE control and display control performed by the above imaging device are described in detail. First, the problems in AE processing and display processing caused by the weighting coefficients, described with reference to FIG. 3(B) and elsewhere, are explained with a concrete example; then the flowchart of the processing performed by the present embodiment is described. In the following, a calculation example is given for the case where the pixel value GR11 is calculated from the four-pixel added value gr11, described later with reference to FIG. 10 and elsewhere, and the AE gain is set using GR11.
When the weighting coefficients are W1, W2, W3, and W4, they are given by equation (1) below, and the four-pixel added value gr11 is given by equation (2). Here, r is a real number with r ≥ 1, and GR11, GR13, GR31, and GR33 are the pixel values that are read out by addition.

W1 = 1, W2 = 1/r, W3 = 1/r, W4 = 1/r^2   (1)

W*gr11 = W1*GR11 + W2*GR13 + W3*GR31 + W4*GR33   (2)
For example, when r = 2, equation (3) below follows from equation (1). Suppose further that the value in equation (4) below is obtained as the four-pixel added value.

W1 = 1, W2 = 1/2, W3 = 1/2, W4 = 1/4   (3)

W*gr11 = 576   (4)
Now suppose that, even though weighted addition is actually being performed, AE control is carried out without taking the weighting coefficients into account, and the pixel value GR11 is gained up to the saturation value. Let the saturation value of the pixel value GR11 be 1024, and assume that light of the same brightness falls on the entire image, so that GR11 = GR13 = GR31 = GR33 holds.
In this case, the weighting coefficients are treated as W1 = W2 = W3 = W4 = 1, so equation (4) gives GR11 = 576/4 = 144. To raise the pixel value GR11 to 1024 using this average value, it would have to be multiplied by 1024/144 = 7.11. However, since the actual pixel value is GR11 = 256 as shown in equation (5) below, multiplying by 7.11 gives 256 × 7.11 ≈ 1820, a pixel value exceeding the saturation value.

GR11 = 576/(1 + 1/2 + 1/2 + 1/4) = 256   (5)
In contrast, in the present embodiment AE control is performed taking the weighting coefficients into account, and 4/(1 + 1/2 + 1/2 + 1/4) = 1.78 is set as the AE gain. The four-pixel added value gr11 is therefore gained up to 576 × 1.78 = 1024, giving GR11 = 1024/4 = 256. To raise the pixel value GR11 to 1024 using this average value, a factor of only 1024/256 = 4 is needed, so AE control can be performed within the saturation value.
On the other hand, when weighted addition is not performed, that is, when r = 1 in equation (1), W1 = W2 = W3 = W4 = 1 to begin with, so W*gr11 = 1 × 256 × 4 = 1024. In this case, the AE gain is set to 1.
Thus, in the present embodiment, the AE gain used when weighted addition is performed is set to a value different from the AE gain used when it is not. The same problem with display brightness arises in display control, so the display gain is likewise set according to the weighting coefficients. Although the above describes obtaining GR11 from gr11, the value of each pixel can be obtained in the same way from the other four-pixel added values.
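The worked example above can be sketched as follows (illustrative names, not from the patent): the AE gain is the ratio of the number of added pixels to the sum of the mode's weighting coefficients, and multiplying the added value by it restores the unweighted range before the per-pixel average is taken:

```python
def ae_gain(weights):
    """AE gain: (number of added pixels) / (sum of the mode's weights)."""
    return len(weights) / sum(weights)

weights = [1, 1/2, 1/2, 1/4]   # r = 2 in equation (1)
gr11 = 576                     # weighted four-pixel added value, equation (4)
gain = ae_gain(weights)        # 4 / 2.25 = 1.78...
corrected = gr11 * gain        # restored to the unweighted range: 1024
per_pixel = corrected / 4      # GR11 = 256, within the 1024 saturation value
```

For unweighted addition the same function returns 1, matching the r = 1 case above.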
Next, the processing of the present embodiment, which performs AE control and display control according to the weighting coefficients in this way, is described in detail. FIG. 6 shows a flowchart of the processing performed by the present embodiment.
As shown in FIG. 6, when the processing starts, a mode is selected (step S1). When the normal movie mode is selected, the aperture is controlled (step S2) and the exposure time and other parameters of the imaging element are controlled (step S3). Pixel readout control is then set to no weighting and no pixel shift (step S4), and four-pixel addition readout is performed (step S5). Next, AE processing is performed without multiplying the four-pixel added value by an AE gain (step S6), an AE evaluation value is obtained (step S7), and the aperture value and the exposure time are set based on the AE evaluation value (steps S2, S3). Live view display is controlled without applying a display gain (step S8), and the live view image is displayed (step S9). Control is also performed to record the captured movie (step S10), which is recorded on the recording medium (step S11).
On the other hand, when the fusion movie mode is selected in step S1, the aperture is controlled (step S12) and the exposure time and other parameters of the image sensor are controlled (step S13). Next, pixel readout control is set to weighted readout (step S14), and the weighting coefficients are set (step S15). Then 4-pixel addition readout is performed (step S16), the sum of the weighting coefficients is calculated (step S17), and the AE gain and display gain are set (step S18). For example, taking r = 3 in the fusion movie mode and r = 1 in the normal movie mode in the above equation (1), the AE gain and display gain are set to 2.25 by the following equation (6).
(1 + 1 + 1 + 1) / (1 + 1/3 + 1/3 + 1/9) = 2.25   (6)
Next, the 4-pixel addition values are multiplied by the AE gain (step S19), AE processing is performed using the resulting image (step S20), an AE evaluation value is obtained (step S21), and the aperture value and exposure time are set based on that AE evaluation value (steps S12, S13). The 4-pixel addition values are also multiplied by the display gain (step S22), display control of the resulting live view image is performed (step S23), and the live view image is displayed (step S24). In addition, control is performed to record the captured movie (step S25), and the movie is recorded on the recording medium (step S26). No gain is applied to the recorded movie.
Here, the AE processing is, for example, processing that sets the area of the captured image over which photometric evaluation is performed, or processing that sets the characteristics of the aperture value setting and of the exposure time setting with respect to the exposure amount (the program diagram). The AE evaluation value is a photometric evaluation value obtained from the pixel values within the area set in the captured image.
4. Modification
FIG. 7 shows a flowchart of a modification of the processing performed by this embodiment. In this modification, the processing that does not depend on the mode is shared. As shown in FIG. 7, when this process starts, the aperture is controlled (step S50), the exposure time and other parameters of the image sensor are controlled (step S51), and a mode is selected (step S52).
When the normal movie mode is selected, pixel readout control is set (step S53) and 4-pixel addition readout is performed (step S54). Next, AE control (steps S61, S62, S50, S51), display control (steps S64, S65), and recording control (steps S66, S67) are performed.
On the other hand, when the fusion movie mode is selected in step S52, pixel readout control is set (step S55) and the weighting coefficients are set (step S56). Then 4-pixel addition readout is performed (step S57), the sum of the weighting coefficients is calculated (step S58), and the AE gain and display gain are set (step S59). Next, the 4-pixel addition values are multiplied by the AE gain (step S60), and AE control is performed using the resulting image (steps S61, S62, S50, S51). The 4-pixel addition values are also multiplied by the display gain (step S63), and display control of the resulting live view image is performed (steps S64, S65). Control for recording the movie (steps S66, S67) is also performed.
5. Second Modification
FIG. 8 shows a flowchart of a second modification of the processing performed by this embodiment. In this modification, the weighting coefficient setting processing is shared. As shown in FIG. 8, when this process starts, the aperture is controlled (step S100), the exposure time and other parameters of the image sensor are controlled (step S101), readout control is set to weighted addition, and pixel shift is enabled or disabled according to the mode (step S102).
Next, the weighting coefficients are set to the values shown in the above equation (1) (step S103): r = 1 in the normal movie mode, and r > 1 in the fusion movie mode. The pixel values are then weighted (step S104) and 4-pixel addition readout is performed (step S105). Next, the sum of the weighting coefficients is calculated (step S106), and the AE gain and display gain are set (step S107). In the normal movie mode the gain is 1; in the fusion movie mode the gain is the value given by the following equation (7).
4 / (1 + 1/r + 1/r + 1/r^2)   (7)
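As a numeric check of equation (7), the gain can be computed directly from the weighting coefficients of equation (1). The short sketch below (plain Python; the function names are illustrative, not part of the embodiment) reproduces the values used in this description: gain 1 for r = 1, 2.25 for r = 3 (equation (6)), and 4/2.25 = 16/9, about 1.78, for r = 2.

```python
def weighting_coefficients(r):
    # Equation (1)/(16): W1 = 1, W2 = W3 = 1/r, W4 = 1/r^2.
    return (1.0, 1.0 / r, 1.0 / r, 1.0 / (r * r))

def ae_display_gain(r):
    # Equation (7): the sum of the r = 1 coefficients (= 4) divided by
    # the sum of the fusion-movie-mode coefficients.
    return 4.0 / sum(weighting_coefficients(r))

print(ae_display_gain(1))            # 1.0 in the normal movie mode
print(round(ae_display_gain(3), 2))  # 2.25, as in equation (6)
print(round(ae_display_gain(2), 2))  # 1.78
```

Applying this gain to the weighted 4-pixel addition values aligns their range with that of the unweighted addition values, which is exactly what steps S107 to S111 rely on.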
Next, the 4-pixel addition values are multiplied by the AE gain (step S108), and AE control is performed (steps S109, S110, S100, S101). The 4-pixel addition values are also multiplied by the display gain (step S111), and display control is performed (steps S112, S113). Control for recording the movie (steps S114, S115) is also performed.
As described above, when different weighting coefficients are used depending on the mode, the range of the pixel values obtained by addition readout differs between modes, which affects automatic exposure control and the brightness of the live view display.
In this respect, according to this embodiment, as shown in FIG. 5, the imaging device includes: the image sensor 103, which captures a subject image; the readout control unit 160, which weights and adds the pixel values of a plurality of pixels of the image sensor 103 and reads out the result as an added pixel value; the coefficient setting unit 130, which sets the weighting coefficients used in the weighted addition; and the exposure control information output unit 140, which outputs exposure control information for performing exposure control of the imaging unit 100 based on the weighting coefficients.
This enables simple exposure control and the like. Specifically, by performing exposure control of the imaging unit 100 based on the weighting coefficients, the exposure control can be shared rather than implemented separately for each shooting mode.
Here, the exposure control information is, for example, a photometric evaluation value, or information representing the aperture value and exposure time obtained from the photometric evaluation value. Exposure control is performed by setting the aperture and the exposure time based on this information.
Note that the imaging unit 100 may be constructed integrally with the imaging device, as in a compact camera. Alternatively, as in an interchangeable-lens camera, the image sensor 103 may be integrated with the imaging device (body) while an interchangeable lens including the aperture 102 and the optical system 101 is constructed separately.
In this embodiment, as described above for step S103 in FIG. 8, the coefficient setting unit 130 sets first weighting coefficients in a first imaging mode (e.g., the normal movie mode) and second weighting coefficients in a second imaging mode (e.g., the fusion movie mode). Then, as shown in the above equation (7), the exposure control information output unit 140 obtains the ratio of the sum of the first weighting coefficients to the sum of the second weighting coefficients as the weighting coefficient ratio, and outputs the exposure control information using a photometric evaluation value based on that weighting coefficient ratio.
In this way, exposure control information for performing exposure control of the imaging unit 100 can be output, because the exposure control information is output using a photometric evaluation value based on the weighting coefficient ratio.
More specifically, in this embodiment, the coefficient setting unit 130 sets, in the first imaging mode, coefficients of the same value, W1 = W2 = W3 = W4 = 1, for the pixels to be weighted and added, and sets, in the second mode, coefficients that are not all the same, W1 = 1, W2 = W3 = 1/r, W4 = 1/r^2 (r > 1). Then, as described above for steps S108 to S110 in FIG. 8, the exposure control information output unit 140 obtains the photometric evaluation value from the added pixel value (gr11 and the like, described later with reference to FIG. 10) in the first imaging mode, and from the pixel value obtained by multiplying the added pixel value by the weighting coefficient ratio in the second imaging mode.
In this way, in the first imaging mode the photometric evaluation value is obtained from the added pixel value without multiplication by the weighting coefficient ratio, and in the second imaging mode it is obtained from the pixel value produced by multiplying the added pixel value by the weighting coefficient ratio. As a result, the AE processing and the AE evaluation value calculation (steps S109, S110) can be shared between the first and second shooting modes, which makes it possible to share the exposure control and the AE circuit.
In this embodiment, as shown in FIG. 5, the imaging device also includes the display control unit 150, which adjusts the luminance of the display image based on the weighting coefficients and performs control to display the adjusted display image.
This makes it possible, for example, to control the live view display at an appropriate brightness. That is, by adjusting the luminance of the display image based on the weighting coefficients, display control at an appropriate brightness that does not depend on the weighting coefficients becomes possible.
In this embodiment, as shown in the above equation (7), the display control unit 150 obtains the ratio of the sum of the first weighting coefficients to the sum of the second weighting coefficients as the weighting coefficient ratio, and adjusts the luminance of the display image based on that ratio.
In this way, the luminance of the display image can be adjusted based on the weighting coefficient ratio between the first and second imaging modes.
More specifically, in this embodiment, as described above for steps S111 to S113 in FIG. 8, in the first imaging mode (e.g., the normal movie mode) the display control unit 150 performs control to display a display image based on the added pixel values (gr11 and the like, described later with reference to FIG. 10) obtained by weighted addition with the first weighting coefficients W1 = W2 = W3 = W4 = 1. In the second imaging mode (e.g., the fusion movie mode), the display control unit 150 multiplies the added pixel values obtained by weighted addition with the second weighting coefficients W1 = 1, W2 = W3 = 1/r, W4 = 1/r^2 (r > 1) by the weighting coefficient ratio, and performs control to display a display image based on the multiplied added pixel values.
In this way, in the first imaging mode a display image based on the added pixel values without multiplication by the weighting coefficient ratio is displayed, and in the second imaging mode a display image based on the added pixel values multiplied by the weighting coefficient ratio is displayed. As a result, the live view display can be adjusted to the same brightness in the first and second shooting modes.
6. Normal Movie Mode
The shooting modes of this embodiment will now be described in detail with reference to FIG. 9, beginning with the normal movie mode. This mode captures only a movie, without still image shooting partway through, and performs addition readout without weighting and without pixel shift.
The operation in the normal movie mode is described using the imaging device shown in FIG. 5 as an example. In this mode, the system controller 105 controls the imaging unit 100, the aperture control unit 120, the image sensor control unit 119, and the control unit 113 according to the mode settings shown in FIG. 9. The system controller 105 performs the various settings in response to instructions from the mode switch 107.
From the image sensor 103, pixel addition signals without weighting and without overlap shift are read out. The addition readout is performed by the technique described later with reference to FIG. 13 and elsewhere.
The image used for AE control and the image used for display are images without gain-up, and are the same as the image recorded on the recording medium 112.
The subject image formed on the image sensor 103 is converted into electrical signals and read out sequentially. The read-out image is converted into a digital image by the A/D conversion unit 104 and then input to the electronic image stabilization circuit 114. The stabilized signal is processed by the movie signal processing circuit 116 to generate luminance/color-difference signals.
The image data from the movie signal processing circuit 116 is held in the external memory 110. The image data held in the external memory 110 is output via the system controller 105 to the compression/expansion circuit 121 and converted into a format such as MPEG4 or Motion-JPEG. The converted image data is stored in the external memory 110 again via the system controller 105.
The compressed image data recorded in the external memory 110 is output via the system controller 105 to the recording medium I/F circuit 126 and recorded on the recording medium 112. It is also possible to record on the recording medium 112 without compression.
Next, AE control in the normal movie mode is described. The image converted into a digital image by the A/D conversion unit 104 is input to the AE processing circuit 122. The image from the AE processing circuit 122 is output to the system controller 105, and an AE evaluation value is obtained.
The aperture control unit 120 controls the aperture 102 using the AE evaluation value, and the image sensor control unit 119 controls the accumulation time of the image sensor 103 using the AE evaluation value. Through these controls, AE control is performed so as to obtain an appropriate exposure value. The AE gain setting circuit 123 sets the AE gain for the normal movie mode (e.g., 1), or alternatively sets no AE gain in the normal movie mode.
Next, display control in the normal movie mode is described. The image converted into a digital image by the A/D conversion unit 104 is input to the AE processing circuit 122. Based on the image from the AE processing circuit 122, the system controller 105 obtains an evaluation value for displaying on the display device 111 at an appropriate brightness. The display control unit 150 performs display control based on this evaluation value. The display gain setting circuit 125 sets the display gain for the normal movie mode (e.g., 1), or alternatively sets no display gain in the normal movie mode.
7. Normal Still Image Mode
Next, the normal still image mode shown in FIG. 9 is described. This mode captures only a still image, and reads out all pixels without performing addition readout.
The operation in the normal still image mode is described using the imaging device shown in FIG. 5 as an example. In the normal still image mode, the system controller 105 controls the imaging unit 100, the aperture control unit 120, the image sensor control unit 119, and the control unit 113 according to the settings shown in FIG. 9. In this mode, the estimation processing by the high-resolution processing circuit 127 is not performed. The system controller 105 performs the various settings in response to instructions from the mode switch 107.
From the image sensor 103, all-pixel readout signals without weighting and without overlap shift are read out.
The image used for AE control and the image used for display are images without gain-up, and are the same as the image recorded on the recording medium 112.
The subject image formed on the image sensor 103 is converted into electrical signals and read out sequentially. The read-out image is converted into a digital image by the A/D conversion unit 104 and then input to the high-resolution processing circuit 127. The high-resolution processing circuit 127 is set to off (non-operating state) by a control signal (command) from the system controller 105. The signal from the high-resolution processing circuit 127 is processed by the still image signal processing circuit 117 to generate luminance/color-difference signals.
The image data from the still image signal processing circuit 117 is held in the external memory 110. The image data held in the external memory 110 is output via the system controller 105 to the compression/expansion circuit 121 and converted into a format such as RAW or JPEG. The converted image data is stored in the external memory 110 again via the system controller 105.
The compressed image data recorded in the external memory 110 is output via the system controller 105 to the recording medium I/F circuit 126 and recorded on the recording medium 112. It is also possible to record on the recording medium 112 without compression.
In the normal still image mode, an image is acquired during AE control by the same readout control as in the normal movie mode, and AE control is performed using that image. The AE control is therefore the same as the AE control in the normal movie mode. Similarly, because the live view image is acquired and displayed by the same readout control as in the normal movie mode, the display control is the same as the display control in the normal movie mode.
8. Movie/Still Fusion Movie 1 Mode
Next, the movie/still fusion movie 1 mode shown in FIG. 9 is described. This mode is one of the fusion movie modes: a fusion movie for obtaining still images is recorded as a movie, and no still image estimation is performed. Pixel-shift readout is not performed in this mode. Note that a high-resolution still image can be estimated and obtained after shooting from a fusion movie captured in this mode.
The operation in the movie/still fusion movie 1 mode is described using the imaging device shown in FIG. 5 as an example. In this mode, the system controller 105 controls the imaging unit 100, the aperture control unit 120, the image sensor control unit 119, and the control unit 113 according to the mode settings shown in FIG. 9. The system controller 105 performs the various settings in response to instructions from the mode switch 107.
From the image sensor 103, pixel addition signals with weighting and with overlap shift are read out. The addition readout is performed by the technique described later with reference to FIG. 10 and elsewhere. The weighting of the pixel values is realized, for example, by changing the gain for each pixel. Specifically, when each pixel has its own A/D conversion circuit, the weighting is performed at A/D conversion. Alternatively, the weighting may be performed in the analog domain by giving the pixel readout circuit a gain, or digitally after A/D conversion.
The subject image formed on the image sensor 103 is converted into electrical signals and read out sequentially. The read-out image is converted into a digital image by the A/D conversion unit 104 and then input to the electronic image stabilization circuit 114. The stabilized signal is processed by the movie signal processing circuit 116 to generate luminance/color-difference signals.
The image data from the movie signal processing circuit 116 is held in the external memory 110. The image data held in the external memory 110 is output via the system controller 105 to the compression/expansion circuit 121 and converted into a format such as MPEG4 or Motion-JPEG. The converted image data is stored in the external memory 110 again via the system controller 105.
The compressed image data recorded in the external memory 110 is output via the system controller 105 to the recording medium I/F circuit 126 and recorded on the recording medium 112. It is also possible to record on the recording medium 112 without compression.
Next, AE control in the movie/still fusion movie 1 mode is described. The image converted into a digital image by the A/D conversion unit 104 is input to the AE processing circuit 122. The image from the AE processing circuit 122 is output to the system controller 105, and an AE evaluation value is obtained.
The aperture control unit 120 controls the aperture 102 using the AE evaluation value, and the image sensor control unit 119 controls the accumulation time of the image sensor 103 using the AE evaluation value. Through these controls, AE control is performed so as to obtain an appropriate exposure value.
In this AE control, the AE gain setting circuit 123 sets the AE gain. That is, whereas the AE image in the normal movie mode is a pixel addition signal without weighting and without overlap shift, the AE image in the movie/still fusion movie 1 mode is a pixel addition signal with weighting and with overlap shift. As described above, the range (value) of the added signal therefore differs between weighted and unweighted addition. By setting the AE gain (e.g., 1.78), the AE gain setting circuit 123 aligns the ranges of the pixel addition signals in the normal movie mode and the movie/still fusion movie 1 mode.
Next, display control in the movie/still fusion movie 1 mode is described. The image converted into a digital image by the A/D conversion unit 104 is input to the AE processing circuit 122. A coefficient of the same value as the coefficient input to the AE gain setting circuit 123, set by the coefficient setting unit 130 in the system controller 105, or a coefficient of a value suited to the display device 111, is input to the display gain setting circuit 125. Using the same value as the coefficient input to the AE gain setting circuit 123, the display gain setting circuit 125 sets the display gain to match the characteristics of the display device 111. The display device control circuit 124 then performs display control using this display gain, thereby controlling the display device 111 to display the movie at an appropriate brightness.
In this display control, the display gain setting circuit 125 sets the display gain. That is, as described above, the range of the added signal differs between weighted and unweighted addition. By setting the display gain (e.g., 1.78), the display gain setting circuit 125 aligns the ranges of the pixel addition signals in the normal movie mode and the movie/still fusion movie 1 mode.
9. Movie/Still Fusion Movie 2 Mode
Next, the movie/still fusion movie 2 mode shown in FIG. 9 is described. This mode is one of the fusion movie modes: the fusion movie is recorded as a movie, and no still image estimation is performed. Pixel-shift readout is performed in this mode. Note that a high-resolution still image can be estimated and obtained after shooting from a fusion movie captured in this mode. Description of operations identical to those described for the movie/still fusion movie 1 mode is omitted as appropriate.
The operation in the movie/still fusion movie 2 mode is described using the imaging device shown in FIG. 5 as an example. In this mode, the system controller 105 controls the imaging unit 100, the aperture control unit 120, the image sensor control unit 119, and the control unit 113 according to the mode settings shown in FIG. 9. The system controller 105 performs the various settings in response to instructions from the mode switch 107.
From the image sensor 103, pixel addition signals with weighting and with overlap shift are read out. The addition readout is performed by the technique described later with reference to FIG. 13 and elsewhere. The weighted addition is realized in the same way as described for the movie/still fusion movie 1 mode.
AE control such as AE gain setting, and display control such as display gain setting, are performed in the same way as described for the movie/still fusion movie 1 mode.
10. Movie/Still Fusion Still 1 Mode
Next, the movie/still fusion still 1 mode shown in FIG. 9 is described. This mode is one of the fusion movie modes: a fusion movie is captured by pixel-shift readout, and a high-resolution still image is obtained from the fusion movie. Description of operations identical to those described for the movie/still fusion movie 1 mode is omitted as appropriate.
In this mode, high-resolution still image processing by the high-resolution processing circuit 127 is performed. That is, the high-resolution processing circuit 127 is set to on (operating state) by a control signal from the system controller 105. The image from the A/D conversion unit 104 is temporarily stored in the frame memory 128 and subjected to high-resolution processing by the high-resolution processing circuit 127. The processing for estimating the still image is performed, for example, by the technique described later with reference to FIGS. 15 to 19, or by another technique such as known super-resolution processing. The image from the high-resolution processing circuit 127 is processed by the still image signal processing circuit 117 to generate luminance/color-difference signals.
The image data from the still image signal processing circuit 117 is held in the external memory 110. The image data held in the external memory 110 is output via the system controller 105 to the compression/expansion circuit 121 and converted into a format such as RAW or JPEG. The converted image data is stored in the external memory 110 again via the system controller 105.
The compressed image data recorded in the external memory 110 is output via the system controller 105 to the recording medium I/F circuit 126 and recorded on the recording medium 112. It is also possible to record on the recording medium 112 without compression.
AE control such as AE gain setting, and display control such as display gain setting, are performed in the same way as described for the movie/still fusion movie 1 mode.
11. Movie/Still Fusion Still 2 Mode
Next, the movie/still fusion still 2 mode shown in FIG. 9 is described. This mode is one of the fusion movie modes: a fusion movie is captured without pixel shift, and a high-resolution still image is obtained from the fusion movie. Description of operations identical to those described for the movie/still fusion movie 2 mode is omitted as appropriate.
In this mode, high-resolution still image processing by the high-resolution processing circuit 127 is performed. That is, the high-resolution processing circuit 127 is set to on (operating state) by a control signal from the system controller 105. The image from the A/D conversion unit 104 is temporarily stored in the frame memory 128 and subjected to high-resolution processing by the high-resolution processing circuit 127. The processing for estimating the still image is performed, for example, by the technique described later with reference to FIGS. 15 to 19, or by another technique such as known super-resolution processing. Because no pixel shift is performed in this mode, pixel values corresponding to the pixel shift are obtained by interpolating the 4-pixel addition values captured in each frame. Alternatively, in this mode, the resolution enhancement may be performed without interpolation, using a technique such as edge enhancement. The image from the high-resolution processing circuit 127 is processed by the still image signal processing circuit 117 to generate luminance/color-difference signals.
AE control such as AE gain setting, and display control such as display gain setting, are performed in the same way as described for the movie/still fusion movie 1 mode.
12. Addition Readout Control with Pixel Shift
Next, the technique for performing addition readout from the image sensor is described in detail. A frame, as used in the following description, is, for example, the timing at which one image is captured by the image sensor, or the timing at which one image is processed in image processing. A single image in the image data is also referred to as a frame where appropriate.
FIG. 10 is an explanatory diagram of the case where pixel shift is performed at a one-pixel pitch in each frame. This readout control is performed in the movie/still fusion movie 1 mode and the movie/still fusion still 1 mode described above. In the following, the case where the image sensor is a Bayer-array color image sensor is described as an example.
As shown in FIG. 10, in the n-th frame fn (n is a natural number), the 4-pixel addition values shown in the following equation (8) are read out. Here, W1, W2, W3, and W4 are the weighting coefficients shown in the above equation (1). GRij and GBij (i and j are natural numbers) represent green pixel values, Rij represents a red pixel value, and Bij represents a blue pixel value. Likewise, grij and gbij represent green 4-pixel addition values, rij represents a red 4-pixel addition value, and bij represents a blue 4-pixel addition value.
gr11 = W1*GR11 + W2*GR13 + W3*GR31 + W4*GR33,
r12  = W1*R12  + W2*R14  + W3*R32  + W4*R34,
b21  = W1*B21  + W2*B23  + W3*B41  + W4*B43,
gb22 = W1*GB22 + W2*GB24 + W3*GB42 + W4*GB44   (8)
In the (n+1)-th frame fn+1, the readout positions are shifted horizontally by one pixel as seen among same-color pixels, and the 4-pixel addition values shown in the following equation (9) are read out.
gr13 = W1*GR13 + W2*GR15 + W3*GR33 + W4*GR35,
r14  = W1*R14  + W2*R16  + W3*R34  + W4*R36,
b23  = W1*B23  + W2*B25  + W3*B43  + W4*B45,
gb24 = W1*GB24 + W2*GB26 + W3*GB44 + W4*GB46   (9)
In the (n+2)-th frame fn+2, the readout positions are shifted horizontally and vertically by one pixel as seen among same-color pixels, and the 4-pixel addition values shown in the following equation (10) are read out.
gr33 = W1*GR33 + W2*GR35 + W3*GR53 + W4*GR55,
r34  = W1*R34  + W2*R36  + W3*R54  + W4*R56,
b43  = W1*B43  + W2*B45  + W3*B63  + W4*B65,
gb44 = W1*GB44 + W2*GB46 + W3*GB64 + W4*GB66   (10)
In the (n+3)-th frame fn+3, the readout positions are shifted vertically by one pixel as seen among same-color pixels, and the 4-pixel addition values shown in the following equation (11) are read out.
gr31 = W1*GR31 + W2*GR33 + W3*GR51 + W4*GR53,
r32  = W1*R32  + W2*R34  + W3*R52  + W4*R54,
b41  = W1*B41  + W2*B43  + W3*B61  + W4*B63,
gb42 = W1*GB42 + W2*GB44 + W3*GB62 + W4*GB64   (11)
Although only some of the pixels of the image sensor have been described above, the readout control of the other pixels is similar. For example, in frame fn, gr15, gr19, and so on are read out in the same way as gr11.
FIG. 11 is an explanatory diagram of the above addition readout control for r = 1. From the above equation (1), when r = 1, W1 = W2 = W3 = W4 = 1. As shown in FIG. 11, suppose for example that every pixel of the image sensor has the value 256. In this case, from the above equations (8) to (11), each 4-pixel addition value is read out as 1024.
FIG. 12 is an explanatory diagram of the above addition readout control for r = 2. From the above equation (1), when r = 2, W1 = 1, W2 = W3 = 1/2, and W4 = 1/4. As shown in FIG. 12, if every pixel of the image sensor has the value 256, then in frame fn, for example, the weighted pixel values are W1*GR11 = 256, W2*GR13 = W3*GR31 = 128, W4*GR33 = 64, and so on. In this case, from the above equations (8) to (11), each 4-pixel addition value is read out as 576.
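The two numeric cases above can be verified with a short sketch (plain Python; the function name is illustrative). It forms the weighted 4-pixel sum of equation (8) for a uniform image whose pixels all equal 256, and also checks that multiplying the r = 2 addition value by the gain of equation (7), 4/2.25 = 16/9, brings it back to the r = 1 range of 1024.

```python
def weighted_add(pixels, r):
    # Weighted addition of the four same-color pixels of one readout unit,
    # as in equation (8), with W1 = 1, W2 = W3 = 1/r, W4 = 1/r^2.
    w1, w2, w3, w4 = 1.0, 1.0 / r, 1.0 / r, 1.0 / (r * r)
    p1, p2, p3, p4 = pixels
    return w1 * p1 + w2 * p2 + w3 * p3 + w4 * p4

flat = (256, 256, 256, 256)   # uniform image: all pixel values 256
print(weighted_add(flat, 1))  # 1024.0 (r = 1: plain 4-pixel sum)
print(weighted_add(flat, 2))  # 576.0  (r = 2: 256 + 128 + 128 + 64)

gain = 4.0 / (1.0 + 0.5 + 0.5 + 0.25)       # equation (7) with r = 2
print(round(weighted_add(flat, 2) * gain))  # 1024: the mode ranges match
```

This is exactly the range alignment performed by the AE gain and display gain settings in the fusion movie modes.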
13. Addition Readout Control without Pixel Shift
FIG. 13 is an explanatory diagram of the addition readout control when pixel shift is not performed. This readout control is performed in the normal movie mode, the movie/still fusion movie 2 mode, and the movie/still fusion still 2 mode described above.
As shown in FIG. 13, the 4-pixel addition values grij shown in the following equation (12), rij shown in equation (13), bij shown in equation (14), and gbij shown in equation (15) are read out.
gr11 = W1*GR11 + W2*GR13 + W3*GR31 + W4*GR33,
gr13 = W1*GR15 + W2*GR17 + W3*GR35 + W4*GR37,
gr31 = W1*GR51 + W2*GR53 + W3*GR71 + W4*GR73,
gr33 = W1*GR55 + W2*GR57 + W3*GR75 + W4*GR77   (12)
r12 = W1*R12 + W2*R14 + W3*R32 + W4*R34,
r14 = W1*R16 + W2*R18 + W3*R36 + W4*R38,
r32 = W1*R52 + W2*R54 + W3*R72 + W4*R74,
r34 = W1*R56 + W2*R58 + W3*R76 + W4*R78   (13)
b21 = W1*B21 + W2*B23 + W3*B41 + W4*B43,
b23 = W1*B25 + W2*B27 + W3*B45 + W4*B47,
b41 = W1*B61 + W2*B63 + W3*B81 + W4*B83,
b43 = W1*B65 + W2*B67 + W3*B85 + W4*B87   (14)
gb22 = W1*GB22 + W2*GB24 + W3*GB42 + W4*GB44,
gb24 = W1*GB26 + W2*GB28 + W3*GB46 + W4*GB48,
gb42 = W1*GB62 + W2*GB64 + W3*GB82 + W4*GB84,
gb44 = W1*GB66 + W2*GB68 + W3*GB86 + W4*GB88   (15)
FIG. 14(A) is an explanatory diagram of the above addition readout control for r = 1. From the above equation (1), when r = 1, W1 = W2 = W3 = W4 = 1. As shown in FIG. 14(A), suppose for example that every pixel of the image sensor has the value 256. In this case, from the above equations (12) to (15), each 4-pixel addition value is read out as 1024.
FIG. 14(B) is an explanatory diagram of the above addition readout control for r = 2. From the above equation (1), when r = 2, W1 = 1, W2 = W3 = 1/2, and W4 = 1/4. As shown in FIG. 14(B), if every pixel of the image sensor has the value 256, the weighted pixel values are, for example, W1*GR11 = 256, W2*GR13 = W3*GR31 = 128, W4*GR33 = 64, and so on. In this case, from the above equations (12) to (15), each 4-pixel addition value is read out as 576.
14. Estimation Processing
The processing for estimating a high-resolution image from the weighted added pixel values is described with reference to FIGS. 15 to 19.
A light-receiving unit (pixel group), as used in the following description, is a region on the image sensor containing the plurality of pixels to be read out by addition; the added pixel value is obtained by weighting and adding the pixel values of the plurality of pixels contained in that light-receiving unit.
In the following description, when the pixels of the image sensor are arranged on two orthogonal coordinate axes, the direction along one axis is called the horizontal direction and the direction along the other axis the vertical direction. For example, the horizontal direction is the horizontal scanning direction of the imaging operation. Likewise, for directions in the image data, the direction along one of the two orthogonal axes is called the horizontal direction and the direction along the other axis the vertical direction, as appropriate.
図15に示すように、加算読み出しの重み係数をc1、c2、c3、c4とする。c1=1とすると、重み係数は下式(16)に示す比率関係のルールをとる(rは、r≧1の実数)。
c1=1,c2=1/r,c3=1/r,c4=1/r2 (16)
As shown in FIG. 15, the weighting factors for addition reading are c 1 , c 2 , c 3 , and c 4 . Assuming that c 1 = 1, the weighting coefficient takes the ratio relationship rule shown in the following equation (16) (r is a real number where r ≧ 1).
c 1 = 1, c 2 = 1 / r, c 3 = 1 / r, c 4 = 1 / r 2 (16)
In the following, to simplify the explanation, r = 2 is assumed, giving equation (17).
c1 = 1, c2 = 1/2, c3 = 1/2, c4 = 1/4 (17)
FIG. 16(A) is an explanatory diagram of light receiving units. vij is an estimated pixel value estimated from the added pixel values, corresponding to each individual pixel of the image sensor. A light receiving unit is set for every four pixels vij, and the 4-pixel addition value aij is obtained by reading out each light receiving unit. Adjacent light receiving units share an overlapping region; for example, a00 and a10 overlap at v10 and v11.
For example, in the mode with pixel shifting, the 4-pixel addition values a00, a10, a01, and a11 are read out in frames fn to fn+3, respectively. In the mode without pixel shifting, the 4-pixel addition values a00, a20, ... are read out, and a10, a01, and a11 are obtained by interpolation from the surrounding 4-pixel addition values a00, a20, ....
FIG. 16(B) is an explanatory diagram of intermediate pixel values (intermediate estimated pixel values). In the estimation process of this embodiment, the resolution is first increased in the horizontal direction to obtain the intermediate pixel values bij, and bij is then resolved in the vertical direction to obtain the estimated pixel values vij. The intermediate pixel value bij corresponds to vij and vi(j+1). Vertically adjacent bij share an overlapping region; for example, b00 and b01 overlap at v01. Alternatively, in this embodiment, the resolution may first be increased in the vertical direction to obtain bij and then in the horizontal direction to obtain vij.
As shown in FIG. 17, focusing on the first horizontal row detected by the weighted pixel-addition superimposed shift sampling, let the weighted pixel addition values, in shift order, be a00, a10, and a20. Then equation (18) holds.
a00 = c1*v00 + c2*v01 + c3*v10 + c4*v11
a10 = c1*v10 + c2*v11 + c3*v20 + c4*v21 (18)
Further, b00, b10, and b20 are defined as in equation (19), into which equation (17) is substituted.
b00 = c1*v00 + c2*v01 = v00 + (1/2)v01
b10 = c1*v10 + c2*v11 = v10 + (1/2)v11
b20 = c1*v20 + c2*v21 = v20 + (1/2)v21 (19)
Next, transforming equation (18) using equations (17) and (19) yields equation (20).
a00 = v00 + (1/2)v01 + (1/2)v10 + (1/4)v11
    = b00 + (1/2)b10,
a10 = v10 + (1/2)v11 + (1/2)v20 + (1/4)v21
    = b10 + (1/2)b20 (20)
In equation (20), multiplying a00 and a10 by predetermined coefficients (predetermined weighting factors) and taking the difference δi0, then transforming with equation (19), yields equation (21).
δi0 = a10 - 2a00
    = (1/2)v20 + (1/4)v21 - (2v00 + v01)
    = (1/2)b20 - 2b00 (21)
Taking b00 as an unknown, the intermediate pixel values b10 and b20 can be obtained as functions of b00, as shown in equation (22).
b00 = (unknown),
b10 = 2(a00 - b00),
b20 = 4b00 + 2δi0 = 4b00 + 2(a10 - 2a00) (22)
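Equation (22) can be sketched as a small helper that, given the two addition values and a candidate b00, returns the whole intermediate pattern (a sketch under the r = 2 weights; the function and argument names are illustrative, not from the patent):

```python
def intermediate_pattern(a00, a10, b00):
    """Equation (22): express b10 and b20 as functions of the unknown b00."""
    b10 = 2.0 * (a00 - b00)
    delta_i0 = a10 - 2.0 * a00          # difference value of equation (21)
    b20 = 4.0 * b00 + 2.0 * delta_i0    # = 4*b00 + 2*(a10 - 2*a00)
    return b00, b10, b20

# Consistency check against equation (20):
# a00 = b00 + b10/2 and a10 = b10 + b20/2 must hold for any b00.
b00, b10, b20 = intermediate_pattern(a00=200.0, a10=180.0, b00=120.0)
print(b00 + 0.5 * b10)  # 200.0
print(b10 + 0.5 * b20)  # 180.0
```

Any choice of the unknown b00 reproduces the measured a00 and a10 exactly, which is why a separate similarity criterion (the evaluation function Ej below) is needed to pin b00 down.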
In this way, a high-definition combination pattern of intermediate pixel values {b00, b10, b20} is obtained with b00 as the unknown (initial variable). Similarly, in the second and third rows, combination patterns of intermediate pixel values {b01, b11, b21} and {b02, b12, b22} are obtained with b01 and b02 as the unknowns.
Next, a method of obtaining the unknown b00 is described. As shown in FIG. 18, the pattern {a00, a10} of sampled pixel values detected by the weighted superimposed shift sampling is compared with the pattern of intermediate pixel values {b00, b10, b20}, the unknown b00 that minimizes the error E between them is derived, and that value is set as the intermediate pixel value b00.
Here, as shown in equation (20), the sampled pixel values {a00, a10} are sums of adjacent intermediate pixel values {b00, b10, b20} taken with different weights, so a correct estimate cannot be obtained simply by comparing the two patterns directly. Therefore, as shown in FIG. 18, the intermediate pixel values are weighted before the comparison. Specifically, using the fact that the weights of the intermediate pixel values {bij, b(i+1)j} satisfy c3 = c1/2 and c4 = c2/2, equation (23) is seen to hold.
aij = bij + (1/2)b(i+1)j (23)
Taking the weighting of equation (23) into account, the evaluation function Ej shown in equation (24) is obtained, and the similarity between the pattern {a00, a10} and the intermediate estimated pixel values {b00, b10, b20} is evaluated with this function.
Using equation (22), the evaluation function Ej can be expressed as a function of the initial variable b00. Therefore, as shown in FIG. 19, the unknown b00 (= α) that minimizes Ej is found, which determines the value of b00. Substituting the estimated value of b00 into equation (22) then yields b10 and b20. Since the range of values b00 can take is 0 ≤ b00 ≤ a00, the minimum of the evaluation function Ej need only be sought within this range. Similarly, in the second and third rows, the combination patterns of intermediate pixel values {b01, b11, b21} and {b02, b12, b22} are obtained with b01 and b02 as the unknowns.
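Since equation (24) itself is not reproduced in this text, the search step can only be sketched generically: given any evaluation function Ej(b00) built from the two patterns, the unknown is minimized over the admissible range 0 ≤ b00 ≤ a00. The grid search and the concrete `ej` below are stand-in assumptions, not the patent's equation (24):

```python
def estimate_b00(a00, a10, evaluation_fn, steps=96):
    """Search 0 <= b00 <= a00 for the value minimizing an evaluation function Ej.

    evaluation_fn(b00, b10, b20, a00, a10) -> error. Its exact form is given
    by equation (24), which is not reproduced here; any caller-supplied
    function works, since b10 and b20 follow from b00 via equation (22).
    """
    best_b00, best_err = 0.0, float("inf")
    for k in range(steps + 1):
        b00 = a00 * k / steps
        b10 = 2.0 * (a00 - b00)                    # equation (22)
        b20 = 4.0 * b00 + 2.0 * (a10 - 2.0 * a00)  # equation (22)
        err = evaluation_fn(b00, b10, b20, a00, a10)
        if err < best_err:
            best_b00, best_err = b00, err
    return best_b00

# Stand-in evaluation function (an assumption, not equation (24)):
# prefer the flattest intermediate pattern consistent with a00, a10.
ej = lambda b00, b10, b20, a00, a10: (b00 - b10) ** 2 + (b10 - b20) ** 2
alpha = estimate_b00(96.0, 96.0, ej)
print(alpha)  # 64.0 (the flat pattern b00 = b10 = b20 = 64)
```

In practice the patent notes that the minimum can be found by least squares rather than by exhaustive search; the bounded search above merely illustrates restricting the unknown to [0, a00].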
The process of estimating vij from bij is performed in the same way as the above method of estimating bij from aij. That is, with v00 as the unknown, relational expressions for v00, v01, and v02 are derived using the difference value of b00 and b01. Next, an evaluation function for the error between {v00, v01, v02} and {b00, b01} is formed, the v00 that minimizes this evaluation function is found, and the obtained v00 is substituted into the relational expressions to obtain v01 and v02.
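The same algebra carried to this second stage gives v01 and v02 as functions of the unknown v00, derived from b00 = v00 + (1/2)v01 and b01 = v01 + (1/2)v02 under the r = 2 weights (a sketch; the function name is illustrative):

```python
def final_pattern(b00, b01, v00):
    """Express v01 and v02 as functions of the unknown v00,
    mirroring equation (22) one resolution level down:
    b00 = v00 + v01/2  ->  v01 = 2*(b00 - v00)
    b01 = v01 + v02/2  ->  v02 = 2*(b01 - v01)
    """
    v01 = 2.0 * (b00 - v00)
    v02 = 2.0 * (b01 - v01)
    return v00, v01, v02

# Consistency check: both defining relations must hold for any v00.
v00, v01, v02 = final_pattern(b00=150.0, b01=130.0, v00=100.0)
print(v00 + 0.5 * v01)  # 150.0
print(v01 + 0.5 * v02)  # 130.0
```

As in the first stage, the unknown v00 would then be fixed by minimizing the corresponding evaluation function against the pattern {b00, b01}.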
As described above, when a still image is captured during movie recording, it is difficult to achieve both a high-frame-rate movie and a high-resolution still image. In addition, since super-resolution processing imposes a heavy processing load, it raises problems such as an increase in the scale of the processing circuit.
In this regard, according to the present embodiment, a light receiving unit is set for each group of pixels of the image sensor, the pixel values of the pixels contained in the light receiving unit are weighted and added and read out as an added pixel value (light receiving value), and a low-resolution frame image is thereby acquired. The acquired low-resolution frame images are stored, and the pixel value of each pixel contained in the light receiving unit is estimated based on the stored plurality of low-resolution frame images. Based on the estimated pixel values, a high-resolution frame image with higher resolution than the low-resolution frame images is output. The low-resolution frame images are acquired by reading out the added pixel values while sequentially pixel-shifting the light receiving units so that they overlap, and the pixel value of each pixel contained in a light receiving unit is estimated based on the plurality of added pixel values obtained by this sequential shifting.
For example, as described above with reference to FIG. 16(A), a light receiving unit is set for every four pixels. In the first frame, the added pixel values a00, a20, and so on are read out by addition, and a low-resolution frame image formed by a00, a20, etc. is acquired. Then, a low-resolution frame image formed by a10, a30, etc., one formed by a11, a31, etc., and one formed by a01, a21, etc. are acquired in sequence. For example, the light receiving units that acquire a00, a10, a11, and a01 are shifted by one pixel at a time and overlap by two pixels. These images are stored in, for example, the frame memory 128 (storage unit) shown in FIG. 5. The estimated pixel values vij are then estimated by the high-resolution processing circuit 127 (estimation calculation unit), vij is processed by the still image signal processing circuit 117 (image output unit), and a high-resolution image corresponding to the resolution of the image sensor is output.
This makes it possible to obtain a high-resolution image from a movie with simple processing. For example, the estimation process can be simplified by using the intermediate pixel value estimation described above. Moreover, since a high-resolution still image can be generated for any timing of the low-resolution movie, the user can easily obtain a high-resolution still image of the decisive moment. Furthermore, by acquiring a low-resolution movie at shooting time, shooting can be performed at a high frame rate, and a high-resolution still image can be obtained whenever needed.
More specifically, in this embodiment, as shown in FIG. 16(A), the light receiving unit is set sequentially at a first position a00 and at the following second position a10. These light receiving units overlap in the region containing v10 and v11. Then, as described above with reference to FIG. 17, the difference value δi0 of the added pixel values obtained from these light receiving units is computed. As shown in FIG. 16(B), the first intermediate pixel value b00 is the light receiving value of the first light receiving region v00, v01 obtained by removing the overlapping region v10, v11 from the light receiving unit a00, and the second intermediate pixel value b20 is the light receiving value of the second light receiving region v20, v21 obtained by removing the overlapping region v10, v11 from the light receiving unit a10. As shown in equation (22), the relational expression between b00 and b20 is expressed using the difference value δi0. Using that relational expression, the first and second intermediate pixel values b00 and b20 are estimated, and the pixel value of each pixel of the light receiving unit is obtained using the estimated first intermediate pixel value b00.
In this way, by first estimating the intermediate pixel values from the superimposed-shifted added pixel values and then obtaining the estimated pixel values from the superimposed-shifted intermediate pixel values, the estimation process for the high-resolution image can be simplified. For example, complicated processing such as iterative two-dimensional filtering (e.g., Japanese Patent Application Laid-Open No. 2009-124621) or searching for a portion suitable for setting initial values (e.g., Japanese Patent Application Laid-Open No. 2008-243037) becomes unnecessary.
Further, in this embodiment, as described above with reference to FIG. 18, the consecutive intermediate pixel values {b00, b10, b20} containing the intermediate pixel values b00 and b20 are taken as an intermediate pixel value pattern, and the relational expressions between these intermediate pixel values are expressed using the added pixel values a00 and a10. Likewise, the consecutive added pixel values {a00, a10} containing a00 and a10 are taken as an added pixel value pattern. The intermediate pixel value pattern and the added pixel value pattern are then compared to evaluate their similarity, and based on the evaluation result, the intermediate pixel values b00, b10, and b20 are determined so that the similarity becomes highest.
In this way, the intermediate pixel values can be estimated based on the plurality of added pixel values acquired by pixel-shifting the light receiving units while they overlap.
More specifically, in this embodiment, as shown in equations (22) and (24), an evaluation function Ej is formed that represents the error between the intermediate pixel value pattern {b00, b10, b20}, expressed through the relational expressions between the intermediate pixel values, and the added pixel value pattern {a00, a10}. Then, as shown in FIG. 19, the intermediate pixel values b00, b10, and b20 are determined so that the value of this evaluation function Ej is minimized.
In this way, the values of the intermediate pixel values can be estimated by expressing the error as an evaluation function and finding the intermediate pixel value corresponding to its minimum. For example, by solving for the unknown with the least-squares method as described above, the initial value for the intermediate pixel estimation can be set with simple processing.
Here, in the embodiments above, the units constituting the exposure control information output unit 140 and the display control unit 150 are implemented in hardware, but the invention is not limited to this. For example, a CPU may perform the processing of each unit, realizing it as software by executing a program. In that case, the CPU executes, for example, the processing of the flowcharts shown in FIGS. 6 to 8.
Likewise, in the embodiments above, the units constituting the still image processing unit 141 are implemented in hardware, but the invention is not limited to this. For example, a known computer system such as a personal computer may be used as the image processing apparatus, and the processing performed by each unit of the still image processing unit 141 may be realized as software by having the CPU of the computer system execute a program implementing that processing.
Although the present embodiment has been described in detail above, those skilled in the art will readily understand that many modifications are possible without substantially departing from the novel matters and effects of the present invention. Accordingly, all such modifications are included in the scope of the present invention. For example, a term (AE control, 4-pixel addition value, etc.) that appears at least once in the specification or drawings together with a broader or synonymous different term (exposure control, added pixel value, etc.) can be replaced by that different term anywhere in the specification or drawings. The configurations and operations of the exposure control information output unit, display control unit, still image processing unit, control unit, imaging control unit, imaging unit, imaging apparatus, and so on are also not limited to those described in this embodiment, and various modified implementations are possible.
100 imaging unit, 101 imaging lens, 102 aperture, 103 image sensor,
104 A/D conversion unit, 105 system controller,
106 user I/F unit, 107 mode switch, 108 movie switch,
109 still image switch, 110 external memory, 111 display device,
112 recording medium, 113 control unit, 114 electronic image stabilization circuit,
115 line memory, 116 movie signal processing circuit,
117 still image signal processing circuit, 118 imaging control unit,
119 image sensor control unit, 120 aperture control unit, 121 compression/decompression circuit,
122 AE processing circuit, 123 AE gain setting circuit,
124 display device control circuit, 125 display gain setting circuit,
126 recording medium I/F circuit, 127 high-resolution processing circuit,
128 frame memory, 130 coefficient setting unit,
140 exposure control information output unit, 141 still image processing unit,
142 movie processing unit, 150 display control unit, 160 readout control unit,
161 exposure control unit,
aij added pixel value, bij intermediate pixel value, δi0 difference value, Ej evaluation function,
vij estimated pixel value, fn frame, W1 to W4 weighting coefficients
Claims (10)
1. An imaging apparatus comprising:
an image sensor that captures a subject image;
a readout control unit that weights and adds pixel values of a plurality of pixels of the image sensor and reads out the result as an added pixel value;
a coefficient setting unit that sets weighting coefficients for the weighted addition; and
an exposure control information output unit that outputs, based on the weighting coefficients, exposure control information for performing exposure control of an imaging unit.
2. The imaging apparatus according to claim 1, wherein:
the coefficient setting unit sets first weighting coefficients in a first imaging mode and second weighting coefficients in a second imaging mode; and
the exposure control information output unit obtains the ratio of the sum of the first weighting coefficients to the sum of the second weighting coefficients as a weighting coefficient ratio, and outputs the exposure control information using a photometric evaluation value based on the weighting coefficient ratio.
3. The imaging apparatus according to claim 2, wherein:
the coefficient setting unit sets, in the first imaging mode, coefficients of the same value for the pixels to be weighted and added as the first weighting coefficients, and sets, in the second imaging mode, the second weighting coefficients different from the first weighting coefficients; and
the exposure control information output unit, in the first imaging mode, obtains a photometric evaluation value from the added pixel value and outputs the exposure control information using the obtained photometric evaluation value, and, in the second imaging mode, obtains the photometric evaluation value from a pixel value obtained by multiplying the added pixel value by the weighting coefficient ratio and outputs the exposure control information using the obtained photometric evaluation value.
4. The imaging apparatus according to claim 1, further comprising a display control unit that adjusts the luminance of a display image based on the weighting coefficients and performs control to display the adjusted display image.
5. The imaging apparatus according to claim 4, wherein:
the coefficient setting unit sets first weighting coefficients in a first imaging mode and second weighting coefficients in a second imaging mode; and
the display control unit obtains the ratio of the sum of the first weighting coefficients to the sum of the second weighting coefficients as a weighting coefficient ratio, and adjusts the luminance of the display image based on the weighting coefficient ratio.
6. The imaging apparatus according to claim 5, wherein:
the coefficient setting unit sets, in the first imaging mode, coefficients of the same value for the pixels to be weighted and added as the first weighting coefficients, and sets, in the second imaging mode, the second weighting coefficients different from the first weighting coefficients; and
the display control unit, in the first imaging mode, performs control to display the display image based on the added pixel value weighted and added with the first weighting coefficients, and, in the second imaging mode, multiplies the added pixel value weighted and added with the second weighting coefficients by the weighting coefficient ratio and performs control to display the display image based on the added pixel value after the multiplication.
7. The imaging apparatus according to any one of claims 1 to 6, further comprising:
a storage unit that stores an image based on the added pixel values as a low-resolution frame image;
an estimation calculation unit that estimates the pixel value of each pixel included in a light receiving unit based on a plurality of low-resolution frame images stored in the storage unit; and
an image output unit that outputs, based on the pixel values estimated by the estimation calculation unit, a high-resolution frame image with higher resolution than the low-resolution frame images,
wherein the readout control unit sets a light receiving unit, which is the unit for acquiring an added pixel value, for each group of a plurality of pixels of the image sensor, weights and adds the pixel values of the pixels included in the light receiving unit, and reads out the added pixel values while sequentially pixel-shifting the light receiving unit so that successive positions overlap, and
the estimation calculation unit estimates the pixel value of each pixel included in the light receiving unit based on the plurality of added pixel values obtained by the sequential pixel shifting of the light receiving unit.
8. The imaging apparatus according to claim 7, wherein, when the pixel shifting sequentially sets the light receiving unit at a first position and at a second position following the first position, and the light receiving unit at the first position overlaps the light receiving unit at the second position,
the estimation calculation unit:
obtains the difference value of the added pixel values at the first and second positions;
expresses, using the difference value, a relational expression between a first intermediate pixel value, which is the light receiving value of a first light receiving region obtained by removing the overlapping region from the light receiving unit at the first position, and a second intermediate pixel value, which is the light receiving value of a second light receiving region obtained by removing the overlapping region from the light receiving unit at the second position; and
estimates the first and second intermediate pixel values using the relational expression, and obtains the pixel value of each pixel included in the light receiving unit using the estimated first intermediate pixel value.
9. The imaging apparatus according to claim 8, wherein the estimation calculation unit:
when consecutive intermediate pixel values including the first and second intermediate pixel values are taken as an intermediate pixel value pattern, expresses relational expressions between the intermediate pixel values included in the intermediate pixel value pattern using the added pixel values at the first and second positions;
when consecutive added pixel values including the added pixel values at the first and second positions are taken as an added pixel value pattern, evaluates the similarity between the intermediate pixel value pattern expressed by the relational expressions and the added pixel value pattern; and
determines the intermediate pixel values included in the intermediate pixel value pattern based on the similarity evaluation result so that the similarity becomes highest.
10. The imaging apparatus according to claim 9, wherein the estimation calculation unit obtains an evaluation function representing the error between the intermediate pixel value pattern expressed by the relational expressions between the intermediate pixel values and the added pixel value pattern, and determines the intermediate pixel values included in the intermediate pixel value pattern so that the value of the evaluation function is minimized.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2010-164705 | 2010-07-22 | ||
| JP2010164705A JP2012028971A (en) | 2010-07-22 | 2010-07-22 | Imaging device |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2012011484A1 true WO2012011484A1 (en) | 2012-01-26 |
Family
ID=45496903
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2011/066413 Ceased WO2012011484A1 (en) | 2010-07-22 | 2011-07-20 | Image capture device |
Country Status (2)
| Country | Link |
|---|---|
| JP (1) | JP2012028971A (en) |
| WO (1) | WO2012011484A1 (en) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN104885445A (en) * | 2012-12-25 | 2015-09-02 | Sony Corporation | Solid state image-sensing element, method of driving same, and electronic device |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2015115224A1 (en) * | 2014-02-03 | 2015-08-06 | Olympus Corporation | Solid-state image capture device and image capture system |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2001359038A (en) * | 2000-06-09 | 2001-12-26 | Olympus Optical Co Ltd | Image pickup device |
| JP2007282134A (en) * | 2006-04-11 | 2007-10-25 | Olympus Imaging Corp | Imaging apparatus |
| JP2009124621A (en) * | 2007-11-19 | 2009-06-04 | Sanyo Electric Co Ltd | Super-resolution processing apparatus and method, and imaging apparatus |
| JP2010130289A (en) * | 2008-11-27 | 2010-06-10 | Panasonic Corp | Solid-state imaging apparatus, semiconductor integrated circuit and defective pixel correction method |
- 2010-07-22: JP application JP2010164705A filed (published as JP2012028971A; not active, withdrawn)
- 2011-07-20: PCT application PCT/JP2011/066413 filed (published as WO2012011484A1; not active, ceased)
Also Published As
| Publication number | Publication date |
|---|---|
| JP2012028971A (en) | 2012-02-09 |
Similar Documents
| Publication | Title |
|---|---|
| JP5652649B2 (en) | Image processing apparatus, image processing method, and image processing program |
| US10063768B2 (en) | Imaging device capable of combining a plurality of image data, and control method for imaging device |
| JP5764740B2 (en) | Imaging device |
| US8982242B2 (en) | Imaging device and imaging method |
| US9398230B2 (en) | Imaging device and imaging method |
| WO2016072103A1 (en) | Imaging device, imaging method and processing program |
| JP5780764B2 (en) | Imaging device |
| JP5729237B2 (en) | Image processing apparatus, image processing method, and program |
| KR101013830B1 (en) | Imaging apparatus and recording medium recording a program |
| US20170208264A1 (en) | Image pickup apparatus, image pickup method, and non-transitory computer-readable medium storing computer program |
| US8836821B2 (en) | Electronic camera |
| US20180020149A1 (en) | Imaging apparatus and image compositing method |
| US11832020B2 (en) | Image pickup apparatus, image pickup method, and storage medium |
| JP4678061B2 (en) | Image processing apparatus, digital camera equipped with the same, and image processing program |
| JP2018148512A (en) | Imaging device, imaging device control method, and program |
| CN114554041A (en) | Image pickup apparatus, image pickup method, and storage medium |
| JP4639406B2 (en) | Imaging device |
| WO2012011484A1 (en) | Image capture device |
| KR100819811B1 (en) | Photographing apparatus and photographing method |
| US10491840B2 (en) | Image pickup apparatus, signal processing method, and signal processing program |
| US20080012964A1 (en) | Image processing apparatus, image restoration method and program |
| JP5655589B2 (en) | Imaging device |
| US12432429B2 (en) | Image capture apparatus and control method thereof, and image processing apparatus with correction of brightness of image data for frames and with compositing of image data for corrected frames |
| JP2006253887A (en) | Imaging apparatus |
| JP2009278486A (en) | Imaging apparatus |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 11809654; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 11809654; Country of ref document: EP; Kind code of ref document: A1 |