US11102427B2 - Imaging apparatus, image processing apparatus, image processing method, and recording medium that records image processing program - Google Patents
- Publication number: US11102427B2
- Authority: US (United States)
- Legal status: Active, expires (the status is an assumption, not a legal conclusion; Google has not performed a legal analysis)
Classifications
- H04N 5/341
- H04N 5/232
- H04N 9/045
- H04N 23/843—Demosaicing, e.g. interpolating colour pixel values (under H04N 23/84, camera processing pipelines for processing colour signals)
- H04N 23/60—Control of cameras or camera modules (under H04N 23/00, cameras or camera modules comprising electronic image sensors; control thereof)
- H04N 25/704—Pixels specially adapted for focusing, e.g. phase difference pixel sets (under H04N 25/703, SSIS architectures incorporating pixels for producing signals other than image signals)
Definitions
- the present embodiment relates to an imaging apparatus, an image processing apparatus, an image processing method, and a recording medium that records an image processing program.
- a phenomenon called a ghost may occur.
- the ghost occurs with ghost light, such as light internally reflected in a lens, imaged by an image sensor.
- the ghost which occurs in an imaging apparatus having an image sensor with Bayer array structure, causes false color.
- a difference occurs between the pixel value of a Gr pixel and the pixel value of a Gb pixel that should originally be at the same level.
- the presence or absence of the ghost can be determined from this difference between the pixel value of a Gr pixel and the pixel value of a Gb pixel.
- the presence or absence of the ghost is determined from the difference between the mean value of the pixel values of Gr pixels and the mean value of the pixel values of Gb pixels.
- the difference between the pixel value of a Gr pixel and that of a Gb pixel, which occurs due to the structure (edge structure) of a fine subject, can be suppressed by calculating the mean value.
- a Gr pixel and a Gb pixel are arranged at different positions. Therefore, the center of gravity represented by the mean value of the pixel values of Gr pixels and the center of gravity represented by the mean value of the pixel values of Gb pixels do not match each other. As a result, the difference between the pixel value of a Gr pixel and that of a Gb pixel, which occurs due to an edge structure depending on the structure of a subject, cannot be suppressed just by calculating the mean values.
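This center-of-gravity mismatch can be made concrete with a small sketch (mine, not the patent's; the 6×6 area size and the Gr-at-top-left layout follow the embodiment described later): within one Bayer area, the mean position of the Gr pixels differs from that of the Gb pixels by one pixel in each direction.

```python
# Illustrative sketch (not from the patent): inside a 6x6 Bayer area whose
# head color is Gr, Gr pixels occupy (even row, even col) and Gb pixels
# occupy (odd row, odd col), so their centers of gravity cannot coincide.

def centroid(coords):
    """Mean (row, col) position of a set of pixel coordinates."""
    n = len(coords)
    return (sum(r for r, _ in coords) / n, sum(c for _, c in coords) / n)

gr = [(r, c) for r in range(0, 6, 2) for c in range(0, 6, 2)]  # 9 Gr pixels
gb = [(r, c) for r in range(1, 6, 2) for c in range(1, 6, 2)]  # 9 Gb pixels

print(centroid(gr))  # (2.0, 2.0)
print(centroid(gb))  # (3.0, 3.0): one pixel down-right of the Gr centroid
```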
- the present invention has been made in view of the above circumstance, and an object of the invention is to provide an imaging apparatus, an image processing apparatus, an image processing method, and a recording medium that records an image processing program, which can accurately detect a ghost by eliminating an influence of the structure of a subject.
- An imaging apparatus includes a solid-state image sensor with Bayer array structure.
- the Bayer array structure includes a first line and a second line. In the first line, red pixels and first green pixels are alternately arranged in a horizontal direction. In the second line, blue pixels and second green pixels are alternately arranged in the horizontal direction. The first line and second line are alternately arranged in a vertical direction.
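The line structure above can be sketched as follows (the function name and the convention that the head pixel at the origin is Gr are my assumptions, chosen to match the embodiment described later):

```python
def bayer_color(row, col):
    """Color at (row, col) in the Bayer array described above: the first
    line alternates Gr and R, the second line alternates B and Gb, and
    the two lines repeat vertically. Gr at the origin is an assumption."""
    if row % 2 == 0:
        return "Gr" if col % 2 == 0 else "R"
    return "B" if col % 2 == 0 else "Gb"

# First two lines of the repeating pattern:
print([bayer_color(0, c) for c in range(4)])  # ['Gr', 'R', 'Gr', 'R']
print([bayer_color(1, c) for c in range(4)])  # ['B', 'Gb', 'B', 'Gb']
```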
- the imaging apparatus includes a calculation circuit, an interpolation operating circuit, and a G-step detecting circuit. The calculation circuit calculates a first value and a second value, for each predetermined image area of image data output from the solid-state image sensor.
- the first value is a mean value or integrated value of pixel values of the first green pixels.
- the second value is a mean value or integrated value of pixel values of the second green pixels.
- the interpolation operating circuit performs, for a plurality of the image areas, an interpolation operation by using the first values and the second values such that a first center of gravity represented by each of the first values and a second center of gravity represented by each of the second values match each other.
- the G-step detecting circuit calculates a difference between the first value and the second value interpolated.
- An image processing apparatus includes a calculation circuit, an interpolation operating circuit, and a G-step detecting circuit.
- the calculation circuit calculates a first value and a second value, for each predetermined image area of image data output from a solid-state image sensor with Bayer array structure.
- the Bayer array structure includes a first line and a second line. In the first line, red pixels and first green pixels are alternately arranged in a horizontal direction. In the second line, blue pixels and second green pixels are alternately arranged in the horizontal direction. The first line and second line are alternately arranged in a vertical direction.
- the interpolation operating circuit performs, for a plurality of the image areas, an interpolation operation by using the first values and the second values such that a first center of gravity represented by each of the first values and a second center of gravity represented by each of the second values match each other.
- the G-step detecting circuit calculates a difference between the first value and the second value interpolated.
- An image processing method including calculating a first value and a second value, for each predetermined image area of image data output from a solid-state image sensor with Bayer array structure in which a line, where red pixels and first green pixels are alternately arranged in a horizontal direction, and a line, where blue pixels and second green pixels are alternately arranged in the horizontal direction, are alternately arranged in a vertical direction, the first value being a mean value or integrated value of pixel values of the first green pixels and the second value being a mean value or integrated value of pixel values of the second green pixels; performing an interpolation operation by using a plurality of the first values and a plurality of the second values calculated for a plurality of the image areas such that a first center of gravity represented by each of the first values and a second center of gravity represented by each of the second values match each other; and calculating a difference between the first value and the second value interpolated by the interpolation operation.
- a computer readable recording medium records an image processing program.
- the image processing program causes a computer to execute: calculating a first value and a second value, for each predetermined image area of image data output from a solid-state image sensor with Bayer array structure in which a line, where red pixels and first green pixels are alternately arranged in a horizontal direction, and a line, where blue pixels and second green pixels are alternately arranged in the horizontal direction, are alternately arranged in a vertical direction, the first value being a mean value or integrated value of pixel values of the first green pixels and the second value being a mean value or integrated value of pixel values of the second green pixels; performing an interpolation operation by using a plurality of the first values and a plurality of the second values calculated for a plurality of the image areas such that a first center of gravity represented by each of the plurality of the first values and a second center of gravity represented by each of the plurality of the second values match each other; and calculating a difference between the first value and the second value interpolated by the interpolation operation.
- FIG. 1 is a block diagram illustrating a configuration of one example of a camera system as one example of an imaging apparatus according to one embodiment.
- FIG. 2 is a view illustrating division of image data in one embodiment.
- FIG. 3 is a view illustrating a positional relationship among centers of gravity Gr 1 , Gb 1 , Gr 2 , and Gb 2 .
- FIG. 4 is a view illustrating movements of the centers of gravity after an interpolation operation is performed in FIG. 3 .
- FIG. 5 is a view illustrating how to shift an image area R 2 when the head color of an image area R 1 is R.
- FIG. 6 is a view illustrating a positional relationship among centers of gravity Gr 1 , Gb 1 , Gr 2 , and Gb 2 , when the head color of the image area R 1 is R and when the image area R 2 is created by shifting the image area R 1 by 2 pixels in each of the horizontal and vertical directions to the upper right direction.
- FIG. 7 is a view illustrating movements of the centers of gravity after an interpolation operation is performed in FIG. 6 .
- FIG. 8A is a flowchart showing operations of a camera system when a still image is recorded.
- FIG. 8B is a flowchart showing operations of the camera system when a still image is recorded.
- FIG. 9A is a flowchart showing operations of the camera system when a movie image is recorded.
- FIG. 9B is a flowchart showing operations of the camera system when a movie image is recorded.
- FIG. 10A is a view illustrating a positional relationship among four image areas when division is performed with four patterns.
- FIG. 10B is a view illustrating a positional relationship among four image areas when division is performed with four patterns.
- FIG. 11 is a view illustrating a positional relationship among eight centers of gravity represented by first values and second values obtained in the four image areas.
- FIG. 12 is a view illustrating movements of the centers of gravity after an interpolation operation is performed in FIG. 11 .
- FIG. 13A illustrates an example of an image sensor in which phase difference detection pixels are arranged in the positions of some of Gr pixels and Gb pixels.
- FIG. 13B illustrates an example of an image sensor in which phase difference detection pixels are arranged in a certain pixel line.
- FIG. 1 is a block diagram illustrating a configuration of a camera system as one example of an imaging apparatus according to one embodiment of the present invention.
- a camera system 1 illustrated in FIG. 1 includes an interchangeable lens 100 and a camera body 200 .
- the interchangeable lens 100 is configured to be attached to and detached from the camera body 200 .
- the interchangeable lens 100 and the camera body 200 are communicably connected to each other.
- the camera system 1 does not necessarily have to be a lens-interchangeable camera system.
- the camera system 1 may be, for example, a lens-integrated camera system.
- the interchangeable lens 100 includes an imaging lens 102 , an aperture 104 , a driver 106 , a lens microcomputer 108 , and a flash memory 110 .
- the imaging lens 102 is an optical system for imaging a light flux from a subject on the image sensor 206 of the camera body 200 .
- the imaging lens 102 has one or more lenses including a focus lens.
- the imaging lens 102 may include a zoom lens.
- the aperture 104 is arranged on the optical axis of the imaging lens 102 , and is configured such that its diameter is variable.
- the aperture 104 regulates a light flux from a subject that passes through the imaging lens 102 to enter the image sensor 206 .
- the driver 106 drives the focus lens of the imaging lens 102 and drives the aperture 104 , based on a control signal from the lens microcomputer 108 .
- the lens microcomputer 108 is configured to be communicable with a microcomputer 234 of the camera body 200 via an interface (I/F) 202 provided in the camera body 200 .
- This lens microcomputer 108 controls, under the control of the microcomputer 234 , the driver 106 in accordance with a program stored in the flash memory 110 .
- the lens microcomputer 108 also transmits various types of information stored in the flash memory 110 via an I/F 202 , such as lens information, to the microcomputer 234 .
- the lens microcomputer 108 does not necessarily have to be configured as a microcomputer, and may be configured by an ASIC (application specific integrated circuit), a FPGA (field-programmable gate array), or the like.
- the flash memory 110 stores programs required for the operation of the interchangeable lens 100 .
- the flash memory 110 also stores lens information on the interchangeable lens 100 .
- the lens information includes, for example, information on the focal length of the imaging lens 102 and information on aberration.
- the camera body 200 has the I/F 202 , a shutter 204 , the image sensor 206 , an image sensor driver 208 , an analog processor 210 , an analog-to-digital (A/D) converter 212 , a bus 214 , an SDRAM (Synchronous Dynamic Random Access Memory) 216 , an AE processor 218 , an AF processor 220 , a first image processor 222 , a second image processor 224 , a display driver 226 , a display 228 , a memory interface (I/F) 230 , a recording medium 232 , the microcomputer 234 , a flash memory 236 , and an operation interface 238 .
- Each block of the camera body 200 is implemented, for example, by hardware. However, it does not necessarily have to be implemented by hardware, and may be implemented by software. In addition, each block of the camera body 200 need not be implemented as a single piece of hardware or software, and may be implemented as a plurality of pieces of hardware or software.
- the shutter 204 is configured to be freely opened and closed.
- the shutter 204 adjusts the incidence time of a light flux from a subject into the image sensor 206 (the exposure time of the image sensor 206 ).
- a focal-plane shutter is adopted as the shutter 204 .
- the shutter 204 is driven based on a control signal from the microcomputer 234 .
- the image sensor 206 is arranged at a position that is on the optical axis of the imaging lens 102 and is behind the shutter 204 , and at which a light flux from a subject forms an image by the imaging lens 102 .
- the image sensor 206 is a solid-state image sensor having a color filter with Bayer array structure.
- the Bayer array structure means an array structure of a color filter in which a line, where R (red) pixels and Gr (first green) pixels are alternately arranged in the horizontal direction, and a line, where B (blue) pixels and Gb (second green) pixels are alternately arranged in the horizontal direction, are alternately arranged in the vertical direction.
- Such an image sensor 206 generates an image signal by imaging a subject field.
- the image sensor driver 208 drives the image sensor 206 .
- the image sensor driver 208 also controls reading of an image signal generated by the image sensor 206 .
- the analog processor 210 performs analog processing, such as amplification processing, on the image signal read from the image sensor 206 .
- the A/D converter 212 converts the image signal output from the analog processor 210 into digital-format image data.
- the bus 214 is connected to the A/D converter 212 , the SDRAM 216 , the AE processor 218 , the AF processor 220 , the first image processor 222 , the second image processor 224 , the display driver 226 , and the memory I/F 230 , and operates as a transfer path for transferring the various data generated in these blocks.
- the SDRAM 216 is an electrically rewritable memory.
- the SDRAM 216 temporarily stores various data such as the image data output from the A/D converter 212 , the first image processor 222 , or the second image processor 224 and the processed data in the AE processor 218 , the AF processor 220 , or the microcomputer 234 .
- the AE processor 218 performs automatic exposure (AE) processing. Specifically, the AE processor 218 sets imaging conditions (aperture value and shutter speed value) based on an AE evaluation value representing a subject brightness in the image data.
- the AF processor 220 performs automatic focus adjustment (AF) processing. Specifically, the AF processor 220 controls the drive of a focus lens included in the imaging lens 102 , based on focal information obtained from the image data or the like.
- the focal information is an AF evaluation value (contrast value) calculated, for example, from the image data.
- the focal information may be a defocus amount calculated from the output of focus detection pixels.
- the first image processor 222 performs image processing for obtaining information required for performing basic processing for imaging, such as obtaining of the AE evaluation value for AE processing, the AF evaluation value for AF processing, and information for calculating a white balance gain in AWB (auto white balance mode).
- the AE evaluation value can be obtained by calculating, for example, the mean value or integrated value of the pixel values of the same color pixels of image data for each predetermined image area.
- the AF evaluation value can be obtained by calculating, for example, the integrated value of the high frequency components for each predetermined image area (this image area may or may not match the image area for calculating the AE evaluation value) of the image data extracted by HPF (high pass filter) processing.
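As a toy illustration of this kind of contrast evaluation (everything here is a sketch, not the patent's circuit; a simple horizontal pixel difference stands in for the HPF):

```python
def af_evaluation(area):
    """Toy contrast-AF evaluation for one image area: a horizontal pixel
    difference acts as a crude high-pass filter, and the absolute
    responses are integrated. Sharper content gives a larger value."""
    total = 0
    for row in area:
        for left, right in zip(row, row[1:]):
            total += abs(right - left)
    return total

flat = [[10, 10, 10, 10]] * 4    # defocused: no high-frequency content
sharp = [[0, 100, 0, 100]] * 4   # in focus: strong edges
print(af_evaluation(flat), af_evaluation(sharp))  # 0 1200
```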
- the information for calculating the white balance gain can be obtained by calculating, for example, the mean value or integrated value of the pixel values of the same color pixels of image data for each predetermined image area and by using the spectral transmittance characteristics of the imaging lens 102 , the spectral sensitivity characteristics of the image sensor 206 , and the like.
- the first image processor 222 in the present embodiment includes a calculation circuit 2221 .
- the calculation circuit 2221 calculates, for each predetermined image area of image data, a first value that is the mean value or integrated value of the pixel values of Gr pixels and a second value that is the mean value or integrated value of the pixel values of Gb pixels.
- the first value and the second value calculated by the calculation circuit 2221 are used for the later-described ghost determination.
- as described above, calculating the AE evaluation value and the information for the white balance gain in AWB also involves computing the mean value or integrated value of pixel values. Therefore, a calculation circuit for calculating the AE evaluation value or the white balance gain in AWB may double as the calculation circuit 2221.
- the calculation circuit 2221 may be provided separately from the calculation circuit for calculating the AE evaluation value or the white balance gain in AWB.
- the second image processor 224 includes a basic image processor 2241 .
- the basic image processor 2241 performs basic image processing required for displaying or recording an image on image data. This basic image processing includes, for example, optical black (OB) subtraction processing, white balance (WB) correction processing, demosaic processing, color conversion processing, gamma conversion processing, noise reduction processing, scaling processing, and compression processing.
- the basic image processor 2241 of the second image processor 224 may be further configured to perform image processing for ghost cancellation.
- as an example of image processing for ghost cancellation, processing is cited in which the pixel value of a pixel where a ghost occurs is interpolated, for example, by using the pixel values of the surrounding pixels of the same color.
- An image processor for ghost cancellation may be provided separately from the basic image processor 2241 .
- the display driver 226 drives the display 228 to make the display 228 display an image based on the image data processed by the second image processor 224 .
- the display 228 is a display such as, for example, a liquid crystal display or an organic EL display.
- the display 228 is arranged, for example, on the rear surface of the camera body 200 .
- the display 228 is driven by the display driver 226 to display various images.
- the display 228 does not necessarily have to be provided in the camera body 200 .
- the display 228 may be, for example, a TV monitor, a monitor display, or the like that is communicably connected to the camera body 200 .
- the memory I/F 230 mediates the data transfer from the recording medium 232 to the bus 214 and from the bus 214 to the recording medium 232 .
- the recording medium 232 is, for example, a flash memory.
- the recording medium 232 is configured to be built in or loaded into the camera body 200 .
- the recording medium 232 records the image data processed by the second image processor 224 as an image file of a predetermined format.
- the microcomputer 234 is a controller that controls each block of the camera body 200 in accordance with the programs stored in the flash memory 236 .
- the microcomputer 234 does not necessarily have to be configured as a microcomputer, and may be configured by an ASIC, an FPGA, or the like.
- the microcomputer 234 in the present embodiment includes an interpolation operating circuit 2341 and a G-step detecting circuit 2342 .
- the interpolation operating circuit 2341 performs an interpolation operation by using the first value and the second value calculated by the calculation circuit 2221 such that the center of gravity represented by the first value and the center of gravity represented by the second value match each other.
- the G-step detecting circuit 2342 determines the presence or absence of a ghost by calculating a G-step that is the difference between the first value and the second value interpolated by the interpolation operating circuit 2341 .
- the details of the interpolation operating circuit 2341 and the G-step detecting circuit 2342 will be described later.
- the flash memory 236 stores programs required for the operation of the camera body 200 .
- the flash memory 236 also stores information required for various processing of the camera body 200 . This information includes, for example, information on the parameters of image processing.
- the operation interface 238 includes various operation members such as: various operation buttons including a power button for turning on and off the power supply of the camera body 200, a release button for generating a trigger signal that directs a still image to be imaged, a movie start button for starting recording of a movie image, a movie end button for ending recording of a movie image, a play button that directs an imaged and recorded still image or movie image to be played, and a menu button that directs the change or setting of various setting values and modes of the camera body 200; and a touch panel that performs functions similar to the operations of the various operation buttons.
- the operation interface 238 detects the operation states of the various operation members, and outputs a signal representing a detection result to the microcomputer 234 .
- the calculation circuit 2221 first divides the image data obtained via the image sensor 206 into a plurality of image areas with a first pattern in the present embodiment.
- the calculation circuit 2221 divides, for example, the image data into units each having an image area R 1 of 6 pixels×6 pixels, as illustrated in FIG. 2 .
- the size of the image area R 1 may be appropriately set, and does not necessarily have to be 6 pixels×6 pixels.
- the head color (the color of the upper left end pixel) of the image area R 1 is Gr, but the head color of the image area R 1 may not be Gr.
- the calculation circuit 2221 calculates, for each image area R 1 , the mean value or integrated value (first value) of the pixel values of Gr pixels and the mean value or integrated value (second value) of the pixel values of Gb pixels.
- the calculation circuit 2221 divides the image data into a plurality of image areas with a second pattern.
- the calculation circuit 2221 divides the image data into units each having an image area R 2 that has the same size and color arrangement as the image area R 1 and that is located at a different position from the image area R 1 .
- the calculation circuit 2221 divides, for example, the image data into units each having the image area R 2 that is located at a position shifted from the image area R 1 by 2 pixels in each of the horizontal and vertical directions to the lower right direction, as illustrated in FIG. 2 .
- the shift amount does not necessarily have to be 2 pixels, and may be 4 pixels, 6 pixels, or the like.
- the image data is divided such that the image area R 2 is shifted from the image area R 1 by the minimum number of pixels, i.e., by 2 pixels.
- the calculation circuit 2221 calculates, for each image area R 2 , the mean value or integrated value (first value) of the pixel values of Gr pixels and the mean value or integrated value (second value) of the pixel values of Gb pixels.
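A minimal sketch of this per-area calculation (the function name and the pixel layout are my assumptions: head color Gr, so Gr pixels sit at even/even and Gb pixels at odd/odd offsets within the area):

```python
def gr_gb_means(image, top, left, size=6):
    """First value (mean of Gr pixel values) and second value (mean of
    Gb pixel values) for one size x size area whose head color is Gr."""
    gr_vals = [image[top + r][left + c]
               for r in range(0, size, 2) for c in range(0, size, 2)]
    gb_vals = [image[top + r][left + c]
               for r in range(1, size, 2) for c in range(1, size, 2)]
    return sum(gr_vals) / len(gr_vals), sum(gb_vals) / len(gb_vals)

# On a flat gray frame the two values agree for both division patterns.
image = [[128] * 12 for _ in range(12)]
print(gr_gb_means(image, 0, 0))  # area R1 at the origin -> (128.0, 128.0)
print(gr_gb_means(image, 2, 2))  # area R2, shifted 2 px down-right
```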
- the first value for the image area R 1 is the mean value or integrated value of the pixel values of Gr pixels in the image area R 1 . Therefore, it can be considered that the first value for the image area R 1 is a value at the center of gravity of the Gr pixels in the image area R 1 .
- the second value for the image area R 1 is a value at the center of gravity of the Gb pixels in the image area R 1 ;
- the first value for the image area R 2 is a value at the center of gravity of the Gr pixels in the image area R 2 ;
- the second value for the image area R 2 is a value at the center of gravity of the Gb pixels in the image area R 2 .
- the positional relationship among these four centers of gravity is as illustrated in FIG. 3 .
- a center of gravity Gr 1 represented by the first value for the image area R 1 is shifted from a center of gravity Gb 1 represented by the second value for the image area R 1 .
- a center of gravity Gr 2 represented by the first value for the image area R 2 is shifted from a center of gravity Gb 2 represented by the second value for the image area R 2 . Due to these shifts between the centers of gravity, a difference may arise between the first value and the second value under an influence of the structure of a subject, and the larger the shift between the centers of gravity, the larger that influence. If such a difference is caused by the structure of a subject, it may lead to an incorrect ghost determination.
- an interpolation operation is performed in the present embodiment by using the first value and the second value for the image area R 1 and the first value and the second value for the image area R 2 such that the center of gravity represented by the first value and the center of gravity represented by the second value match each other.
- Igr is a first value (a first interpolated value) after the interpolation operation
- Vgr 1 is the first value for the image area R 1
- Vgr 2 is the first value for the image area R 2 .
- Igb is a second value (a second interpolated value) after the interpolation operation
- Vgb 1 is the second value for the image area R 1
- Vgb 2 is the second value for the image area R 2 .
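The body of (Equation 1) is not reproduced in this excerpt. A plausible reconstruction, assuming the four centers of gravity Gr 1 , Gb 1 , Gr 2 , and Gb 2 sit at equal intervals (positions 0, 1, 2, 3) along one diagonal as in FIG. 3 and that the common interpolated position I is their midpoint 1.5, is a 1:3 / 3:1 linear interpolation; the weights below follow from that geometry, not from the patent text:

```python
# Hedged reconstruction of (Equation 1); the 1:3 and 3:1 weights come
# from linear interpolation to position 1.5, not from the patent text.

def interpolate(vgr1, vgr2, vgb1, vgb2):
    igr = (vgr1 + 3 * vgr2) / 4   # Gr value interpolated from positions 0 and 2
    igb = (3 * vgb1 + vgb2) / 4   # Gb value interpolated from positions 1 and 3
    return igr, igb

# Applying the same weights to the positions themselves shows that both
# centers of gravity land on the common interpolated point I at 1.5:
pos_gr = (0 + 3 * 2) / 4
pos_gb = (3 * 1 + 3) / 4
print(pos_gr, pos_gb)  # 1.5 1.5
```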
- FIG. 4 is a view illustrating movements of the centers of gravity after the interpolation operations are performed.
- the position (interpolated position) represented by the first interpolated value Igr obtained by the interpolation operation shown in (Equation 1) is the position of I in FIG. 4 .
- the position (interpolated position) represented by the second interpolated value Igb obtained by the interpolation operation shown in (Equation 1) is also the position of I in FIG. 4 . That is, the center of gravity represented by the first interpolated value and the center of gravity represented by the second interpolated value match each other.
- the positions of the centers of gravity match each other, the influences of the structure of a subject in the Gr pixel and Gb pixel become the same. Thereby, ghost determination can be performed correctly.
- the G-step detecting circuit 2342 calculates a difference (a G-step) between the first interpolated value Igr and the second interpolated value Igb that are obtained by the interpolation operation. If the Gr pixel and the Gb pixel are not influenced by the structure of a subject, the pixel values of the Gr pixel and the Gb pixel located close to each other are usually almost the same. Therefore, the G-step, the difference between the first value that is the mean value or integrated value of Gr pixels and the second value that is the mean value or integrated value of Gb pixels, becomes small.
- if a ghost occurs, the G-step, the difference between the first value that is the mean value or integrated value of Gr pixels and the second value that is the mean value or integrated value of Gb pixels, also becomes large. From such a relationship, the presence or absence of a ghost can be determined from the size of the G-step.
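The resulting decision rule can be sketched as follows (the function and the threshold value are illustrative assumptions, not from the patent):

```python
def ghost_detected(igr, igb, threshold):
    """Judge a ghost present when the G-step, the absolute difference
    between the interpolated first and second values, exceeds a
    threshold. The threshold is an assumed tuning parameter."""
    return abs(igr - igb) > threshold

print(ghost_detected(128.0, 128.4, threshold=2.0))  # False: G-step small
print(ghost_detected(128.0, 140.0, threshold=2.0))  # True: ghost suspected
```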
- the image area R 2 is shifted from the image area R 1 by 2 pixels in each of the horizontal and vertical directions to the lower right direction. This is because the center of gravity Gb 1 represented by the second value is shifted from the center of gravity Gr 1 represented by the first value to the lower right direction.
- the direction to which the image area R 2 is to be shifted needs to be changed depending on the head color of the image area R 1 .
- FIG. 5 is a view illustrating how to shift the image area R 2 when the head color of the image area R 1 is R.
- the center of gravity Gb 1 represented by the second value is shifted from the center of gravity Gr 1 represented by the first value in the image area R 1 to the upper right direction. Therefore, if the image area R 2 is shifted from the image area R 1 to the lower right direction, the centers of gravity Gr 1 , Gb 1 , Gr 2 , and Gb 2 are not aligned on the same straight line. In this case, the centers of gravity cannot be matched to each other by the interpolation operations.
- the image area R 2 may be located at a position shifted from the image area R 1 by 2 pixels in each of the horizontal and vertical directions to the upper right direction, as illustrated in FIG. 5 .
- the image area R 2 may be located at a position shifted to the lower right (upper left) direction
- the image area R 2 may be located at a position shifted to the upper right (lower left) direction.
- FIG. 6 is a view illustrating the positional relationship among the centers of gravity Gr 1 , Gb 1 , Gr 2 , and Gb 2 , when the head color of the image area R 1 is R and when the image area R 2 is located at a position shifted from the image area R 1 by 2 pixels in each of the horizontal and vertical directions to the upper right direction.
- the centers of gravity Gr 1 , Gb 1 , Gr 2 , and Gb 2 are aligned on the same straight line by shifting the image area R 2 from the image area R 1 to the upper right direction.
- the centers of gravity Gr 1 , Gb 1 , Gr 2 , and Gb 2 have a positional relationship as illustrated in FIG. 6 .
- FIG. 7 is a view illustrating movements of the centers of gravity after the interpolation operations are performed in FIG. 6 .
- the position (interpolated position) represented by the first interpolated value Igr obtained by the interpolation operation shown in (Equation 2) is the position of I in FIG. 7 .
- the position (interpolated position) represented by the second interpolated value Igb obtained by the interpolation operation shown in (Equation 2) is also the position of I in FIG. 7 . That is, the center of gravity represented by the first interpolated value and the center of gravity represented by the second interpolated value match each other.
- FIGS. 8A and 8B are flowcharts showing operations of the camera system 1 when a still image is recorded.
- the operations of FIGS. 8A and 8B are mainly controlled by the microcomputer 234 .
- step S 1 the microcomputer 234 drives the image sensor 206 at a predetermined frame rate via the image sensor driver 208 in order to obtain an image for a live view.
- the image signal obtained by the image sensor 206 is analog processed by the analog processor 210 .
- the image signal analog processed by the analog processor 210 is converted into image data, a digital signal, by the A/D converter 212 .
- the image data obtained by the A/D converter 212 is stored in the SDRAM 216 via the first image processor 222 .
- step S 2 the microcomputer 234 directs the second image processor 224 to perform basic image processing on the image data obtained by the operations for obtaining the image for a live view.
- the second image processor 224 performs the basic image processing on the image data stored in the SDRAM 216 by the basic image processor 2241 .
- the basic image processing herein is image processing required for the display on the display 228 , and includes, for example, OB subtraction processing, WB correction processing, demosaic processing, color conversion processing, gamma conversion processing, noise reduction processing, and scaling processing.
- the microcomputer 234 directs the display driver 226 to display the live view.
- the display driver 226 inputs the image data to the display 228 , the image data having been sequentially obtained via the image sensor 206 and sequentially processed by the second image processor 224 .
- the display 228 displays the live view based on the input image data.
- step S 3 the first image processor 222 divides, by the calculation circuit 2221 , the image data obtained by the operations for obtaining the image for a live view with the first pattern. Then, the first image processor 222 calculates, for example, the mean value (first value) of the pixel values of Gr pixels and the mean value (second value) of the pixel values of Gb pixels by the calculation circuit 2221 , for each image area R 1 .
- the integrated value of the pixel values of Gr pixels and the integrated value of the pixel values of Gb pixels may be calculated as the first value and the second value, as described above.
- the processing of step S 3 may be performed while the image for a live view is being obtained, that is, while the image data are being input from the A/D converter 212 to the first image processor 222 .
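As a rough sketch of the step S 3 calculation (not the patented calculation circuit), the first and second values can be computed per image area with array slicing. The head color Gr (Gr at even row/column, Gb at odd row/column) and the area size are assumptions:

```python
import numpy as np

def g_values(raw, area=4):
    """First value (mean of Gr pixels) and second value (mean of Gb pixels)
    for each image area. Sketch only: assumes head color Gr (Gr at even
    row/col, Gb at odd row/col) and an even area size."""
    # Trim the frame so it divides evenly into area-by-area blocks.
    h, w = (raw.shape[0] // area) * area, (raw.shape[1] // area) * area
    blocks = raw[:h, :w].reshape(h // area, area, w // area, area).swapaxes(1, 2)
    first = blocks[:, :, 0::2, 0::2].mean(axis=(2, 3))   # Gr mean per area
    second = blocks[:, :, 1::2, 1::2].mean(axis=(2, 3))  # Gb mean per area
    return first, second
```

Replacing `.mean(axis=(2, 3))` with `.sum(axis=(2, 3))` yields the integrated values mentioned above instead of the mean values.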
- step S 4 the first image processor 222 determines whether the head color of the image area R 1 is R or B. When it is determined in step S 4 that the head color of the image area R 1 is R or B, the processing moves to step S 5 . When it is determined in step S 4 that the head color of the image area R 1 is neither R nor B, that is, is Gr or Gb, the processing moves to step S 6 .
- step S 5 the first image processor 222 creates a second pattern by shifting the first pattern by 2 pixels in each of the horizontal and vertical directions to the upper right direction.
- step S 6 the first image processor 222 creates a second pattern by shifting the first pattern by 2 pixels in each of the horizontal and vertical directions to the lower right direction.
- step S 7 the first image processor 222 divides, by the calculation circuit 2221 , the image data obtained by the operations for obtaining the image for a live view with the second pattern. Then, the first image processor 222 calculates, for example, the mean value (first value) of the pixel values of Gr pixels and the mean value (second value) of the pixel values of Gb pixels by the calculation circuit 2221 , for each image area R 2 .
- when the integrated value of the pixel values of Gr pixels and the integrated value of the pixel values of Gb pixels are calculated in step S 3 , those integrated values are calculated also in step S 7 .
- step S 8 the microcomputer 234 performs interpolation operations for each pair of the image areas by the interpolation operating circuit 2341 .
- when the head color of the image area R 1 is Gr or Gb, the interpolation operating circuit 2341 performs the operations of (Equation 1), and when the head color of the image area R 1 is R or B, it performs the operations of (Equation 2).
- the center of gravity represented by the mean value of the pixel values of Gr pixels and the center of gravity represented by the mean value of the pixel values of Gb pixels match each other.
- step S 9 the microcomputer 234 calculates, for each pair of the image areas, the difference (G-step) between the first interpolated value Igr and the second interpolated value Igb, which are obtained by the interpolation operations, by the G-step detecting circuit 2342 .
- step S 10 the microcomputer 234 determines whether the mean value (or the integrated value) of the differences (G-steps) between the first interpolated values Igr and the second interpolated values Igb calculated for each image area is larger than or equal to a threshold. When it is larger than or equal to the threshold, the processing moves to step S 11 ; otherwise, the processing moves to step S 12 .
- This threshold is used for determining the presence or absence of a ghost and is appropriately set.
- step S 11 the microcomputer 234 determines that a ghost currently occurs, and sets a flag with ghost.
- step S 12 the microcomputer 234 determines that a ghost does not currently occur, and sets a flag without ghost. After step S 11 or step S 12 , the processing moves to step S 13 .
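Steps S 9 to S 12 can be outlined as follows. Using the absolute difference as the G-step and the default threshold value here are assumptions; the patent only says the threshold is appropriately set:

```python
import numpy as np

def ghost_flag(igr, igb, threshold=8.0):
    """Outline of steps S9-S12: per-pair G-step, then a mean-versus-threshold
    ghost decision. The absolute difference and the default threshold are
    illustrative assumptions, not values from the patent."""
    g_step = np.abs(np.asarray(igr, float) - np.asarray(igb, float))  # step S9
    return bool(g_step.mean() >= threshold)                           # steps S10-S12
```

A `True` result corresponds to setting the flag with ghost in step S 11 , and `False` to the flag without ghost in step S 12 .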
- step S 13 the microcomputer 234 determines whether a user has pressed the release button. When it is determined in step S 13 that the release button has been pressed, the processing moves to step S 14 . When it is determined in step S 13 that the release button has not been pressed, the processing moves to step S 22 .
- step S 14 the microcomputer 234 drives the image sensor 206 via the image sensor driver 208 in order to record a still image.
- the image sensor 206 is driven in accordance with the imaging condition (shutter speed value) set by the AE processor 218 .
- the image signal obtained by the image sensor 206 is analog processed by the analog processor 210 .
- the image signal analog processed by the analog processor 210 is converted into image data, a digital signal, by the A/D converter 212 .
- the image data obtained by the A/D converter 212 is stored in the SDRAM 216 via the first image processor 222 .
- step S 15 the microcomputer 234 determines whether a ghost occurs. This determination is made based on whether the flag with ghost is set. When it is determined in step S 15 that a ghost occurs, the processing moves to step S 16 . When it is determined in step S 15 that a ghost does not occur, the processing moves to step S 19 .
- step S 16 the microcomputer 234 makes the display 228 display a warning indicating that a ghost currently occurs in the obtained still image.
- the warning of step S 16 is made, for example, by displaying a message.
- the warning does not necessarily have to be displayed on the display 228 , and may be displayed on a display different from the display 228 .
- the warning may be made by a technique other than display.
- step S 17 the microcomputer 234 directs the second image processor 224 to perform basic image processing including the image processing for ghost cancellation.
- the second image processor 224 performs, by the basic image processor 2241 , the basic image processing including the image processing for ghost cancellation on the image data stored in the SDRAM 216 .
- the processing moves to step S 18 .
- the basic image processing of step S 17 is image processing required for recording a still image on the recording medium 232 , and includes, for example, OB subtraction processing, WB correction processing, demosaic processing, color conversion processing, gamma conversion processing, noise reduction processing, scaling processing, and still image compression processing, in addition to the image processing for ghost cancellation.
- the basic image processor 2241 may be configured to change an image processing parameter in accordance with the size of the G-step. That is, it can be considered that a stronger ghost occurs as the G-step is larger, and hence the image processing parameter may be changed such that an effect of the ghost cancellation is made larger as the G-step is larger.
- as for step S 17 , there is the possibility that a user may intentionally cause a ghost, and hence processing may be provided prior to the processing of step S 17 , in which a user is caused to select whether to perform image processing for ghost cancellation. In this case, when a user selects to perform the image processing for ghost cancellation, the processing of step S 17 is performed. Alternatively, the image processing for ghost cancellation of step S 17 may be omitted.
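As for changing the image processing parameter with the size of the G-step, one simple hypothetical realization is a clamped linear mapping from the G-step to a cancellation strength; the bounds below are illustrative tuning values, not taken from the patent:

```python
def cancel_strength(g_step, step_min=0.0, step_max=64.0):
    """Hypothetical mapping from the detected G-step to a ghost-cancellation
    strength in [0, 1]: the larger the G-step, the stronger the cancellation.
    step_min/step_max are illustrative bounds, not values from the patent."""
    if step_max <= step_min:
        raise ValueError("step_max must exceed step_min")
    t = (g_step - step_min) / (step_max - step_min)
    return max(0.0, min(1.0, t))  # clamp to [0, 1]
```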
- step S 18 the microcomputer 234 records the flag with ghost in Exif (Exchangeable image file format) information that is additional information on a still image. Thereafter, the processing moves to step S 21 .
- step S 19 after it is determined in step S 15 that a ghost does not occur, the microcomputer 234 directs the second image processor 224 to perform the normal basic image processing not including the image processing for ghost cancellation.
- the second image processor 224 performs, by the basic image processor 2241 , the normal basic image processing not including the image processing for ghost cancellation on the image data stored in the SDRAM 216 . Thereafter, the processing moves to step S 20 .
- the basic image processing of step S 19 is the same as the basic image processing required for recording a still image described in step S 17 , except for the image processing for ghost cancellation.
- step S 20 the microcomputer 234 records the flag without ghost in the Exif information that is additional information on a still image. Thereafter, the processing moves to step S 21 .
- step S 21 the microcomputer 234 generates a still image file by adding the Exif information to the still image on which the image processing has been performed, and records the generated still image file on the recording medium 232 via the memory I/F 230 . Thereafter, the processing returns to step S 1 .
- step S 22 after it is determined in step S 13 that the release button has not been pressed, the microcomputer 234 determines whether a user has pressed the power button. When it is determined in step S 22 that the power button has been pressed, the microcomputer 234 turns off the power supply of the camera body 200 . Thereafter, the processing of FIGS. 8A and 8B is ended. When it is determined in step S 22 that the power button has not been pressed, the processing returns to step S 1 .
- the result of the ghost determination performed when the live view is displayed is used in recording the still image.
- the time required for recording a still image can be shortened.
- the ghost determination may be performed by using the image data for recording the still image itself. In this case, the ghost determination to be performed when the live view is displayed may be omitted.
- FIGS. 9A and 9B are flowcharts showing operations of the camera system 1 when a movie image is recorded.
- the operations of FIGS. 9A and 9B are mainly controlled by the microcomputer 234 .
- description of the same operations as in FIGS. 8A and 8B will be appropriately omitted.
- step S 101 the microcomputer 234 drives the image sensor 206 at a predetermined frame rate via the image sensor driver 208 in order to obtain an image for a live view.
- step S 102 the microcomputer 234 performs processing for displaying the live view. This processing is the same as the processing of step S 2 of FIG. 8A . Therefore, description will be omitted.
- the processing (processing group 1 ) shown by steps S 103 to S 112 is the same as the processing shown by steps S 3 to S 12 of FIG. 8A . Therefore, description will be omitted.
- ghost determination is also performed when a live view in a movie image recording mode is displayed.
- step S 113 after the ghost determination is completed, the microcomputer 234 determines whether a user has pressed the movie start button. When it is determined in step S 113 that the movie start button has been pressed, the processing moves to step S 114 . When it is determined in step S 113 that the movie start button has not been pressed, the processing moves to step S 123 .
- step S 114 the microcomputer 234 drives the image sensor 206 via the image sensor driver 208 in order to record a movie frame.
- the image sensor 206 is driven at a predetermined frame rate.
- the image signal obtained by the image sensor 206 is analog processed by the analog processor 210 .
- the image signal analog processed by the analog processor 210 is converted into image data, a digital signal, by the A/D converter 212 .
- the image data obtained by the A/D converter 212 is stored in the SDRAM 216 via the first image processor 222 .
- step S 115 the microcomputer 234 determines whether a ghost occurs in the last-minute ghost determination.
- just after the movie start button is pressed, the last-minute ghost determination means the ghost determination performed when the live view is displayed; after that, it means the ghost determination performed while the immediately preceding movie frame is being generated.
- the determination in step S 115 is made based on whether a flag with ghost is set.
- step S 115 when it is determined in the last-minute ghost determination that a ghost occurs, the processing moves to step S 116 .
- step S 115 when it is determined in the last-minute ghost determination that a ghost does not occur, the processing moves to step S 118 .
- step S 116 the microcomputer 234 makes, for example, the display 228 display a warning indicating that a ghost currently occurs.
- the warning of step S 116 is made, for example, by displaying a message.
- the warning does not necessarily have to be displayed on the display 228 , and may be displayed on a display different from the display 228 .
- the warning may be made by a technique other than display.
- step S 117 the microcomputer 234 directs the second image processor 224 to perform basic image processing including image processing for ghost cancellation.
- the second image processor 224 performs, by the basic image processor 2241 , the basic image processing including the image processing for ghost cancellation on the image data stored in the SDRAM 216 .
- the processing moves to step S 119 .
- the basic image processing of step S 117 is image processing required for recording a movie image on the recording medium 232 , and includes, for example, OB subtraction processing, WB correction processing, demosaic processing, color conversion processing, gamma conversion processing, noise reduction processing, scaling processing, and movie image compression processing, in addition to the image processing for ghost cancellation.
- the basic image processor 2241 may be configured to change an image processing parameter in accordance with the size of the G-step. That is, it can be considered that a stronger ghost occurs as the G-step is larger, and hence the image processing parameter may be changed such that an effect of the ghost cancellation is made larger as the G-step is larger.
- step S 117 there is the possibility that a user may intentionally cause a ghost, and hence processing may be provided prior to the processing of step S 117 , in which a user is caused to select whether to perform image processing for ghost cancellation. In this case, when a user selects to perform the image processing for ghost cancellation, the processing of step S 117 is performed. Alternatively, the image processing for ghost cancellation of step S 117 may be omitted.
- step S 118 after it is determined in step S 115 that a ghost does not occur, the microcomputer 234 directs the second image processor 224 to perform the normal basic image processing not including the image processing for ghost cancellation.
- the second image processor 224 performs, by the basic image processor 2241 , the normal basic image processing not including the image processing for ghost cancellation on the image data stored in the SDRAM 216 .
- the processing moves to step S 119 .
- the basic image processing of step S 118 is the same as the basic image processing for recording a movie image, which has been described in step S 117 , except for the image processing for ghost cancellation.
- step S 119 the microcomputer 234 performs the ghost determination shown by the processing group 1 . That is, in the movie image recording mode, ghost determination is performed while a movie image is being recorded. Thereby, the information on the presence or absence of a ghost can be updated even while a movie image is being recorded.
- step S 120 the microcomputer 234 makes, for example, the SDRAM 216 store the movie frame obtained by the basic image processing.
- step S 121 the microcomputer 234 determines whether a user has pressed the movie end button. When it is determined in step S 121 that the movie end button has not been pressed, the processing returns to step S 114 . In this case, processing for generating the next movie frame is performed. When it is determined in step S 121 that the movie end button has been pressed, the processing moves to step S 122 .
- step S 122 the microcomputer 234 generates a movie image file from the movie frames stored in the SDRAM 216 . Then, the microcomputer 234 records a flag with ghost or a flag without ghost in the header of the generated movie image file. The flag with ghost or the flag without ghost is recorded in association with, for example, the frame number of the movie frame. After the movie image file is generated, the microcomputer 234 records the generated movie image file on the recording medium 232 via the memory I/F 230 . Thereafter, the processing returns to step S 101 .
- step S 123 after it is determined in step S 113 that the movie start button has not been pressed, the microcomputer 234 determines whether a user has pressed the power button. When it is determined in step S 123 that the power button has been pressed, the microcomputer 234 turns off the power supply of the camera body 200 . Thereafter, the processing of FIGS. 9A and 9B is ended. When it is determined in step S 123 that the power button has not been pressed, the processing returns to step S 101 .
- the result of the ghost determination performed when the live view is displayed is used just after the movie start button is pressed. Thereby, the time required for recording a movie image can be shortened.
- the ghost determination may be performed when every movie frame is generated, by using image data for recording itself. In this case, the ghost determination to be performed when the live view is displayed may be omitted.
- image data is divided into image area units with a first pattern and a second pattern different from the first pattern in performing ghost determination, and for each image area divided with each pattern, a first value that is the mean value or integrated value of the pixel values of Gr pixels and a second value that is the mean value or integrated value of the pixel values of Gb pixels are calculated, as described above. Then, interpolation operations are performed such that the center of gravity represented by the first value and the center of gravity represented by the second value match each other. Thereby, an influence of the structure of a subject on the first value and the second value can be suppressed. Therefore, the ghost determination can be performed accurately.
- the second pattern is set in accordance with the head color of the image area created by the division with the first pattern. Thereby, interpolation operations suitable in accordance with the setting of the first pattern can be performed.
- the calculation circuit is provided in the first image processor 222 that processes the image data from the A/D converter 212 .
- the calculation circuit may be provided, for example, in the second image processor 224 , or may be provided separately from the first image processor 222 and the second image processor 224 .
- the same function as the calculation circuit may be achieved by the microcomputer 234 .
- the functions of the interpolation operating circuit and the G-step detecting circuit are assumed to be performed by the microcomputer 234 .
- the interpolation operating circuit and the G-step detecting circuit may be provided separately from the microcomputer 234 .
- the interpolation operating circuit and the G-step detecting circuit may be provided, for example, in the first image processor 222 or in the second image processor 224 , or may be provided separately from the first image processor 222 and the second image processor 224 .
- the interpolation operations are performed from the first value and the second value for the two image areas R 1 and R 2 .
- Second Variation is an example in which, by using four image areas created by division with four different patterns, it becomes unnecessary to determine the direction in which an image area is to be shifted.
- FIGS. 10A and 10B are views illustrating the positional relationships among four image areas when division is performed with four patterns.
- the pixels illustrated in FIG. 10A and the pixels illustrated in FIG. 10B are located at the same positions.
- an image area R 2 is shifted from an image area R 1 to the right direction by 2 pixels
- an image area R 3 is shifted to the lower direction by 2 pixels
- an image area R 4 is shifted to the lower right direction by 2 pixels.
- Division with four different patterns is performed such that division into image area units is performed in this way. In the case of the division illustrated in FIGS. 10A and 10B , the shift amounts of the other image areas with respect to the image area R 1 do not have to be 2 pixels, as in the case of two image areas.
- FIG. 11 is a view illustrating the positional relationship among eight centers of gravity represented by first values and second values obtained in the four image areas.
- a center of gravity Gr 1 is a center of gravity represented by the first value calculated for the image area R 1
- a center of gravity Gb 1 is a center of gravity represented by the second value calculated for the image area R 1
- a center of gravity Gr 2 is a center of gravity represented by the first value calculated for the image area R 2
- a center of gravity Gb 2 is a center of gravity represented by the second value calculated for the image area R 2 .
- a center of gravity Gr 3 is a center of gravity represented by the first value calculated for the image area R 3
- a center of gravity Gb 3 is a center of gravity represented by the second value calculated for the image area R 3
- a center of gravity Gr 4 is a center of gravity represented by the first value calculated for the image area R 4
- a center of gravity Gb 4 is a center of gravity represented by the second value calculated for the image area R 4 .
- Igr is a first value (a first interpolated value) after the interpolation operation
- Vgr 1 is the first value for the image area R 1
- Vgr 2 is the first value for the image area R 2
- Vgr 3 is the first value for the image area R 3
- Vgr 4 is the first value for the image area R 4 .
- Igb is a second value (a second interpolated value) after the interpolation operation
- Vgb 1 is the second value for the image area R 1
- Vgb 2 is the second value for the image area R 2
- Vgb 3 is the second value for the image area R 3
- Vgb 4 is the second value for the image area R 4 .
- FIG. 12 is a view illustrating movements of the centers of gravity after the interpolation operations are performed in FIG. 11 .
- the position (interpolated position) represented by the first interpolated value Igr obtained by the interpolation operation shown in (Equation 3) is the position of I in FIG. 12 .
- the position (interpolated position) represented by the second interpolated value Igb obtained by the interpolation operation shown in (Equation 3) is also the position of I in FIG. 12 . That is, the center of gravity represented by the first interpolated value and the center of gravity represented by the second interpolated value match each other.
- Recent image sensors may have a phase difference detection pixel.
- the phase difference detection pixel is, for example, a pixel whose partial area is shaded such that a phase difference can be detected. The arrangement of the phase difference detection pixels is changed depending on the position on the screen at which a focus state is detected.
- FIG. 13A illustrates an example of an image sensor in which the phase difference detection pixels AF are arranged, for example, at the positions of some of Gr pixels and Gb pixels.
- FIG. 13B illustrates an example of an image sensor in which the phase difference detection pixels AF are arranged in a certain pixel line.
- a phase difference detection pixel and a normal pixel that is not a phase difference detection pixel output image signals different from each other. Therefore, if the phase difference detection pixel is used for the ghost determination, the difference between the image signals of the phase difference detection pixel and a normal pixel appears as a G-step. If the ghost determination is performed with such a G-step, incorrect determination is caused.
- when the phase difference detection pixel is arranged at the position of the Gr pixel or Gb pixel in an image area, the first value or the second value is calculated by excluding the pixel value of the phase difference detection pixel in Third Variation. Thereby, the accuracy of the ghost determination can be further increased.
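A minimal sketch of this exclusion, assuming the phase difference detection pixels among the Gr (or Gb) sites are marked by a boolean mask (the mask representation is an assumption, not from the patent):

```python
import numpy as np

def g_mean_excluding_af(values, af_mask):
    """Mean of the Gr (or Gb) pixel values in an image area, skipping phase
    difference detection pixels. `values` are the pixel values at the Gr (or
    Gb) sites; `af_mask` is True where the site holds a phase difference
    detection pixel (an assumed representation)."""
    values = np.asarray(values, float)
    keep = ~np.asarray(af_mask, bool)        # normal pixels only
    return float(values[keep].mean())
```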
- the phase difference detection pixel has been described as an example of a pixel different from a normal pixel in Third Variation
- the technique of Third Variation can be applied to pixels of various structures that output pixel signals different from those of normal pixels.
- a camera system such as a digital camera has been described as an application example of the imaging apparatus.
- the technique of the present embodiment can be applied to various imaging apparatuses having a solid-state image sensor with Bayer array structure.
- the technique of the present embodiment can also be applied to various imaging apparatuses such as, for example, an endoscope, a microscope, and a surveillance camera.
- the ghost determination in the present embodiment can be performed when image data is obtained via a solid-state image sensor with Bayer array structure. Therefore, the technique of the present embodiment can also be applied to an image processing apparatus that does not have an image sensor and to which the image data obtained via a solid-state image sensor with Bayer array structure is to be input.
- the processing in the above embodiment can also be stored as an image processing program that the microcomputer 234 can execute.
- the image processing program can be distributed by storing it on a storage medium of an external storage apparatus, such as a magnetic disk, an optical disc, or a semiconductor memory.
- the microcomputer 234 can perform the above processing by reading the image processing program stored in the storage medium of the external storage apparatus, so that its operations are controlled by the loaded image processing program.
Abstract
Description
Igr=(1×Vgr1+3×Vgr2)/4
Igb=(3×Vgb1+1×Vgb2)/4 (Equation 1)
Herein, Igr is a first value (a first interpolated value) after the interpolation operation, Vgr1 is the first value for the image area R1, and Vgr2 is the first value for the image area R2. Igb is a second value (a second interpolated value) after the interpolation operation, Vgb1 is the second value for the image area R1, and Vgb2 is the second value for the image area R2.
Igr=(3×Vgr1+1×Vgr2)/4
Igb=(1×Vgb1+3×Vgb2)/4 (Equation 2)
Igr=(3×Vgr1+1×Vgr2+9×Vgr3+3×Vgr4)/16
Igb=(3×Vgb1+9×Vgb2+1×Vgb3+3×Vgb4)/16 (Equation 3)
Herein, Igr is a first value (a first interpolated value) after the interpolation operation, Vgr1 is the first value for the image area R1, Vgr2 is the first value for the image area R2, Vgr3 is the first value for the image area R3, and Vgr4 is the first value for the image area R4. Igb is a second value (a second interpolated value) after the interpolation operation, Vgb1 is the second value for the image area R1, Vgb2 is the second value for the image area R2, Vgb3 is the second value for the image area R3, and Vgb4 is the second value for the image area R4.
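The weights in (Equation 1) to (Equation 3) can be checked numerically: placing the centers of gravity as in the figures, both interpolated positions coincide. The Gb offsets, pattern shifts, and the head-color pairings below are assumptions read from the figures, not statements from the claims:

```python
import numpy as np

gr1 = np.array([1.0, 1.0])  # assumed Gr center of gravity in area R1 (row, col)

# (Equation 1), head color Gr/Gb assumed: Gb offset (+1, +1), R2 shifted (+2, +2)
gb1, s = gr1 + (1, 1), np.array([2.0, 2.0])
igr = (1 * gr1 + 3 * (gr1 + s)) / 4
igb = (3 * gb1 + 1 * (gb1 + s)) / 4
assert np.allclose(igr, igb)

# (Equation 2), head color R/B assumed: Gb offset (+1, -1), R2 shifted (-2, +2)
gb1, s = gr1 + (1, -1), np.array([-2.0, 2.0])
igr = (3 * gr1 + 1 * (gr1 + s)) / 4
igb = (1 * gb1 + 3 * (gb1 + s)) / 4
assert np.allclose(igr, igb)

# (Equation 3), four areas: R2 right, R3 down, R4 lower right, each by 2 pixels;
# the Gb offset (+1, -1) is an assumption chosen to be consistent with FIG. 11.
gb1 = gr1 + (1, -1)
shifts = [np.array(v, float) for v in [(0, 0), (0, 2), (2, 0), (2, 2)]]
igr = sum(w * (gr1 + s) for w, s in zip((3, 1, 9, 3), shifts)) / 16
igb = sum(w * (gb1 + s) for w, s in zip((3, 9, 1, 3), shifts)) / 16
assert np.allclose(igr, igb)
print("all interpolated centers of gravity coincide")
```

In each case the weighted average moves the Gr and Gb centers of gravity, which lie on the same straight line, to a single intermediate point, which is what allows the G-step to be computed free of the half-pixel sampling offset.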
Claims (15)
Applications Claiming Priority (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2017044852A JP2018148526A (en) | 2017-03-09 | 2017-03-09 | Imaging apparatus, image processing apparatus, image processing method, and image processing program |
| JPJP2017-044852 | 2017-03-09 | ||
| JP2017-044852 | 2017-03-09 | ||
| PCT/JP2018/002751 WO2018163659A1 (en) | 2017-03-09 | 2018-01-29 | Imaging device, image processing device, image processing method and recording medium on which image processing program is recorded |
Related Parent Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2018/002751 Continuation WO2018163659A1 (en) | 2017-03-09 | 2018-01-29 | Imaging device, image processing device, image processing method and recording medium on which image processing program is recorded |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| US20190327429A1 US20190327429A1 (en) | 2019-10-24 |
| US11102427B2 true US11102427B2 (en) | 2021-08-24 |
Family
ID=63448742
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US16/459,587 Active 2038-08-16 US11102427B2 (en) | 2017-03-09 | 2019-07-01 | Imaging apparatus, image processing apparatus, image processing method, and recording medium that records image processing program |
Country Status (3)
| Country | Link |
|---|---|
| US (1) | US11102427B2 (en) |
| JP (1) | JP2018148526A (en) |
| WO (1) | WO2018163659A1 (en) |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20050094007A1 (en) * | 2003-10-31 | 2005-05-05 | Yoshikuni Nomura | Image processing apparatus, image processing method, and program |
| US20080193049A1 (en) | 2007-02-09 | 2008-08-14 | Kenichi Onomura | Image processing apparatus and method, and electronic camera |
| JP2012009919A (en) | 2010-06-22 | 2012-01-12 | Olympus Imaging Corp | Imaging apparatus and imaging method |
| US8305458B2 (en) * | 2009-07-21 | 2012-11-06 | Canon Kabushiki Kaisha | Image processing apparatus, image processing method, program, and storage medium for correcting chromatic aberration |
| WO2014136570A1 (en) | 2013-03-05 | 2014-09-12 | 富士フイルム株式会社 | Imaging device, image processing device, image processing method and program |
Patent Citations (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20050094007A1 (en) * | 2003-10-31 | 2005-05-05 | Yoshikuni Nomura | Image processing apparatus, image processing method, and program |
| US20080193049A1 (en) | 2007-02-09 | 2008-08-14 | Kenichi Onomura | Image processing apparatus and method, and electronic camera |
| JP2008199177A (en) | 2007-02-09 | 2008-08-28 | Olympus Imaging Corp | Image processor and its method, and electronic camera |
| US8305458B2 (en) * | 2009-07-21 | 2012-11-06 | Canon Kabushiki Kaisha | Image processing apparatus, image processing method, program, and storage medium for correcting chromatic aberration |
| JP2012009919A (en) | 2010-06-22 | 2012-01-12 | Olympus Imaging Corp | Imaging apparatus and imaging method |
| WO2014136570A1 (en) | 2013-03-05 | 2014-09-12 | 富士フイルム株式会社 | Imaging device, image processing device, image processing method and program |
| US20150326838A1 (en) | 2013-03-05 | 2015-11-12 | Fujifilm Corporation | Imaging device, image processing device, image processing method and program |
Non-Patent Citations (2)
| Title |
|---|
| English translation of the PCT International Preliminary Report on Patentability from corresponding International Application No. PCT/JP2018/002751, dated Sep. 19, 2019 (6 pgs.). |
| International Search Report from corresponding International Application No. PCT/JP2018/002751, dated Apr. 3, 2018 (2 pgs.), with translation (1 pg.). |
Also Published As
| Publication number | Publication date |
|---|---|
| JP2018148526A (en) | 2018-09-20 |
| WO2018163659A1 (en) | 2018-09-13 |
| US20190327429A1 (en) | 2019-10-24 |
Similar Documents
| Publication | Title |
|---|---|
| CN102025903B (en) | Image pickup apparatus |
| US9832363B2 | Imaging apparatus, control method of imaging apparatus, and non-transitory storage medium storing control program of imaging apparatus |
| US9489747B2 | Image processing apparatus for performing object recognition focusing on object motion, and image processing method therefor |
| JP2010062640A (en) | Image capturing apparatus, method of controlling the same, and program |
| US9906708B2 | Imaging apparatus, imaging method, and non-transitory storage medium storing imaging program for controlling an auto-focus scan drive |
| US10530986B2 | Image capturing apparatus, image capturing method, and storage medium |
| JP5499853B2 (en) | Electronic camera |
| JP2012113171A (en) | Imaging device and control method therefor |
| JP2010054730A (en) | Focusing position detecting device, imaging apparatus, and focusing position detecting method |
| US9398207B2 | Imaging apparatus and image correction method, and image processing apparatus and image processing method |
| US9503661B2 | Imaging apparatus and image processing method |
| JP2020148980A (en) | Imaging device and focus adjustment method |
| JP2016100879A (en) | Imaging apparatus and image processing method |
| JP5446955B2 (en) | Imaging device |
| WO2016103745A1 (en) | Imaging element, focal point detecting device, and focal point detecting method |
| US11102427B2 | Imaging apparatus, image processing apparatus, image processing method, and recording medium that records image processing program |
| US11202015B2 | Control apparatus and control method |
| JP6566731B2 (en) | Imaging apparatus and control method thereof |
| JP2018061153A (en) | Imaging apparatus, imaging apparatus control method and program |
| JPWO2015182021A1 (en) | Imaging control apparatus, imaging apparatus, and imaging control method |
| JP2015204579A (en) | Imaging device |
| US20240334069A1 | Image capturing apparatus, control method thereof, and storage medium |
| JP6421032B2 (en) | Focus detection apparatus, focus detection method, and focus detection program |
| JP2011109580A (en) | Camera |
| JP2013070322A (en) | Defective pixel detection device, imaging device, and defective pixel detection program |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | AS | Assignment | Owner name: OLYMPUS CORPORATION, JAPAN. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ONOMURA, KENICHI;REEL/FRAME:049647/0956. Effective date: 20190617 |
| | FEPP | Fee payment procedure | ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
| | STPP | Information on status: patent application and granting procedure in general | DOCKETED NEW CASE - READY FOR EXAMINATION |
| | STPP | Information on status: patent application and granting procedure in general | NON FINAL ACTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| | STPP | Information on status: patent application and granting procedure in general | NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
| | STPP | Information on status: patent application and granting procedure in general | PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
| | STCF | Information on status: patent grant | PATENTED CASE |
| | MAFP | Maintenance fee payment | PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY. Year of fee payment: 4 |