US20180343444A1 - Method for dynamically calibrating an image capture device - Google Patents
Method for dynamically calibrating an image capture device
- Publication number
- US20180343444A1 (application US15/605,159; publication US 2018/0343444 A1)
- Authority
- US
- United States
- Prior art keywords
- dac
- far
- plp
- err
- lens actuator
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N17/00—Diagnosis, testing or measuring for television systems or their details
- H04N17/002—Diagnosis, testing or measuring for television systems or their details for television cameras
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03B—APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
- G03B13/00—Viewfinders; Focusing aids for cameras; Means for focusing for cameras; Autofocus systems for cameras
- G03B13/32—Means for focusing
- G03B13/34—Power focusing
- G03B13/36—Autofocus systems
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03B—APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
- G03B43/00—Testing correct operation of photographic apparatus or parts thereof
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/67—Focus control based on electronic image sensor signals
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B7/00—Mountings, adjusting means, or light-tight connections, for optical elements
- G02B7/02—Mountings, adjusting means, or light-tight connections, for optical elements for lenses
- G02B7/04—Mountings, adjusting means, or light-tight connections, for optical elements for lenses with mechanism for focusing or varying magnification
- G02B7/08—Mountings, adjusting means, or light-tight connections, for optical elements for lenses with mechanism for focusing or varying magnification adapted to co-operate with a remote control mechanism
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
- H04N23/611—Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
Definitions
- the present invention relates to a method for dynamically calibrating an image capture device.
- a typical auto-focus (AF) module 10 for a camera module 12 within an image capture device can obtain an estimate for the distance from the camera module to the target object, for example, from a laser device or stereo camera system 14 .
- AF auto-focus
- the auto-focus module 10 can compute a required physical position for a lens 16 to bring the target object into focus.
- lens position is typically controlled by a lens actuator 18 which is driven by a digital to analog convertor (DAC)—often using an 8-bit DAC code with 256 distinct voltage output levels, or a 10-bit DAC code with 1024 voltage levels—provided by the AF module 10.
- DAC digital to analog convertor
- the AF module 10 determines a required DAC code for a subject distance and the DAC converts the DAC code into an equivalent analog actuator voltage or current value depending on the actuator output circuitry, for example, depending on whether the lens 16 comprises a VCM (voice coil module) or MEMs (micro-electromechanical systems) lens actuator, to determine the lens position.
- VCM voice coil module
- MEMs micro-electromechanical systems
- the camera module can be calibrated by adjusting the DAC codes for infinity and macro distances:
- DAC_FAR[t]—physical lens position to focus at far (infinity) distance at time [t]   [1]
- the auto-focus module 10 can determine the required DAC code to be supplied to the lens actuator 18 as a function of the distance to the target object as well as DAC NEAR [t] and DAC FAR [t].
- the camera module 12 may be affected by operating conditions such as SAG (gravity influence) or thermal (temperature influence) and WO 2016/000874 (Ref: FN-396-PCT) discloses some methods to compensate for SAG and thermal effects by adjusting DAC NEAR [t] and DAC FAR [t] according to operating conditions.
- SAG gravitation influence
- thermal temperature influence
- the camera module may then be required to hunt for focus and this both impacts adversely on focus speed as well as causing an unacceptable lens wobble effect within a preview stream.
- a computer program product comprising a computer readable medium on which computer readable instructions are stored which, when executed on an image capture device, are arranged to perform the steps of claim 1.
- an image capture device configured to perform the steps of claim 1 .
- the present method runs on an image capture device, possibly within a camera module, and dynamically compensates for calibration errors while the user is operating the device.
- the method does not affect the production line process and collects the necessary data while the user is operating the device, without adversely affecting the user experience.
- the method can improve auto-focus speed and minimize lens wobble when estimating the calibration error and then updating the calibration parameters.
- the method can be triggered from time to time to check whether the calibration parameters have been affected by, for example, camera ageing, and if so, perform the necessary corrections.
- FIG. 1 illustrates schematically a typical auto-focusing camera module
- FIG. 2 illustrates a method for dynamically calibrating an image capture device according to a first embodiment of the present invention
- FIG. 3 illustrates a method for dynamically calibrating an image capture device according to a second embodiment of the present invention.
- PLP errors: calibration errors other than those caused by SAG or thermal effects, referred to herein generally as PLP errors, can be quantified as follows:
- ERR_FAR^PLP = DAC_FAR[t] − DAC_FAR^PLP   [3]
- ERR_NEAR^PLP = DAC_NEAR[t] − DAC_NEAR^PLP   [4]
- DAC_FAR^PLP and DAC_NEAR^PLP are the stored calibration parameters for the camera module (CM). These can be measured and determined at production time, or they can be updated from time to time during camera operation as disclosed in WO 2016/000874 (Ref: FN-395-PCT).
- DAC FAR [t] and DAC NEAR [t] are the desired corrected calibration parameters at time [t], while ERR FAR PLP and ERR NEAR PLP are the respective errors in these parameters.
- Equations [9], [10] and [11] are derived from thin lens equation:
- ERR D is the overall error generated by PLP errors (ERR FAR PLP and ERR NEAR PLP ).
- the first step of a compensation method is to collect input data in an internal buffer (BUFF) while the user is operating the camera. This process can be done transparently without affecting the user experience.
- An input record should contain the data summarized in table 2.
- the buffer should have enough space to store at least 2 records.
- Sensor temperature [T CRT ] and device orientation [O CRT ] are used to adjust the original calibration parameters to compensate for the CM being affected by SAG (gravity) or thermal effects, as disclosed in WO 2016/000874 (Ref: FN-395-PCT).
- SAG gravitation
- SAG( ) and TH( ) can involve lookup tables, and again details of how to adjust the DAC FAR PLP and DAC NEAR PLP values to take into account temperature and orientation are disclosed in WO 2016/000874 (Ref: FN-395-PCT).
- DAC_DCRT^INIT = DAC_FAR^PLP + (DAC_NEAR^PLP − DAC_FAR^PLP) * [(D_FAR − D_CRT)/(D_FAR − D_NEAR)] * [(D_NEAR − f)/(D_CRT − f)]   [16]
- the DAC D CRT FOCUS focus position will be far from the initial position DAC D CRT INIT .
- the focus speed will be slow and the lens wobble effect strongly visible.
- the DAC D CRT FOCUS focus position will be closer to the initial position DAC D CRT INIT .
- the focus speed will be higher and the lens wobble effect less visible.
- the goal of dynamic compensation method is to use the above data (provided by steps 1, 2 and 3) to estimate PLP errors.
- once the estimation is done, the calibration parameters (DAC_FAR^PLP, DAC_NEAR^PLP) will be properly updated and the lens position provided by [16] will be the focus position. Focus sweeping will not be necessary anymore, and thus the AF module speed will be improved and the lens wobble effect reduced.
- DAC_STEP is the absolute difference between the corresponding DAC values at the D_F and D_N distances. Using [15], [16] and assuming ERR_DF ≈ ERR_DN, an estimated value of DAC_STEP is given by:
- DAC STEP should be a constant value (should not vary with the distance D).
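- Equations [20]-[24] are not reproduced above, so the sketch below uses the conventional thin-lens depth-of-field limits (an assumption) together with the interpolation of [16] to illustrate why DAC_STEP comes out roughly constant: the depth of field expressed in DAC codes depends mainly on N, c and f rather than on the distance D. All helper names and sample values are illustrative only.

```python
def dof_limits(d: float, f: float, n: float, c: float):
    """Conventional thin-lens DOF far/near limits at distance d (all in mm)."""
    h = f * f / (n * c) + f          # hyperfocal distance
    d_far_limit = h * d / (h - (d - f)) if d < h else float("inf")
    d_near_limit = h * d / (h + (d - f))
    return d_far_limit, d_near_limit

def dac_of(d, dac_far, dac_near, d_far, d_near, f):
    """Distance-to-DAC mapping of [16] (initial lens position)."""
    a = ((d_far - d) / (d_far - d_near)) * ((d_near - f) / (d - f))
    return dac_far + (dac_near - dac_far) * a

def dac_step(d, dac_far, dac_near, d_far, d_near, f, n, c):
    """|DAC(D_F) - DAC(D_N)|: depth of field expressed in DAC codes (cf. [24])."""
    df, dn = dof_limits(d, f, n, c)
    if df == float("inf"):
        df = d_far                   # cap the far limit at the calibrated far distance
    return abs(dac_of(df, dac_far, dac_near, d_far, d_near, f) -
               dac_of(dn, dac_far, dac_near, d_far, d_near, f))

# With assumed values (f = 4 mm, N = 2.0, pixel size 1.1 um so c = 2 * PS = 0.0022 mm),
# dac_step() changes only slightly between 300 mm and 1500 mm.
for d in (300.0, 800.0, 1500.0):
    print(round(dac_step(d, 200, 600, 2000.0, 100.0, 4.0, 2.0, 0.0022), 1))
```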
- the second step of the compensation method is to estimate the errors [3] and [4] and to update the calibration parameters. It requires two input records (T_CRT, O_CRT, D_CRT, DAC_DCRT^INIT, DAC_DCRT^FOCUS) which satisfy the following condition:
- [D 1 F , D 1 N ] is the DOF range at first distance D 1 (D 1 F is the far limit, D 1 N is the near limit).
- [D 2 F , D 2 N ] is the DOF range at second distance D 2 (D 2 F is the far limit, D 2 N is the near limit).
- ERR_D1 = DAC_D1^FOCUS − DAC_D1^INIT   [26]
- DAC D 1 INIT and DAC D 2 INIT are computed using [16].
- ERR_D1 = ERR_FAR^PLP + (ERR_NEAR^PLP − ERR_FAR^PLP) * [(D_FAR − D_1)/(D_FAR − D_NEAR)] * [(D_NEAR − f)/(D_1 − f)]
- ERR_D2 = ERR_FAR^PLP + (ERR_NEAR^PLP − ERR_FAR^PLP) * [(D_FAR − D_2)/(D_FAR − D_NEAR)] * [(D_NEAR − f)/(D_2 − f)]
- the new system becomes:
- ERR_D1 = ERR_FAR^PLP * (1 − α_D1) + α_D1 * ERR_NEAR^PLP
- ERR_D2 = ERR_FAR^PLP * (1 − α_D2) + α_D2 * ERR_NEAR^PLP
- ERR_FAR^PLP = (α_D2 * ERR_D1 − α_D1 * ERR_D2) / (α_D2 − α_D1)   [30]
- ERR_NEAR^PLP = ((1 − α_D2) * ERR_D1 − (1 − α_D1) * ERR_D2) / (α_D1 − α_D2)   [31]
- ERR D 1 and ERR D 2 are computed using [26] and [27].
- α_D1 and α_D2 are computed using [28] and [29].
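- Pulling equations [26]-[31] together, the first-embodiment estimation step can be sketched as below: two records with non-overlapping DOF ranges (condition [25], assumed to be checked by the caller) give two observed errors and two interpolation weights, from which the far and near PLP errors follow in closed form; helper names are illustrative rather than the patent's implementation.

```python
def alpha(d: float, d_far: float, d_near: float, f: float) -> float:
    """Interpolation weight of [16]/[17] (the substitutions [28] and [29])."""
    return ((d_far - d) / (d_far - d_near)) * ((d_near - f) / (d - f))

def estimate_plp_errors(rec1, rec2, d_far, d_near, f):
    """Solve [30] and [31] from two records of (distance, dac_init, dac_focus)."""
    d1, dac_init1, dac_focus1 = rec1
    d2, dac_init2, dac_focus2 = rec2
    err_d1 = dac_focus1 - dac_init1                                  # [26]
    err_d2 = dac_focus2 - dac_init2                                  # [27]
    a1, a2 = alpha(d1, d_far, d_near, f), alpha(d2, d_far, d_near, f)
    err_far = (a2 * err_d1 - a1 * err_d2) / (a2 - a1)                # [30]
    err_near = ((1 - a2) * err_d1 - (1 - a1) * err_d2) / (a1 - a2)   # [31]
    return err_far, err_near

def update_calibration(dac_far_plp, dac_near_plp, err_far, err_near):
    """New calibration values: stored PLP parameters corrected by the errors."""
    return dac_far_plp + err_far, dac_near_plp + err_near
```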
- the new updated calibration parameters (which should be used further to improve AF module speed and reduce the lens wobble effect) are:
- the distance can be estimated based on an assumed dimension of an object being imaged, for example, a face. More details about estimating the distance based on face information or indeed any recognizable object with a known dimension can be found in U.S. Pat. No. 8,970,770 (Ref: FN-361) and WO 2016/091545 (Ref: FN-399), the disclosures of which are herein incorporated by reference.
- the second embodiment aims to provide dynamic compensation to estimate ERR PLP while taking into account that false faces may be present in a scene.
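- The distance estimate itself is not reproduced in the text above; under the standard pinhole model (an assumption made for this sketch) it follows from the assumed eye distance ed, the measured eye distance in pixels edp, the focal length f and the pixel size PS:

```python
def estimate_face_distance(f_mm: float, pixel_size_mm: float,
                           eye_distance_px: float, ed_mm: float = 70.0) -> float:
    """Distance estimate from an assumed real eye distance (pinhole model).

    The image of a feature of size ed at distance D spans ed * f / D on the
    sensor, i.e. (ed * f) / (D * PS) pixels, so D is roughly (ed * f) / (PS * edp).
    """
    return (ed_mm * f_mm) / (pixel_size_mm * eye_distance_px)

# Example: f = 4 mm, 1.1 um pixels, eyes 180 px apart -> roughly 1.4 m away.
print(round(estimate_face_distance(4.0, 0.0011, 180.0), 0))
```

- A face whose real eye distance differs greatly from the assumed 70 mm (for example a printed photograph) yields a correspondingly wrong estimate, which is why the validity conditions described below are needed.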
- the lens position [DAC D ] to focus at distance [D] is given by the following formula:
- DAC_D = DAC_DEST^INIT + ERR_D   [2]
- DAC_DEST^INIT, the initial lens position computed based on the estimated distance [D_EST], can be calculated as per DAC_DCRT^INIT in equation [16];
- ERR_FAR^PLP ≈ ERR_NEAR^PLP ≈ ERR_PLP; ERR_DEST represents an error caused by the wrong estimation of the distance to an object.
- the first step of dynamic compensation method is to collect into an internal buffer (BUFF) the necessary input data while the user is operating the camera. This process should be done transparently without affecting the user experience.
- BUFF internal buffer
- an input record should contain the data summarized in table 2, but instead of the measured D_CRT of the first embodiment, the estimated distance D_EST is used.
- the buffer should have enough space to store at least 2 records.
- the second step of dynamic compensation process is to estimate ERR PLP and to update the calibration parameters.
- the embodiment attempts to image a given face at two separate distances, although in variants of the embodiment, measurements from images of different faces could be employed.
- two input records (T_CRT, O_CRT, D_EST, DAC_DEST^INIT, DAC_DEST^FOCUS) are required which satisfy the following conditions:
- [D 1 F , D 1 N ] is the DOF range for the first estimated distance (D 1 F is the far limit, D 1 N is the near limit);
- [D 2 F , D 2 N ] is the DOF range for the second estimated distance (D 2 F is the far limit, D 2 N is the near limit).
- DAC STEP is determined as per equation [24] of the first embodiment
- the first condition (a) requires that the two distances should be different.
- the second condition (b) requires that the object being imaged indeed exhibits the assumed dimension, i.e. that a given face is a live human face with ed ≈ 70 mm.
- the errors should not be larger than the maximum error (N*DAC STEP ) and they should be quite similar (the difference should not be higher than half DAC STEP ).
- This condition assures that ERR D EST ⁇ 0 and ERR D ⁇ ERR PLP as in the first embodiment.
- if the current face is false (condition (b) is not respected), compensation must not be done until a new valid face is received.
- ERR_PLP = (ERR_1 + ERR_2) / 2   [38]
- the new updated calibration parameters (which should be used further to improve AF speed and reduce the lens wobble effect) are:
- the maximum estimated error should not be higher than N*DAC STEP (ERR PLP ⁇ N*DAC STEP ).
- the value of N can be determined by the image acquisition device or camera module manufacturer according to how tightly they wish the estimated compensation process to operate. Thus, the larger the value of N the greater the possibility of calibrating based on a poorly estimated distance to an object.
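- To make the second-embodiment flow concrete, the sketch below checks condition (a) (disjoint DOF ranges) and condition (b) (both errors bounded by N*DAC_STEP and consistent within DAC_STEP/2) before averaging the two errors per [38]; the structure and names are assumptions rather than the patent's implementation.

```python
def ranges_disjoint(r1, r2) -> bool:
    """Condition (a): the two DOF ranges, given as (near, far) in mm, must not overlap."""
    (n1, f1), (n2, f2) = r1, r2
    return f1 < n2 or f2 < n1

def estimate_err_plp(err1, err2, dof1, dof2, dac_step, n_max):
    """Return ERR_PLP per [38] when conditions (a) and (b) hold, else None."""
    if not ranges_disjoint(dof1, dof2):                        # condition (a)
        return None
    if abs(err1) > n_max * dac_step or abs(err2) > n_max * dac_step:
        return None                                            # condition (b): errors too large
    if abs(err1 - err2) > dac_step / 2:
        return None                                            # condition (b): inconsistent, likely a false face
    return (err1 + err2) / 2.0                                 # [38]

def shift_calibration(dac_far_plp, dac_near_plp, err_plp):
    """Second embodiment: both calibration endpoints shift by the single ERR_PLP."""
    return dac_far_plp + err_plp, dac_near_plp + err_plp

# Example: two consistent records well inside the allowed error band.
print(estimate_err_plp(err1=11.0, err2=13.0,
                       dof1=(450.0, 520.0), dof2=(900.0, 1100.0),
                       dac_step=22.0, n_max=2))   # -> 12.0
```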
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- General Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Theoretical Computer Science (AREA)
- Studio Devices (AREA)
- Measurement Of Optical Distance (AREA)
Abstract
Description
- The present invention relates to a method for dynamically calibrating an image capture device.
- Referring now to FIG. 1, a typical auto-focus (AF) module 10 for a camera module 12 within an image capture device can obtain an estimate for the distance from the camera module to the target object, for example, from a laser device or stereo camera system 14.
- Knowing the estimated subject distance, the auto-focus module 10 can compute a required physical position for a lens 16 to bring the target object into focus. As explained in WO 2016/000874 (Ref: FN-396-PCT), the disclosure of which is herein incorporated by reference, lens position is typically controlled by a lens actuator 18 which is driven by a digital to analog convertor (DAC)—often using an 8-bit DAC code with 256 distinct voltage output levels, or a 10-bit DAC code with 1024 voltage levels—provided by the AF module 10. Thus the AF module 10 determines a required DAC code for a subject distance and the DAC converts the DAC code into an equivalent analog actuator voltage or current value depending on the actuator output circuitry, for example, depending on whether the lens 16 comprises a VCM (voice coil module) or MEMS (micro-electromechanical systems) lens actuator, to determine the lens position.
- Once the relationship between DAC code and lens position is determined, for example, there can be a linear relationship between the two, the camera module can be calibrated by adjusting the DAC codes for infinity and macro distances:
-
DAC_FAR[t]—physical lens position to focus at far (infinity) distance at time [t]   [1]
DAC_NEAR[t]—physical lens position to focus at near (macro) distance at time [t]   [2]
- These calibration parameters can be determined during a production line process (PLP) and their values stored in a non-volatile memory 20 inside the camera module 12 or elsewhere in the camera.
- Thus, the auto-focus module 10 can determine the required DAC code to be supplied to the lens actuator 18 as a function of the distance to the target object as well as DAC_NEAR[t] and DAC_FAR[t].
- It is known that the camera module 12 may be affected by operating conditions such as SAG (gravity influence) or thermal (temperature influence) and WO 2016/000874 (Ref: FN-396-PCT) discloses some methods to compensate for SAG and thermal effects by adjusting DAC_NEAR[t] and DAC_FAR[t] according to operating conditions.
- Nonetheless, there may be other components contributing to calibration error, including inaccuracies due to some limitations of the PLP or, as disclosed in WO 2016/000874 (Ref: FN-395-PCT), camera module performance drifting over time, for example, due to device aging or even device on-time.
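- For illustration only, the sketch below shows how a computed DAC code might be clamped to the driver range and expressed as a fraction of the full-scale actuator drive, assuming a 10-bit DAC; the helper names are hypothetical and not part of the patent.

```python
def clamp_dac_code(code: float, bits: int = 10) -> int:
    """Clamp a requested DAC code to the range representable by the driver."""
    max_code = (1 << bits) - 1          # 1023 for a 10-bit DAC
    return max(0, min(max_code, int(round(code))))

def dac_code_to_drive_fraction(code: float, bits: int = 10) -> float:
    """Fraction of the full-scale actuator voltage/current for a given code."""
    return clamp_dac_code(code, bits) / float((1 << bits) - 1)

# Example: a code of 512 on a 10-bit DAC is roughly half of full-scale drive.
print(dac_code_to_drive_fraction(512))   # ~0.5005
```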
- If PLP, SAG or thermal errors are not compensated accordingly, the DAC code computed by AF module will not provide proper focus on the target object.
- The camera module may then be required to hunt for focus and this both impacts adversely on focus speed as well as causing an unacceptable lens wobble effect within a preview stream.
- It is an object of the present application to mitigate these problems.
- According to the present invention there is provided a method for dynamically calibrating an image capture device according to claim 1.
- According to a second aspect there is provided a computer program product comprising a computer readable medium on which computer readable instructions are stored which, when executed on an image capture device, are arranged to perform the steps of claim 1.
- According to a third aspect there is provided an image capture device configured to perform the steps of claim 1.
- The present method runs on an image capture device, possibly within a camera module, and dynamically compensates for calibration errors while the user is operating the device.
- The method does not affect the production line process and collects the necessary data while the user is operating the device, without adversely affecting the user experience. The method can improve auto-focus speed and minimize lens wobble by estimating the calibration error and then updating the calibration parameters.
- The method can be triggered from time to time to check whether the calibration parameters have been affected by, for example, camera ageing, and if so, perform the necessary corrections.
- An embodiment of the invention will now be described, by way of example, with reference to the accompanying drawings, in which:
- FIG. 1 illustrates schematically a typical auto-focusing camera module;
- FIG. 2 illustrates a method for dynamically calibrating an image capture device according to a first embodiment of the present invention; and
- FIG. 3 illustrates a method for dynamically calibrating an image capture device according to a second embodiment of the present invention.
- Calibration errors, other than those caused by SAG or thermal effects, and referred to herein generally as PLP errors, can be quantified as follows:
- ERR_FAR^PLP = DAC_FAR[t] − DAC_FAR^PLP   [3]
- ERR_NEAR^PLP = DAC_NEAR[t] − DAC_NEAR^PLP   [4]
- where:
DAC_FAR^PLP and DAC_NEAR^PLP are the stored calibration parameters for the camera module (CM). These can be measured and determined at production time, or they can be updated from time to time during camera operation as disclosed in WO 2016/000874 (Ref: FN-395-PCT).
- Thus, DAC_FAR[t] and DAC_NEAR[t] are the desired corrected calibration parameters at time [t], while ERR_FAR^PLP and ERR_NEAR^PLP are the respective errors in these parameters.
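- As a minimal illustration of equations [3] and [4], the PLP errors are simply the differences between the desired calibration values at time [t] and the stored production-line values; the record type below is an assumption used for the sketch, not part of the patent.

```python
from dataclasses import dataclass

@dataclass
class Calibration:
    dac_far: float    # DAC code focusing at the far (infinity) distance
    dac_near: float   # DAC code focusing at the near (macro) distance

def plp_errors(desired: Calibration, stored_plp: Calibration):
    """Return (ERR_FAR_PLP, ERR_NEAR_PLP) per equations [3] and [4]."""
    err_far = desired.dac_far - stored_plp.dac_far      # [3]
    err_near = desired.dac_near - stored_plp.dac_near   # [4]
    return err_far, err_near
```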
- To illustrate the impact of the errors in equations [3] and [4] against the final focus position, let us assume a target object is placed at distance [D] from the camera. Typically, in a handheld image capture device such as a consumer camera, smartphone, tablet computer or equivalent, the object of interest is a human face. The corresponding lens position [DACD] to focus at distance [D] is given by the following formula:
-
- Assuming a linear DAC function, the additional parameters together with their formula are detailed in table 1:
-
TABLE 1 — List of parameters used for mapping the distance to the lens position (DAC)
| Parameter | Description | Unit | Formula |
| DAC_FAR[t] | See [1] | DAC codes | ERR_FAR^PLP + DAC_FAR^PLP   [6] |
| DAC_NEAR[t] | See [2] | DAC codes | ERR_NEAR^PLP + DAC_NEAR^PLP   [7] |
| m | Actuation slope | mm/DAC codes | [8] |
| L_FAR | Lens displacement to focus at D_FAR | mm | [9] |
| L_NEAR | Lens displacement to focus at D_NEAR | mm | [10] |
| L_D | Lens displacement to focus at current distance | mm | [11] |
| D_FAR | Far distance (set by CM manufacturer) | mm | — |
| D_NEAR | Near distance (set by CM manufacturer) | mm | — |
| f | Focal length of the system | mm | — |
- Nonetheless, it will be appreciated that the invention is also applicable to a non-arithmetic, but nonetheless linear relationship between DAC codes and lens position.
- Equations [9], [10] and [11] are derived from the thin lens equation:
-
- Replacing [6], [7], [8] in [5], the new formula for computing the DAC value becomes:
-
- Replacing [9], [10], [11] in [13] and [14], the final formula which estimates the DAC value is given by:
- DAC_D = DAC_D^INIT + ERR_D   [15]
- DAC_D^INIT = DAC_FAR^PLP + (DAC_NEAR^PLP − DAC_FAR^PLP) * [(D_FAR − D)/(D_FAR − D_NEAR)] * [(D_NEAR − f)/(D − f)]   [16]
- ERR_D = ERR_FAR^PLP + (ERR_NEAR^PLP − ERR_FAR^PLP) * [(D_FAR − D)/(D_FAR − D_NEAR)] * [(D_NEAR − f)/(D − f)]   [17]
- ERRD is the overall error generated by PLP errors (ERRFAR PLP and ERRNEAR PLP).
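- The mapping from distance to lens position can be sketched as below, assuming the linear DAC relationship of Table 1: the initial position interpolates the stored calibration codes with the thin-lens weighting of equation [16], and ERR_D combines the far and near PLP errors with the same weighting (cf. [17]); function names and the sample values are illustrative only.

```python
def thin_lens_weight(d: float, d_far: float, d_near: float, f: float) -> float:
    """Interpolation weight: 0 at D = D_FAR, 1 at D = D_NEAR (cf. [16], [17])."""
    return ((d_far - d) / (d_far - d_near)) * ((d_near - f) / (d - f))

def dac_init(d: float, dac_far_plp: float, dac_near_plp: float,
             d_far: float, d_near: float, f: float) -> float:
    """Initial lens position DAC_D_INIT for a target distance D (cf. [16])."""
    a = thin_lens_weight(d, d_far, d_near, f)
    return dac_far_plp + (dac_near_plp - dac_far_plp) * a

def err_at_distance(d: float, err_far_plp: float, err_near_plp: float,
                    d_far: float, d_near: float, f: float) -> float:
    """Overall PLP error ERR_D at distance D (cf. [17])."""
    a = thin_lens_weight(d, d_far, d_near, f)
    return err_far_plp + (err_near_plp - err_far_plp) * a

# Example with assumed module values: f = 4 mm, D_NEAR = 100 mm, D_FAR = 2000 mm.
print(round(dac_init(500.0, dac_far_plp=200, dac_near_plp=600,
                     d_far=2000.0, d_near=100.0, f=4.0), 1))   # ~261.1
```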
- Referring now to FIG. 2, the first step of a compensation method according to one embodiment is to collect input data in an internal buffer (BUFF) while the user is operating the camera. This process can be done transparently without affecting the user experience. An input record should contain the data summarized in table 2. The buffer should have enough space to store at least 2 records.
TABLE 2 — Dynamic compensation input data
| Data | Description | Unit |
| T_CRT | Current sensor temperature (provided by the internal thermal sensor of the CM) | Celsius degrees |
| O_CRT | Current device orientation relative to the horizontal plane (estimated from the output of the device accelerometer) | Degrees |
| D_CRT | Current estimated distance to the object (provided by the AF algorithm) | Millimeters |
| DAC_DCRT^INIT | Initial lens position (provided by the AF algorithm based on D_CRT) | DAC codes |
| DAC_DCRT^FOCUS | Focus lens position (provided by the AF algorithm at optimal focus) | DAC codes |
- Sensor temperature [T_CRT] and device orientation [O_CRT] are used to adjust the original calibration parameters to compensate for the CM being affected by SAG (gravity) or thermal effects, as disclosed in WO 2016/000874 (Ref: FN-395-PCT). To briefly explain how to compensate SAG and thermal effects, assume that during production the CM orientation was O_PLP and the sensor temperature was T_PLP.
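- One possible shape for the input-record buffer of Table 2 is sketched below: a small ring buffer filled transparently while the AF module runs; the class and field names are assumptions for illustration.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class AFRecord:
    t_crt: float        # sensor temperature, degrees Celsius
    o_crt: float        # device orientation vs. horizontal, degrees
    d_crt: float        # estimated distance to the object, mm
    dac_init: int       # initial lens position suggested by [16], DAC codes
    dac_focus: int      # lens position at optimal focus, DAC codes

class CompensationBuffer:
    """Keeps the most recent AF records; at least two are needed later."""
    def __init__(self, capacity: int = 8):
        self._records = deque(maxlen=capacity)

    def push(self, record: AFRecord) -> None:
        self._records.append(record)

    def ready(self) -> bool:
        return len(self._records) >= 2

    def latest_pair(self):
        return self._records[-2], self._records[-1]
```

- At least two such records are needed before the estimation step described below can run.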
- If the CM is affected by SAG, OCRT≠OPLP, then the original calibration parameters are converted into the OCRT range. In the present description, this transformation function is represented as SAG below:
-
[DAC_FAR^PLP, DAC_NEAR^PLP]_OCRT = SAG(DAC_FAR^PLP, DAC_NEAR^PLP, O_PLP, O_CRT)
- If the CM is not affected by SAG, the calibration parameters will remain unchanged:
-
[DAC FAR PLP ,DAC NEAR PLP]OCRT =[DAC FAR PLP ,DAC NEAR PLP] - If the CM is affected by thermal effect, TCRT≠TPLP, then the calibration parameters after SAG correction are converted into the TCRT range. Again, a transformation function, called TH below, can be used:
-
[DAC FAR PLP ,DAC NEAR PLP]TCRT =TH([DAC FAR PLP ,DAC NEAR PLP]OCRT ,T PLP ,T CRT) - If the CM is not affected by thermal effects, then the calibration parameters resulting from any SAG correction will remain unchanged:
-
[DAC FAR PLP ,DAC NEAR PLP]TCRT =[DAC FAR PLP ,DAC NEAR PLP]OCRT - Note that in each case SAG( ) and TH( ) can involve lookup tables, and again details of how to adjust the DACFAR PLP and DACNEAR PLP values to take into account temperature and orientation are disclosed in WO 2016/000874 (Ref: FN-395-PCT).
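- The SAG( ) and TH( ) corrections themselves are specified in WO 2016/000874 and are not reproduced here; the sketch below only illustrates the control flow, using hypothetical lookup tables with linear interpolation as a stand-in for the real transformation functions.

```python
import bisect

def interp_table(x: float, xs, ys) -> float:
    """Piecewise-linear lookup used as a stand-in for a calibration table."""
    i = bisect.bisect_left(xs, x)
    if i <= 0:
        return ys[0]
    if i >= len(xs):
        return ys[-1]
    t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
    return ys[i - 1] + t * (ys[i] - ys[i - 1])

# Hypothetical DAC-code offset tables indexed by orientation and temperature.
SAG_ORIENT_DEG = [-90.0, 0.0, 90.0]
SAG_OFFSET_DAC = [-15.0, 0.0, 15.0]
TH_TEMP_C      = [0.0, 25.0, 60.0]
TH_OFFSET_DAC  = [-8.0, 0.0, 12.0]

def adjust_calibration(dac_far, dac_near, o_plp, o_crt, t_plp, t_crt):
    """Apply a SAG-style and then a thermal-style offset when the current
    conditions differ from those recorded at production time."""
    if o_crt != o_plp:      # SAG correction: convert into the O_CRT range
        delta = (interp_table(o_crt, SAG_ORIENT_DEG, SAG_OFFSET_DAC)
                 - interp_table(o_plp, SAG_ORIENT_DEG, SAG_OFFSET_DAC))
        dac_far, dac_near = dac_far + delta, dac_near + delta
    if t_crt != t_plp:      # thermal correction: convert into the T_CRT range
        delta = (interp_table(t_crt, TH_TEMP_C, TH_OFFSET_DAC)
                 - interp_table(t_plp, TH_TEMP_C, TH_OFFSET_DAC))
        dac_far, dac_near = dac_far + delta, dac_near + delta
    return dac_far, dac_near
```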
- At this point, PLP errors are unknown. For estimating them, data is collected during AF module operation as follows:
-
- 1. Estimate [DCRT] using the laser device or a
stereo camera system 14; - 2. Use [16] to compute the initial lens position [DACD
CRT INIT]
-
-
- 3. Set the lens position to DACD
CRT INIT and start searching the focus position around DACDCRT INIT. The lens should be moved forth or back until the best contrast value is achieved. The lens position with the best contrast value will be the focus position [DACDCRT FOCUS]
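- A simplified contrast-maximisation search around DAC_DCRT^INIT, in the spirit of step 3 above, might look as follows; contrast_at( ) stands in for whatever sharpness metric the device exposes and is purely hypothetical.

```python
def search_focus(dac_init: int, contrast_at, max_offset: int = 64, step: int = 4) -> int:
    """Hill-climb the contrast metric around the initial DAC position.

    `contrast_at(code)` must return a sharpness score for the frame captured
    with the lens driven to `code`; higher is better.
    """
    best_code, best_score = dac_init, contrast_at(dac_init)
    for direction in (+1, -1):                     # sweep forth, then back
        code = dac_init
        while abs(code - dac_init) < max_offset:
            code += direction * step
            score = contrast_at(code)
            if score <= best_score:
                break                              # contrast started to drop
            best_code, best_score = code, score
    return best_code                               # DAC_FOCUS

# Example with a synthetic contrast curve peaking at code 530:
dac_focus = search_focus(500, lambda c: -abs(c - 530))
print(dac_focus)   # 528 with the default 4-code step
```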
- For large PLP errors, the DACD
CRT FOCUS focus position will be far from the initial position DACDCRT INIT. The focus speed will be slow and the lens wobble effect strongly visible. - For small errors, the DACD
CRT FOCUS focus position will be closer to the initial position DACDCRT INIT. The focus speed will be higher and the lens wobble effect less visible. - The goal of dynamic compensation method is to use the above data (provided by
steps 1, 2 and 3) to estimate PLP errors. Once the estimation is done, the calibration parameters (DACFAR PLP, DACNEAR PLP) will be properly updated and the lens position provided by [16] will be the focus position. Focus sweeping will not be necessary anymore, and thus the AF module speed will be improved and the lens wobble effect reduced. - One way to define sufficiently good accuracy, is to restrict the errors of [DCRT] and [DACD
CRT FOCUS] to less than set thresholds as follows: -
- To understand the meaning of DOFD
CRT and DACSTEP, additional parameters are summarized in table 3. -
TABLE 3 DOF Parameters Parameter Description Unit Formula DOFD Depth of field at mm DOFD = DF − DN [20] distance [D] DF Far limit of DOF at distance [D] mm [21] DN Near limit of DOF at distance [D] mm [22] N Relative aperture (F#) — — of the lens system c Circle of confusion mm c = 2 * PS [23] PS Pixel size mm — f Focal length of the mm — system - DACSTEP is the absolute difference between the corresponding DAC values at DF and DN distances. Using [15], [16] and assuming ERRD
F ≈ERRDN , an estimated value of DACSTEP is given by: -
- DACSTEP should be a constant value (should not vary with the distance D).
- The second step of the compensation method is to estimate the errors [3] and [4] and to update the calibration parameters. It requires, two input records (TCRT, OCRT, DCRT, DACD
CRT INIT, DACDCRT FOCUS) which satisfy the following condition: -
[D 1F ,D 1N ]∩[D 2F ,D 2N ]=Ø[25] - where:
- [T1, O1, D1, DACD
1 INIT, DACD1 FOCUS] is the first record. - [T2, O2, D2, DACD
2 INIT, DACD2 FOCUS] is the second record. - [D1
F , D1N ] is the DOF range at first distance D1 (D1F is the far limit, D1N is the near limit). - [D2
F , D2N ] is the DOF range at second distance D2 (D2F is the far limit, D2N is the near limit). - If the test of equation [25] is satisfied (the two distances are quite different), then using [15] and replacing [DACD, D] with [DACD
1 FOCUS, D1] and [DACD2 FOCUS, D2], the resulting ERRD1 and ERRD2 are: -
ERR D1 =DAC D1 FOCUS −DAC D1 INIT [26] -
ERR D2 =DAC D2 FOCUS −DAC D2 INIT [27] - where:
DACD1 INIT and DACD2 INIT are computed using [16]. - Using [17] and replacing [ERRD, D] with [ERRD
1 , D1] and [ERRD2 , D2], it results the following linear system: -
- To simplify the above system, the following substitutions will be done:
-
- The new system becomes:
-
- PLP errors can now be estimated with the following formulae:
-
- where:
ERRD1 and ERRD2 are computed using [26] and [27].
∝D1 and ∝D2 are computed using [28] and [29]. - The new updated calibration parameters (which should be used further to improve AF module speed and reduced lens wobble effect) are:
-
- In a second embodiment of the present invention, instead of directly measuring a distance to an object in a scene being imaged, the distance can be estimated based on an assumed dimension of an object being imaged, for example, a face. More details about estimating the distance based on face information or indeed any recognizable object with a known dimension can be found in U.S. Pat. No. 8,970,770 (Ref: FN-361) and WO 2016/091545 (Ref: FN-399), the disclosures of which are herein incorporated by reference.
- However, as disclosed in WO 2016/091545 (Ref: FN-399), care should be taken when doing so to ensure that the object is not a false image of an object, for example, a billboard showing a large face, or a small printed face or a small child's face, where the assumed dimension may not apply. Thus, the second embodiment aims to provide dynamic compensation to estimate ERRPLP while taking into account that false faces may be present in a scene.
- Let assume the distance from the face to the image acquisition device is [D]. The current estimated distance [DEST] to that face is computed with the following formula:
-
- where:
-
- f represents the focal length of the lens system
- PS is the pixel size
- ed represents the assumed dimension, in this case, eye distance in millimeters for a human face (ed=70 mm)
- edp represents the computed eye distance in pixels within the detected face region
- For those human faces (where ed≈70 mm), formula [1] will provide a good estimation of the distance (DEST≈D).
- For false faces (ex. a small printed face with ed≈20 mm), formula [1] will provide a wrong distance because it assumes that ed=70 mm.
- The lens position [DACD] to focus at distance [D] is given by the following formula:
-
DAC D =DAC DEST INIT +ERR D [2] - where:
- DACD
EST INIT, the initial lens position computed based on estimated distance [DEST], can be calculated as per DACDCRT INIT in equation [16]; and -
ERR D =ERR PLP +ERR DEST [35] - Note that in this example, near and far PLP errors are assumed to be almost the same (ERRFAR PLP≈ERRNEAR PLP≈ERRPLP) and ERRD
EST represents an error caused by the wrong estimation of the distance to an object. - Referring now to
FIG. 3 , again the first step of dynamic compensation method is to collect into an internal buffer (BUFF) the necessary input data while the user is operating the camera. This process should be done transparently without affecting the user experience. Again, an input record should contain the data summarized in table 2, but instead of the measured - DCRT of the first embodiment, DEST the estimated distance is used. The buffer should have enough space to store at least 2 records.
- The second step of dynamic compensation process is to estimate ERRPLP and to update the calibration parameters. The embodiment attempts to image a given face at two separate distances, although in variants of the embodiment, measurements from images of different faces could be employed. In any case as in the first embodiment, two input records (TCRT, OCRT, DEST, DACD
EST INIT, DACDEST FOCUS) are required which satisfy the following conditions: -
[D 1F ,D 1N ]∩[D 2F ,D 2N ]=Ø a) - where:
- [T1, O1, D1, DACD
1 INIT, DACD1 FOCUS] is the first record; - [T2, O2, D2, DACD
2 INIT, DACD2 FOCUS] is the second record; - [D1
F , D1N ] is the DOF range for the first estimated distance (D1F is the far limit, D1N is the near limit); and - [D2
F , D2N ] is the DOF range for the second estimated distance (D2F is the far limit, D2N is the near limit). -
|ERR 1 |≤N*DAC STEP -
|ERR 2 |≤N*DAC STEP -
|ERR 1 −ERR 2 |≤DAC STEP/2 b) - where
- DACSTEP is determined as per equation [24] of the first embodiment;
-
- As before, the first condition (a), requires that the two distances should be different.
- The second condition (b), requires that the object being imaged indeed exhibits the assumed dimension so that a given face to be a live human face, ed≈70 mm. In this case, the errors should not be larger than the maximum error (N*DACSTEP) and they should be quite similar (the difference should not be higher than half DACSTEP). This condition assures that ERRD
EST ≈0 and ERRD≈ERRPLP as in the first embodiment. - If the current face is false (condition (b) is not respected), and so compensation must not be done until a new valid face is received.
- If conditions (a) and (b) are respected, them ERRPLP will be estimated as follows:
-
- The new updated calibration parameters (which should be used further to improve AF speed and reduced lens wobble effect) are:
-
- As indicated, the maximum estimated error should not be higher than N*DACSTEP (ERRPLP≤N*DACSTEP). The value of N can be determined by the image acquisition device or camera module manufacturer according to how tightly they wish the estimated compensation process to operate. Thus, the larger the value of N the greater the possibility of calibrating based on a poorly estimated distance to an object.
- It will be appreciated that many variations of the above described embodiments are possible and that for example features and functions described in relation to the first embodiment are applicable to the second embodiment and vice versa where possible.
Claims (17)
ERR D
ERR D
Priority Applications (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US15/605,159 US10148945B1 (en) | 2017-05-25 | 2017-05-25 | Method for dynamically calibrating an image capture device |
| EP18170272.1A EP3407594B1 (en) | 2017-05-25 | 2018-05-01 | A method for dynamically calibrating an image capture device |
| CN201810507884.8A CN108933937B (en) | 2017-05-25 | 2018-05-24 | Method for dynamically calibrating an image capture device |
| US16/208,091 US10560690B2 (en) | 2017-05-25 | 2018-12-03 | Method for dynamically calibrating an image capture device |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US15/605,159 US10148945B1 (en) | 2017-05-25 | 2017-05-25 | Method for dynamically calibrating an image capture device |
Related Child Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US16/208,091 Continuation US10560690B2 (en) | 2017-05-25 | 2018-12-03 | Method for dynamically calibrating an image capture device |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| US20180343444A1 true US20180343444A1 (en) | 2018-11-29 |
| US10148945B1 US10148945B1 (en) | 2018-12-04 |
Family
ID=62217718
Family Applications (2)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US15/605,159 Active US10148945B1 (en) | 2017-05-25 | 2017-05-25 | Method for dynamically calibrating an image capture device |
| US16/208,091 Active US10560690B2 (en) | 2017-05-25 | 2018-12-03 | Method for dynamically calibrating an image capture device |
Family Applications After (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US16/208,091 Active US10560690B2 (en) | 2017-05-25 | 2018-12-03 | Method for dynamically calibrating an image capture device |
Country Status (3)
| Country | Link |
|---|---|
| US (2) | US10148945B1 (en) |
| EP (1) | EP3407594B1 (en) |
| CN (1) | CN108933937B (en) |
Cited By (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US10951808B2 (en) * | 2017-06-16 | 2021-03-16 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Method for focusing control, mobile terminal and non-transitory storage medium |
| US11726392B2 (en) * | 2020-09-01 | 2023-08-15 | Sorenson Ip Holdings, Llc | System, method, and computer-readable medium for autofocusing a videophone camera |
| US20230403464A1 (en) * | 2022-06-10 | 2023-12-14 | Dell Products L.P. | Autofocus accuracy and speed using thermal input information |
Families Citing this family (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN106990669B (en) * | 2016-11-24 | 2019-07-26 | 深圳市圆周率软件科技有限责任公司 | A kind of panorama camera mass production method and system |
| US10148945B1 (en) * | 2017-05-25 | 2018-12-04 | Fotonation Limited | Method for dynamically calibrating an image capture device |
| US11611693B1 (en) * | 2021-11-30 | 2023-03-21 | Motorola Mobility Llc | Updating lens focus calibration values |
| CN114339033B (en) * | 2021-12-07 | 2022-10-04 | 珠海视熙科技有限公司 | Dynamic calibration method and device based on camera far focus |
| US12244912B1 (en) | 2023-04-03 | 2025-03-04 | Rockwell Collins, Inc. | System and method for determining performance of an imaging device in real-time |
Citations (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20090102403A1 (en) * | 2007-10-22 | 2009-04-23 | Stmicroelectronics (Grenoble) Sas | Vcm control circuit |
| US20090202235A1 (en) * | 2008-02-13 | 2009-08-13 | Qualcomm Incorporated | Auto-focus calibration for image capture device |
| US20160014404A1 (en) * | 2014-07-11 | 2016-01-14 | Evgeny Krestyannikov | Method and system for automatic focus with self-calibration |
| US20160147131A1 (en) * | 2014-11-21 | 2016-05-26 | Motorola Mobility Llc | Multiple Camera Apparatus and Method for Synchronized Autofocus |
| US20160165129A1 (en) * | 2014-12-09 | 2016-06-09 | Fotonation Limited | Image Processing Method |
| US20170155896A1 (en) * | 2014-07-01 | 2017-06-01 | Fotonation Limited | Method for calibrating an image capture device |
| US20180041754A1 (en) * | 2016-08-08 | 2018-02-08 | Fotonation Limited | Image acquisition device and method |
Family Cites Families (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US8970770B2 (en) | 2010-09-28 | 2015-03-03 | Fotonation Limited | Continuous autofocus based on face detection and tracking |
| JP2014142497A (en) * | 2013-01-24 | 2014-08-07 | Canon Inc | Imaging apparatus and method for controlling the same |
| US10148945B1 (en) * | 2017-05-25 | 2018-12-04 | Fotonation Limited | Method for dynamically calibrating an image capture device |
-
2017
- 2017-05-25 US US15/605,159 patent/US10148945B1/en active Active
-
2018
- 2018-05-01 EP EP18170272.1A patent/EP3407594B1/en active Active
- 2018-05-24 CN CN201810507884.8A patent/CN108933937B/en active Active
- 2018-12-03 US US16/208,091 patent/US10560690B2/en active Active
Patent Citations (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20090102403A1 (en) * | 2007-10-22 | 2009-04-23 | Stmicroelectronics (Grenoble) Sas | Vcm control circuit |
| US20090202235A1 (en) * | 2008-02-13 | 2009-08-13 | Qualcomm Incorporated | Auto-focus calibration for image capture device |
| US20170155896A1 (en) * | 2014-07-01 | 2017-06-01 | Fotonation Limited | Method for calibrating an image capture device |
| US20160014404A1 (en) * | 2014-07-11 | 2016-01-14 | Evgeny Krestyannikov | Method and system for automatic focus with self-calibration |
| US20160147131A1 (en) * | 2014-11-21 | 2016-05-26 | Motorola Mobility Llc | Multiple Camera Apparatus and Method for Synchronized Autofocus |
| US20160165129A1 (en) * | 2014-12-09 | 2016-06-09 | Fotonation Limited | Image Processing Method |
| US20180041754A1 (en) * | 2016-08-08 | 2018-02-08 | Fotonation Limited | Image acquisition device and method |
Cited By (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US10951808B2 (en) * | 2017-06-16 | 2021-03-16 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Method for focusing control, mobile terminal and non-transitory storage medium |
| US11726392B2 (en) * | 2020-09-01 | 2023-08-15 | Sorenson Ip Holdings, Llc | System, method, and computer-readable medium for autofocusing a videophone camera |
| US20230403464A1 (en) * | 2022-06-10 | 2023-12-14 | Dell Products L.P. | Autofocus accuracy and speed using thermal input information |
| US12143716B2 (en) * | 2022-06-10 | 2024-11-12 | Dell Products L.P. | Autofocus accuracy and speed using thermal input information |
Also Published As
| Publication number | Publication date |
|---|---|
| US20190158822A1 (en) | 2019-05-23 |
| EP3407594B1 (en) | 2019-09-11 |
| US10148945B1 (en) | 2018-12-04 |
| EP3407594A1 (en) | 2018-11-28 |
| CN108933937A (en) | 2018-12-04 |
| US10560690B2 (en) | 2020-02-11 |
| CN108933937B (en) | 2022-02-25 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US10560690B2 (en) | Method for dynamically calibrating an image capture device | |
| US8259183B2 (en) | Shooting lens having vibration reducing function and camera system for same | |
| US20170163876A1 (en) | Image processing apparatus, image processing method, image processing program, and image pickup apparatus acquiring a focusing distance from a plurality of images | |
| US10812722B2 (en) | Imaging apparatus, shake correction method, lens unit, and body unit | |
| US20050018051A1 (en) | Shooting lens having vibration reducing function and camera system for same | |
| JP7289621B2 (en) | Control device, imaging device, control method, and program | |
| JP6749791B2 (en) | Imaging device and automatic focusing method | |
| KR20150074641A (en) | Auto focus adjusting method and auto focus adjusting apparatus | |
| GB2528382A (en) | Image processing apparatus, method for controlling the same, and storage medium | |
| US20080239099A1 (en) | Camera | |
| CN107040711B (en) | Image stabilization device and control method thereof | |
| WO2018030166A1 (en) | Image shake correction device, optical device, and method for correcting image shake | |
| WO2016185701A1 (en) | Infrared imaging device and method for updating fixed pattern noise data | |
| JP6543946B2 (en) | Shake correction device, camera and electronic device | |
| JP5283667B2 (en) | Image processing apparatus, image processing method, and program | |
| US9848115B2 (en) | Image capturing apparatus capable of adjusting optical characteristics of lens unit attachable thereto | |
| CN105308489A (en) | Focus adjustment device, camera system, and focus adjustment method | |
| CN107409182A (en) | Image processing apparatus, image processing method and program | |
| JP2005157268A (en) | IMAGING DEVICE, ITS CONTROL METHOD, PROGRAM, AND STORAGE MEDIUM | |
| JP2019216374A (en) | Imaging apparatus and control method therefor | |
| TW201823678A (en) | Image capturing device and method for correcting phase focusing thereof | |
| KR101854355B1 (en) | Image correction apparatus selectively using multi-sensor | |
| JP2009152725A (en) | Automatic tracking apparatus and method | |
| JP2019061228A (en) | CONTROL DEVICE, IMAGING DEVICE, CONTROL METHOD, AND PROGRAM | |
| US9930245B2 (en) | Focus control apparatus and control method for the same |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| AS | Assignment |
Owner name: FOTONATION LIMITED, IRELAND Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MALAESCU, ALEXANDRU;NANU, FLORIN;REEL/FRAME:042514/0715 Effective date: 20170526 |
|
| STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
| MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 |
|
| AS | Assignment |
Owner name: TOBII TECHNOLOGY LIMITED, IRELAND Free format text: CHANGE OF NAME;ASSIGNOR:FOTONATION LIMITED;REEL/FRAME:070238/0774 Effective date: 20240820 |
|
| AS | Assignment |
Owner name: TOBII TECHNOLOGIES LIMITED, IRELAND Free format text: CHANGE OF NAME;ASSIGNOR:FOTONATION LIMITED;REEL/FRAME:070682/0207 Effective date: 20240820 |
|
| AS | Assignment |
Owner name: BANK OF AMERICA, N.A., AS COLLATERAL AGENT, ILLINOIS Free format text: SECURITY INTEREST;ASSIGNORS:ADEIA INC. (F/K/A XPERI HOLDING CORPORATION);ADEIA HOLDINGS INC.;ADEIA MEDIA HOLDINGS INC.;AND OTHERS;REEL/FRAME:071454/0343 Effective date: 20250527 |
|
| AS | Assignment |
Owner name: ADEIA MEDIA HOLDINGS LLC, CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:TOBII TECHNOLOGIES LTD;REEL/FRAME:071572/0855 Effective date: 20250318 Owner name: ADEIA MEDIA HOLDINGS INC., CALIFORNIA Free format text: CONVERSION;ASSIGNOR:ADEIA MEDIA HOLDINGS LLC;REEL/FRAME:071577/0875 Effective date: 20250408 |