
CN112819897B - Camera parameter correction method, device, electronic device and storage medium - Google Patents


Info

Publication number
CN112819897B
CN112819897B
Authority
CN
China
Prior art keywords
correction
processing
parameter
camera
processing result
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110016581.8A
Other languages
Chinese (zh)
Other versions
CN112819897A
Inventor
朱文波
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202110016581.8A
Publication of CN112819897A
Application granted
Publication of CN112819897B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

An embodiment of the application discloses a camera parameter correction method, apparatus, electronic device and storage medium. The method comprises: acquiring a target image captured by a camera; performing specific image processing on the target image with a first processing model to obtain a first processing result; performing the same specific image processing with a second processing model to obtain a second processing result; and, upon determining based on the two results that a first correction condition is met, correcting the calibration parameters of the camera based on the first processing result. In this way, the image processing result of the first processing model serves as an evaluation benchmark for the camera's built-in second processing model. When the two results satisfy the correction condition, for example when the difference between them is large, the camera's calibration parameters are corrected using the more accurate first processing result, and the corrected parameters are then used for image processing, which improves the camera's image processing quality.

Description

Camera parameter correction method and device, electronic equipment and storage medium
Technical Field
The present application relates to image processing technologies, and in particular, to a method and apparatus for correcting camera parameters, an electronic device, and a storage medium.
Background
A camera's calibration parameters are generally determined before the camera leaves the factory, and the resulting calibration parameters are solidified into the camera's module parameters. However, as camera components age or suffer impacts, the characteristics of the lens and sensor may change and the relative position of the lens may shift, so that the original factory calibration parameters no longer suit new shooting scenes, and recalibration or parameter correction becomes necessary; otherwise, when the camera processes images based on the original calibration parameters, large deviations appear in the processing results and degrade the camera's shooting quality. Solving the problem of drifting camera calibration parameters is therefore an important research direction for improving the shooting quality of cameras.
Disclosure of Invention
In order to solve the above technical problems, embodiments of the application provide a camera parameter correction method, apparatus, electronic device and storage medium.
The technical scheme of the application is realized as follows:
in a first aspect, a method for correcting parameters of a camera is provided, the method comprising:
acquiring a target image acquired by a camera;
performing specific image processing on the target image by using a first processing model to obtain a first processing result of the target image;
performing specific image processing on the target image by using a second processing model to obtain a second processing result of the target image;
judging whether a preset first correction condition is met or not based on the first processing result and the second processing result;
and, upon determining that the first correction condition is met, correcting the calibration parameters of the camera based on the first processing result.
In a second aspect, there is provided a camera parameter correction apparatus, the apparatus comprising:
The acquisition unit is used for acquiring a target image acquired by the camera;
a processing unit, configured to perform specific image processing on the target image with a first processing model to obtain a first processing result of the target image, and to perform specific image processing on the target image with a second processing model to obtain a second processing result of the target image;
a judging unit, configured to judge whether a preset first correction condition is satisfied based on the first processing result and the second processing result;
and a correction unit, configured to, upon determining that the first correction condition is met, correct the calibration parameters of the camera based on the first processing result.
In a third aspect, an electronic device is provided comprising a processor and a memory configured to store a computer program capable of running on the processor, wherein the processor is configured to perform the steps of the aforementioned method when the computer program is run.
In a fourth aspect, a computer storage medium is provided, on which a computer program is stored, wherein the computer program, when being executed by a processor, carries out the steps of the aforementioned method.
The embodiments of the application provide a camera parameter correction method, apparatus, electronic device and storage medium. The method comprises: acquiring a target image captured by a camera; performing specific image processing on the target image with a first processing model to obtain a first processing result of the target image; performing specific image processing on the target image with a second processing model to obtain a second processing result of the target image; judging, based on the first processing result and the second processing result, whether a preset first correction condition is met; and, upon determining that the first correction condition is met, correcting the calibration parameters of the camera based on the first processing result. In this way, the image processing result of the first processing model serves as an evaluation benchmark for the camera's built-in second processing model. When the two results satisfy the correction condition, for example when the difference between them is large, the camera's calibration parameters are corrected using the more accurate first processing result, and the corrected parameters are then used for image processing, which improves the camera's image processing quality.
Drawings
FIG. 1 is a schematic diagram of four coordinate systems of a camera according to an embodiment of the present application;
FIG. 2 is a first flow chart of a camera parameter correction method according to an embodiment of the application;
FIG. 3 is a second flow chart of a camera parameter correction method according to an embodiment of the application;
FIG. 4 is a third flow chart of a camera parameter correction method according to an embodiment of the application;
FIG. 5 is a schematic diagram of an imaging principle of a binocular camera according to an embodiment of the present application;
FIG. 6 is a schematic view of imaging a single point target on left and right imaging planes of a binocular camera according to an embodiment of the present application;
FIG. 7 is a schematic diagram illustrating a coordinate transformation relationship between a single-point target and an imaging plane according to an embodiment of the present application;
FIG. 8 is a fourth flowchart of a method for correcting camera parameters according to an embodiment of the present application;
FIG. 9 is a schematic diagram of a correction convergence curve according to an embodiment of the application;
FIG. 10 is a schematic diagram of a camera parameter correction device according to an embodiment of the present application;
Fig. 11 is a schematic diagram of a composition structure of an electronic device according to an embodiment of the present application.
Detailed Description
For a fuller understanding of the features and technical content of the embodiments of the application, reference is made to the following detailed description of the embodiments, taken in conjunction with the accompanying drawings, which are provided for illustration only and are not intended to limit the embodiments of the application.
Before describing the embodiments of the application in further detail, camera calibration parameters are first explained.
In current mobile phone production, the calibration of camera parameters is a critical step for both image measurement and machine vision applications: the accuracy of the calibration result and the stability of the algorithm directly affect the accuracy of the results the camera produces. Camera calibration is therefore a prerequisite for subsequent work, and improving calibration precision is the foundation for subsequent image algorithms that process images based on the calibrated parameters.
Camera calibration is mainly performed to obtain the camera's intrinsic and extrinsic parameters (including, for multi-camera setups, the positional parameters between cameras) and the distortion parameters. Because each lens exhibits a different degree of distortion, calibration allows the camera to correct lens distortion and generate rectified images, and also allows a three-dimensional scene to be reconstructed from the captured images.
The camera calibration process can be described simply as follows: a calibration plate is photographed and, as shown in fig. 1, four coordinate systems are involved in image processing. The world coordinate system (Ow-XwYwZw) describes positions in the real scene, including the camera position. The camera coordinate system (Oc-XcYcZc) has its origin at the camera's optical center. The image coordinate system describes the imaging plane, with the midpoint o of the imaging plane as its origin. The pixel coordinate system (uv) has its origin at the upper-left corner of the image and uses the pixel as its unit; the coordinates (u, v) of each pixel are its column number and row number in the pixel array. A point P(Xw, Yw, Zw) is a point in the world coordinate system, i.e., a real point in the scene; its imaging point p(x, y) lies in the image coordinate system and has coordinates (u, v) in the pixel coordinate system. f is the camera focal length, f = ||o - Oc||.
A three-dimensional point Pi in the world coordinate system is converted to the corresponding two-dimensional point pi in the image coordinate system through a series of matrix transformations determined by the camera intrinsic parameters K, the camera extrinsic parameters (comprising the rotation matrix R and the translation vector T), and the distortion parameters D.
The transformation from the world coordinate system to the camera coordinate system is a rigid transformation: the object is not deformed, only rotated and translated, as described by the camera extrinsic parameters R and T. The transformation from the camera coordinate system to the image coordinate system is a perspective projection from 3D to 2D.
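As an illustrative sketch (not code from the patent), the world-to-pixel chain described above can be written in a few lines of NumPy; the distortion parameters D are omitted for brevity, and the function name `project` is chosen here for illustration:

```python
import numpy as np

def project(P_world, K, R, T):
    """Map a 3D world point to pixel coordinates:
    world -> camera (rigid transform R, T) -> image plane (perspective
    divide) -> pixel coordinates (intrinsic matrix K). Distortion omitted."""
    P_cam = R @ P_world + T                           # extrinsics: rotation + translation
    x, y = P_cam[0] / P_cam[2], P_cam[1] / P_cam[2]   # perspective projection
    u, v, w = K @ np.array([x, y, 1.0])               # intrinsics: focal length, principal point
    return np.array([u / w, v / w])
```

For a camera with focal length 800 px and principal point (320, 240), a point on the optical axis projects to the image center.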
In most cases these parameters must be obtained through experiment and computation, and this parameter-solving process is called camera calibration. Calibration methods fall into three categories: the traditional camera calibration method, the active-vision camera calibration method, and the camera self-calibration method.
The traditional camera calibration method requires a calibration object of known dimensions; by establishing correspondences between points of known coordinates on the calibration object and their image points, the intrinsic and extrinsic parameters of the camera model are obtained by a suitable algorithm. Calibration objects are divided into three-dimensional and planar ones. A three-dimensional calibration object allows calibration from a single image with higher precision, but machining and maintaining a high-precision three-dimensional object is difficult. A planar calibration object is simpler to manufacture and its precision is easier to guarantee, but two or more images are required for calibration. The traditional method always needs a calibration object during calibration, and the object's manufacturing precision affects the calibration result; moreover, some settings do not permit placing a calibration object, which limits the method's applicability.
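To make the planar-target idea concrete, here is a minimal sketch of its core algebraic step: estimating the homography between board points of known coordinates and their image points via the direct linear transform (DLT). This is an illustrative assumption about how such correspondences are used, not the patent's procedure; real pipelines (e.g. OpenCV's calibration module) add corner detection, multiple views, and nonlinear refinement:

```python
import numpy as np

def estimate_homography(board_pts, img_pts):
    """Estimate the 3x3 homography H mapping planar board points (x, y)
    to image points (u, v), i.e. (u, v, 1) ~ H (x, y, 1), via DLT:
    each correspondence contributes two linear constraints on H's entries,
    and the null vector of the stacked system is found with an SVD."""
    rows = []
    for (x, y), (u, v) in zip(board_pts, img_pts):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows)
    _, _, vt = np.linalg.svd(A)     # right singular vector of the smallest
    H = vt[-1].reshape(3, 3)        # singular value spans the null space
    return H / H[2, 2]              # normalize so H[2, 2] == 1
```

Four or more non-degenerate correspondences suffice; with exact (noise-free) points the true homography is recovered to numerical precision.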
The self-calibration algorithm mainly exploits constraints on camera motion; these motion constraints are too strong, which makes the approach impractical. Scene-constraint approaches mainly exploit parallel or orthogonal structure in the scene: the intersection point on the camera's image plane of lines that are parallel in space is called a vanishing point, a very important feature in projective geometry, and many researchers have accordingly studied vanishing-point-based camera self-calibration. The self-calibration method is highly flexible and permits online calibration, but because it is based on the absolute conic or absolute quadric, its algorithms are not robust.
The camera calibration method based on active vision calibrates the camera from known information about its motion. It needs no calibration object, but the camera must be driven through certain special motions, whose particular properties allow the intrinsic parameters to be computed. Its advantages are a simple algorithm and linear solutions, hence good robustness; its drawbacks are high system cost, expensive experimental equipment and demanding experimental conditions, making it unsuitable when the motion parameters are unknown or cannot be controlled.
The embodiment of the application provides a camera parameter correction method aimed mainly at correcting the camera's original calibration parameters, solving the problem that aging and impacts on camera components change the characteristics of the lens and sensor, so that the original factory calibration parameters no longer suit new shooting scenes.
Fig. 2 is a first flow chart of a method for correcting camera parameters according to an embodiment of the application, as shown in fig. 2, the method specifically includes:
step 201, acquiring a target image acquired by a camera;
in some embodiments, the method further comprises pre-caching the target image.
In some embodiments, acquiring the target image captured by the camera includes acquiring the cached target image while the camera is in an idle state, in order to perform the calibration parameter correction operation.
It should be noted that, in order to obtain a more accurate result when there is no strict real-time requirement, the image data at a certain moment can be buffered; when system resources are sufficient or idle, the two image processing methods (i.e., the first processing model and the second processing model) are each applied to the same buffered frame, for example to calculate image depth information. The two calculated depth results are then compared, and whether to perform parameter correction is judged according to the comparison result.
In some embodiments, when the correction judgment is performed again, the method further includes acquiring a newly captured target image from the camera, and performing the specific image processing on the acquired target image with the first processing model and the second processing model respectively.
That is, the correction of the camera calibration parameters may also proceed by processing a new frame after each correction and then iterating the correction again. In other words, each correction updates the processed image (for example, after a new intermediate correction parameter is generated, a new frame of image data is captured as the processing object, rather than the same image being processed throughout). This is an iterative convergence process: when the results of the two processing methods agree, the camera calibration parameters at that point are taken as the new calibration parameters, completing the correction of the relevant camera calibration parameters.
Of course, the same frame of image can also be processed throughout while iteratively correcting the parameters.
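The iterative convergence described above can be sketched with a toy one-parameter example. Everything here (the single scale-factor "calibration", the update rule, the function names) is an illustrative assumption rather than the patent's algorithm, but it shows the loop structure: capture a new frame each round, compare the calibration-free reference result with the calibration-dependent result, and stop once they agree:

```python
import random

def iterative_correction(reference_fn, calib, err_threshold=1e-3, max_iters=100):
    """Toy correction loop: `calib` is a single scale factor standing in for
    the calibration parameters; `reference_fn` stands in for the
    calibration-free first processing model."""
    for _ in range(max_iters):
        scene = random.uniform(1.0, 5.0)     # "capture a new frame"
        ref = reference_fn(scene)            # first result: no calibration used
        out = calib * scene                  # second result: depends on calibration
        if abs(ref - out) < err_threshold:
            return calib                     # results consistent: converged
        calib += 0.5 * (ref - out) / scene   # nudge calibration toward reference
    return calib
```

Starting from calib = 1.0 with a true scale of 2.0, the gap halves each round, so the loop converges to 2.0 within a few dozen iterations.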
Step 202, performing specific image processing on the target image by using a first processing model to obtain a first processing result of the target image;
Here, the specific image processing procedure of the first processing model is not related to the calibration parameters of the camera. It is understood that the input variables of the first processing model include the target image to be processed, and do not include the calibration parameters.
It should be noted that although the image processing of the first processing model does not depend on the calibration parameters, it still achieves high accuracy, so the first processing result can serve as an evaluation benchmark for the image processing quality of the second processing model. If the second processing result approaches or matches the first processing result, the image processing accuracy of the second processing model can be judged to be high and no correction of the calibration parameters is needed; otherwise, the calibration parameters need to be corrected.
Illustratively, the first processing model is a neural network model established based on a neural network algorithm.
In some embodiments, performing the specific image processing on the target image with the first processing model to obtain the first processing result includes: performing the specific image processing on the target image with the first processing model to obtain an initial processing result of the target image; obtaining a first processing result of a related image, where the related image is correlated with the target image; and correcting the initial processing result with the related image's first processing result to obtain the first processing result of the target image.
Here, in order to improve accuracy of the first processing result, the first processing model may correct the initial processing result according to the processing result of the related image after outputting the initial processing result of the target image.
Here, the related image may be an adjacent frame, or another frame captured in the same shooting scene as the target image; because its processing result is the same or similar, it can be used to evaluate or correct the processing result of the target image.
Illustratively, the depth information of the target image is corrected using, for example, the depth information of adjacent frames.
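A hedged sketch of such a neighboring-frame correction follows; the blending weight, jump threshold, and function name are assumptions for illustration, since the patent does not specify a formula:

```python
import numpy as np

def refine_with_neighbor(depth_t, depth_prev, alpha=0.8, max_jump=0.5):
    """Correct the current frame's depth map using an adjacent frame's result:
    pixels whose depth jumps implausibly between frames fall back to the
    neighbor's value; the rest are blended with weight `alpha`."""
    depth_t = np.asarray(depth_t, dtype=float)
    depth_prev = np.asarray(depth_prev, dtype=float)
    jump = np.abs(depth_t - depth_prev)
    blended = alpha * depth_t + (1.0 - alpha) * depth_prev
    return np.where(jump > max_jump, depth_prev, blended)
```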
Step 203, performing specific image processing on the target image by using a second processing model to obtain a second processing result of the target image;
Here, the specific image processing procedure of the second processing model is related to the calibration parameters of the camera. It is understood that the input variables of the second processing model include both the target image to be processed and the calibration parameters.
In some embodiments, the specific image processing includes at least one of image depth information calculation, image blurring processing, image noise reduction processing, and image correction processing.
In some embodiments, the calibration parameters include at least one of a position parameter between lenses, a camera intrinsic parameter, a camera extrinsic parameter, a camera distortion parameter.
It should be noted that, in practical applications, the calibration parameters related to a given image processing effect can be corrected using that image processing result. The second processing model may be used for several kinds of specific image processing, and one image processing result may be affected by one or more calibration parameters; an image processing result is therefore used to correct the corresponding calibration parameters. For example, the camera distortion parameters affect the image correction process, so the distortion parameters can be corrected by comparing the image correction results obtained by the two models.
Step 204, judging whether a preset first correction condition is met based on the first processing result and the second processing result;
Here, the first correction condition is used to determine whether the error between the first processing result and the second processing result is within an allowable first error range. If it is, no correction of the calibration parameters is needed; if it exceeds the allowable first error range, the calibration parameters need to be corrected.
In some embodiments, judging whether the preset first correction condition is met includes: determining a first error value between the first processing result and the second processing result; determining that the first correction condition is met when the first error value is greater than or equal to a first error threshold; and determining that the first correction condition is not met when the first error value is less than the first error threshold.
Here, the first error value may be determined from the pixel information in the two results. For example, when the processing result is depth information, the error value is obtained from the depth information of corresponding pixels in the two results.
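One plausible concrete choice of first error value, sketched here as an assumption (the patent does not fix the metric), is the mean absolute per-pixel difference between the two depth maps, compared against the first error threshold:

```python
import numpy as np

def first_error_value(result_a, result_b):
    """Mean absolute per-pixel difference between two processing results
    (e.g. two depth maps computed for the same frame)."""
    a = np.asarray(result_a, dtype=float)
    b = np.asarray(result_b, dtype=float)
    return float(np.mean(np.abs(a - b)))

def meets_first_correction_condition(err, threshold):
    """The first correction condition is met when the error reaches the threshold."""
    return err >= threshold
```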
In some embodiments, when the specific image processing is image depth information calculation, the first processing result and the second processing result are depth information of the image;
determining the first error value between the first processing result and the second processing result then includes: performing blurring processing on the target image based on the first processing result to obtain a first blurring result; performing blurring processing on the target image based on the second processing result to obtain a second blurring result; and determining the first error value between the first blurring result and the second blurring result.
It should be noted that because the blurring result better reflects the final image processing effect, the blurring results can be compared to judge whether the calibration parameters need correction.
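As a toy illustration of comparing blurring results rather than raw depth maps (the blur model and the 0.5 focus tolerance below are assumptions, not the patent's processing), out-of-focus pixels are box-averaged and the two renderings are differenced:

```python
import numpy as np

def depth_blur(image, depth, focus, radius=1):
    """Toy depth-dependent blurring: pixels whose depth differs from the
    focus plane by more than 0.5 are replaced by a local box average;
    in-focus pixels are kept unchanged."""
    img = np.asarray(image, dtype=float)
    pad = np.pad(img, radius, mode="edge")
    k = 2 * radius + 1
    # Box filter via shifted sums over the (2r+1)^2 neighborhood.
    blurred = sum(pad[i:i + img.shape[0], j:j + img.shape[1]]
                  for i in range(k) for j in range(k)) / k**2
    out_of_focus = np.abs(np.asarray(depth, dtype=float) - focus) > 0.5
    return np.where(out_of_focus, blurred, img)

def blur_error(image, depth_a, depth_b, focus):
    """First error value computed between the two blurring results."""
    return float(np.mean(np.abs(depth_blur(image, depth_a, focus)
                                - depth_blur(image, depth_b, focus))))
```

If the two depth maps agree, the two blurred renderings are identical and the error is zero.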
Step 205, determining that the first correction condition is met, and correcting calibration parameters of the camera based on the first processing result.
Here, the calibration parameter may be an original calibration parameter at the time of original shipment or a correction parameter obtained at the time of last correction.
Here, since the accuracy of the first processing result is higher, the calibration parameters of the camera may be reversely corrected according to the first processing result.
In some embodiments, correcting the calibration parameters of the camera based on the first processing result includes: obtaining first correction parameters of the camera based on the first processing result; performing the first correction judgment operation again based on the first correction parameters until it is determined that the first correction condition is no longer met; taking the first correction parameters in effect when the first correction condition is not met as second correction parameters; and correcting the calibration parameters with the second correction parameters.
Here, the first correction judgment operation means re-executing steps 201 to 205 until it is determined that the first correction condition is not met, at which point the error between the current first processing result and second processing result can be considered within the allowable error range.
In some embodiments, the method further includes: upon determining that the first correction condition is not met, if a correction operation has been performed, taking the current first correction parameter as the second correction parameter and correcting the calibration parameters with the second correction parameter; if no correction operation has been performed, keeping the camera's calibration parameters unchanged.
It should be noted that the camera parameter correction method provided in the embodiments of the application may be applied to an electronic device with a camera function; such devices include mobile phones, tablet computers, notebook computers, palmtop computers, personal digital assistants (PDA), portable media players (PMP), wearable devices, cameras, and so on.
With the above technical solution, the image processing result of the first processing model serves as an evaluation benchmark for the camera's built-in second processing model. When the first and second processing results satisfy the correction condition, for example when the difference between them is large, the camera's calibration parameters are corrected using the more accurate first processing result, and the corrected parameters are then used for image processing, which improves the camera's image processing quality.
In order to further embody the object of the present application, the parameter correction process is further illustrated based on the foregoing embodiment of the present application, and as shown in fig. 3, the method specifically includes:
Step 301, acquiring a target image acquired by a camera;
in some embodiments, the method further comprises pre-caching the target image.
In some embodiments, acquiring the target image captured by the camera includes acquiring the cached target image while the camera is in an idle state, in order to perform the calibration parameter correction operation.
In some embodiments, when the correction judgment is performed again, the method further includes acquiring a newly captured target image from the camera, and performing the specific image processing on the acquired target image with the first processing model and the second processing model respectively.
Step 302, performing specific image processing on the target image by using a first processing model to obtain a first processing result of the target image;
Here, the specific image processing procedure of the first processing model is not related to the calibration parameters of the camera. It is understood that the input variables of the first processing model include the target image to be processed, and do not include the calibration parameters.
It should be noted that the first processing result may serve as an evaluation benchmark for the image processing quality of the second processing model: if the second processing result approaches or matches the first, the image processing accuracy of the second processing model can be judged to be high and no correction of the calibration parameters is needed; otherwise, the calibration parameters are corrected. The image processing of the first processing model is both independent of the calibration parameters and highly accurate. Illustratively, the first processing model is a neural network model established based on a neural network algorithm.
Step 303, configuring forward input parameters when the second processing model performs forward operation;
It should be noted that, when the first correction judgment operation is performed for the first time, the method includes obtaining the calibration parameters of the camera and using them as the forward input parameters of the second processing model.
That is, when the first correction judgment operation begins, the second processing model performs image processing with the camera's original calibration parameters. If the first correction condition is never met during the operation, i.e., the second processing model consistently produces sufficiently accurate results with the original calibration parameters, the forward input parameters remain the original calibration parameters.
In some embodiments, the method further comprises obtaining a first correction parameter of the camera based on the first processing result when a first correction condition is met, and taking the first correction parameter as a forward input parameter of the second processing model.
When the correction operation is performed, if the first correction condition is met, the original calibration parameters need correcting and a first correction parameter is obtained. To verify the accuracy of this first correction parameter, it must be tested in practice as the forward input parameter of the second processing model, so the first correction judgment operation is performed again; at this point the first correction parameter is temporarily used as the forward input parameter, and the second processing model performs a forward operation to carry out the specific image processing on the input image.
It should be noted that the first correction parameter is only temporarily used as the forward input parameter within the first correction judgment operation; at this time, the calibration parameter of the camera is still the original calibration parameter or the correction parameter obtained in the last correction. Before the correction of the calibration parameter is confirmed, if the camera is in the working state, the second processing model still performs normal image processing operations based on that calibration parameter.
The first correction determining operation may be performed when the camera is in an idle state, for example, the camera is in a sleep mode to perform an offline operation, so as to avoid occupying normal processing resources of the camera.
Step 304, performing specific image processing on the target image by using a second processing model to obtain a second processing result of the target image;
here, the specific image processing procedure of the second processing model is related to the calibration parameters of the camera. It can be understood that the input variables of the second processing model include the target image to be processed and the calibration parameters.
Step 305, judging whether a preset first correction condition is met or not based on the first processing result and the second processing result, if yes, executing step 307, and if not, executing step 306;
in some embodiments, judging whether the preset first correction condition is satisfied includes: determining a first error value between the first processing result and the second processing result; determining that the first correction condition is satisfied when the first error value is greater than or equal to a first error threshold; and determining that the first correction condition is not satisfied when the first error value is less than the first error threshold.
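The threshold check described above can be sketched as follows. The mean-absolute-difference error metric and all names here are illustrative assumptions; the embodiment does not prescribe a specific error metric:

```python
import numpy as np

def first_correction_condition_met(first_result, second_result, first_error_threshold):
    """Judge the preset first correction condition: True means the error between
    the two processing results has reached the threshold and the calibration
    parameter should be corrected."""
    # Mean absolute difference over the two result arrays (assumed metric).
    first_error_value = float(np.mean(np.abs(
        np.asarray(first_result, dtype=np.float64)
        - np.asarray(second_result, dtype=np.float64))))
    return first_error_value >= first_error_threshold
```

In practice the two results would typically be per-pixel depth maps of the same target image.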
Step 306, determining that the first correction condition is not satisfied, taking the current first correction parameter as a second correction parameter, and correcting the calibration parameter by using the second correction parameter;
Step 306 may be understood as determining that the first correction condition is not satisfied and taking the current forward input parameter of the second processing model as the second correction parameter.
It should be noted that if, during the first correction judgment operation, the first correction condition is never satisfied, that is, the second processing model always produces a sufficiently accurate image processing result with the original calibration parameters, then the forward input parameter remains the original calibration parameters. In this case the forward input parameter is the calibration parameter of the camera, and step 306 may be replaced by: determining that the first correction condition is not satisfied, and keeping the calibration parameter of the camera unchanged.
Conversely, if during the first correction judgment operation the first correction condition is satisfied, a first correction parameter is obtained; through repeated iterative correction, the first correction parameter that finally makes the first processing result and the second processing result approximately consistent is obtained, and this final first correction parameter is taken as the second correction parameter to correct the calibration parameter of the camera. In this case the forward input parameter is the first correction parameter.
In some embodiments, correcting the calibration parameter with the second correction parameter includes replacing the calibration parameter with the second correction parameter. That is, the second correction parameter obtained by the first correction judgment operation directly replaces the current calibration parameter.
In other embodiments, correcting the calibration parameter by using the second correction parameter includes: performing a second correction judgment operation based on the second correction parameter until it is determined that the second correction condition is not satisfied; taking the second correction parameter for which the second correction condition is not satisfied as a third correction parameter; and correcting the calibration parameter by using the third correction parameter.
The second correction condition is used to judge whether the error between the standard processing result and the third processing result, obtained when the second processing model performs image processing using the second correction parameter, is within an allowable second error range. If so, it is determined that the calibration parameter does not need to be corrected; if the error exceeds the allowable second error range, it is determined that the calibration parameter needs to be corrected. Here, the second error range may be the same as the first error range, or may be smaller than the first error range so as to ensure that the corrected calibration parameter is more accurate.
That is, the second correction parameter obtained by the first correction judgment operation can be further verified, and the current calibration parameter is replaced after the verification is passed, so that the accuracy of the calibration parameter and the image processing effect are further improved. This is further illustrated in the next example.
Step 307, determining that the first correction condition is met, and obtaining a first correction parameter of the camera based on the first processing result;
in some embodiments, the first processing result is used as a reverse input parameter of the second processing model, a reverse operation is performed on the second processing model based on the reverse input parameter, and the first correction parameter is output in reverse.
That is, the original output parameters of the second processing model become known quantities, the input parameters become unknown quantities, and the input parameters can be obtained through inverse operation.
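The reverse operation described above (known output, unknown input) can be sketched as a search for the forward input parameter that makes the second processing model reproduce the known result. The scalar parameter and the gradient-free step search below are assumptions for illustration only; a real second processing model might instead be inverted analytically or by back-propagation:

```python
import numpy as np

def reverse_solve_parameter(second_model, target_image, target_result,
                            p0, step=0.1, max_iters=100):
    """Treat the first processing result (`target_result`) as the known output
    and search for the input parameter that makes `second_model` reproduce it.
    `second_model(image, p)` is a hypothetical callable standing in for the
    second processing model's forward operation."""
    p = float(p0)
    best_err = np.mean(np.abs(second_model(target_image, p) - target_result))
    for _ in range(max_iters):
        for cand in (p + step, p - step):
            err = np.mean(np.abs(second_model(target_image, cand) - target_result))
            if err < best_err:
                best_err, p = err, cand
                break
        else:
            step *= 0.5  # neither direction improves: refine the search step
    return p
```

The returned value plays the role of the first correction parameter, which is then verified as a forward input parameter.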
Step 308, taking the first correction parameter as a forward input parameter of the second processing model, and returning to step 303.
That is, after the first correction parameter is obtained, in order to verify its accuracy, it must be actually verified as the forward input parameter of the second processing model. Therefore the first correction judgment operation needs to be performed again; at this time, the first correction parameter is temporarily used as the forward input parameter, and the forward operation performs the specific image processing on the input image.
By adopting this technical scheme, the image processing result of the first processing model is used as an evaluation standard for the camera's built-in second processing model. When the first processing result and the second processing result satisfy the correction condition (for example, the difference between them is large), the calibration parameters of the camera are corrected using the more accurate first processing result, and image processing is performed with the corrected parameters, so that the image processing effect of the camera can be improved.
In order to further illustrate the object of the present application, the parameter correction process is further described based on the foregoing embodiment. As shown in fig. 4, the method specifically includes:
Step 301, acquiring a target image acquired by a camera;
Step 302, performing specific image processing on the target image by using a first processing model to obtain a first processing result of the target image;
Step 303, configuring forward input parameters when the second processing model performs forward operation;
Step 304, performing specific image processing on the target image by using a second processing model to obtain a second processing result of the target image;
Step 305, judging whether a preset first correction condition is met or not based on the first processing result and the second processing result, if yes, executing step 307, and if not, executing step 306;
step 306, determining that the first correction condition is not satisfied, and taking the current first correction parameter as a second correction parameter;
correcting the calibration parameter using the second correction parameter may specifically include:
Step 401, obtaining a standard image and a corresponding standard processing result;
here, the standard image and the corresponding standard processing result may be stored in the camera storage unit in advance, and serve as a standard for verifying the correction parameter when the current calibration parameter needs to be corrected.
Step 402, using the second correction parameter as a forward input parameter of the second processing model;
Here, the second correction parameter is temporarily used as the forward input parameter, and the second processing model performs a forward operation to carry out the specific image processing on the input standard image.
Step 403, performing specific image processing on the standard image by using the second processing model to obtain a third processing result of the standard image;
Step 404, judging whether the second correction condition is satisfied based on the third processing result and the standard processing result, if yes, executing step 405, and if not, executing step 406;
The second correction condition is used to judge whether the error between the standard processing result and the third processing result, obtained when the second processing model performs image processing using the second correction parameter, is within an allowable second error range. If so, it is determined that the calibration parameter does not need to be corrected; if the error exceeds the allowable second error range, it is determined that the calibration parameter needs to be corrected. Here, the second error range may be the same as the first error range, or may be smaller than the first error range so as to ensure that the corrected calibration parameter is more accurate.
In some embodiments, determining whether the second correction condition is satisfied includes: determining a second error value between the third processing result and the standard processing result; determining that the second correction condition is satisfied when the second error value is greater than or equal to a second error threshold; and determining that the second correction condition is not satisfied when the second error value is less than the second error threshold.
Step 405, determining that the second correction condition is met, taking the second correction parameter as a forward input parameter, and returning to step 303;
Here, returning to step 303 may be understood as re-performing the first correction judgment operation and the second correction judgment operation based on the second correction parameter until it is determined that the second correction condition is not satisfied.
And step 406, correcting the calibration parameter by using the third correction parameter.
In some embodiments, correcting the calibration parameter with the third correction parameter includes replacing the calibration parameter with the third correction parameter.
Step 307, determining that the first correction condition is met, and obtaining a first correction parameter of the camera based on the first processing result;
Step 308, taking the first correction parameter as a forward input parameter of the second processing model, and returning to step 303.
By adopting this technical scheme, the image processing result of the first processing model is used as an evaluation standard for the camera's built-in second processing model. When the first processing result and the second processing result satisfy the correction condition (for example, the difference between them is large), the calibration parameters of the camera are corrected using the more accurate first processing result, and image processing is performed with the corrected parameters, so that the image processing effect of the camera can be improved.
A further example is given below using a dual camera.
As shown in fig. 5, for the dual camera, when calculating depth information, consider two cameras with identical camera parameters, placed an appropriate distance apart with parallel optical axes (similar to a person's two eyes), observing the same object point M (for convenience of discussion, assume M is located to the left of the left camera). M1 is the image point of M on the right camera image plane, M2 is the image point of M on the left camera image plane, O1 is the optical center of the right camera, and O2 is the optical center of the left camera.
When the left and right imaging planes of fig. 5 are taken out separately, as shown in fig. 6, I1 and I2 are the centers of the two imaging planes (i.e., the projections of the optical centers O1 and O2 on their respective imaging planes), and coordinate systems are established with I1 and I2 as coordinate centers. From geometric knowledge, O1O2 ∥ I1I2 and O1O2 ∥ M1M2, so M1M2 ∥ I1I2.
Looking at the plane MM1M2 alone (hatched in fig. 5), it is easy to see that ΔMO1O2 is similar to ΔMM1M2, so by the triangle similarity principle MO1/MM1 = O1O2/M1M2.
Assuming that the abscissa components of M1 and M2 in their corresponding image planes are x1 and x2, we have:

M1M2 = I1I2 + x2 − x1

If the distance between the optical centers of the two cameras is defined as the baseline distance, denoted b = I1I2, then (x2 − x1) is the parallax of the same object point on the two image planes, denoted d. The formula above simplifies to:

M1M2 = b + d
Continuing the derivation:

MO1/MM1 = MO1/(MO1 + O1M1) = b/(b + d)

which simplifies to:

MO1/O1M1 = b/d
Now consider separately the correspondence of the object point M with its image point M1 on the image plane of the right camera. The world coordinate system is established with the right camera optical center O1 as the origin of coordinates, as shown in fig. 7.
From the figure, the coordinate transformation relation between the camera's imaging coordinate system and the world coordinate system at the optical center can easily be obtained, giving the depth information Z:

Z/O1I1 = MO1/O1M1 = b/d

Z = (b × f)/d, where f = O1I1 is the focal length of the camera. Thus the depth information of the object point M can be recovered.
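The final relation can be sketched directly. The units are assumptions for illustration (baseline in meters, focal length and disparity both in pixels, so depth comes out in meters):

```python
def depth_from_disparity(b, f, d):
    """Depth from the derivation above: Z = (b * f) / d.
    b: baseline distance between the two optical centers (meters, assumed);
    f: focal length (pixels, assumed); d: disparity (pixels)."""
    if d == 0:
        raise ValueError("zero disparity: the object point is at infinity")
    return (b * f) / d
```

For example, with a 10 cm baseline, an 800-pixel focal length, and a 40-pixel disparity, the object point lies 2 m away.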
As shown in fig. 8, the calibration parameter revising method for the dual-camera specifically may include:
step 801, turning on a camera;
step 802, starting image acquisition, caching acquired image data, and preparing for offline depth information calculation;
here, the image format may be a RAW format.
Step 803, sending the collected image data to a front-end NN network for depth information calculation;
Here, the NN (Neural Network) may be regarded as the first processing model.
The processing based on the front-end NN network focuses on calculating more accurate depth information for the image, exploiting the fact that the algorithm in the NN network is not influenced by camera calibration parameters (the calibration parameters of the camera are not used).
It should be noted that when the NN network is used to calculate depth information, various modes are possible: the calculation may be performed based on principles similar to AI or deep-learning blurring, or based on the blurring effect of historical frames. The purpose is to obtain more accurate image depth information, which facilitates correcting the calibration information of the camera through comparison.
Step 804, blurring the image according to the depth information obtained by NN network to obtain a first blurring result;
Step 805, obtaining camera calibration parameters as forward input parameters of a traditional image algorithm;
step 806, the acquired image data is simultaneously sent to a traditional image algorithm to calculate depth information;
Here, the conventional image algorithm may be regarded as the algorithm applied by the second processing model. The traditional depth information calculation algorithm based on dual-camera data calculates the depth information of the data frames at the same time instant (using the calibration parameters of the cameras).
Step 807, blurring the image according to the depth information obtained by the traditional image algorithm to obtain a second blurring result;
here, the depth information obtained in both modes performs blurring processing on the image using the same blurring algorithm.
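The comparison of the two blurring results can be sketched as follows. The depth-weighted 3x3 box blur is an assumed, simplified stand-in for the shared blurring algorithm; only the fact that the SAME algorithm is applied under both depth maps matters:

```python
import numpy as np

def compare_blur_results(image, depth_a, depth_b, focus_depth, strength=2.0):
    """Blur `image` with the same depth-driven blur under two depth maps and
    return the mean absolute difference of the two blurring results."""
    def blur(img, depth):
        # 3x3 box-blurred copy (edges padded by reflection).
        pad = np.pad(img, 1, mode="reflect")
        box = sum(pad[i:i + img.shape[0], j:j + img.shape[1]]
                  for i in range(3) for j in range(3)) / 9.0
        # Pixels far from the focus depth get more of the blurred copy
        # (assumed, simplified blurring model).
        w = np.clip(strength * np.abs(depth - focus_depth), 0.0, 1.0)
        return (1.0 - w) * img + w * box
    return float(np.mean(np.abs(blur(image, depth_a) - blur(image, depth_b))))
```

A near-zero difference corresponds to the "error smaller than the threshold" branch of step 809.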
Step 808, comparing the first blurring processing result with the second blurring processing result;
step 809, judging whether the error of the comparison result is smaller than a preset threshold d; if yes, executing step 810, otherwise returning to step 802;
the accuracy of the blurring results obtained from the two kinds of depth information is analyzed (generally, due to drift of the camera calibration parameters, the accuracy of the traditional mode is lower than that obtained by the NN network mode). The calibration parameters of the camera are reversely corrected according to the more accurate depth information, so that the depth information calculated in the traditional mode based on the new calibration parameters gradually approaches the result obtained by the NN calculation;
Here, when the error is greater than or equal to d, the reverse correction is first performed to obtain a correction parameter, and the correction judgment is performed again using that correction parameter.
Step 810, taking the corrected camera calibration parameters as the camera calibration parameters;
step 811, applying the camera calibration parameters to subsequent image algorithm processing;
step 812, video recording, photographing or previewing.
In the process of correcting parameters using the depth information, the goal is mainly accuracy rather than timeliness, so the image data at a certain moment can be cached. When system resources are sufficient or idle, the two paths (the front-end NN network, and the back-end traditional image algorithm that uses the camera calibration parameters) each calculate the depth information of the same piece of data (against a standard of accuracy). The two calculated depth maps are then compared: the image is blurred based on the depth information obtained by each path, and the blurred images can be analyzed to judge which parameter yields the more accurate blurring effect. The calibration parameters of the camera (such as the position information between lenses) are reversely corrected according to the comparison result, and the depth information and blurring are then reprocessed with the corrected calibration information until the accuracy approaches the standard result;
of course, the correction process of the camera calibration parameters can also process a new frame after each round of correction and then iterate again; that is, the image processed in each correction round is updated (for example, after a new step-wise correction parameter is generated, a newly collected frame of image data is taken as the processing object), rather than processing one image throughout. This is the same process as iterative convergence: once the results of the two processing modes are consistent, the camera calibration parameters at that time are taken as the new calibration parameters, finally completing the correction of the relevant camera calibration parameters.
Fig. 9 is a schematic diagram of a correction convergence curve in an embodiment of the present application, where, as shown in fig. 9, the abscissa is the iteration number, and the ordinate is the calibration parameter, and as the iteration number increases, the obtained calibration parameter gets closer to an accurate value.
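The iterative convergence of fig. 9 can be sketched with hypothetical callbacks standing in for the NN-vs-traditional comparison and the reverse correction step (both names and the update rule are assumptions for illustration):

```python
def iterative_calibration(estimate_error, correct, p0, tol=1e-3, max_rounds=50):
    """Each round measures the error between the two processing paths under
    the current parameter, then applies a partial, step-wise correction
    (nominally on a freshly collected frame). Stops once the error falls
    within `tol`, mirroring the convergence curve of fig. 9."""
    p = p0
    history = [p]  # parameter value per iteration (the curve's ordinate)
    for _ in range(max_rounds):
        err = estimate_error(p)
        if abs(err) < tol:
            break
        p = correct(p, err)
        history.append(p)
    return p, history
```

With a damped update such as `p + 0.5 * err`, the history approaches the accurate value as the iteration count grows, as the figure depicts.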
Because the characteristics of the lens and the sensor may change with aging and collisions of the camera components and with changes in the relative positions of the lenses, the original camera calibration parameters may no longer match the camera. When the camera works, the depth information of a designated frame (for example, the first frame each time the camera is opened) can be calculated offline with high precision by the front-end NN network (after the two calculation modes are calibrated at the factory, the calculation method and parameters can be fixed as the standard, and the original precision standard is used for subsequent correction). The camera then calculates the depth information of subsequent frames using the corrected calibration parameters until the error between the two reaches a certain range, at which point the correction of the camera calibration information is considered complete. Note that both the high-precision front-end processing of depth information and the subsequent gradual correction process may be performed offline, in order to obtain accurate depth information without affecting the overall performance (e.g., frame rate) of the camera.
It should be noted that the embodiment of the application can correct the calibration parameters not only by using depth information, but also by comparing other processing results of the image; for example, the original image distortion parameters can be corrected by comparing image distortion processing results. The calibration parameters of the camera tend to drift as the service time increases, and reversely adjusting the parameters according to the comparison of two different processing modes finally yields calibration parameters better suited to the camera's current state.
In practical application, the calibration parameter correction process can be performed once each time the camera is opened, or a schedule can be set so that calibration is performed once at regular intervals (for example, every few days or months), so as to save system resources and reduce the impact on camera use.
It should be noted that the correction method provided by the embodiment of the present application may also be applied to parameter correction of a monocular or multi-view camera, and is not limited to a binocular camera.
In order to implement the method of the embodiment of the present application, the embodiment of the present application further provides a device for correcting camera parameters based on the same inventive concept, as shown in fig. 10, where the device includes:
an acquiring unit 1001, configured to acquire a target image acquired by a camera;
A processing unit 1002, configured to perform a specific image processing on the target image using a first processing model to obtain a first processing result of the target image;
A judging unit 1003 configured to judge whether a preset first correction condition is satisfied, based on the first processing result and the second processing result;
and a correction unit 1004, configured to determine that the first correction condition is satisfied, and correct the calibration parameter of the camera based on the first processing result.
In some embodiments, the specific image processing of the first processing model is not related to the calibration parameters of the camera, while the specific image processing of the second processing model is related to the calibration parameters of the camera.
In some embodiments, the first processing model is a neural network model built based on a neural network algorithm.
In some embodiments, the processing unit 1002 is further configured to configure a forward input parameter when the second processing model performs a forward operation.
In some embodiments, the processing unit 1002 is further configured to obtain calibration parameters of the camera, and take the calibration parameters as forward input parameters of the second processing model.
In some embodiments, the correction unit 1004 is further configured to obtain a first correction parameter of the camera based on the first processing result;
the processing unit 1002 is further configured to re-perform a first correction determination operation based on the first correction parameter until it is determined that the first correction condition is not satisfied;
the correcting unit 1004 is further configured to take a first correction parameter corresponding to the first correction condition not satisfied as a second correction parameter, and correct the calibration parameter by using the second correction parameter.
In some embodiments, the correction unit 1004 is specifically configured to take the first processing result as an inverted input parameter of the second processing model, and perform an inverted operation on the second processing model based on the inverted input parameter, and output a first correction parameter in an inverted manner.
In some embodiments, the processing unit 1002 is specifically configured to take the first correction parameter as a forward input parameter of the second processing model when performing the first correction determining operation again based on the first correction parameter.
In some embodiments, the correction unit 1004 is specifically configured to perform a second correction determining operation based on the second correction parameter until it is determined that the second correction condition is not satisfied, take, as a third correction parameter, a second correction parameter corresponding to the second correction condition that is not satisfied, and correct the calibration parameter using the third correction parameter.
In some embodiments, the correction unit 1004 is further configured to obtain a standard image and a corresponding standard processing result when performing the second correction determination operation based on the second correction parameter, use the second correction parameter as a forward input parameter of the second processing model, and perform a specific image processing on the standard image by using the second processing model to obtain a third processing result of the standard image;
a correction unit 1004, configured to determine whether the second correction condition is satisfied based on the third processing result and the standard processing result;
The correction unit 1004 is further configured to determine that the second correction condition is satisfied, and re-perform the first correction determination operation and the second correction determination operation based on the second correction parameter until it is determined that the second correction condition is not satisfied.
In some embodiments, the correction unit 1004 is specifically configured to determine a second error value between the third processing result and the standard processing result, determine that the second correction condition is satisfied when the second error value is greater than or equal to a second error threshold, and determine that the second correction condition is not satisfied when the second error value is less than the second error threshold.
In some embodiments, the correction unit 1004 is specifically configured to replace the calibration parameter with the third correction parameter.
In some embodiments, the obtaining unit 1001 is further configured to obtain, when the first correction determination operation is performed again, a target image acquired again by the camera;
The processing unit 1002 is further configured to perform specific image processing on the re-acquired target image by using the first processing model and the second processing model, respectively.
In some embodiments, the processing unit 1002 is specifically configured to perform a specific image processing on the target image using a first processing model to obtain an initial processing result of the target image, obtain a first processing result of a related image, where the related image has a correlation with the target image, and correct the initial processing result using the first processing result of the related image to obtain the first processing result of the target image.
In some embodiments, the determining unit 1003 is specifically configured to determine a first error value between the first processing result and the second processing result, determine that the first correction condition is met when the first error value is greater than or equal to a first error threshold, and determine that the first correction condition is not met when the first error value is less than the first error threshold.
In some embodiments, the specific image processing includes at least one of image depth information calculation, image blurring processing, image noise reduction processing, and image correction processing.
In some embodiments, when the specific image processing is image depth information calculation, the first processing result and the second processing result are depth information of the image;
The judging unit 1003 is specifically configured to perform blurring processing on the target image based on the first processing result to obtain a first blurring processing result, perform blurring processing on the target image based on the second processing result to obtain a second blurring processing result, and determine a first error value between the first blurring processing result and the second blurring processing result.
In some embodiments, the apparatus further comprises a storage unit for pre-buffering the target image;
The obtaining unit 1001 is specifically configured to obtain the cached target image in the idle state of the camera, so as to perform calibration parameter correction operation.
In some embodiments, the calibration parameters include at least one of a position parameter between lenses, a camera intrinsic parameter, a camera extrinsic parameter, a camera distortion parameter.
Based on the hardware implementation of the units in the above apparatus, the embodiment of the present application further provides an electronic device, as shown in fig. 11, where the electronic device includes a processor 1101 and a memory 1102 configured to store a computer program capable of running on the processor;
Wherein the processor 1101 is configured to execute the method steps of the previous embodiments when running a computer program.
Of course, in actual practice, the various components of the electronic device are coupled together via a bus system 1103 as shown in FIG. 11. It is appreciated that the bus system 1103 serves to facilitate communications between these components. In addition to the data bus, the bus system 1103 includes a power bus, a control bus, and a status signal bus; for clarity of illustration, however, the various buses are labeled as the bus system 1103 in fig. 11.
In practical applications, the processor may be at least one of an Application Specific Integrated Circuit (ASIC), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field-Programmable Gate Array (FPGA), a controller, a microcontroller, and a microprocessor. It will be appreciated that for different devices the electronics implementing the above processor functions may be other components; embodiments of the present application are not particularly limited.
The memory may be a volatile memory, such as a random access memory (RAM); or a non-volatile memory, such as a read-only memory (ROM), a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); or a combination of the above types of memory, and provides instructions and data to the processor.
By adding the first processing model to the camera and using the image processing result of the first processing model as an evaluation standard for the camera's inherent second processing model, when the first processing result and the second processing result satisfy the correction condition (for example, when the difference between the two is large), the calibration parameters of the camera are corrected using the more accurate first processing result, and image processing is performed using the corrected parameters, so that the image processing effect of the camera can be improved.
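The overall loop described above can be sketched as follows. This is a scalar toy under stated assumptions: `model_first`, `model_second`, `refine`, and the simple absolute-difference condition are illustrative stand-ins for the patent's models, reverse operation, and first correction condition.

```python
def correct_calibration(target, model_first, model_second, params, refine, threshold):
    """Iterative sketch: the calibration-independent first model's result is the
    evaluation standard for the calibration-dependent second model; parameters
    are refined until the correction condition is no longer met."""
    ref = model_first(target)                # first processing result (reference)
    while True:
        out = model_second(target, params)   # second processing result
        if abs(out - ref) < threshold:       # first correction condition not met
            return params                    # becomes the second correction parameter
        params = refine(params, ref)         # obtain a first correction parameter
```

Each iteration mirrors the claimed flow: judge the condition, derive a new first correction parameter, and repeat until the condition fails, at which point the surviving parameter is used for the actual correction.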
In an exemplary embodiment, the application also provides a computer-readable storage medium, for example a memory comprising a computer program executable by a processor of an electronic device for performing the steps of the aforementioned method.
It is to be understood that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items. The terms "having," "including," and "containing," or "may include" and "may contain," as used herein indicate the presence of a corresponding feature (e.g., an element such as a numerical value, function, operation, or component) but do not exclude the presence of additional features.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, these information should not be limited by these terms. These terms are only used to distinguish one type of information from another and do not necessarily describe a particular order or sequence. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the application.
The technical solutions described in the embodiments of the present application may be combined arbitrarily, provided that no conflict arises.
In the several embodiments provided in the present application, it should be understood that the disclosed method, apparatus, and device may be implemented in other manners. The embodiments described above are merely illustrative; for example, the division of units is merely a logical functional division, and there may be other divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the coupling or direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or units may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiments.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist separately, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware or in the form of hardware plus software functional units.
The foregoing is merely a specific implementation of the present application, but the protection scope of the present application is not limited thereto; any variation or substitution readily conceivable by a person skilled in the art within the technical scope disclosed in the present application shall fall within the protection scope of the present application.

Claims (20)

1. A method for correcting camera parameters, the method comprising:
acquiring a target image acquired by a camera;
performing image processing on the target image by using a first processing model to obtain a first processing result of the target image, wherein the image processing process of the first processing model is irrelevant to the calibration parameters of the camera;
performing image processing on the target image by using a second processing model to obtain a second processing result of the target image, wherein the image processing process of the second processing model is related to the calibration parameters of the camera;
judging whether a preset first correction condition is met or not based on the first processing result and the second processing result;
determining that the first correction condition is met, and obtaining a first correction parameter of the camera based on the first processing result;
performing the first correction judgment operation again based on the first correction parameter until it is determined that the first correction condition is not met;
taking the first correction parameter corresponding to the case where the first correction condition is not met as a second correction parameter;
and correcting the calibration parameter by using the second correction parameter.
2. The method of claim 1, wherein the first processing model is a neural network model established based on a neural network algorithm.
3. The method of claim 1, wherein prior to image processing the target image using the second processing model, the method further comprises:
and configuring forward input parameters when the second processing model performs forward operation.
4. A method according to claim 3, characterized in that the method further comprises:
obtaining calibration parameters of the camera;
and taking the calibration parameter as a forward input parameter of the second processing model.
5. The method of claim 1, wherein the obtaining the first correction parameter of the camera based on the first processing result comprises:
taking the first processing result as a reverse input parameter of the second processing model;
and performing a reverse operation on the second processing model based on the reverse input parameter to reversely output the first correction parameter.
6. The method of claim 1, wherein the re-performing the first rework determination operation based on the first rework parameter comprises:
and taking the first correction parameter as a forward input parameter of the second processing model.
7. The method of claim 1, wherein said modifying said calibration parameter with said second modification parameter comprises:
performing a second correction judgment operation based on the second correction parameter until it is determined that a second correction condition is not satisfied;
taking the corresponding second correction parameter as a third correction parameter when the second correction condition is not satisfied;
and correcting the calibration parameter by using the third correction parameter.
8. The method of claim 7, wherein performing a second correction determination operation based on the second correction parameter comprises:
obtaining a standard image and a corresponding standard processing result;
taking the second correction parameter as a forward input parameter of the second processing model;
performing image processing on the standard image by using the second processing model to obtain a third processing result of the standard image;
judging whether the second correction condition is satisfied or not based on the third processing result and the standard processing result;
and when it is determined that the second correction condition is met, performing the first correction judgment operation and the second correction judgment operation again based on the second correction parameter until the second correction condition is not met.
9. The method of claim 8, wherein the determining whether the second correction condition is satisfied comprises:
determining a second error value between the third processing result and the standard processing result;
when the second error value is greater than or equal to a second error threshold, determining that the second correction condition is met;
and when the second error value is smaller than the second error threshold, determining that the second correction condition is not met.
10. The method of claim 7, wherein said modifying said calibration parameter with said third modification parameter comprises:
replacing the calibration parameter with the third correction parameter.
11. The method of claim 1, wherein, when the first correction judgment operation is performed again, the method further comprises:
acquiring a target image acquired by the camera again;
and performing image processing on the re-acquired target image by using the first processing model and the second processing model, respectively.
12. The method of claim 1, wherein the performing image processing on the target image using the first processing model to obtain a first processing result of the target image comprises:
performing image processing on the target image by using the first processing model to obtain an initial processing result of the target image;
acquiring a first processing result of a related image, wherein the related image has a correlation with the target image;
and correcting the initial processing result by using the first processing result of the related image to obtain the first processing result of the target image.
13. The method of claim 1, wherein determining whether the preset first correction condition is satisfied comprises:
determining a first error value between the first processing result and the second processing result;
when the first error value is greater than or equal to a first error threshold, determining that the first correction condition is met;
and when the first error value is smaller than the first error threshold, determining that the first correction condition is not met.
14. The method of claim 13, wherein the image processing comprises at least one of image depth information calculation, image blurring processing, image noise reduction processing, and image correction processing.
15. The method of claim 14, wherein when the image processing is image depth information calculation, the first processing result and the second processing result are depth information of an image;
the determining a first error value between the first processing result and the second processing result includes:
performing blurring processing on the target image based on the first processing result to obtain a first blurring processing result;
performing blurring processing on the target image based on the second processing result to obtain a second blurring processing result;
and determining a first error value between the first blurring processing result and the second blurring processing result.
16. The method according to claim 1, wherein the method further comprises:
pre-caching the target image;
the obtaining the target image acquired by the camera comprises the following steps:
and in the idle state of the camera, acquiring the cached target image so as to execute calibration parameter correction operation.
17. The method of claim 1, wherein the calibration parameters include at least one of a position parameter between lenses, a camera intrinsic parameter, a camera extrinsic parameter, and a camera distortion parameter.
18. A camera parameter correction apparatus, the apparatus comprising:
an acquisition unit, configured to acquire a target image acquired by a camera;
a processing unit, configured to perform image processing on the target image by using a first processing model to obtain a first processing result of the target image, and to perform image processing on the target image by using a second processing model to obtain a second processing result of the target image, wherein the image processing process of the first processing model is irrelevant to calibration parameters of the camera and the image processing process of the second processing model is related to the calibration parameters of the camera;
a judging unit, configured to judge whether a preset first correction condition is satisfied based on the first processing result and the second processing result;
a correction unit, configured to, when it is determined that the first correction condition is satisfied, obtain a first correction parameter of the camera based on the first processing result;
wherein the processing unit is further configured to perform the first correction judgment operation again based on the first correction parameter until it is determined that the first correction condition is not satisfied;
and the correction unit is further configured to take the first correction parameter corresponding to the first correction condition not being satisfied as a second correction parameter, and correct the calibration parameter by using the second correction parameter.
19. An electronic device comprising a processor and a memory configured to store a computer program capable of running on the processor,
wherein the processor is configured to perform the steps of the method of any of claims 1 to 17 when the computer program is run.
20. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 17.
CN202110016581.8A 2021-01-07 2021-01-07 Camera parameter correction method, device, electronic device and storage medium Active CN112819897B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110016581.8A CN112819897B (en) 2021-01-07 2021-01-07 Camera parameter correction method, device, electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110016581.8A CN112819897B (en) 2021-01-07 2021-01-07 Camera parameter correction method, device, electronic device and storage medium

Publications (2)

Publication Number Publication Date
CN112819897A CN112819897A (en) 2021-05-18
CN112819897B true CN112819897B (en) 2025-01-10

Family

ID=75858147

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110016581.8A Active CN112819897B (en) 2021-01-07 2021-01-07 Camera parameter correction method, device, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN112819897B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115862081A (en) * 2021-09-22 2023-03-28 Oppo广东移动通信有限公司 Image acquisition method, device, equipment and computer readable storage medium
CN114663532B (en) * 2022-03-31 2025-04-11 上海擎朗智能科技有限公司 Robot sensor calibration method, robot and computer-readable storage medium
JP2023173018A (en) * 2022-05-25 2023-12-07 セイコーエプソン株式会社 Method, system, and computer program for visualizing camera calibration status
CN119991823B (en) * 2025-01-08 2026-01-02 奇瑞汽车股份有限公司 Panoramic image calibration method and system based on algorithm fusion

Citations (1)

Publication number Priority date Publication date Assignee Title
CN109166152A (en) * 2018-07-27 2019-01-08 深圳六滴科技有限公司 Bearing calibration, system, computer equipment and the storage medium of panorama camera calibration

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
CN109215087B (en) * 2018-08-28 2021-04-27 维沃移动通信有限公司 Calibration method and device of double-camera module and terminal
CN110490938A (en) * 2019-08-05 2019-11-22 Oppo广东移动通信有限公司 For verifying the method, apparatus and electronic equipment of camera calibration parameter
CN111145271B (en) * 2019-12-30 2023-04-28 广东博智林机器人有限公司 Method, device, storage medium and terminal for determining accuracy of camera parameters
CN111681186A (en) * 2020-06-10 2020-09-18 创新奇智(北京)科技有限公司 Image processing method and device, electronic equipment and readable storage medium

Patent Citations (1)

Publication number Priority date Publication date Assignee Title
CN109166152A (en) * 2018-07-27 2019-01-08 深圳六滴科技有限公司 Bearing calibration, system, computer equipment and the storage medium of panorama camera calibration

Also Published As

Publication number Publication date
CN112819897A (en) 2021-05-18


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant