
CN116894875A - Camera pose compensation calculation method and device, storage medium, electronic device - Google Patents


Info

Publication number
CN116894875A
CN116894875A (application CN202310916736.2A)
Authority
CN
China
Prior art keywords
information matrix
image frame
image
tensor
deformation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310916736.2A
Other languages
Chinese (zh)
Other versions
CN116894875B (en)
Inventor
任祥云 (Ren Xiangyun)
罗毅 (Luo Yi)
康轶非 (Kang Yifei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Chang'an Technology Co ltd
Original Assignee
Chongqing Changan Automobile Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Changan Automobile Co Ltd
Priority to CN202310916736.2A
Publication of CN116894875A
Application granted
Publication of CN116894875B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/74: Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30244: Camera pose

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a camera pose compensation calculation method and device, a storage medium, and an electronic device, belonging to the field of image processing. The method comprises the following steps: acquiring a first image frame of a reference camera and a second image frame of a target camera; calculating a projection information matrix between the first image frame and the second image frame, calculating a deformation tensor between the two frames, and calculating an environmental distortion information matrix based on the deformation tensor; constructing an information matrix from the projection information matrix and the environmental distortion information matrix; and compensating and calculating the camera pose of the target camera using the information matrix. Embodiments of the invention solve the technical problem of low camera-pose precision in the related art, improve measurement precision in the field of image measurement, and improve vehicle inference and positioning precision.

Description

Compensation calculation method and device for camera pose, storage medium and electronic device
Technical Field
The invention relates to the field of image processing, in particular to a method and a device for compensating and calculating the pose of a camera, a storage medium and an electronic device.
Background
In the related art, with the rapid development of computer science theory and hardware, numerous artificial intelligence technologies and systems such as AlphaGo and ChatGPT have gradually entered the daily life of the public. Intelligent robots and unmanned vehicles are among the intelligent devices most relevant to everyday life and have attracted wide attention in today's society. Visual information is an important means by which such intelligent devices sense the environment, and is the data basis for operational control and intelligent analysis, particularly in the positioning, planning, and control of autonomous vehicles.
In the related art, the processing of visual information mainly falls into two categories: semantic features and feature-point features. The former obtains a semantic perception classification of the image through deep learning; the latter generally calculates the camera pose by structure-from-motion (SfM) techniques, combined with filtering theory, wheel-speed measuring equipment, GPS devices, and the like, to infer the vehicle position. However, although current means can achieve a vehicle positioning accuracy sufficient for control decisions, the lack of environmental information and of consideration for the perspective deformation caused by camera-angle changes makes it difficult to obtain high-precision camera pose and positioning results, which to some extent hinders the development of high-precision image measurement technology and higher-level autonomous driving.
In view of the above problems in the related art, an efficient and accurate solution has not been found.
Disclosure of Invention
The invention provides a camera pose compensation calculation method and device, a storage medium and an electronic device, and aims to solve the technical problems in the related art.
According to an embodiment of the present invention, there is provided a compensation calculation method for camera pose, including: acquiring a first image frame of a reference camera and a second image frame of a target camera; calculating a projection information matrix between the first image frame and the second image frame, calculating a deformation tensor between the two frames, and calculating an environmental distortion information matrix based on the deformation tensor; constructing an information matrix from the projection information matrix and the environmental distortion information matrix; and compensating and calculating the camera pose of the target camera using the information matrix.
Further, calculating the projection information matrix between the first image frame and the second image frame includes: projecting the first image frame into the Euler coordinate system and the second image frame into the Lagrangian coordinate system; calculating the baseline and the disparity between the reference camera and the target camera; and calculating the projection information matrix between the two frames using the baseline and the disparity.
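The patent does not publish its projection-information formula, but the standard pinhole stereo relations behind baseline and disparity can be sketched as follows. The first-order noise-propagation variance used here is a common surrogate for a projection uncertainty term and is an assumption, not the patent's actual expression; all function names are illustrative:

```python
import math

def depth_from_disparity(f_px: float, baseline_m: float, disparity_px: float) -> float:
    """Standard pinhole stereo relation: Z = f * B / d."""
    return f_px * baseline_m / disparity_px

def depth_variance(f_px: float, baseline_m: float, disparity_px: float,
                   sigma_d_px: float = 0.5) -> float:
    """First-order propagation of disparity noise into depth:
    sigma_Z = (Z^2 / (f * B)) * sigma_d.
    Returns sigma_Z^2, a common surrogate for the projection uncertainty
    entering an information matrix."""
    z = depth_from_disparity(f_px, baseline_m, disparity_px)
    sigma_z = (z * z) / (f_px * baseline_m) * sigma_d_px
    return sigma_z * sigma_z

# 700 px focal length, 12 cm baseline, 14 px disparity -> 6 m depth.
z = depth_from_disparity(700.0, 0.12, 14.0)
var = depth_variance(700.0, 0.12, 14.0)
```

Note how the variance grows with the square of depth: distant points contribute much weaker constraints, which is exactly what an information matrix is meant to encode.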
Further, calculating a deformation tensor between the first image frame and the second image frame includes: extracting a first feature set of the first image frame and extracting a second feature set of the second image frame; determining a predefined derivative axis, wherein the derivative axis comprises: a world coordinate system defined in the image, a sensing coordinate system after de-distortion, and a normalized coordinate system under the image; a deformation tensor between the first feature set and the second feature set is calculated based on the derivative axis.
Further, calculating the deformation tensor between the first feature set and the second feature set based on the derivative axes includes: differentiating the second feature set with respect to the variables of the image normalization coordinate system by the chain rule to obtain a first intermediate value; differentiating the first intermediate value with respect to the variables of the de-distorted sensing coordinate system to obtain a second intermediate value; differentiating the second intermediate value with respect to the variables of the image normalization coordinate system to obtain a third intermediate value; differentiating the third intermediate value with respect to the variables of the image world coordinate system to obtain a first derivative value; differentiating the first feature set, in sequence, with respect to the variables of the image world coordinate system, the image normalization coordinate system, the de-distorted sensing coordinate system, and the image normalization coordinate system to obtain a second derivative value; composing the first derivative value with the second derivative value to obtain a composite result; and performing a polar decomposition on the composite result to obtain the deformation tensor.
Further, calculating the environmental distortion information matrix based on the deformation tensor includes: judging whether the deformation tensor equals the identity matrix; if it does not, analyzing the characteristic directions of the feature points in the second image frame; constructing a strain tensor based on the characteristic directions; performing principal strain analysis on the strain tensor and removing the tangential (shear) strain to obtain a principal strain tensor matrix, in which positive values represent tensile deformation and negative values represent compressive deformation; selecting an environmental deformation correction coefficient and using it as a Poisson ratio; calculating a proportionality coefficient of the principal strain tensor matrix based on the Poisson ratio and the deformation tensor; counting the position fluctuation values of the center point of the second image frame under a plurality of response conditions and constructing a variance from them; and calculating the environmental distortion information matrix from the variance, the proportionality coefficient, and the principal strain tensor matrix.
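In two dimensions, the principal strain analysis above (removing tangential strain so that positive entries indicate tension and negative ones compression) reduces to an eigendecomposition of the symmetric strain tensor. A minimal sketch, assuming a 2-D strain state; the closed form below is standard continuum mechanics, not the patent's own derivation:

```python
import math

def principal_strains(exx: float, eyy: float, exy: float) -> tuple:
    """Principal strains of a 2-D strain tensor [[exx, exy], [exy, eyy]]:
    the eigenvalues of the symmetric matrix. Rotating into the principal
    frame removes the tangential (shear) component, leaving a diagonal
    matrix whose positive entries indicate stretching and negative entries
    compression."""
    mean = 0.5 * (exx + eyy)
    r = math.hypot(0.5 * (exx - eyy), exy)  # radius of Mohr's circle
    return mean + r, mean - r

# Tension along one principal axis, compression along the other.
e1, e2 = principal_strains(0.02, -0.01, 0.005)
```

The eigenvalues preserve the trace and determinant of the original tensor, which is a quick sanity check that the shear removal changed only the basis, not the deformation content.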
Further, calculating the environmental distortion information matrix from the variance, the scaling factor, and the principal strain tensor matrix includes: calculating the environmental distortion information matrix as a function of the proportionality coefficient V, the variance, and the principal strain tensor matrix ε², where T denotes the transpose.
Further, selecting the environmental deformation correction coefficient includes: acquiring weather information of an environment where the target camera is located; and selecting an environmental deformation correction coefficient matched with the meteorological information.
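Selecting the correction coefficient from weather information can be as simple as a lookup table keyed by the reported condition. The patent does not disclose its coefficient values, so the Poisson-ratio numbers below are purely illustrative placeholders:

```python
# Hypothetical lookup table: the patent does not publish its coefficients,
# so these Poisson-ratio values are illustrative placeholders only.
WEATHER_POISSON = {
    "clear": 0.0,   # negligible atmospheric distortion
    "rain":  0.15,
    "fog":   0.25,
    "snow":  0.30,
}

def select_correction_coefficient(weather: str) -> float:
    """Pick the environmental deformation correction coefficient (used as
    a Poisson ratio) matching the reported weather condition; unknown
    conditions default to no correction."""
    return WEATHER_POISSON.get(weather, 0.0)
```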
Further, after constructing an information matrix using the projection information matrix and the environment distortion information matrix, the method further comprises: acquiring imaging quality parameters of the target camera; judging whether the imaging quality parameter is smaller than a preset threshold value or not; if the imaging quality parameter is smaller than a preset threshold value, generating a subpixel interpolation information matrix of the target camera; and compensating the subpixel interpolation information matrix in the information matrix, and updating the information matrix.
Further, calculating the camera pose of the target camera with compensation by the information matrix includes: in the current iteration period of the Gauss-Newton iteration process of the target camera, calculating the camera pose increment Δx of the current iteration period by the following formula: Δx = -(Jᵀ(σ²)⁻¹J)⁻¹Jᵀ(σ²)⁻¹u, where σ² is the information matrix, J is the Jacobian, u is the nonlinear (residual) function, and T denotes the transpose.
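For intuition, the weighted Gauss-Newton update above can be written out for a single pose parameter with a diagonal information term, so plain Python suffices. The patent's σ² is a full matrix; this scalar-weight sketch is a simplification, and the names are illustrative:

```python
def gauss_newton_step(J, u, sigma2):
    """One weighted Gauss-Newton update for a single pose parameter:
    dx = -(J^T W J)^-1 J^T W u, with W = diag(1 / sigma2),
    where sigma2 plays the role of the (diagonal) information/covariance
    term weighting each residual. J, u, sigma2 are equal-length lists of
    Jacobian entries, residuals, and per-residual variances."""
    jtwj = sum(j * j / s for j, s in zip(J, sigma2))
    jtwu = sum(j * r / s for j, r, s in zip(J, u, sigma2))
    return -jtwu / jtwj

# Two residuals; the first, with the smaller variance, dominates the step.
dx = gauss_newton_step([1.0, 2.0], [0.5, -0.2], [0.25, 1.0])
```

Down-weighting residuals by their variance is precisely how the projection and environmental-distortion terms of the information matrix influence the final pose estimate.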
According to another embodiment of the present invention, there is provided a compensation calculating apparatus for camera pose, including: the acquisition module is used for acquiring a first image frame of the reference camera and a second image frame of the target camera; a first calculation module for calculating a projection information matrix between the first image frame and the second image frame, and calculating a deformation tensor between the first image frame and the second image frame, and calculating an environmental distortion information matrix based on the deformation tensor; the construction module is used for constructing an information matrix by adopting the projection information matrix and the environment distortion information matrix; and the second calculation module is used for calculating the camera pose of the target camera by adopting the information matrix compensation.
Further, the first computing module includes: a projection unit, configured to project the first image frame to an euler coordinate system, and project the second image frame to a lagrangian coordinate system; a first calculation unit for calculating a baseline and a parallax between the reference camera and the target camera; a second calculation unit for calculating a projection information matrix between the first image frame and the second image frame using the baseline and the parallax.
Further, the first computing module includes: an extraction unit for extracting a first feature set of the first image frame and extracting a second feature set of the second image frame; a determining unit, configured to determine a predefined derivative axis, where the derivative axis includes: a world coordinate system defined in the image, a sensing coordinate system after de-distortion, and a normalized coordinate system under the image; a third calculation unit for calculating a deformation tensor between the first feature set and the second feature set based on the derivative axis.
Further, the third calculation unit is further configured to: differentiate the second feature set with respect to the variables of the image normalization coordinate system by the chain rule to obtain a first intermediate value; differentiate the first intermediate value with respect to the variables of the de-distorted sensing coordinate system to obtain a second intermediate value; differentiate the second intermediate value with respect to the variables of the image normalization coordinate system to obtain a third intermediate value; differentiate the third intermediate value with respect to the variables of the image world coordinate system to obtain a first derivative value; differentiate the first feature set, in sequence, with respect to the variables of the image world coordinate system, the image normalization coordinate system, the de-distorted sensing coordinate system, and the image normalization coordinate system to obtain a second derivative value; compose the first derivative value with the second derivative value to obtain a composite result; and perform a polar decomposition on the composite result to obtain the deformation tensor.
Further, the first calculation module includes: a judging unit configured to judge whether the deformation tensor equals the identity matrix; an analysis unit configured to analyze, if the deformation tensor does not equal the identity matrix, the characteristic directions of the feature points in the second image frame; a construction unit configured to construct a strain tensor based on the characteristic directions; a rejecting unit configured to perform principal strain analysis on the strain tensor and remove the tangential (shear) strain to obtain a principal strain tensor matrix, in which positive values represent tensile deformation and negative values represent compressive deformation; a selecting unit configured to select an environmental deformation correction coefficient and use it as a Poisson ratio; a fourth calculation unit configured to calculate a proportionality coefficient of the principal strain tensor matrix based on the Poisson ratio and the deformation tensor, count the position fluctuation values of the center point of the second image frame under a plurality of response conditions, and construct a variance from them; and a fifth calculation unit configured to calculate the environmental distortion information matrix from the variance, the proportionality coefficient, and the principal strain tensor matrix.
Further, the fifth calculation unit includes: a calculation subunit configured to calculate the environmental distortion information matrix as a function of the proportionality coefficient V, the variance, and the principal strain tensor matrix ε², where T denotes the transpose.
Further, the selecting unit includes: the acquisition subunit is used for acquiring weather information of the environment where the target camera is located; and the selecting subunit is used for selecting the environmental deformation correction coefficient matched with the meteorological information.
Further, the method further comprises: the acquisition module is used for acquiring imaging quality parameters of the target camera after the construction module adopts the projection information matrix and the environment distortion information matrix to construct an information matrix; the judging module is used for judging whether the imaging quality parameter is smaller than a preset threshold value or not; the generation module is used for generating a subpixel interpolation information matrix of the target camera if the imaging quality parameter is smaller than a preset threshold value; and the updating module is used for compensating the subpixel interpolation information matrix in the information matrix and updating the information matrix.
Further, the second calculation module includes: a calculation subunit configured to, in the current iteration period of the Gauss-Newton iteration process of the target camera, calculate the camera pose increment Δx of the current iteration period by the following formula: Δx = -(Jᵀ(σ²)⁻¹J)⁻¹Jᵀ(σ²)⁻¹u, where σ² is the information matrix, J is the Jacobian, u is the nonlinear (residual) function, and T denotes the transpose.
According to another aspect of the embodiments of the present application, there is also provided a storage medium including a stored program that performs the above steps when running.
According to another aspect of the embodiment of the present application, there is also provided an electronic device including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory complete communication with each other through the communication bus; wherein: a memory for storing a computer program; and a processor for executing the steps of the method by running a program stored on the memory.
Embodiments of the present application also provide a computer program product comprising instructions which, when run on a computer, cause the computer to perform the steps of the above method.
The application has the beneficial effects that:
1. According to the application, the projection information matrix and the environmental distortion information matrix between two image frames of the camera are calculated, and the camera pose of the target camera is compensated and calculated accordingly, so that the perspective deformation caused by camera-angle changes and the distortion caused by environmental information are compensated and the precision of the camera pose is improved. This solves the technical problem of low camera-pose precision in the related art, improves measurement precision in the image measurement field, and improves vehicle inference and positioning precision.
2. According to the application, the environmental deformation correction coefficient is adjusted and a shear-deformation correction is added, so that harsh test-environment conditions are compensated for and weather-induced distortion information is corrected, preventing over-fitted correction.
3. The application further compensates with a sub-pixel interpolation information matrix, considering the degradation of the information matrix caused by sub-pixel interpolation, to form the final information matrix and compensate for the influence of low camera resolution.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
FIG. 1 is a block diagram of a hardware configuration of a vehicle according to an embodiment of the present application;
FIG. 2 is a flowchart of a method for calculating compensation of camera pose according to an embodiment of the present application;
FIG. 3 is a schematic diagram of deriving a coordinate system in an embodiment of the application;
FIG. 4 is a solution schematic of a principal strain tensor matrix according to an embodiment of the application;
FIG. 5 is a flow chart of one implementation of an embodiment of the present application;
fig. 6 is a block diagram of a camera pose compensation calculating device according to an embodiment of the present application.
Detailed Description
In order that those skilled in the art will better understand the present application, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments of the present application without inventive effort shall fall within the scope of the present application. It should be noted that, in the absence of conflict, the embodiments of the present application and the features of the embodiments may be combined with each other.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
The method embodiment provided in the first embodiment of the application can be executed in a vehicle, a vehicle-mounted controller, a camera, or a similar processing device. Taking execution in a vehicle as an example, fig. 1 is a block diagram of the hardware configuration of a vehicle according to an embodiment of the present application. As shown in fig. 1, the vehicle may include one or more processors 102 (only one is shown in fig. 1; the processor 102 may include, but is not limited to, a microprocessor (MCU), a programmable logic device (FPGA), or another processing device) and a memory 104 for storing data, and optionally a transmission device 106 for communication functions and an input-output device 108. It will be appreciated by those skilled in the art that the configuration shown in fig. 1 is merely illustrative and does not limit the configuration of the vehicle described above; for example, the vehicle may include more or fewer components than shown in fig. 1, or have a different configuration.
The memory 104 may be used to store a vehicle program, for example, a software program of application software and a module, such as a vehicle program corresponding to a method for calculating the compensation of the pose of a camera in the embodiment of the present application, and the processor 102 executes the vehicle program stored in the memory 104, thereby performing various functional applications and data processing, that is, implementing the method described above. Memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory remotely located with respect to the processor 102, which may be connected to the vehicle via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the vehicle. In one example, the transmission device 106 includes a network adapter (Network Interface Controller, simply referred to as NIC) that can connect to other network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is configured to communicate with the internet wirelessly.
In this embodiment, a method for compensating the calculation of camera pose is provided. Fig. 2 is a flowchart of a camera pose compensation calculation method according to an embodiment of the present invention; as shown in fig. 2, the flow includes the following steps:
step S202, acquiring a first image frame of a reference camera and a second image frame of a target camera;
The reference camera and the target camera of this embodiment may be the same camera, for example a camera mounted on a vehicle, and the first image frame and the second image frame may be temporally consecutive image frames acquired by that camera, or discontinuous image frames.
Step S204, calculating a projection information matrix between the first image frame and the second image frame, calculating a deformation tensor between the first image frame and the second image frame, and calculating an environment distortion information matrix based on the deformation tensor;
The physical deformation described in this embodiment is the deformation of the same feature set between two different frames, that is, the difference obtained through the camera calibration parameters and the standard projection model. In this embodiment, all corresponding mechanical deformation parameters map to the associated image deformation conditions; the description of the mechanical parameters here is in fact a representation of the image features.
S206, constructing an information matrix by adopting a projection information matrix and an environment distortion information matrix;
Optionally, the information matrix σ² can be composed of the environmental distortion information matrix, which is based on the deformation tensor and elastic-deformation distortion information, and the projection information matrix, combined linearly with 1:1 weights to form σ².
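A sketch of the 1:1 linear combination described above, assuming both information matrices are expressed element-wise on the same 2x2 pixel-coordinate basis (the dimensions and names are illustrative; the patent does not fix them here):

```python
def combine_information(sigma_env, sigma_proj, w_env=1.0, w_proj=1.0):
    """Linear combination of the environmental-distortion and projection
    information matrices, defaulting to the 1:1 weights described in the
    text. Matrices are plain nested lists of equal shape; the combination
    is element-wise."""
    return [[w_env * a + w_proj * b for a, b in zip(ra, rb)]
            for ra, rb in zip(sigma_env, sigma_proj)]

sigma2 = combine_information([[0.10, 0.00], [0.00, 0.10]],
                             [[0.05, 0.01], [0.01, 0.05]])
```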
Step S208, the information matrix is adopted to compensate and calculate the camera pose of the target camera.
Through the above steps, a first image frame of the reference camera and a second image frame of the target camera are acquired; a projection information matrix and a deformation tensor between the two frames are calculated; an environmental distortion information matrix is calculated from the deformation tensor; the projection information matrix and the environmental distortion information matrix are combined into an information matrix; and the camera pose of the target camera is compensated and calculated using that information matrix. By compensating both the perspective deformation caused by camera-angle changes and the distortion caused by environmental information, the precision of the camera pose is improved. This solves the technical problem of low camera-pose precision in the related art, improves measurement precision in the image measurement field, and improves vehicle inference and positioning precision.
In an example of the present embodiment, calculating the projection information matrix between the first image frame and the second image frame includes: projecting the first image frame to an Euler coordinate system, and projecting the second image frame to a Lagrangian coordinate system; calculating a baseline and a parallax between the reference camera and the target camera; a projection information matrix between the first image frame and the second image frame is calculated using the baseline and the disparity.
The Lagrangian coordinate system of this embodiment is a co-moving (material) coordinate system. A reference camera frame image A and an arbitrary target camera frame image B are determined; the Euler coordinate system is defined on the reference camera image A and the Lagrangian co-moving coordinate system on the arbitrary image B, and the projection information matrix is calculated from the baseline, the disparity, and the like according to the conventional projection relation.
In one implementation of the present embodiment, calculating the deformation tensor between the first image frame and the second image frame includes: extracting a first feature set of the first image frame and extracting a second feature set of the second image frame; determining a predefined derivative axis, wherein the derivative axis comprises: a world coordinate system defined in the image, a sensing coordinate system after de-distortion, and a normalized coordinate system under the image; a deformation tensor between the first feature set and the second feature set is calculated based on the derivative axis.
In one example, feature point information in each image is obtained using SURF (Speeded-Up Robust Features) or another feature extraction method.
The deformation tensor of the feature points between two frames is then calculated according to the defined relation (the derivative axes) of the image projection axes. Meanwhile, the deformation hypothesis assumes that the same feature point is rigid across multiple frames and should exhibit no deformation; therefore, any non-identity deformation tensor matrix obtained by the calculation is a combined representation of the environmental information and the perspective deformation information not otherwise accounted for.
Optionally, calculating the deformation tensor between the first feature set and the second feature set based on the derivative axis system comprises: differentiating the variables in the image normalized coordinate system with the second feature set according to the chain rule to obtain a first intermediate value; differentiating the variables in the image de-distortion sensing coordinate system with the first intermediate value according to the chain rule to obtain a second intermediate value; differentiating the variables in the image normalized coordinate system with the second intermediate value according to the chain rule to obtain a third intermediate value; differentiating the variables in the image world coordinate system with the third intermediate value according to the chain rule to obtain a first derivative value; differentiating the variables in the image world coordinate system, the image normalized coordinate system, the image de-distortion sensing coordinate system, and the image normalized coordinate system in sequence with the first feature set according to the chain rule to obtain a second derivative value; compounding the first derivative value and the second derivative value to obtain a composite result; and performing polar decomposition on the composite result to obtain the deformation tensor.
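The repeated chain-rule differentiation amounts to multiplying the Jacobians of the successive coordinate mappings. A minimal numeric sketch, with simple stand-in maps in place of the embodiment's actual normalization and de-distortion models:

```python
import numpy as np

# Hedged sketch: the chain-rule composition amounts to multiplying the
# Jacobians of each coordinate-system mapping. The maps g and h below are
# stand-ins, not the patent's actual imaging models.
def jac(f, x, eps=1e-6):
    """Central-difference numerical Jacobian of f at x."""
    n = len(x)
    J = np.zeros((len(f(x)), n))
    for i in range(n):
        dx = np.zeros(n)
        dx[i] = eps
        J[:, i] = (f(x + dx) - f(x - dx)) / (2 * eps)
    return J

g = lambda p: p / p[1]                   # stand-in "normalize" map
h = lambda p: p * (1 + 0.1 * p.sum())    # stand-in "de-distortion" map
x = np.array([0.3, 1.5])

# Chain rule: d(h∘g)/dx = Jh(g(x)) @ Jg(x)
J_chain = jac(h, g(x)) @ jac(g, x)
J_direct = jac(lambda p: h(g(p)), x)     # should agree with the composition
```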
In this embodiment, the deformation tensor is solved as a conversion between the Euler coordinate system and the Lagrangian coordinate system. Because the projection relationship of the same object or feature across multiple frames is a process of forward projection followed by back-projection, involving the defining axes of several imaging models in between, the solution is in practice a process of repeated differentiation and composition over these axes. The derivative axis system comprises: the world coordinate system defined in the image, the de-distorted sensing coordinate system, and the normalized coordinate system under the image. The order of the composition is: solve for image B first, then for image A.
The deformation tensor of this embodiment describes the deformation of the material itself after the rigid-body motion has been removed; in an image, it represents, under identical parameters, the difference of the same feature set (e.g. a person or a vehicle in the image) between different projection frames. FIG. 3 is a schematic diagram of the differentiation coordinate systems in an embodiment of the invention, which defines those coordinate systems. The solution steps are: first differentiate the feature set u on the B-frame image with respect to the image normalized coordinate system; differentiate the resulting value with respect to the image de-distortion sensing coordinate system; differentiate that value with respect to the image normalized coordinate system; and finally differentiate the result with respect to the image world coordinate system. After the differentiation under the B frame is finished, the same set under the A frame is composed by inverse-rule differentiation according to the composition principle. Finally, polar decomposition is performed on the composite result to obtain the image deformation tensor.
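The final polar-decomposition step can be sketched via the SVD, which splits a deformation gradient F into a rotation R and a symmetric stretch U with F = R·U; the matrix F below is an arbitrary illustrative gradient, not data from this embodiment:

```python
import numpy as np

# Minimal polar decomposition F = R @ U via SVD, where R is the closest
# rotation to F and U = sqrt(F^T F) is the symmetric stretch tensor.
def polar(F):
    W, s, Vt = np.linalg.svd(F)
    R = W @ Vt                       # rotation part
    U = Vt.T @ np.diag(s) @ Vt       # stretch part, U = sqrt(F^T F)
    return R, U

F = np.array([[1.1, 0.05], [0.02, 0.95]])  # illustrative deformation gradient
R, U = polar(F)
```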
In one implementation of the present embodiment, computing the environmental distortion information matrix based on the deformation tensor includes: judging whether the deformation tensor is equal to the identity matrix; if the deformation tensor is not equal to the identity matrix, analyzing the feature directions of the feature points in the second image frame; constructing a strain tensor based on the feature directions; performing principal strain analysis on the strain tensor and removing tangential strain to obtain a principal strain tensor matrix, wherein positive values in the principal strain tensor matrix represent tensile deformation and negative values represent compressive deformation; selecting an environmental deformation correction coefficient and taking it as a Poisson ratio; calculating a proportionality coefficient of the principal strain tensor matrix based on the Poisson ratio and the deformation tensor, counting the position fluctuation values of the center point of the second image frame under a plurality of response conditions, and constructing a variance from the plurality of position fluctuation values; and calculating the environmental distortion information matrix from the variance, the proportionality coefficient, and the principal strain tensor matrix.
When the above embodiment analyzes the feature directions of the feature points in the second image frame, the number of directions of each feature is determined by the attributes of the feature itself: a feature may have a one-dimensional or a two-dimensional feature direction. For example, corner features are two-dimensional, actually formed from several directions, and the dimensionality is determined by the type of feature used.
The direction of each feature i in the image is considered, and a corresponding "strain tensor" is constructed. Principal strain analysis is performed on the obtained "strain tensor", and the principal strain tensor matrix is obtained through a unitary transformation U, where the sign of each value represents a different form of deformation: positive values indicate tensile deformation and negative values indicate compressive deformation. As in mechanical analysis, to find the principal directions, tensor analysis is performed on the "strain tensor" and the principal directions of influence are obtained by diagonalization with U.
In this embodiment, the constructed "strain tensor" is physically the squared term of the strain. The principal strain analysis performed on the strain removes the tangential strain and converts it into simple elastic deformation of unidirectional stretching and compression. Since the strain is in fact a ratio of deformed to original length, it can be taken as the corresponding proportionality-coefficient term of the information matrix.
FIG. 4 is a solution schematic of the principal strain tensor matrix according to an embodiment of the invention: dx represents the vector connecting points P and Q, and dx' the vector connecting points P' and Q'; n is the unit vector of PQ, and n' the unit vector of P'Q'; F is the deformation gradient; ε² is the corresponding quadratic term of the deformation tensor. The figure shows the basic principle of solving the strain matrix. It can be seen that, since the strain is a ratio of the computed to the original length, it can be assigned to the information matrix as a contribution coefficient.
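A minimal numeric illustration of the principal strain analysis: diagonalizing C = FᵀF with an orthogonal transform yields principal stretches (ratios of deformed to original length), whose deviations from 1 are positive under tension and negative under compression. The deformation gradient below is illustrative only:

```python
import numpy as np

# Sketch of a principal-strain analysis on the squared-strain tensor:
# diagonalize C = F^T F with an orthogonal (real unitary) transform, then
# read off stretch (>0 strain) vs. compression (<0 strain) per principal axis.
F = np.array([[1.2, 0.1], [0.1, 0.9]])      # illustrative deformation gradient
C = F.T @ F

eigvals, V = np.linalg.eigh(C)              # C = V diag(eigvals) V^T
principal_stretch = np.sqrt(eigvals)        # new/original length per axis
principal_strain = principal_stretch - 1.0  # positive: tension, negative: compression
```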
Considering the elastic deformation hypothesis, a suitable deformation correction value is selected as a Poisson-ratio-like value μ, and its proportionality coefficient in the information matrix, namely the environmental deformation correction coefficient, is calculated. This mainly corrects distortion information caused by weather conditions, so as to prevent over-fitted correction.
Optionally, calculating the environmental distortion information matrix from the variance, the proportionality coefficient, and the principal strain tensor matrix comprises: calculating the environment distortion information matrix from the variance, the proportionality coefficient V, and the principal strain tensor matrix ε², where T denotes the transpose.
The position fluctuation of the center point under the response conditions is computed statistically, the variance is constructed, and the proportionality coefficient is substituted in, yielding an environmental distortion information matrix that accounts for environmental and distortion factors.
In one example based on the above embodiment, selecting the environmental deformation correction coefficient includes: acquiring weather information of an environment where a target camera is located; and selecting an environmental deformation correction coefficient matched with the weather information.
For the elastic assumption, the environmental deformation correction coefficient is selected; considering that the distortion of an image feature point should be relatively simple, μ = 0.99 can be chosen. If the imaging situation is complex, for instance in rainy or foggy weather, a suitable environmental deformation correction coefficient can be selected accordingly, and shear deformation correction can be added to compensate for harsher test environments. The environmental deformation correction coefficient characterizes the quality of the image by controlling the form of distortion, thereby further describing the environmental influence. The selection of the environmental deformation correction coefficient may be performed with a deep learning network based on a deep residual network (ResNet): images of the same scene on sunny, rainy, and foggy days are collected, the deviation of feature positions is taken as the learning target, and the network is trained to output the environmental deformation correction coefficients. Alternatively, a second-order deformation function can be constructed by the SSSIG method and the environmental deformation correction coefficient obtained by a matched calculation.
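A hedged sketch of the weather-matched selection: a simple lookup keyed by a weather label, defaulting to the μ = 0.99 value mentioned above. All labels and all values other than the 0.99 default are assumptions for illustration:

```python
# Hypothetical lookup: pick an environmental deformation correction
# coefficient (used as a Poisson-ratio-like value) from a weather label.
# Only mu = 0.99 for good conditions comes from the text; the rest is assumed.
MU_BY_WEATHER = {
    "clear": 0.99,   # good conditions: near-rigid assumption, per the text
    "rain":  0.90,   # degraded conditions: allow more correction (assumed)
    "fog":   0.85,   # assumed value
}

def select_mu(weather: str, default: float = 0.99) -> float:
    """Return the correction coefficient matching the weather information."""
    return MU_BY_WEATHER.get(weather, default)
```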
In one implementation scenario of the present embodiment, after the information matrix is constructed using the projection information matrix and the environment distortion information matrix, the method further includes: acquiring an imaging quality parameter of the target camera; judging whether the imaging quality parameter is smaller than a preset threshold; if so, generating a sub-pixel interpolation information matrix of the target camera; and compensating the sub-pixel interpolation information matrix into the information matrix and updating the information matrix.
If the information matrix difference caused by sub-pixel interpolation is also considered, the sub-pixel interpolation information matrix is compensated into the original information matrix to form the final information matrix σ².
The information matrix may be composed of the environment distortion information matrix, which is based on the deformation tensor and the elastic deformation assumption, and the projection information matrix. Where possible, the sub-pixel interpolation information matrix can be compensated in as well; a linear combination with 1:1:1 weights is generally preferred to constitute σ². Of course, depending on the camera and lens quality employed, linear combinations with different proportions may be selected; if the camera resolution is high enough, the effect of the interpolation term may even be ignored.
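The 1:1:1 linear combination described above can be sketched as follows; the three input matrices are placeholders, and the optional third term stands for the sub-pixel interpolation matrix:

```python
import numpy as np

# Sketch of the 1:1:1 linear combination of the information matrices
# described above; the matrices themselves are random-size placeholders.
def combine(sigma_env, sigma_proj, sigma_interp=None, weights=(1.0, 1.0, 1.0)):
    w1, w2, w3 = weights
    sigma = w1 * sigma_env + w2 * sigma_proj
    if sigma_interp is not None:         # compensate the sub-pixel term when needed
        sigma = sigma + w3 * sigma_interp
    return sigma

I = np.eye(3)
sigma2 = combine(0.4 * I, 0.5 * I, 0.1 * I)   # illustrative components
```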
In this embodiment, calculating the camera pose of the target camera using information matrix compensation includes: in the current iteration period of the target camera during the Gauss-Newton iteration, the camera pose increment Δx of the target camera in the current iteration period is calculated using the following formula: Δx = -(Jᵀ(σ²)⁻¹J)⁻¹Jᵀ(σ²)⁻¹u; wherein σ² is the information matrix, J is the Jacobian matrix, u is the nonlinear residual function, and T denotes the transpose.
According to the Gauss-Newton iteration step, the information matrix σ² is compensated into the iteratively updated value Δx and the normal matrix, respectively. By continually updating the iteration, a higher-precision camera pose can be obtained, from which high-precision vehicle positioning can be inferred. The information matrix term is supplemented in the iteration, the camera pose increment is solved according to the formula above, a high-precision camera pose is obtained, and the vehicle position or other measured values are inferred together with the results of the other sensors.
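The information-weighted Gauss-Newton step can be sketched on a toy linear residual u(x) = Ax - b, where a single step lands on the weighted least-squares solution; A, b, and the information matrix values below are illustrative placeholders:

```python
import numpy as np

# One information-weighted Gauss-Newton step,
#   dx = -(J^T (sigma^2)^-1 J)^-1 J^T (sigma^2)^-1 u,
# on a toy linear residual u(x) = A x - b, so one step is exact.
A = np.array([[2.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([2.0, 1.0, 2.0])
Sigma = np.diag([0.5, 1.0, 2.0])           # per-residual information matrix sigma^2

def gn_step(x):
    u = A @ x - b                           # residual at the current estimate
    J = A                                   # Jacobian of u w.r.t. x (linear case)
    W = np.linalg.inv(Sigma)                # weighting by (sigma^2)^-1
    return -np.linalg.inv(J.T @ W @ J) @ (J.T @ W @ u)

x = np.zeros(2)
x = x + gn_step(x)                          # lands on the weighted LS solution
```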
This embodiment provides a high-precision multi-frame single-camera pose estimation method, which improves measurement precision in the field of image measurement, improves vehicle inference and positioning precision in the field of unmanned driving, and lays a technical foundation for higher levels of autonomy. The embodiment is a multi-frame single-camera pose high-precision estimation method based on the deformation tensor and the elastic deformation hypothesis. The relation between the deformation tensor and the image deformation is constructed, the proportionality coefficient is determined through the elastic deformation hypothesis, and the information matrix is finally formed, so that the information matrix term is compensated into the iteration of the camera pose. The information matrix σ² is compensated into the iterative update value Δx and the normal matrix respectively, so that continually updating the iteration yields a higher-precision camera pose. Meanwhile, the coefficient determined from the elastic deformation assumption can, to some extent, represent the environmental conditions at the time the image was acquired. In general, assuming a good environment, the environmental deformation correction coefficient is set to 0.99. If conditions such as rain are encountered, the environmental deformation correction coefficient may be modified and shear deformation correction added to compensate for harsher test environment conditions.
Fig. 5 illustrates the overall flow of the multi-frame single-camera pose high-precision estimation method based on the deformation tensor and the elastic deformation hypothesis. The flow includes: determining the A and B frame images, and defining the Euler coordinate system; extracting features and solving the deformation tensor C of the features, with grad(χ)·dX = F and C = FᵀF, where grad denotes gradient computation; judging whether C equals the identity matrix I; if so, no environmental deformation needs to be compensated; if not, determining the directions of the feature points and constructing the strain matrix ε² = mᵀCm; solving the proportionality coefficient V, with ε² = VCVᵀ ∈ diag(R²) and V ∈ U(R^{2×2}), where diag denotes a diagonal matrix, U a unitary transformation, R the set of real numbers, and m the unit vector of the same feature point between the two frames; calculating the information matrix from these coefficients; and deciding whether interpolation is considered, compensating the sub-pixel interpolation information matrix into the information matrix if so, and leaving the information matrix as constructed if not.
The multi-frame single-camera pose high-precision estimation method based on the deformation tensor and the elastic deformation hypothesis combines the change of image features with its actual physical deformation meaning, analyzes the deformation and strain relations, converts the feature information between multiple frames in one pass, and constructs the information matrix that the traditional approach lacks. The compensation of the information matrix plays a decisive, optimizing role in the final Gauss-Newton iterative solution and can effectively improve camera pose estimation.
From the description of the above embodiments, it will be clear to a person skilled in the art that the method according to the above embodiments may be implemented by means of software plus the necessary general hardware platform, but of course also by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method according to the embodiments of the present invention.
Example 2
This embodiment also provides a device for camera pose compensation calculation, which is used to implement the above embodiments and preferred implementations; what has already been described will not be repeated. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. While the means described in the following embodiments are preferably implemented in software, implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
Fig. 6 is a block diagram of a device for calculating the compensation of camera pose according to an embodiment of the present invention, as shown in fig. 6, the device includes:
an acquisition module 60 for acquiring a first image frame of the reference camera and a second image frame of the target camera;
a first calculation module 62 for calculating a projection information matrix between the first image frame and the second image frame, and a deformation tensor between the first image frame and the second image frame, and calculating an environmental distortion information matrix based on the deformation tensor;
a construction module 64 for constructing an information matrix using the projection information matrix and the environment distortion information matrix;
a second calculation module 66, configured to calculate a camera pose of the target camera using the information matrix compensation.
Optionally, the first computing module includes: a projection unit, configured to project the first image frame to an euler coordinate system, and project the second image frame to a lagrangian coordinate system; a first calculation unit for calculating a baseline and a parallax between the reference camera and the target camera; a second calculation unit for calculating a projection information matrix between the first image frame and the second image frame using the baseline and the parallax.
Optionally, the first computing module includes: an extraction unit for extracting a first feature set of the first image frame and extracting a second feature set of the second image frame; a determining unit, configured to determine a predefined derivative axis, where the derivative axis includes: a world coordinate system defined in the image, a sensing coordinate system after de-distortion, and a normalized coordinate system under the image; a third calculation unit for calculating a deformation tensor between the first feature set and the second feature set based on the derivative axis.
Optionally, the third computing unit is further configured to: the second feature set is adopted to derive variables under an image normalization coordinate system according to a chain rule, and a first intermediate value is obtained; the first intermediate value is adopted to derive variables under the image de-distortion sensing coordinate system according to a chain rule, and a second intermediate value is obtained; the second intermediate value is adopted to derive variables under an image normalization coordinate system according to a chain rule, and a third intermediate value is obtained; adopting the third intermediate value to conduct derivation on variables under the image world coordinate system according to a chain rule to obtain a first derivation value; sequentially deriving variables under the image world coordinate system, the image normalization coordinate system, the image de-distortion sensing coordinate system and the image normalization coordinate system according to a chain rule by adopting the first feature set to obtain a second derivative value; conducting derivative compounding on the first derivative value and the second derivative value to obtain a compound result; and carrying out polar decomposition on the composite result to obtain a deformation tensor.
Optionally, the first computing module includes: a judging unit for judging whether the deformation tensor is equal to the identity matrix; an analysis unit for analyzing the feature directions of the feature points in the second image frame if the deformation tensor is not equal to the identity matrix; a construction unit configured to construct a strain tensor based on the feature directions; a removing unit for performing principal strain analysis on the strain tensor and removing tangential strain to obtain a principal strain tensor matrix, wherein positive values in the principal strain tensor matrix represent tensile deformation and negative values represent compressive deformation; a selecting unit for selecting an environmental deformation correction coefficient and taking it as a Poisson ratio; a fourth calculation unit configured to calculate a proportionality coefficient of the principal strain tensor matrix based on the Poisson ratio and the deformation tensor, count the position fluctuation values of the center point of the second image frame under a plurality of response conditions, and construct a variance from the plurality of position fluctuation values; and a fifth calculation unit for calculating the environmental distortion information matrix from the variance, the proportionality coefficient, and the principal strain tensor matrix.
Optionally, the fifth calculating unit includes: a computing subunit for computing the environment distortion information matrix from the variance, the proportionality coefficient V, and the principal strain tensor matrix ε², where T denotes the transpose.
Optionally, the selecting unit includes: the acquisition subunit is used for acquiring weather information of the environment where the target camera is located; and the selecting subunit is used for selecting the environmental deformation correction coefficient matched with the meteorological information.
Optionally, the device further includes: an acquisition module for acquiring an imaging quality parameter of the target camera after the construction module constructs the information matrix using the projection information matrix and the environment distortion information matrix; a judging module for judging whether the imaging quality parameter is smaller than a preset threshold; a generation module for generating a sub-pixel interpolation information matrix of the target camera if the imaging quality parameter is smaller than the preset threshold; and an updating module for compensating the sub-pixel interpolation information matrix into the information matrix and updating the information matrix.
Optionally, the second computing module includes: a calculating subunit configured to calculate, in the current iteration period of the target camera during the Gauss-Newton iteration, the camera pose increment Δx of the target camera in the current iteration period using the following formula: Δx = -(Jᵀ(σ²)⁻¹J)⁻¹Jᵀ(σ²)⁻¹u; wherein σ² is the information matrix, J is the Jacobian matrix, u is the nonlinear residual function, and T denotes the transpose.
It should be noted that each of the above modules may be implemented by software or hardware, and for the latter, it may be implemented by, but not limited to: the modules are all located in the same processor; alternatively, the above modules may be located in different processors in any combination.
Example 3
An embodiment of the invention also provides a storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the method embodiments described above when run.
Alternatively, in the present embodiment, the above-described storage medium may be configured to store a computer program for performing the steps of:
s1, acquiring a first image frame of a reference camera and a second image frame of a target camera;
s2, calculating a projection information matrix between the first image frame and the second image frame, calculating a deformation tensor between the first image frame and the second image frame, and calculating an environment distortion information matrix based on the deformation tensor;
s3, constructing an information matrix by adopting the projection information matrix and the environment distortion information matrix;
And S4, calculating the camera pose of the target camera by adopting the information matrix compensation.
Alternatively, in the present embodiment, the storage medium may include, but is not limited to: a usb disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a removable hard disk, a magnetic disk, or an optical disk, or other various media capable of storing a computer program.
An embodiment of the invention also provides an electronic device comprising a memory having stored therein a computer program and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, where the transmission device is connected to the processor, and the input/output device is connected to the processor.
Alternatively, in the present embodiment, the above-described processor may be configured to execute the following steps by a computer program:
s1, acquiring a first image frame of a reference camera and a second image frame of a target camera;
s2, calculating a projection information matrix between the first image frame and the second image frame, calculating a deformation tensor between the first image frame and the second image frame, and calculating an environment distortion information matrix based on the deformation tensor;
S3, constructing an information matrix by adopting the projection information matrix and the environment distortion information matrix;
and S4, calculating the camera pose of the target camera by adopting the information matrix compensation.
Alternatively, specific examples in this embodiment may refer to examples described in the foregoing embodiments and optional implementations, and this embodiment is not described herein.
The foregoing embodiment numbers of the present application are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
In the foregoing embodiments of the present application, the descriptions of the embodiments are emphasized, and for a portion of this disclosure that is not described in detail in this embodiment, reference is made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technology may be implemented in other manners. The apparatus embodiments described above are merely exemplary; the division of the units is merely a logical function division, and there may be other divisions in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed between components may be through some interfaces, units, or modules, and may be electrical or in other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be embodied essentially or in part or all of the technical solution or in part in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a removable hard disk, a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The foregoing is merely a preferred embodiment of the present application and it should be noted that modifications and adaptations to those skilled in the art may be made without departing from the principles of the present application, which are intended to be comprehended within the scope of the present application.

Claims (12)

1.一种相机位姿的补偿计算方法,其特征在于,包括:1. A method for calculating camera pose compensation, characterized in that it includes: 采集基准相机的第一图像帧和目标相机的第二图像帧;Acquire the first image frame from the reference camera and the second image frame from the target camera; 计算所述第一图像帧与所述第二图像帧之间的投影信息矩阵,以及计算所述第一图像帧与所述第二图像帧之间的变形张量,并基于所述变形张量计算环境畸变信息矩阵;Calculate the projection information matrix between the first image frame and the second image frame, and calculate the deformation tensor between the first image frame and the second image frame, and calculate the environmental distortion information matrix based on the deformation tensor; 采用所述投影信息矩阵和所述环境畸变信息矩阵构建信息矩阵;An information matrix is constructed using the projection information matrix and the environmental distortion information matrix; 采用所述信息矩阵补偿计算所述目标相机的相机位姿。The camera pose of the target camera is calculated using the information matrix compensation. 2.根据权利要求1所述的方法,其特征在于,计算所述第一图像帧与所述第二图像帧之间的投影信息矩阵包括:2. The method according to claim 1, wherein calculating the projection information matrix between the first image frame and the second image frame comprises: 将所述第一图像帧投影至欧拉坐标系,将所述第二图像帧投影至拉格朗日坐标系;The first image frame is projected onto the Euler coordinate system, and the second image frame is projected onto the Lagrange coordinate system; 计算所述基准相机与所述目标相机之间的基线和视差;Calculate the baseline and parallax between the reference camera and the target camera; 采用所述基线和所述视差计算所述第一图像帧与所述第二图像帧之间的投影信息矩阵。The projection information matrix between the first image frame and the second image frame is calculated using the baseline and the disparity. 3.根据权利要求1所述的方法,其特征在于,计算所述第一图像帧与所述第二图像帧之间的变形张量包括:3. 
The method according to claim 1, wherein calculating the deformation tensor between the first image frame and the second image frame comprises: 提取所述第一图像帧的第一特征集合,以及提取所述第二图像帧的第二特征集合;Extract a first feature set from the first image frame, and extract a second feature set from the second image frame; 确定预定义的求导轴系,其中,所述求导轴系包括:图像中定义的世界坐标系,去畸变后的传感坐标系,图像下的归一化坐标系;Determine a predefined differentiation axis system, wherein the differentiation axis system includes: the world coordinate system defined in the image, the distortion-free sensor coordinate system, and the normalized coordinate system under the image; 基于所述求导轴系计算所述第一特征集合与所述第二特征集合之间的变形张量。The deformation tensor between the first feature set and the second feature set is calculated based on the differentiation axis. 4.根据权利要求3所述的方法,其特征在于,基于所述求导轴系计算所述第一特征集合与所述第二特征集合之间的变形张量包括:4. The method according to claim 3, characterized in that, calculating the deformation tensor between the first feature set and the second feature set based on the differentiation axis includes: 采用所述第二特征集合根据链式法则对图像归一化坐标系下的变量求导,得到第一中间值;The second feature set is used to differentiate the variables in the image normalized coordinate system according to the chain rule to obtain the first intermediate value; 采用所述第一中间值根据链式法则对图像去畸变传感坐标系下的变量求导,得到第二中间值;The second intermediate value is obtained by taking the derivative of the variables in the image distortion removal sensing coordinate system according to the chain rule using the first intermediate value; 采用所述第二中间值根据链式法则对图像归一化坐标系下的变量求导,得到第三中间值;The second intermediate value is used to differentiate the variables in the image normalized coordinate system according to the chain rule to obtain the third intermediate value; 采用所述第三中间值根据链式法则对图像世界坐标系下的变量进行求导,得到第一求导值;The first derivative value is obtained by taking the derivative of the variable in the image world coordinate system according to the chain rule using the third intermediate value. 
differentiating, using the first feature set, the variables in the image world coordinate system, the image normalized coordinate system, the image undistorted sensor coordinate system, and the image normalized coordinate system in turn according to the chain rule, to obtain a second derivative value;

compounding the first derivative value and the second derivative value to obtain a composite result;

performing a polar decomposition on the composite result to obtain the deformation tensor.

5. The method according to claim 1, characterized in that calculating the environmental distortion information matrix based on the deformation tensor comprises:

judging whether the deformation tensor is equal to an identity matrix;

if the deformation tensor is not equal to the identity matrix, resolving the feature directions of the feature points in the second image frame;

constructing a strain tensor based on the feature directions;

performing a principal strain analysis on the strain tensor and eliminating tangential strain to obtain a principal strain tensor matrix, wherein positive values in the principal strain tensor matrix represent tensile deformation and negative values represent compressive deformation;

selecting an environmental deformation correction coefficient, and taking the environmental deformation correction coefficient as a Poisson's ratio;

calculating a proportionality coefficient of the principal strain tensor matrix based on the Poisson's ratio and the deformation tensor, and collecting position fluctuation values of the center point of the second image frame under a plurality of response conditions;
and a variance is constructed from the plurality of position fluctuation values;

calculating the environmental distortion information matrix according to the variance, the proportionality coefficient, and the principal strain tensor matrix.

6. The method according to claim 5, characterized in that calculating the environmental distortion information matrix according to the variance, the proportionality coefficient, and the principal strain tensor matrix comprises:

calculating the environmental distortion information matrix using the following formula [formula not reproduced in the source text]:

wherein V is the proportionality coefficient, [symbol not reproduced in the source text] is the variance, ε2 is the principal strain tensor matrix, and T denotes the transpose.

7. The method according to claim 5, characterized in that selecting the environmental deformation correction coefficient comprises:

acquiring meteorological information of the environment where the target camera is located;

selecting an environmental deformation correction coefficient that matches the meteorological information.

8. The method according to claim 1, characterized in that, after constructing the information matrix from the projection information matrix and the environmental distortion information matrix, the method further comprises:

acquiring an imaging quality parameter of the target camera;

judging whether the imaging quality parameter is less than a preset threshold;

if the imaging quality parameter is less than the preset threshold, generating a sub-pixel interpolation information matrix of the target camera;

compensating the sub-pixel interpolation information matrix into the information matrix, and updating the information matrix.
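Claims 4 and 5 rely on two standard operations: a polar decomposition of the composite deformation result, and a principal strain analysis that discards tangential (shear) strain, with positive principal strains indicating stretch and negative ones compression. The patent does not disclose an implementation; the following is only a minimal NumPy sketch of those two operations, and the Green-Lagrange strain measure used here is an assumption, since the claims do not name a specific strain tensor.

```python
import numpy as np

def polar_decompose(F):
    # Right polar decomposition F = R @ S via the SVD:
    # F = U @ diag(s) @ Vt  =>  R = U @ Vt (rotation), S = V @ diag(s) @ Vt (stretch)
    U, s, Vt = np.linalg.svd(F)
    R = U @ Vt
    S = Vt.T @ np.diag(s) @ Vt
    return R, S

def principal_strains(F):
    # Green-Lagrange strain E = 0.5 * (F^T F - I); its eigenvalues are the
    # principal strains (positive -> tensile, negative -> compressive), and the
    # eigendecomposition discards the tangential (shear) components.
    E = 0.5 * (F.T @ F - np.eye(F.shape[0]))
    return np.linalg.eigh(E)[0]

# Example: 10% stretch along x, 5% compression along y, no rotation
F = np.diag([1.10, 0.95])
R, S = polar_decompose(F)
eps = principal_strains(F)
```

Here `R` is the rotation part and `S` the symmetric stretch part, matching the decomposition step recited in claim 4; for the example `F`, `eps` contains one negative (compressive) and one positive (tensile) principal strain.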
9. The method according to claim 1, characterized in that compensating the calculation of the camera pose of the target camera using the information matrix comprises:

in the current iteration cycle of a Gauss-Newton iteration performed for the target camera, calculating the camera pose Δx of the target camera in the current iteration cycle using the following formula:

Δx = -(Jᵀ(σ²)⁻¹J)⁻¹J(σ²)⁻¹u;

wherein σ² is the information matrix, J is the Jacobian matrix, u is a nonlinear function, and T denotes the transpose.

10. A compensation calculation apparatus for a camera pose, characterized by comprising:

an acquisition module, configured to acquire a first image frame from a reference camera and a second image frame from a target camera;

a first calculation module, configured to calculate a projection information matrix between the first image frame and the second image frame, calculate a deformation tensor between the first image frame and the second image frame, and calculate an environmental distortion information matrix based on the deformation tensor;

a construction module, configured to construct an information matrix from the projection information matrix and the environmental distortion information matrix;

a second calculation module, configured to compensate the calculation of the camera pose of the target camera using the information matrix.

11. A storage medium, characterized in that a computer program is stored in the storage medium, wherein the computer program is configured to execute, when run, the method according to any one of claims 1 to 9.
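The update in claim 9 can be sketched as an information-matrix-weighted Gauss-Newton step. This is only an illustrative sketch, not the patented implementation; note that the claim prints the second factor as J, while the dimensionally consistent weighted least-squares form uses Jᵀ, which is what the code below assumes.

```python
import numpy as np

def gauss_newton_step(J, sigma2, u):
    # One weighted Gauss-Newton update:
    #   dx = -(J^T (sigma2)^-1 J)^-1 J^T (sigma2)^-1 u
    # where sigma2 is the information matrix from the claims, J the Jacobian,
    # and u the residual of the nonlinear function at the current iterate.
    W = np.linalg.inv(sigma2)   # weighting derived from the information matrix
    H = J.T @ W @ J             # Gauss-Newton normal (approximate Hessian) matrix
    g = J.T @ W @ u             # weighted residual projection
    return -np.linalg.solve(H, g)
```

For a linear residual u(x) = J @ x - b with identity weighting, a single step from x = 0 returns the ordinary least-squares solution, which is a quick sanity check on the formula.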
12. An electronic apparatus, comprising a memory and a processor, characterized in that a computer program is stored in the memory, and the processor is configured to run the computer program to perform the method according to any one of claims 1 to 9.
CN202310916736.2A 2023-07-24 2023-07-24 Compensation calculation method and device for camera pose, storage medium and electronic device Active CN116894875B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310916736.2A CN116894875B (en) 2023-07-24 2023-07-24 Compensation calculation method and device for camera pose, storage medium and electronic device

Publications (2)

Publication Number Publication Date
CN116894875A true CN116894875A (en) 2023-10-17
CN116894875B CN116894875B (en) 2025-09-09

Family

ID=88310615

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310916736.2A Active CN116894875B (en) 2023-07-24 2023-07-24 Compensation calculation method and device for camera pose, storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN116894875B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108780577A (en) * 2017-11-30 2018-11-09 深圳市大疆创新科技有限公司 Image processing method and device
CN110047108A (en) * 2019-03-07 2019-07-23 中国科学院深圳先进技术研究院 UAV position and orientation determines method, apparatus, computer equipment and storage medium
WO2020161118A1 (en) * 2019-02-05 2020-08-13 Siemens Aktiengesellschaft Adversarial joint image and pose distribution learning for camera pose regression and refinement
CN111586360A (en) * 2020-05-14 2020-08-25 佳都新太科技股份有限公司 Unmanned aerial vehicle projection method, device, equipment and storage medium
CN111712857A (en) * 2019-06-25 2020-09-25 深圳市大疆创新科技有限公司 Image processing method, device, pan/tilt and storage medium
CN112907620A (en) * 2021-01-25 2021-06-04 北京地平线机器人技术研发有限公司 Camera pose estimation method and device, readable storage medium and electronic equipment
WO2021179745A1 (en) * 2020-03-11 2021-09-16 中国科学院深圳先进技术研究院 Environment reconstruction method and device
WO2022061495A1 (en) * 2020-09-22 2022-03-31 深圳市大疆创新科技有限公司 Parameter calibration method and apparatus, and mobile platform
CN114549652A (en) * 2022-01-13 2022-05-27 湖南视比特机器人有限公司 Camera calibration method, device, equipment and computer readable medium
CN116342713A (en) * 2023-03-29 2023-06-27 重庆长安汽车股份有限公司 A method, device, electronic device and storage medium for calibrating external parameters of a vehicle-mounted camera

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
COSIMO PATRUNO et al.: "A Vision-Based Odometer for Localization of Omnidirectional Indoor Robots", Sensors, 29 February 2020 (2020-02-29), pages 1 - 25 *
ZHUOYI YIN et al.: "Binocular camera calibration based on timing correction", Applied Optics, vol. 61, no. 6, 15 February 2022 (2022-02-15), pages 1475 - 1481 *
ZHOU Jiale et al.: "Camera pose estimation fusing two-dimensional images and three-dimensional point clouds", Optics and Precision Engineering (光学精密工程), vol. 30, no. 22, 30 November 2022 (2022-11-30), pages 2901 - 2912 *

Also Published As

Publication number Publication date
CN116894875B (en) 2025-09-09

Similar Documents

Publication Publication Date Title
CN107341814B (en) Monocular visual odometry method for quadrotor UAV based on sparse direct method
CN110009674B (en) A real-time calculation method of monocular image depth of field based on unsupervised deep learning
CN113674421B (en) 3D target detection method, model training method, related devices and electronic equipment
CN119068042B (en) Cargo volume calculation method and system based on panoramic video
CN118820557B (en) Cable line fine monitoring method and system based on three-dimensional digital model
CN113569852A (en) Training method and device of semantic segmentation model, electronic equipment and storage medium
CN112233149B (en) Method and device for determining scene flow, storage medium, and electronic device
CN103778598B (en) Disparity map ameliorative way and device
CN114155256B (en) A method and system for tracking deformation of flexible objects using RGBD cameras
CN117593702B (en) Remote monitoring method, device, equipment and storage medium
CN119417912A (en) External parameter calibration method, device and electronic equipment
CN117558066A (en) Model training method, joint point prediction method, device, equipment and storage medium
CN111531546B (en) Robot pose estimation method, device, equipment and storage medium
CN111553954B (en) An online photometric calibration method based on direct monocular SLAM
CN119323741B (en) Unmanned aerial vehicle video target detection method and system based on space-time correlation
CN116894875B (en) Compensation calculation method and device for camera pose, storage medium and electronic device
CN111260706A (en) Dense depth map calculation method based on monocular camera
CN115457344A (en) Point-labeled panoramic segmentation model training method, panoramic segmentation method and device
CN120182974A (en) An AI-assisted image annotation method based on semi-supervised learning
CN116912645B (en) Three-dimensional target detection method and device integrating texture and geometric features
EP4318404A1 (en) System and apparatus suitable for use with a hypernetwork in association with neural radiance fields (nerf) related processing, and a processing method in association thereto
CN118135029A (en) Multi-camera extrinsic parameter estimation method for road side perception fusion scene
CN114762001B (en) Self-supervised depth and pose estimation based on sampling
CN114926719A (en) Hypergraph low-rank representation-based complex dynamic system perception feature fusion method
CN113379821A (en) Stable monocular video depth estimation method based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20260130

Address after: 401133 Chongqing Yubei District, Liangjiang New Area, Longxing Town, Modern Avenue 120, Building 1

Patentee after: Chongqing Chang'an Technology Co.,Ltd.

Country or region after: China

Address before: 400023 Chongqing Jiangbei District, the new East Road, No. 260

Patentee before: Chongqing Changan Automobile Co.,Ltd.

Country or region before: China