
CN119478055B - Camera calibration method and object three-dimensional shape information recovery method - Google Patents

Camera calibration method and object three-dimensional shape information recovery method

Info

Publication number
CN119478055B
CN119478055B · Application CN202411461978.8A
Authority
CN
China
Prior art keywords
phase
target
camera
feature points
calibration
Prior art date
Legal status
Active
Application number
CN202411461978.8A
Other languages
Chinese (zh)
Other versions
CN119478055A (en)
Inventor
朱江平
周佩
朱政忠
Current Assignee
Sichuan University
Original Assignee
Sichuan University
Priority date
Filing date
Publication date
Application filed by Sichuan University
Priority to CN202411461978.8A
Publication of CN119478055A
Application granted
Publication of CN119478055B
Status: Active

Classifications

    • G — PHYSICS
    • G06 — COMPUTING OR CALCULATING; COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 — Image analysis
    • G06T 7/80 — Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 7/50 — Depth or shape recovery
    • G06T 7/70 — Determining position or orientation of objects or cameras
    • G06T 7/73 — Determining position or orientation of objects or cameras using feature-based methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a camera calibration method comprising the steps of constructing an orthogonal binary stripe target with periodic characteristics, separating the horizontal and vertical phase-shift stripes from the image by Fourier transform, locating the feature point positions through the phase maps, and extracting the feature point pixel coordinates. A method for recovering the three-dimensional shape information of an object is also provided.

Description

Camera calibration method and object three-dimensional morphology information recovery method
Technical Field
The invention relates to the technical field of cameras, in particular to a camera calibration method and a method for recovering three-dimensional shape information of an object.
Background
For precision instrument manufacturing and intelligent detection, three-dimensional inspection of complex microstructures and micro devices is a key task, and camera calibration is the first step of three-dimensional imaging. Mainstream camera calibration methods generally require a clear target image for accurate feature detection. However, in practical working environments, many cameras used to image precision parts have small fields of view and shallow depths of field, and capturing clear images within the limited depth of field is difficult. The double telecentric camera is widely applied in microscopic imaging and, owing to its constant magnification along the depth direction and tiny distortion, is well suited to measuring precision instruments; however, its small field of view and shallow depth of field make it difficult to calibrate.
Camera calibration, the key step that links the two-dimensional image to the three-dimensional world by estimating the internal and external parameters of a camera, is currently realized mainly through calibration targets with feature points. These targets largely fall into three classes: 3D targets, 2D targets, and 1D targets. A 3D target offers high calibration accuracy but is bulky and expensive to manufacture, and is unsuitable for measurement and camera calibration in a small microscopic field of view. A 1D target has a simple structure but few feature points and cannot guarantee calibration accuracy. Since Zhang's calibration method based on a 2D calibration plate was proposed, the originally complex camera calibration procedure has become simple and convenient: the 2D planar calibration plate can be placed in any rotated pose and direction, and the method is widely used. The common feature points of 2D calibration plates suitable for this method are square corner points, dots, checkerboard corner points, and the like; besides establishing a proper calibration model, the extraction accuracy of the feature point coordinates also directly influences the calibration accuracy. In addition to the above three kinds of targets, some scholars have proposed self-calibration methods that do not rely on a calibration target, but these require a large number of corresponding features and complex calculations, and their calibration results are unstable. Because traditional calibration methods extract feature points directly from the gray-level information of the target image, image quality affects camera calibration accuracy to a great extent.
Disclosure of Invention
In order to solve the problems in the prior art, the invention aims to provide a camera calibration method and a method for recovering three-dimensional shape information of an object.
In order to achieve the purpose, the technical scheme adopted by the invention is that the camera calibration method comprises the following steps:
Step 1, constructing an orthogonal binary stripe target with periodic characteristics;
Step 2, performing Fourier transform on the image, separating the horizontal and vertical phase-shift stripes from it, locating the feature point positions through the phase maps, and extracting the feature point pixel coordinates.
As a further improvement of the present invention, the step 1 is specifically as follows:
The light intensity of an orthogonal binary stripe target with periodic features is expressed as:
I(X, Y) = a + b1·cos(2πX/px + φX0) + b2·cos(2πY/py + φY0)
where (X, Y) are the planar world coordinates, a, b1, b2 are constants, px and py are the periods in the X and Y directions, respectively, and φX0, φY0 are additional phase values;
Binary coding is applied to the generated orthogonal stripes; four parameters α1, α2, α3, α4 are introduced to optimize the diffusion and control the error diffusion ratios, thereby improving the sinusoidal similarity of the stripe pattern. The optimized kernel is expressed as:
[ –   ×   α1
  α2  α3  α4 ]
where – denotes a previously processed pixel, × denotes the pixel currently being processed, and α1, α2, α3, α4 are the optimized error diffusion ratios.
As a further improvement of the present invention, the step 2 is specifically as follows:
The periodic target is regarded as a periodic signal on the 2D plane and is obtained through Fourier transform:
E(x, y) = E0 + Σk [ Ekx·exp(i2πk·fx·x) + Ekx*·exp(−i2πk·fx·x) + Eky·exp(i2πk·fy·y) + Eky*·exp(−i2πk·fy·y) ]
where E0 is the zero-order spectrum, Ekx and Eky represent the k-th order spectra along the two directions, * denotes the conjugate operation, fx and fy denote the frequencies in the x and y directions, respectively, and Σ denotes summation;
After the Fourier transform, the dominant frequency components along the x-axis and y-axis are selected from the spectrum, a proper window is chosen to separate out the fundamental frequency components, and the separated fundamental components are each inverse Fourier transformed to obtain two wrapped phases in the range 0 to 2π;
In the obtained image, the pixel coordinates where θx = 2π and θy = 2π are taken as the coordinates of the feature points.
As a further improvement of the present invention, the method further comprises optimizing the coordinates of the feature points, specifically as follows:
Pixel points satisfying |θx − 2π| < δ and |θy − 2π| < δ in the horizontal and vertical stripes are selected as candidate points, where δ denotes a threshold;
since the feature points in the same row or column on the target lie on the same straight line, nonlinear fitting is performed locally on each candidate point and its neighborhood, and constrained optimization is performed on the feature points in the same row or column on the target;
The feature points are then included in a global optimization. The wrapped phases are unwrapped, with the midpoint of the image plane set as the starting point of phase unwrapping:
Φx(u, v) = θx(u, v) + 2π·kx(u, v) + c1
Φy(u, v) = θy(u, v) + 2π·ky(u, v) + c2
where kx, ky are the integer fringe orders and c1 and c2 are constants directly related to the starting point of the phase unwrapping. Once the unwrapped phases are obtained, the candidate feature points are incorporated into a global linear fit:
cv1·u + cv2·v + cv3 = 0 (same column), ch1·u + ch2·v + ch3 = 0 (same row)
Finally, after the feature points are constrained by the local and global optimization simultaneously, feature point pixel coordinates with sub-pixel accuracy are obtained.
As a further improvement of the invention, the orthogonal binary stripe target is made of a chrome-plated glass substrate 1 mm thick, patterned by a laser lithography process.
The invention also provides a method for recovering the three-dimensional shape information of an object, implemented with a camera calibrated by the above camera calibration method; the three-dimensional shape information of the object is recovered by a reconstruction method based on phase-height mapping.
As a further improvement of the invention, the three-dimensional morphology information of the object is recovered by adopting a reconstruction method based on phase height mapping, which is specifically as follows:
A high-precision translation stage is controlled to collect stripe structured-light images of different reference planes, where the distances between adjacent reference planes are equal to d. A phase map Φ(x, y) of each plane is obtained through phase retrieval, and a polynomial is fitted using the absolute phase-height mapping method:
h(x, y) = b0 + b1·Φ(x, y) + ... + bn·Φ(x, y)^n
where b0, ..., bn are the fitted polynomial coefficients, which yields the phase-height mapping relation in the measurement space. The modulated stripes on the surface of the object to be measured are then collected, the phase map of the object is recovered with the same phase-solving algorithm, and the three-dimensional shape information of the object is obtained by combining the phase-height mapping relation and the calibration parameters.
The method of the invention paves the way for improving double telecentric camera calibration and opens new directions in the application fields of the double telecentric camera. First, given the small field of view and shallow depth of field of a double telecentric camera, an error-diffusion optimization algorithm is used to encode the orthogonal stripes, and an orthogonal binary phase target is produced by physical lithography. In this way, the inherent noise of the binary coding is filtered out, the optimized orthogonal stripes are closer to ideal sinusoids, and accurate extraction of the feature points is achieved. In addition, the feature points are extracted with a global constraint method, which significantly improves feature point extraction accuracy. The feasibility of the method is confirmed by theoretical analysis and experiments.
The beneficial effects of the invention are as follows:
The calibration method is robust to camera defocus, is suitable for cameras with a shallow depth of field, and can extend the usable depth of field. Compared with existing methods, it reduces the reprojection error and can calibrate a defocused camera. When the camera is severely out of focus and traditional calibration methods fail, the method can still calibrate the camera accurately, with a reprojection error of 0.08155 pixel. These improvements in camera calibration help drive the development of three-dimensional reconstruction in the microscopy field and improve accuracy in three-dimensional inspection.
Drawings
FIG. 1 is a schematic diagram of a dual telecentric lens model in an embodiment of the invention;
FIG. 2 is a schematic diagram of a quadrature binary phase target pattern according to an embodiment of the present invention;
FIG. 3 is a flow chart of feature point extraction in an embodiment of the invention;
FIG. 4 is a flow chart of feature point extraction optimization in an embodiment of the invention;
FIG. 5 is a schematic diagram of phase height mapping according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of an experimental apparatus in an embodiment of the invention;
FIG. 7 is a schematic diagram of system calibration in an embodiment of the invention;
FIG. 8 is a photograph of different targets according to an embodiment of the invention;
FIG. 9 is a reprojection error map according to an embodiment of the present invention;
FIG. 10 is a comparison of target image sharpness in an embodiment of the present invention;
FIG. 11 is a diagram of the reprojection errors of different defocus levels according to the embodiment of the present invention;
FIG. 12 is a schematic view of the distance between adjacent feature points according to an embodiment of the present invention;
FIG. 13 is a checkerboard, dot target, and quadrature binary phase target pitch error histogram for in-focus and out-of-focus embodiments of the present invention;
FIG. 14 is a fringe pattern taken by a double telecentric camera in an embodiment of the invention;
FIG. 15 is a schematic diagram of measurement of standard blocks in an embodiment of the present invention;
FIG. 16 is a diagram illustrating one of the reconstruction results according to an embodiment of the present invention;
FIG. 17 is a graph showing another reconstruction result according to an embodiment of the present invention.
Detailed Description
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
Examples
A camera calibration method comprising:
A. double telecentric camera imaging model:
Camera calibration has long been a key and primary link in the computer vision field. Telecentric cameras are widely used in microscopic imaging due to their constant magnification and minimal distortion, though their depth of field is relatively small. As shown in fig. 1, in a double telecentric imaging system, a stop is placed at the focal point, allowing only light parallel to the main optical axis to enter the image-side telecentric lens. Since the telecentric lens has a constant magnification, in the double telecentric imaging model the image of an object does not change with the distance between the object and the object-side telecentric lens.
(Xw, Yw, Zw) are the three-dimensional coordinates of the target point P in the world coordinate system, and (xc, yc, zc) are the three-dimensional coordinates of the corresponding point in the camera coordinate system. By the camera imaging principle, (u, v) corresponds to the point p(xc, yc, zc) in the image coordinate system. Since the double telecentric lens is based on orthographic projection, the projection of any point p into the image coordinate system can be expressed as:
u = (m/du)·xc + u0
v = (m/dv)·yc + v0
where m is the fixed magnification of the double telecentric camera, du and dv are the pixel sizes in the u and v directions, respectively, and (u0, v0) is the center of the image coordinate system. The world coordinates P(Xw, Yw, Zw) are converted to camera coordinates p(xc, yc, zc) by a rigid-body transformation with rotation matrix R = {rij} (i = 1, 2, 3; j = 1, 2, 3) and translation vector T = [tx, ty, tz]:
[xc, yc, zc]^T = R·[Xw, Yw, Zw]^T + T
In the double telecentric imaging model, lens distortion mainly comprises two kinds: radial distortion and tangential distortion. In this embodiment, tangential distortion is considered much smaller than radial distortion, and the distortion of a double telecentric lens is much smaller than that of an ordinary lens, so only radial distortion is modeled. Taking the first three radial distortion coefficients, the true coordinates with distortion can be expressed as:
ud = u + u(k1·r² + k2·r⁴ + k3·r⁶)
vd = v + v(k1·r² + k2·r⁴ + k3·r⁶)
where k1, k2, and k3 are the radial distortion coefficients, (u, v) are the ideal image coordinates, and (ud, vd) are the true image coordinates with distortion.
Typically, a double telecentric camera calibration estimates six internal parameters and one external parameter matrix [R T]. The six internal parameters include three camera parameters (the effective magnification m and the distortion center (u0, v0)) and the three radial distortion coefficients (k1, k2, k3).
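The orthographic projection and radial distortion model above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the text does not define r explicitly, so taking r as the radial distance of the ideal image point is an assumption.

```python
def telecentric_project(xc, yc, m, du, dv, u0, v0, k=(0.0, 0.0, 0.0)):
    """Project a camera-frame point with the double telecentric
    (orthographic) model; zc does not appear because telecentric
    magnification is depth-independent. Distortion follows the
    ud = u + u*(k1 r^2 + k2 r^4 + k3 r^6) form of the text, with r
    assumed to be the radial distance of the ideal image point."""
    u = m * xc / du + u0
    v = m * yc / dv + v0
    k1, k2, k3 = k
    r2 = u * u + v * v
    s = k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    return u + u * s, v + v * s
```

With all distortion coefficients zero this reduces to the pure orthographic projection u = (m/du)·xc + u0, v = (m/dv)·yc + v0.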
B. Quadrature binary phase target pattern:
This embodiment uses an orthogonal stripe target with periodic characteristics; as shown in fig. 2 (a), the light intensity can be expressed as:
I(X, Y) = a + b1·cos(2πX/px + φX0) + b2·cos(2πY/py + φY0)
where (X, Y) are the planar world coordinates and a, b1, b2 are constants, here a = 1/2 and b1 = b2 = 1/4; px and py are the periods in the X and Y directions, respectively, and φX0, φY0 are additional phase values. Meanwhile, the generated orthogonal stripes are binary coded, as shown in fig. 2 (b); the purpose of the binary coding is to let the target image retain good sinusoidality under defocus. A projection profile that closely approximates a sinusoid is smoother, stays periodic, and is better suited to Fourier transform. An optimized kernel based on the traditional Floyd-Steinberg dithering method is used: four parameters α1, α2, α3, α4 are introduced to optimize the diffusion and control the error diffusion ratios, improving the sinusoidal similarity of the stripe pattern. The optimized kernel can be expressed as:
[ –   ×   α1
  α2  α3  α4 ]
where – denotes a previously processed pixel, × denotes the pixel currently being processed, and α1, α2, α3, α4 are the optimized error diffusion ratios.
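The target generation and its binarization by error diffusion can be sketched as below. This is a minimal sketch: the classic Floyd-Steinberg weights stand in for the optimized ratios α1..α4, whose values the text does not enumerate, and the additional phases are set to zero.

```python
import numpy as np

def orthogonal_fringe(h, w, px, py, a=0.5, b1=0.25, b2=0.25):
    """Ideal orthogonal fringe intensity I(X, Y) in [0, 1]."""
    Y, X = np.mgrid[0:h, 0:w]
    return a + b1 * np.cos(2 * np.pi * X / px) + b2 * np.cos(2 * np.pi * Y / py)

def binarize_error_diffusion(fringe, weights=(7/16, 3/16, 5/16, 1/16)):
    """Binarize a grayscale fringe by error diffusion; the four
    weights play the role of the diffusion ratios a1..a4 (defaults
    are the Floyd-Steinberg values, an assumption)."""
    img = fringe.astype(float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    a1, a2, a3, a4 = weights
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            # Push the quantization error onto not-yet-processed neighbors.
            if x + 1 < w:
                img[y, x + 1] += err * a1
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * a2
                img[y + 1, x] += err * a3
                if x + 1 < w:
                    img[y + 1, x + 1] += err * a4
    return out
```

Error diffusion pushes quantization noise to high frequencies, which is why the low-frequency (fundamental) content of the binarized pattern stays close to the ideal sinusoid even after optical defocus.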
C. Feature point extraction:
The periodic target designed in this embodiment can be regarded as a periodic signal on a 2D plane, and can therefore be decomposed by Fourier transform as follows:
E(x, y) = E0 + Σk [ Ekx·exp(i2πk·fx·x) + Ekx*·exp(−i2πk·fx·x) + Eky·exp(i2πk·fy·y) + Eky*·exp(−i2πk·fy·y) ]
where E0 is the zero-order spectrum, Ekx and Eky represent the k-th order spectra, and * denotes the conjugate operation. Fig. 3 shows the feature point extraction flow. First, the captured image is Fourier transformed in fig. 3 (a), and the dominant frequency components along the x-axis and the y-axis are selected from the spectrum, as shown in fig. 3 (b), where E0 represents the zero-frequency component and the fundamental components along the two axes are separated by selecting a proper window. The separated fundamental components are then each inverse Fourier transformed to obtain two wrapped phases in the range 0 to 2π, as shown in fig. 3 (c). Then, as shown in fig. 3 (e), the pixel coordinates where θx = 2π and θy = 2π are taken as the coordinates of the feature points in the obtained image. However, this achieves only pixel-level accuracy; to reach sub-pixel accuracy, this embodiment exploits the small distortion of the double telecentric camera and proposes a new optimization algorithm, shown in fig. 4.
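The extraction flow above can be sketched in a few lines of NumPy. The rectangular window shape and its half-width are assumptions; the source only says "a proper window".

```python
import numpy as np

def wrapped_phases(image, px, py, halfwidth=3):
    """Recover the two wrapped phase maps of an orthogonal fringe
    image by Fourier filtering. px, py are fringe periods in pixels;
    halfwidth is the half-size of the rectangular window placed
    around each fundamental peak (an assumption)."""
    h, w = image.shape
    F = np.fft.fftshift(np.fft.fft2(image))
    cy, cx = h // 2, w // 2

    def isolate(dy, dx):
        # Keep only one fundamental peak, then inverse transform.
        win = np.zeros_like(F)
        y0, x0 = cy + dy, cx + dx
        win[y0 - halfwidth:y0 + halfwidth + 1,
            x0 - halfwidth:x0 + halfwidth + 1] = 1
        comp = np.fft.ifft2(np.fft.ifftshift(F * win))
        return np.mod(np.angle(comp), 2 * np.pi)  # wrap into [0, 2*pi)

    # Fundamental along x sits at offset w/px; along y at h/py.
    theta_x = isolate(0, int(round(w / px)))
    theta_y = isolate(int(round(h / py)), 0)
    return theta_x, theta_y
```

Feature point candidates are then the pixels where both wrapped phases approach 2π, as in the text.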
First, pixel points satisfying |θx − 2π| < δ and |θy − 2π| < δ in the horizontal and vertical stripes are selected as candidate points, as shown in fig. 4 (a), where δ denotes an appropriate threshold. Second, the feature points in the same row or column on the target all lie on the same straight line, so this embodiment not only performs nonlinear fitting locally on each candidate point and its neighborhood but also applies constrained optimization to the feature points in the same row or column on the target, as shown in fig. 4 (b).
As shown in fig. 4 (a), the first step determines a candidate region for each feature point to be extracted separately. Considering the integrity of the target, and unlike nonlinear fitting around a single feature point only, this embodiment incorporates the feature points into a global optimization. First, the wrapped phase is unwrapped. Without loss of generality, the midpoint of the image plane is set as the starting point of the phase unwrapping:
Φx(u, v) = θx(u, v) + 2π·kx(u, v) + c1
Φy(u, v) = θy(u, v) + 2π·ky(u, v) + c2
As shown in fig. 4 (b), kx, ky are integer fringe orders and c1 and c2 are constants directly related to the phase unwrapping starting point. After the unwrapped phase is obtained, the candidate feature points are included in a global linear fit:
cv1·u + cv2·v + cv3 = 0 (same column), ch1·u + ch2·v + ch3 = 0 (same row)
where cvi, chi (i = 1, 2, 3) are fitting coefficients constraining the feature points to the same column or row. Finally, after the feature points are constrained locally and globally simultaneously, as shown in fig. 4 (c), feature point pixel coordinates of sub-pixel accuracy are obtained.
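As a small illustration of the collinearity constraint, the sketch below projects noisy candidate points of one row or column onto their best-fit line. Only this per-line projection step is shown; the phase-based global fit of the text is not reproduced, and the total-least-squares formulation is a choice of this sketch.

```python
import numpy as np

def refine_collinear(points):
    """Project candidate feature points that should lie on one
    straight line onto their total-least-squares fit line."""
    pts = np.asarray(points, float)
    centroid = pts.mean(axis=0)
    # Principal direction via SVD = total-least-squares line fit.
    _, _, vt = np.linalg.svd(pts - centroid)
    d = vt[0]                       # unit direction of the line
    t = (pts - centroid) @ d        # signed position along the line
    return centroid + np.outer(t, d)
```

After projection, every returned point lies exactly on a single line, which is the geometric constraint the text imposes on points sharing a row or column of the target.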
D. Phase height mapping:
For monocular structured-light measurement systems there are two common reconstruction models: one treats the projector as an inverse camera and uses the stereo vision principle as the basis for reconstruction; the other establishes a phase-height mapping to reconstruct the three-dimensional information of the object. Under a measurement system based on a double telecentric camera, however, the imaging model does not follow the common perspective projection model, and the telecentric imaging model is insensitive to depth changes along the optical axis, so the reconstruction method based on the stereo vision principle cannot be used directly in a microscopic three-dimensional measurement system with a double telecentric lens. The reconstruction method based on phase-height mapping is comparatively simple and needs no imaging model of the system, but requires a high-precision displacement stage to assist measurement. In this embodiment, the three-dimensional shape information of the object is recovered by a reconstruction method based on phase-height mapping, which does not require calibrating the projector but does require establishing the mapping between phase and height in the measurement space.
First, a high-precision translation stage is controlled to acquire stripe structured-light images of different reference planes, where the distances between adjacent reference planes are equal to d; the system principle is shown in fig. 5. A phase map Φ(x, y) of each plane is obtained through phase retrieval, and a polynomial is fitted using the absolute phase-height mapping method:
h(x, y) = b0 + b1·Φ(x, y) + ... + bn·Φ(x, y)^n
where b0, ..., bn are the fitted polynomial coefficients, which yields the phase-height mapping relation in the measurement space. The modulated stripes on the surface of the object to be measured are then collected, the phase map of the object is recovered with the same phase-solving algorithm, and the three-dimensional shape information of the object is obtained by combining the phase-height mapping relation and the calibration parameters.
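The per-pixel polynomial fit above can be sketched with `numpy.polyfit`. This is a minimal sketch under the assumption that the coefficients are fitted independently at every pixel from the unwrapped phases of the reference planes.

```python
import numpy as np

def fit_phase_height(phases, heights, n=2):
    """Fit h = b0 + b1*phi + ... + bn*phi**n per pixel from unwrapped
    phase maps `phases` (num_planes, H, W) of reference planes at
    known `heights` (num_planes,). Returns coefficients (n+1, H, W),
    lowest power first."""
    phases = np.asarray(phases, float)
    heights = np.asarray(heights, float)
    num, H, W = phases.shape
    coeffs = np.empty((n + 1, H, W))
    flat = phases.reshape(num, -1)
    for j in range(flat.shape[1]):
        # polyfit returns highest power first; store lowest first.
        c = np.polyfit(flat[:, j], heights, n)[::-1]
        coeffs[:, j // W, j % W] = c
    return coeffs

def phase_to_height(phase, coeffs):
    """Evaluate the fitted map on an object's unwrapped phase map."""
    h = np.zeros_like(phase, dtype=float)
    for i, c in enumerate(coeffs):
        h += c * phase ** i
    return h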
In order to verify the credibility and effectiveness of the method proposed by this embodiment, this embodiment establishes a set of precise experimental apparatus for physical measurement, as shown in fig. 6.
As shown in fig. 6 (a), the measurement system comprises a double telecentric camera, an electronically controlled displacement stage, and a projector. To ensure accurate, low-distortion images, a Daheng Imaging MER2-301-125U3M camera (pixel size 3.45 µm) with a resolution of 2048 × 1536 was selected, paired with a high-precision double telecentric lens (magnification 0.687, object-space depth of field ±0.9 mm, object-space working distance 24 mm, field of view 10.5 mm × 7.9 mm). Meanwhile, to achieve accurate positioning and controlled movement during measurement, an OptoSigma OSMS80-20ZF-0B electronically controlled displacement stage with a precision of 0.1 µm was used. The system uses a high-performance projector carrying a Texas Instruments DLP4500 chip, matched with the double telecentric camera, to achieve accurate and uniform projection on the target surface for three-dimensional reconstruction. A target was also designed for the small-field-of-view double telecentric lens; as shown in fig. 6 (b), a lithographic calibration plate (resolution 2 µm) was custom made, where the first row contains binary targets with different periods and the second row contains a dot target and a checkerboard target.
A. Calibration test-comparative test in the case of focusing and defocus:
In the first experiment, a dot target, a checkerboard, and an orthogonal binary phase target were placed within the depth of field of the camera, and 50 pictures of different poses were taken for each. The calibration experiment was repeated multiple times, and in each camera calibration experiment 15 pictures were randomly selected for calibration, as shown in fig. 7.
Fig. 8 (a)-(c) show the captured pictures of the different targets. Since the phase near the image boundary cannot be correctly recovered, the feature points at the boundary are excluded during feature point extraction, as shown in fig. 8 (c). The feature points of the checkerboard and dot targets were extracted and calibrated using the OpenCV calibration toolbox. Table 1 shows the calibration results for the three targets.
TABLE 1
As shown in fig. 9, with the camera in focus, all three targets can be calibrated well with small reprojection errors. The reprojection error obtained by the proposed method is 0.04631 pixel, better than both the checkerboard and the dot target.
The depth of field of the double telecentric lens used in this embodiment is only ±0.9 mm; clear pictures can be captured within the depth of field by controlling the displacement stage, enabling camera calibration under focused conditions. As shown in fig. 10, because the double telecentric depth of field is small, even a small tilt of the target blurs the image, and the conventional dot target and checkerboard show defocus blur, which affects the final calibration result. In a practical working environment it is difficult to keep the whole target within the camera's depth of field at the same time, and the calibration method designed in this embodiment solves this problem well: the orthogonal binary phase target is robust to defocus blur and can calibrate the camera successfully.
To study the calibration results of different targets under defocus, two groups of target images with different degrees of defocus were captured. To keep the camera's internal parameters constant, the camera remained stationary during the experiment. For the first group, the targets were controlled with the electronically controlled displacement stage to lie in the slight-defocus range (0.9 mm-1.2 mm). For the second group, the targets were placed at a heavy-defocus distance (1.2 mm-1.5 mm). The two sets of target pictures were used for camera calibration separately. Table 2 summarizes the calibration results under slight defocus: the checkerboard reprojection error is 0.74764 pixel, the dot target 0.13899 pixel, and the proposed method 0.0784 pixel, far smaller than those obtained with conventional target calibration. Under heavy defocus, only the method of this embodiment calibrates the camera successfully, showing excellent robustness to image defocus blur. Fig. 11 shows the reprojection error maps obtained by the proposed method and Zhang's method. The reprojection errors obtained by the proposed method still cluster around the origin under both slight and severe defocus, whereas the checkerboard and dot targets calibrate successfully only under slight defocus, with reprojection errors that change greatly compared with the focused case; under severe defocus both fail, while the method of this embodiment succeeds with a reprojection error of 0.08155 pixel.
TABLE 2
B. Reconstruction experiment:
To verify that the proposed method also improves object reconstruction accuracy, the feature points of all poses of the three targets in the calibration experiment were reconstructed, and the distances between adjacent feature points were measured. As shown in fig. 12, the orange lines are the distances between horizontally adjacent feature points and the blue lines the distances between vertically adjacent feature points; according to the experimental setup description, the manufacturing accuracy of the high-accuracy lithographic calibration plate is 0.001 µm. When the targets were manufactured, the feature point spacing of the checkerboard and dot targets was designed as 0.4 mm, and that of the orthogonal binary phase target as 0.2 mm. The feature point spacings on the orthogonal stripe phase target were therefore extracted at every other interval and compared with the checkerboard and dot targets in the reconstruction experiments. Reconstruction experiments were performed in the focused and defocused ranges, and the results are shown in table 3. Statistically, the method of this embodiment achieves better results both in focus and out of focus. The distributions of feature point spacing errors reconstructed with the parameters obtained from the three target calibrations under focus and defocus are shown in fig. 13.
When the camera is in focus, most errors measured with all three targets concentrate around 0; when the camera is defocused and blurred, the spacing errors measured with the proposed calibration method are far smaller than those of the other two targets and still concentrate around 0, showing that the calibration parameters of the proposed method are more accurate, more tolerant to defocus, and more robust.
TABLE 3
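The spacing-error statistics described above can be sketched as follows. This is an illustrative computation, not the authors' code: the reconstructed feature-point coordinates are synthetic stand-ins for the real data, using the 0.2 mm nominal spacing of the orthogonal phase target.

```python
import numpy as np

def spacing_errors(points, nominal):
    """Distances between horizontally adjacent reconstructed feature
    points, minus the nominal grid spacing (all in mm)."""
    d = np.linalg.norm(np.diff(points, axis=1), axis=2)  # neighbor distances per row
    return d - nominal

# Synthetic 10x10 grid of reconstructed points with small noise,
# standing in for feature points recovered from one target pose.
rng = np.random.default_rng(0)
grid = np.stack(np.meshgrid(np.arange(10) * 0.2,
                            np.arange(10) * 0.2, indexing="ij"), axis=-1)
pts = grid + rng.normal(0.0, 1e-4, grid.shape)

err = spacing_errors(pts, 0.2)
print("mean abs error (mm):", np.abs(err).mean())
print("max abs error  (mm):", np.abs(err).max())
```

Collecting such errors over all poses of each target gives the distributions compared in fig. 13.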
To verify that a camera calibrated by the method of this embodiment can still reconstruct accurately in three dimensions under defocus blur, a plane was reconstructed at the same position using a three-frequency, four-step phase-shift method, with 10 steps at the highest frequency. During phase-height mapping, phase maps of 8 planes were captured at intervals of 0.2 mm and the polynomial order was set to n = 2. As shown in fig. 14, a white plane was measured; (a)-(c) in fig. 14 are fringe images captured by the double-telecentric camera. A lookup table was then established for measuring standard blocks, targets, and small objects.
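The phase-to-height lookup table described here can be sketched as a per-pixel polynomial fit. A minimal illustration with synthetic data, assuming (as in the text) 8 reference planes at 0.2 mm steps and polynomial order n = 2; the linear phase model and its slope are invented for the demonstration:

```python
import numpy as np

# 8 reference planes at equal 0.2 mm steps, as in the experiment.
heights = np.arange(8) * 0.2                      # z_k, in mm

# Synthetic per-pixel absolute phases for each plane (H x W x 8);
# real data would come from phase shifting plus unwrapping.
rng = np.random.default_rng(1)
H, W = 4, 5
phase = 3.0 + 2.5 * heights + rng.normal(0, 1e-3, (H, W, 8))

# Fit z = b0 + b1*phi + b2*phi^2 independently at every pixel (n = 2).
coeffs = np.empty((H, W, 3))
for i in range(H):
    for j in range(W):
        coeffs[i, j] = np.polynomial.polynomial.polyfit(phase[i, j], heights, 2)

# Mapping a measured phase back to a height at pixel (0, 0):
z = np.polynomial.polynomial.polyval(phase[0, 0, 4], coeffs[0, 0])
print(z)
```

Storing `coeffs` for every pixel is one way to realize the lookup table used to measure the standard blocks and small objects.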
As shown in FIG. 15, standard blocks with height planes of 1 mm and 1.5 mm (flatness of 0.15 μm, with measurement uncertainties of 0.06 μm and 0.08 μm, respectively) were measured. Phase-shifted fringes in the horizontal direction were projected onto the standard blocks, and the double-telecentric camera synchronously captured the fringe images modulated and deformed by the blocks. The point clouds of the 2 standard blocks, computed from the modulated fringe patterns combined with the phase 3D lookup table, are shown in fig. 15 (b) and (c). To evaluate the 3D reconstruction accuracy quantitatively, the heights along the 1000th row of the x-axis, i.e., the black lines in fig. 15 (b) and (c), are plotted in fig. 15 (d) and (e), providing a more intuitive view. The error statistics of the plane fit along the 1000th row of the standard-block surfaces are given in Table 4: the mean absolute error of the measured 1 mm block is 0.00044 mm with a variance of 2.1027e-07 mm², and that of the 1.5 mm height plane is 0.00061 mm with a variance of 3.4581e-07 mm². Even when the reconstructed plane lies outside the camera's depth of field, the method of this embodiment therefore retains high measurement accuracy.
TABLE 4
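The row-wise error statistics summarized above amount to fitting a plane to the reconstructed surface and examining the residuals along one row. A minimal sketch with synthetic data; the point cloud, tilt, and noise level are invented here, and the 0.00044 mm figures come from the authors' measurements, not from this code:

```python
import numpy as np

def fit_plane(x, y, z):
    """Least-squares plane z = a*x + b*y + c through a point cloud."""
    A = np.column_stack([x, y, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)
    return a, b, c

# Synthetic reconstruction of a nominally flat standard-block surface.
rng = np.random.default_rng(2)
xx, yy = np.meshgrid(np.arange(50) * 0.01, np.arange(50) * 0.01)
zz = 1.0 + 0.001 * xx + rng.normal(0, 5e-4, xx.shape)   # ~1 mm high, slightly tilted

a, b, c = fit_plane(xx.ravel(), yy.ravel(), zz.ravel())
residual = zz - (a * xx + b * yy + c)

row = residual[25]          # one row of residuals, analogous to row 1000 in the text
print("mean abs error (mm):", np.abs(row).mean())
print("variance (mm^2):", row.var())
```

The mean absolute error and variance of `row` are the quantities reported per block in Table 4.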
To test the performance of the method of this embodiment in measuring the surface topography of a small object in practical applications, the small object in fig. 16 (d) was measured. In total, 18 fringe patterns of different frequencies were projected; one pattern for each frequency is shown in fig. 16 (a)-(c). A photograph of the object is presented in fig. 16 (d). The result after phase-height mapping is shown in fig. 16 (e), in which only the Z axis is in millimeters while the X and Y axes are in pixels. The X and Y coordinates were then converted into physical dimensions using the calibrated camera intrinsic and extrinsic parameters, so that all units become mm in fig. 16 (f).
The second test object consists of two small pendants; each panel title in fig. 17 (a)-(f) corresponds to the same panel in fig. 16. In fig. 17 (f), the diameter of a pendant is about 2.5 mm, consistent with the actual object.
In summary, this embodiment innovatively designs a target suited to a high-resolution, small-field-of-view, double-telecentric camera and proposes a novel calibration method that significantly reduces calibration error and is applicable to defocused cameras. The novel orthogonal binary fringe target is designed in combination with the Fourier transform, the pixel coordinates of the feature points are accurately extracted, and the camera is successfully calibrated from defocus-blurred images with a reprojection error of 0.07840 pixel. Experiments demonstrate that the method achieves accurate camera calibration and three-dimensional topography reconstruction under either defocused or focused conditions. The method constitutes a substantial improvement in camera calibration and actively advances three-dimensional reconstruction in the microscopic field and accuracy improvement in three-dimensional inspection.
The foregoing examples merely illustrate specific embodiments of the invention, which are described in greater detail and are not to be construed as limiting the scope of the invention. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the invention, which are all within the scope of the invention.

Claims (5)

1. A camera calibration method, characterized by comprising the following steps:

Step 1: constructing an orthogonal binary fringe target with periodic characteristics;

step 1 is specifically as follows: the light intensity of the orthogonal binary fringe target with periodic characteristics is expressed as

I(X,Y) = a + b1·cos(2πX/px + φX0) + b2·cos(2πY/py + φY0)

where (X,Y) are plane world coordinates, a, b1 and b2 are constants, px and py are the periods in the X and Y directions, respectively, and φX0 and φY0 are additional phase values;

the generated orthogonal fringes are binary-encoded: four parameters α1, α2, α3, α4 are introduced and, through optimized diffusion, the error diffusion ratios are controlled so as to improve the sinusoidal similarity of the fringe pattern; the optimized kernel is expressed as

[ -   x   α1 ]
[ α2  α3  α4 ]

where "-" denotes a previously processed pixel, "x" denotes the pixel currently being processed, and α1, α2, α3, α4 are the error diffusion ratios being optimized;

Step 2: separating horizontal and vertical phase-shifted fringes from the image by Fourier transform, locating the feature-point positions through the phase maps, and extracting the pixel coordinates of the feature points;

step 2 is specifically as follows: the periodic target is regarded as a periodic signal on a 2D plane, which the Fourier transform decomposes as

I(X,Y) = E0 + Σk≥1 (Ek + Ek*)

where E0 is the zero-order spectrum, Ek denotes the k-th order spectrum, and * denotes the conjugate operation;

after the Fourier transform, the fundamental frequency components along the x-axis and the y-axis are selected from the spectrum and separated with a suitable window; the separated fundamental components are then inverse-Fourier-transformed to obtain two wrapped phases in the range 0 to 2π; in the obtained phase maps, the pixel coordinates where θx = 2π and θy = 2π are regarded as the coordinates of the feature points.

2. The camera calibration method according to claim 1, further comprising optimizing the coordinates regarded as feature points, specifically as follows: among the horizontal and vertical fringes, pixels satisfying |θx − 2π| < δ and |θy − 2π| < δ are selected as candidate points, where δ is a threshold; feature points in the same row or column of the target lie on the same straight line, so a nonlinear fit is performed locally on each candidate point and its neighborhood while the feature points in the same row or column are jointly constrained; the feature points are then brought into a global optimization: the wrapped phases are unwrapped with the midpoint of the image plane set as the starting point of phase unwrapping,

Φx = unwrap(θx) + c1,  Φy = unwrap(θy) + c2

where c1 and c2 are constants directly related to the starting point of the phase unwrapping; after the unwrapped phases are obtained, the candidate feature points are brought into a global linear fit,

cv1·u + cv2·v + cv3 = 0,  ch1·u + ch2·v + ch3 = 0

where cvi, chi (i = 1, 2, 3) are fitting coefficients constraining the feature points to lie in the same column or the same row; finally, after the feature points are constrained both locally and globally, feature-point pixel coordinates with sub-pixel accuracy are obtained.

3. The camera calibration method according to claim 1, wherein the orthogonal binary fringe target is made of a chromium-plated glass substrate with a thickness of 1 mm, formed by a laser lithography process.

4. A method for recovering three-dimensional shape information of an object, characterized in that it is implemented with a camera calibrated by the camera calibration method according to any one of claims 1 to 3, and a reconstruction method based on phase-height mapping is used to recover the three-dimensional shape information of the object.

5. The method for recovering three-dimensional shape information of an object according to claim 4, wherein recovering the three-dimensional shape information with the phase-height-mapping reconstruction method is specifically as follows: a high-precision translation stage is controlled to capture fringe structured-light images of different reference planes, adjacent reference planes being separated by an equal distance d; the phase map Φ of each plane is obtained by phase retrieval, and a polynomial is fitted using the absolute phase-height mapping method,

z = b0 + b1·Φ + … + bn·Φ^n

where b0, …, bn are the fitted polynomial coefficients, from which the phase-height mapping relationship in the measurement space is obtained; the modulated fringes on the surface of the object under test are then captured, the phase map of the object is recovered with the same phase-retrieval algorithm, and the three-dimensional shape information of the object is further obtained by combining the phase-height mapping relationship with the calibration parameters.
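Claims 1 and 2 can be sketched end to end as follows, under stated assumptions: the intensity model and the 2×3 diffusion-kernel layout are my reconstructions (the patent's formula images are not reproduced on this page), the diffusion ratios are fixed Floyd-Steinberg values rather than optimized as in the claim, and the spectral window size is chosen arbitrarily.

```python
import numpy as np

def orthogonal_fringe(h, w, px, py, a=0.5, b1=0.25, b2=0.25, phix0=0.0, phiy0=0.0):
    """Grayscale orthogonal fringe pattern: a DC term plus cosines with
    periods px, py along X and Y (reconstructed intensity model)."""
    Y, X = np.mgrid[0:h, 0:w]
    return (a + b1 * np.cos(2 * np.pi * X / px + phix0)
              + b2 * np.cos(2 * np.pi * Y / py + phiy0))

def binarize_error_diffusion(img, ratios=(7/16, 3/16, 5/16, 1/16)):
    """Binary encoding by error diffusion; the four ratios stand in for
    the claim's optimized alpha1..alpha4 (Floyd-Steinberg values here)."""
    r1, r2, r3, r4 = ratios
    g = img.astype(float).copy()
    h, w = g.shape
    out = np.zeros_like(g)
    for y in range(h):
        for x in range(w):
            out[y, x] = 1.0 if g[y, x] >= 0.5 else 0.0
            e = g[y, x] - out[y, x]          # quantization error to diffuse
            if x + 1 < w:
                g[y, x + 1] += r1 * e
            if y + 1 < h:
                if x - 1 >= 0:
                    g[y + 1, x - 1] += r2 * e
                g[y + 1, x] += r3 * e
                if x + 1 < w:
                    g[y + 1, x + 1] += r4 * e
    return out

def wrapped_phase_x(binary, px):
    """Recover the wrapped phase along x by windowing the fundamental
    lobe on the u axis of the spectrum and inverse-transforming it."""
    F = np.fft.fftshift(np.fft.fft2(binary))
    h, w = binary.shape
    mask = np.zeros_like(F)
    u0 = w // 2 + w // px               # expected +1st-order position
    mask[:, u0 - 4:u0 + 5] = 1.0        # crude rectangular window
    phi = np.angle(np.fft.ifft2(np.fft.ifftshift(F * mask)))
    return np.mod(phi, 2 * np.pi)       # wrapped to [0, 2*pi]

pattern = orthogonal_fringe(128, 128, px=16, py=16)
binary = binarize_error_diffusion(pattern)
theta_x = wrapped_phase_x(binary, px=16)
```

Repeating the windowing step on the v axis would give θy, and the crossings θx = θy = 2π would then be the feature-point candidates of claim 2.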
CN202411461978.8A 2024-10-18 2024-10-18 Camera calibration method and object three-dimensional shape information recovery method Active CN119478055B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411461978.8A CN119478055B (en) 2024-10-18 2024-10-18 Camera calibration method and object three-dimensional shape information recovery method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202411461978.8A CN119478055B (en) 2024-10-18 2024-10-18 Camera calibration method and object three-dimensional shape information recovery method

Publications (2)

Publication Number Publication Date
CN119478055A CN119478055A (en) 2025-02-18
CN119478055B true CN119478055B (en) 2025-07-18

Family

ID=94565356

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202411461978.8A Active CN119478055B (en) 2024-10-18 2024-10-18 Camera calibration method and object three-dimensional shape information recovery method

Country Status (1)

Country Link
CN (1) CN119478055B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112070842A (en) * 2020-07-28 2020-12-11 安徽农业大学 Multi-camera global calibration method based on orthogonal coding stripes

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101466998B (en) * 2005-11-09 2015-09-16 几何信息学股份有限公司 The method and apparatus of absolute-coordinate three-dimensional surface imaging
CN101567085A (en) * 2009-06-01 2009-10-28 四川大学 Two-dimensional plane phase target used for calibrating camera

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112070842A (en) * 2020-07-28 2020-12-11 安徽农业大学 Multi-camera global calibration method based on orthogonal coding stripes

Also Published As

Publication number Publication date
CN119478055A (en) 2025-02-18

Similar Documents

Publication Publication Date Title
CN112013792B (en) Surface scanning three-dimensional reconstruction method for complex large-component robot
US6268611B1 (en) Feature-free registration of dissimilar images using a robust similarity metric
CN112465912A (en) Three-dimensional camera calibration method and device
CN106548489B (en) A kind of method for registering, the three-dimensional image acquisition apparatus of depth image and color image
CN113160339A Projector calibration method based on Samm's law
CN111028295A (en) A 3D imaging method based on encoded structured light and binocular
Watanabe et al. Real-time computation of depth from defocus
CN115546285B (en) Large-depth-of-field stripe projection three-dimensional measurement method based on point spread function calculation
WO2018201677A1 (en) Bundle adjustment-based calibration method and device for telecentric lens-containing three-dimensional imaging system
CN103994732B (en) A kind of method for three-dimensional measurement based on fringe projection
CN109373912A (en) A non-contact six-degree-of-freedom displacement measurement method based on binocular vision
US6256058B1 (en) Method for simultaneously compositing a panoramic image and determining camera focal length
CN112598747A (en) Combined calibration method for monocular camera and projector
CN116485869B (en) A multi-target absolute depth estimation method based on monocular polarization 3D imaging
CN108550171A (en) The line-scan digital camera scaling method containing Eight Diagrams coding information based on Cross ration invariability
Zhou et al. Ring-light photometric stereo
CN113160393A (en) High-precision three-dimensional reconstruction method and device based on large field depth and related components thereof
Watanabe et al. Telecentric optics for constant magnification imaging
CN114219866B (en) Binocular structured light three-dimensional reconstruction method, reconstruction system and reconstruction equipment
CN119478055B (en) Camera calibration method and object three-dimensional shape information recovery method
KR102584209B1 (en) 3d reconstruction method of integrated image using concave lens array
CN114782545B (en) A light field camera calibration method to eliminate primary lens distortion
CN116030199A (en) A method and system for three-dimensional and color texture reconstruction of trinocular speckle structured light
CN113865514B (en) Calibration method of line structured light three-dimensional measurement system
Zhu et al. Bi-telecentric camera calibration via orthogonal binary phase target for shallow DoF and high accuracy fringe projection 3D microscopy

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant