CN119359818B - Extraction method of corner points in checkerboard calibration object image
Classifications
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06T7/85—Stereo camera calibration
- G06T7/70—Determining position or orientation of objects or cameras
- G06T2207/30204—Marker
- G06T2207/30208—Marker matrix
Abstract
The invention discloses a method for extracting corner points in a checkerboard calibration object image. The method first obtains the rough positions of the corner points of the checkerboard calibration object, converts these rough positions into the coordinate system of a supersampled image, and crops an image window at each rough corner position. It then computes response values for the pixels in the central area of the window to generate a response map, applies Gaussian blur to smooth the response map, fits the region near the corner in the blurred response map to an elliptic paraboloid, computes the sub-pixel extremum position of the bright spot from the paraboloid parameters, and finally obtains the accurate corner position after coordinate conversion. Compared with methods that extract checkerboard corner positions from corner boundaries, the proposed method achieves higher corner-extraction accuracy, thereby improving the precision of camera calibration results and benefiting the accurate realization of computer vision applications.
Description
Technical Field
The invention belongs to the technical field of machine vision and relates to a method for processing checkerboard calibration object images, and in particular to a method for accurately extracting corner points from a checkerboard calibration object image.
Background
Camera calibration is the process of determining the geometric relationship between the world coordinate system and the image coordinate system by calculating camera parameters. It is an essential step in many three-dimensional vision applications and has a critical impact on their realization. Camera calibration typically requires a calibration object containing a set of key points whose geometry is known. First, images of the calibration object are captured from multiple viewing angles; then the camera parameters are solved from the known positions of the calibration object's key points in the world coordinate system (object points) and their extracted positions in the image coordinate system (image points).
Among the various planar calibration objects, the checkerboard calibration object naturally contains key points at the checkerboard corners, has a simple structure, is easy to manufacture with high precision, and is widely used. Image-processing methods for extracting corner positions from a checkerboard calibration object image generally proceed in three steps. First, a corner or feature detector is applied to obtain all candidate corner positions in the image; second, the corners formed by intersecting checkerboard boundaries are screened from the detection results, which can be done using the geometric regularity of the checkerboard pattern; finally, the checkerboard corner positions are further refined to improve their accuracy.
The last step of the above process can be realized in a number of different ways. One common approach extracts the accurate corner positions by exploiting the fact that checkerboard corners lie at the intersections of black and white square boundaries and by using the image gradient at those boundaries. This approach requires clear, accurate black-and-white square boundaries in the image, is not robust to image noise, and yields low accuracy in practice, especially under non-ideal shooting conditions such as outdoor calibration.
Disclosure of Invention
To address the problems and needs described in the background, the invention provides a method for accurately extracting corner points from a checkerboard calibration object image. Based on a center-line model of the corner, the invention considers straight lines passing through the corner in all directions, rather than only the dividing lines between black and white squares. A response value is used to estimate the distance between a point near the corner and the accurate corner position: the larger the response value, the closer the point is to the accurate position. The method first obtains the rough position of a checkerboard corner, then computes the response values of all pixels in a square window near that position to form a response map. By the relationship between response magnitude and distance to the corner, the sub-pixel brightness maximum of the response map is the accurate corner position. The invention estimates this sub-pixel maximum by fitting the response map to an elliptic paraboloid, thereby extracting the sub-pixel corner position.
The technical scheme of the invention is as follows:
1. Extraction method of corner points in checkerboard calibration object image
S10, use a checkerboard calibration object in which black and white squares alternate and the number of squares in each row and column is known, so that the number and arrangement of its internal corner points are known. Photograph the checkerboard calibration object with a camera to obtain a checkerboard calibration object image containing all of its internal corner points. Obtain rough estimated positions of all corner points from the checkerboard calibration object image using an existing method;
S20, supersample the checkerboard calibration object image to obtain a supersampled image, convert the rough corner positions obtained in S10 into the supersampled image coordinate system, and then crop an original corner image for each corner using a preset image window. The preset image window is square, and its center lies within 0.5 pixel of the rough corner position in both the x and y directions. Supersampling is implemented by interpolation, for example bilinear interpolation.
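As a concrete illustration of the interpolation-based supersampling (a sketch, not the patent's reference implementation), the code below upscales an image by an integer factor n with bilinear interpolation, using the pixel-center coordinate convention of this document, in which a supersampled coordinate P_S relates to an original coordinate P_O by P_S = n · P_O + (n - 1)/2:

```python
import numpy as np

def supersample(img, n=2):
    """Bilinear n-times supersampling with a pixel-centre origin: output
    pixel (r, c) samples the original at ((r-(n-1)/2)/n, (c-(n-1)/2)/n),
    matching P_S = n * P_O + (n - 1)/2."""
    h, w = img.shape
    out = np.zeros((n * h, n * w))
    for r in range(n * h):
        for c in range(n * w):
            y = min(max((r - (n - 1) / 2.0) / n, 0.0), h - 1.0)
            x = min(max((c - (n - 1) / 2.0) / n, 0.0), w - 1.0)
            y0 = min(int(y), h - 2)
            x0 = min(int(x), w - 2)
            dy, dx = y - y0, x - x0
            out[r, c] = ((1 - dy) * (1 - dx) * img[y0, x0]
                         + (1 - dy) * dx * img[y0, x0 + 1]
                         + dy * (1 - dx) * img[y0 + 1, x0]
                         + dy * dx * img[y0 + 1, x0 + 1])
    return out
```

Border samples that fall outside the original grid are clamped here; this is a simplifying assumption rather than something the patent specifies.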
S30, generating a response diagram corresponding to each original corner image, and extracting the accurate position of the corner according to the response diagram;
in S30, generating a response map corresponding to each original corner image, including:
for each pixel position in the central area of the original corner image, compute a response value from the multidirectional line-integral results at that position. Traversing all pixel positions in the central area yields a response value for each, which together form the response map; the image center of the response map coincides with that of the original corner image.
The size 2A × 2A of the original corner image, the size 2B × 2B of the response map, and the line-integral length satisfy A = B + l_B and 2 l_B + 1 = L, where A is half the side length of the original corner image, B is half the side length of the response map, l_B is half the line-integral length, L is the total length of the line integral, and A, B, and l_B are positive integers.
Calculating a response value for each pixel location based on the multidirectional line integral result for that pixel location, comprising:
Form an integration circle of radius l_B centered at each pixel position, select n_D diameters that divide the circle equally, and treat the direction of each diameter as one integration direction. Perform a line integral over the sample points on each diameter to obtain the integral for that direction; traversing all diameters yields the multidirectional line-integral results, and their variance is taken as the response value of the current pixel position. Specifically, a line integral takes (2 l_B + 1) equally spaced sample points on the segment: the central sample point lies at the current pixel position, the segment contains l_B sample points on each side of it, adjacent sample points are 1 pixel apart, and the mean of the pixel values at all sample points is taken as the integral value for that direction;
if the sampling point is located at the sub-pixel position, the pixel value is obtained through bilinear interpolation.
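The response computation above can be sketched in NumPy as follows; the helper names (`bilinear`, `response_map`) and the loop structure are illustrative assumptions, not the patent's code:

```python
import numpy as np

def bilinear(img, y, x):
    """Sample img at sub-pixel position (row y, col x), clamped to bounds."""
    h, w = img.shape
    y = min(max(y, 0.0), h - 1.0)
    x = min(max(x, 0.0), w - 1.0)
    y0 = min(int(y), h - 2)
    x0 = min(int(x), w - 2)
    dy, dx = y - y0, x - x0
    return ((1 - dy) * (1 - dx) * img[y0, x0] + (1 - dy) * dx * img[y0, x0 + 1]
            + dy * (1 - dx) * img[y0 + 1, x0] + dy * dx * img[y0 + 1, x0 + 1])

def response_map(window, l_b=12, n_d=8):
    """For each pixel of the central 2B x 2B area of a 2A x 2A window
    (B = A - l_b), average (2*l_b + 1) samples along each of n_d equally
    spaced diameters and return the variance of the n_d means."""
    a = window.shape[0] // 2
    b = a - l_b
    resp = np.zeros((2 * b, 2 * b))
    offsets = np.arange(-l_b, l_b + 1)        # 1-pixel spacing along the line
    angles = np.pi * np.arange(n_d) / n_d     # 0 deg points down the rows
    for r in range(2 * b):
        for c in range(2 * b):
            cy, cx = r + l_b, c + l_b         # position in window coordinates
            integrals = [np.mean([bilinear(window,
                                           cy + k * np.cos(th),
                                           cx + k * np.sin(th))
                                  for k in offsets])
                         for th in angles]
            resp[r, c] = np.var(integrals)
    return resp
```

On a synthetic quadrant corner, the variance is largest near the true corner, since only lines through the corner stay entirely inside one color per direction.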
In S30, extracting the accurate position of the corner according to the response diagram includes:
First apply Gaussian blur to the current response map to obtain a blurred response map; fit the central bright spot of the blurred response map to an elliptic paraboloid expressed by a general equation; compute the sub-pixel extremum position of the bright spot from the fitted paraboloid equation; and convert this position from the image coordinate system of the blurred response map to the supersampled image coordinate system, and then to the coordinate system of the checkerboard calibration object image, obtaining the accurate position of the corner corresponding to the response map.
Obtaining a blurred response map after Gaussian blur of the current response map comprises the following steps:
A square Gaussian kernel of size (2 l_G + 1) × (2 l_G + 1) is used to blur the 2B × 2B response map, yielding a blurred response map of size 2C × 2C whose image center coincides with that of the response map. The sizes satisfy B = C + l_G, where B is half the side length of the response map, C is half the side length of the blurred response map, l_G is half the kernel length, B, C, and l_G are positive integers, and the standard deviations of the Gaussian kernel in the two directions are equal, i.e. σ_x = σ_y.
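A minimal sketch of this restricted blur, assuming a plain NumPy convolution rather than any particular library routine; the size relation C = B - l_G keeps the kernel fully inside the response map:

```python
import numpy as np

def blur_central(resp, l_g=4, sigma=9.0):
    """Gaussian-blur only the central 2C x 2C region of a 2B x 2B response
    map (C = B - l_g) so the (2*l_g + 1)^2 kernel never leaves the map."""
    ax = np.arange(-l_g, l_g + 1)
    g = np.exp(-ax**2 / (2.0 * sigma**2))
    kernel = np.outer(g, g)
    kernel /= kernel.sum()                # normalised, sigma_x == sigma_y
    b = resp.shape[0] // 2
    c = b - l_g
    out = np.zeros((2 * c, 2 * c))
    for r in range(2 * c):
        for col in range(2 * c):
            patch = resp[r:r + 2 * l_g + 1, col:col + 2 * l_g + 1]
            out[r, col] = np.sum(patch * kernel)
    return out
```

Because the kernel is normalized, a constant response map passes through unchanged, which is a quick sanity check for the cropping arithmetic.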
Calculating the sub-pixel extremum position of the bright spot according to an elliptic paraboloid equation obtained by fitting, comprising:
Compute the projection of the paraboloid's maximum point onto the image plane from the fitted elliptic paraboloid equation z = a x^2 + b x y + c y^2 + d x + e y + f, and take it as the sub-pixel extremum position of the bright spot, i.e. the accurate corner position in the response map:
P_P^M = ((be - 2cd)/(4ac - b^2), (bd - 2ae)/(4ac - b^2))
where the superscript M denotes the blurred response map and the subscript P denotes the accurate corner position.
The accurate corner position P_P^M in the blurred response map is transformed back into the supersampled image coordinate system to obtain the accurate corner position P_P^S expressed in that coordinate system, where the superscript S denotes the supersampled checkerboard calibration object image;
the accurate corner position P_P^S expressed in the supersampled image coordinate system is then transformed back into the original image coordinate system to obtain the accurate corner position P_P^O in the original image, where the superscript O denotes the original checkerboard calibration object image.
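The fit-and-extremum step can be sketched as below; the function name and the 5 × 5 window handling are illustrative, and x here denotes the column axis and y the row axis of the array. The stationary point of z = a x^2 + b x y + c y^2 + d x + e y + f solves 2ax + by + d = 0 and bx + 2cy + e = 0, giving x = (be - 2cd)/(4ac - b^2) and y = (bd - 2ae)/(4ac - b^2):

```python
import numpy as np

def fit_paraboloid_peak(blurred):
    """Least-squares fit of z = a*x^2 + b*x*y + c*y^2 + d*x + e*y + f over
    a 5x5 window around the brightest pixel; returns the stationary point
    (x, y) = ((b*e - 2*c*d)/(4*a*c - b^2), (b*d - 2*a*e)/(4*a*c - b^2))."""
    iy, ix = np.unravel_index(np.argmax(blurred), blurred.shape)
    xs, ys, zs = [], [], []
    for row in range(iy - 2, iy + 3):          # 5x5 sampling window
        for col in range(ix - 2, ix + 3):
            xs.append(col)
            ys.append(row)
            zs.append(blurred[row, col])
    x, y, z = (np.asarray(v, float) for v in (xs, ys, zs))
    A = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
    a, b, c, d, e, f = np.linalg.lstsq(A, z, rcond=None)[0]
    denom = 4 * a * c - b * b                  # det of Hessian [[2a, b], [b, 2c]]
    return (b * e - 2 * c * d) / denom, (b * d - 2 * a * e) / denom
```

For an exactly quadratic input the least-squares fit is exact, so the recovered peak matches the analytic one to numerical precision.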
S40, repeat S30 to accurately extract the corner positions of the remaining original corner images, thereby obtaining the accurate positions of all corners.
2. A storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the above extraction method of corner points in a checkerboard calibration object image.
The beneficial effects of the invention are as follows:
By achieving high-precision extraction of the corner positions of checkerboard calibration objects, the invention improves the accuracy of the camera calibration process and ultimately the effectiveness of computer vision applications. In the embodiments, a synthetic-image test and a camera calibration experiment compare the errors of OpenCV's image-gradient-based corner position extraction function cornerSubPix with those of the method of the invention, showing that the method of the invention achieves higher accuracy.
Drawings
FIG. 1 is a general flow chart of the method of the present invention;
Fig. 2 is a schematic diagram of the relationship between straight lines through points near a corner and the black and white regions of the checkerboard calibration object, where (a) a straight line through the corner point lies entirely in the black or the white region, and (b) for a straight line through a point away from the corner, one part lies in the black region and the other part in the white region.
Fig. 3 shows how the directional-integration sample points are selected when computing a pixel's response value, taking 8 integration directions with 25 sample points per direction as an example;
FIG. 4 is a schematic diagram of the image window, response map, range of blurred response map and relationship between parameters when extracting the exact position of a single corner;
fig. 5 is a response diagram of a single corner and a blurred response diagram, where (a) is a response diagram of a single corner and (b) is a blurred response diagram.
Fig. 6 is a schematic view of the corner synthesis image in embodiment 1, in which (a) is an initial corner image and (b) is a corner image after the transformation is applied;
FIG. 7 shows the re-projection errors of two calibration experiments in example 2, where (a) is the re-projection error of the first calibration experiment and (b) is the re-projection error of the second calibration experiment.
Detailed Description
The invention provides a method for accurately extracting corner points from a checkerboard calibration object image. To explain the implementation process and effectiveness of the method in detail, two embodiments are described below. Embodiment 1 illustrates how the proposed method extracts the corner positions of a checkerboard calibration object in synthetic images and compares the errors with OpenCV's image-gradient-based corner position extraction function cornerSubPix. Embodiment 2 illustrates how the proposed method extracts the corner positions of a checkerboard calibration object in camera-captured images, uses these positions for camera calibration, and then computes the reprojection errors and compares them with those obtained when using cornerSubPix.
Example 1
In this embodiment, a 24 × 24 synthetic image I_U of a single checkerboard corner is rotated about the corner and scaled with the corner as anchor point to obtain an intermediate synthetic image I_RS, which is then shifted by sub-pixel amounts in the horizontal and vertical directions to obtain a synthetic image I_O resembling a camera-captured checkerboard corner image. The accurate corner position is then extracted with the proposed method. Because the true corner position in the synthetic image is known, the position-extraction error can be computed, verifying the accuracy of the algorithm.
In the following description, each image coordinate system takes the center of the image's upper-left pixel as its origin, with the positive x axis pointing vertically downward and the positive y axis pointing horizontally rightward. The transformations applied to the image are denoted as follows: the counterclockwise rotation angle about the corner point is θ, the scaling factors with the corner as anchor point are s_x along the x direction and s_y along the y direction, and the sub-pixel offset of the image is the vector t.
The synthetic image I_O is generated in two steps. First, the synthetic image I_U is rotated about the corner and scaled with the corner as anchor point to obtain the intermediate synthetic image I_RS, where the superscript RS denotes the rotated and scaled intermediate image. A point P_U in image I_U and its position P_RS after the transformation are related by P_RS = S(s_x, s_y) · Rot(θ) · (P_U - d) + d, where S(s_x, s_y) = diag(s_x, s_y) is the scaling matrix, Rot(θ) is the rotation matrix, and d is the vector from the origin of the I_U coordinate system to the corner point, i.e. the image center. Image I_RS is generated by inverse mapping: each integer pixel position P_RS in I_RS is mapped back to its corresponding sub-pixel position P_U = Rot(-θ) · S(1/s_x, 1/s_y) · (P_RS - d) + d in I_U. For example, when images I_U and I_RS are 24 × 24, the rotation angle is 30°, and the scaling factors are s_x = 1, s_y = 0.5, the pixel value at integer position P_RS in I_RS equals the value at the corresponding sub-pixel position P_U in I_U given by the relation above; the value at P_U is obtained by bilinear interpolation and assigned to I_RS at P_RS. This operation is performed for every pixel of I_RS, generating the intermediate synthetic image I_RS.
Second, image I_RS is shifted by sub-pixel amounts in the x and y directions, each offset in the range 0.00 to 1.00 pixel, to obtain the synthetic image I_O. A position P_O in image I_O and the corresponding position P_RS in image I_RS are related by P_RS = P_O - t, where t is the sub-pixel offset vector. For example, when the x and y offsets are 0.20 and 0.40 pixels respectively, t = (0.20, 0.40). For each integer pixel position P_O in I_O, its corresponding sub-pixel position P_RS in I_RS is computed from this relation, and the gray value at that position, obtained by bilinear interpolation, is assigned to I_O at P_O, generating the synthetic image I_O. The untransformed synthetic image I_U is shown in fig. 6 (a), and the synthetic image I_O obtained by rotating I_U about the corner, scaling with the corner as anchor point, and sub-pixel shifting is shown in fig. 6 (b).
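The two-step generation of I_O (inverse mapping with bilinear interpolation) can be sketched in one pass as below; combining rotation, scaling, and the sub-pixel shift into a single inverse map is an implementation convenience, and clamping out-of-range samples is a simplifying assumption not taken from the patent:

```python
import numpy as np

def synth_transform(i_u, theta_deg, sx, sy, t):
    """Rotate i_u by theta about the corner (image centre), scale about it
    by (sx, sy), then shift by the sub-pixel vector t, all via one inverse
    mapping with bilinear interpolation; out-of-range samples are clamped."""
    h, w = i_u.shape
    d = np.array([(h - 1) / 2.0, (w - 1) / 2.0])   # corner = image centre
    th = np.deg2rad(theta_deg)
    rot = np.array([[np.cos(th), -np.sin(th)],
                    [np.sin(th), np.cos(th)]])
    inv = np.linalg.inv(np.diag([sx, sy]) @ rot)   # P_U = inv (P_RS - d) + d
    out = np.zeros_like(i_u, dtype=float)
    for r in range(h):
        for c in range(w):
            p_rs = np.array([r, c], float) - np.asarray(t, float)  # P_RS = P_O - t
            p_u = inv @ (p_rs - d) + d
            y = min(max(p_u[0], 0.0), h - 1.0)
            x = min(max(p_u[1], 0.0), w - 1.0)
            y0 = min(int(y), h - 2)
            x0 = min(int(x), w - 2)
            dy, dx = y - y0, x - x0
            out[r, c] = ((1 - dy) * (1 - dx) * i_u[y0, x0]
                         + (1 - dy) * dx * i_u[y0, x0 + 1]
                         + dy * (1 - dx) * i_u[y0 + 1, x0]
                         + dy * dx * i_u[y0 + 1, x0 + 1])
    return out
```

With the identity parameters (θ = 0, s_x = s_y = 1, t = 0) the input image is reproduced exactly, which checks the inverse-mapping arithmetic.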
As shown in fig. 1, the exact positions of the corner points in the composite image are extracted by:
S10, acquiring rough estimated positions of all angular points from the checkerboard calibration object image;
Since the offset of the corner in the synthetic image used in this embodiment relative to the image center is known and does not exceed 1 pixel in the x and y directions, the original corner position, i.e. the center of the synthetic image, is taken directly as the rough estimated corner position P_R^O, where the subscript R denotes the rough position. For this embodiment, P_R^O = (11.5, 11.5).
S21, image supersampling. Taking the supersampling factor n = 2, the synthetic image I_O is supersampled using bilinear interpolation as the image interpolation method; the supersampled synthetic image is denoted I_S and has size 48 × 48.
S22, transform the rough corner position coordinates into the supersampled coordinate system. The rough corner position in synthetic image I_O lies at the image center; correspondingly, the rough corner position in the supersampled synthetic image I_S also lies at its image center, so P_R^S = (23.5, 23.5). The supersampled rough position can also be computed with the formula P_R^S = n · P_R^O + (n - 1)/2; substituting n = 2 and P_R^O = (11.5, 11.5) gives the same result.
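The coordinate transforms between the original and supersampled grids used in S22 (and inverted again in S34) follow directly from the formula above; these helper names are illustrative:

```python
def to_supersampled(p, n=2):
    """Original-image coordinate to n-times supersampled coordinate,
    pixel-centre origin: P_S = n * P_O + (n - 1) / 2."""
    return n * p + (n - 1) / 2.0

def to_original(p, n=2):
    """Inverse mapping: P_O = (P_S - (n - 1) / 2) / n."""
    return (p - (n - 1) / 2.0) / n
```

For n = 2 the original center (11.5, 11.5) maps to (23.5, 23.5), matching the values used in this embodiment.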
S23, crop the original corner image at each corner using the preset image window. The rough corner position in the supersampled image I_S is P_R^S = (23.5, 23.5), and the nearest whole-pixel position to its lower right is P_BR^S = (24, 24), where the subscript BR denotes the lower-right corner. Since image I_S is only 48 × 48, the whole of I_S is used as the image window for subsequent processing: the window size is 2A × 2A = 48 × 48, the half side length of the window is A = 48/2 = 24, and the x and y ranges of the window are both [P_BR^S - A, P_BR^S + A - 1] = [0, 47].
S24, compute the response values of the central area of the window and generate the response map, as shown in fig. 4. The half side length of the image window is A = 48/2 = 24. The response value of a pixel position is computed from equally spaced line segments in 8 directions centered at that position, with the number of integration directions n_D = 8. Taking vertically downward as 0° and horizontally rightward as 90°, the integration directions are θ_i = i · 180°/n_D for i = 0, 1, ..., 7, i.e. 0°, 22.5°, ..., 157.5°, and each line segment is 2 l_B + 1 = 25 pixels long, i.e. l_B = 12. As shown in fig. 3, line integration is performed along the 8 segments: 25 sample points are taken on each segment, the central sample point is the pixel position itself, l_B = 12 sample points lie on each side of it, and adjacent sample points are 1 pixel apart. The mean of the pixel values at all sample points on a segment is taken as the line-integral value for that direction; if a sample point lies at a sub-pixel position, its value is obtained by bilinear interpolation. The variance of the line integrals over all directions is taken as the response value of the pixel position. To keep the integration range inside the image window, response values are computed only for the square central area of the window, whose half side length is B = A - l_B = 12. The resulting response map has size 2B × 2B = 24 × 24.
As shown in fig. 2 (a), due to the regular pattern of the checkerboard calibration object, a straight line passing through the corner point always lies entirely within a white or a black region, so the variance of its line integrals is large; for a straight line through a point beside the corner, part of the line lies in a white region and part in a black region, as shown in fig. 2 (b), so the variance of its line integrals is small. The response value is therefore largest at the accurate corner position, and the local maximum of the response map can be taken as the accurate corner position.
S31, apply Gaussian blur to the central area of the response map. The central area is smoothed with a Gaussian kernel of size (2 l_G + 1) × (2 l_G + 1) = 9 × 9 and standard deviations σ_x = σ_y = 9 in the x and y directions, where the subscript G denotes Gaussian blur. The half length of the Gaussian kernel is l_G = (9 - 1)/2 = 4. To keep the kernel inside the response map, only the central square area of the response map is blurred; its half side length is C = B - l_G = 8. The response map of the single corner and the blurred response map are shown in fig. 5 (a) and 5 (b), respectively.
S32, fit the area near the corner in the blurred response map to an elliptic paraboloid. A 5 × 5 sampling window is taken centered at the pixel with the largest value in the blurred response map, and the bright spot is fitted to the elliptic paraboloid represented by the general equation z = ax^2 + bxy + cy^2 + dx + ey + f, using the pixel positions (x, y coordinates, expressed in the blurred-response-map coordinate system) and pixel values (z coordinates); the parameters a, b, c, d, e, and f in the equation are estimated by least squares. For example, when the rotation angle is 30°, the scaling factors in the x and y directions are s_x = 0.5, s_y = 1, and the x and y offsets are 0.20 and 0.40 pixels respectively, the coordinates and pixel values of the pixels in the sampling window are shown in table 1:
table 1 shows the coordinates and pixel values of the pixels in the sampling window
The elliptic paraboloid parameters obtained by least square fitting are shown in table 2:
Table 2 shows elliptic paraboloid parameters
| a | b | c | d | e | f |
| -60.09 | 15.20 | -38.84 | 821.21 | 524.19 | -2446.89 |
S33, compute the accurate corner position from the elliptic paraboloid parameters. Once the paraboloid equation parameters are obtained, the projection of the maximum point onto the image plane can be computed as:
P_P^M = ((be - 2cd)/(4ac - b^2), (bd - 2ae)/(4ac - b^2))
where the superscript M denotes the image coordinate system of the blurred response map and the subscript P denotes the accurate corner position. This point is the accurate corner position, estimated by fitting the elliptic paraboloid, expressed in the image coordinate system of the blurred response map.
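Plugging the Table 2 parameters into the stationary-point formula gives the peak of the fitted paraboloid; this worked example only assumes the closed-form extremum of the general quadratic:

```python
# Elliptic-paraboloid parameters from Table 2
a, b, c, d, e, f = -60.09, 15.20, -38.84, 821.21, 524.19, -2446.89

denom = 4 * a * c - b * b           # det of Hessian [[2a, b], [b, 2c]]
x_p = (b * e - 2 * c * d) / denom   # stationary point of the quadratic
y_p = (b * d - 2 * a * e) / denom
print(round(x_p, 2), round(y_p, 2))  # prints "7.88 8.29"
```

With P_BR^S = (24, 24) and C = 8 as in this embodiment, adding P_BR^S - C = 16 to each coordinate gives roughly (23.88, 24.29) in the supersampled image, and applying (P_S - 0.5)/2 gives roughly (11.69, 11.90) in the original image, consistent with the 0.20 and 0.40 pixel offsets applied about the center (11.5, 11.5).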
P_P^M is transformed back into the supersampled image coordinate system to obtain the accurate corner position P_P^S expressed in that coordinate system; since the upper-left sample of the blurred response map lies at P_BR^S - (C, C) in the supersampled image, the transformation relation is P_P^S = P_P^M + P_BR^S - (C, C).
S34, convert the accurate corner position coordinates back to the original image coordinate system, according to the equation P_P^O = (P_P^S - (n - 1)/2)/n.
Transforming the corner position from the supersampled image coordinate system back to the coordinate system of the synthetic image given as algorithm input yields the accurate corner position P_P^O in synthetic image I_O. Since the x and y offsets used when generating I_O are 0.20 and 0.40 pixels respectively and the original corner coordinates are (11.5, 11.5), the true corner position in I_O is P_GT^O = (11.7, 11.9), where the subscript GT denotes the ground truth. The corner-position extraction error (in pixels) is then the distance between P_P^O and P_GT^O.
To verify the validity of the algorithm, synthetic images I_O are generated from combinations of parameters, so that the true sub-pixel corner positions are known. OpenCV's image-gradient-based method and the proposed method are each used to extract the accurate corner positions from these images, and their errors are compared.
To simulate the corners in camera images of a checkerboard calibration object, and considering symmetry, the synthetic corner images I_O are generated with the following parameters: rotation angles from 0° to 90° in steps of 5°, a scaling factor of 1 in the x direction and 0.5 in the y direction, corner offsets from 0.00 to 0.50 pixel in x and from 0.00 to 1.00 pixel in y, both in steps of 0.01 pixel. This gives 19 × 51 × 101 = 97869 parameter combinations in total, i.e. 97869 synthetic images are used. Following the steps above, the errors of the proposed method and of OpenCV's corner position extraction are computed and compared, as shown in table 3:
Table 3 shows the error results of the methods of the present invention and the OpenCV method
As can be seen from table 3, both the maximum error and the average error of the corner positions obtained by the method of the invention are smaller than those of OpenCV's image-gradient-based corner position extraction function cornerSubPix.
Example 2
In this embodiment, a camera captures photographs of a physical checkerboard calibration object. The rough corner positions in the images are obtained with OpenCV's corner detection, the accurate corner positions (image points) are then extracted with the proposed method, and the camera is calibrated using the known positions of the corners in the world coordinate system (object points), yielding the camera's intrinsic, extrinsic, and distortion parameters. With these parameters, the object points can be reprojected into the image coordinate system; the difference between a reprojected point and the previously extracted image point is called the reprojection error, and it reflects the accuracy of the corner extraction result. In this embodiment, the accuracy of the proposed method and of OpenCV's image-gradient-based corner position extraction function cornerSubPix is compared via the reprojection error.
In order to obtain a more accurate camera calibration result, the calibration object should fill the image as much as possible when the physical images of the checkerboard calibration object are captured. While a larger area of an individual image may be left uncovered by the calibration pattern, the calibration pattern should, over all images taken together, cover the entire image area, especially the edges and corners of the image. The pose of the calibration object should vary sufficiently between images, but extreme shooting angles and large changes in the distance between the calibration plate and the camera should be avoided.
In this embodiment, a fixed-focus network camera is used to capture images of a checkerboard calibration object with 10×7 squares; the image size is 1280×720.
The meaning of the subscripts of the variables in the calculation process of this embodiment is similar to that of embodiment 1.
As shown in Fig. 1, the accurate position of a corner point in the camera image is extracted by the following steps:
S10, obtaining the rough position of the corner point. An image coordinate system C_O is established in the original calibration object image I_O. The checkerboard pattern in the original image I_O is detected with the findChessboardCorners function of OpenCV, which yields the rough positions of the corner points in the image. Each calibration object image I_O contains a plurality of corner points; for convenience of description, one corner point is taken as an example hereinafter, since apart from the specific coordinates involved, the method used to extract the position of every corner point is the same. The rough position of the corner point in the original image coordinate system is denoted (x_O, y_O).
S21, image supersampling. The supersampling factor is taken as n = 2, and the original calibration object image I_O is supersampled using bilinear interpolation as the image interpolation mode.
S22, transforming the rough corner position coordinates into the supersampled coordinate system. The coordinates (x_S, y_S) of the rough corner position in the supersampled image are related to its coordinates (x_O, y_O) in the original calibration object image by: x_S = n·x_O, y_S = n·y_O.
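As an illustrative sketch of S21 and S22 (the function names and the simple x_S = n·x_O coordinate convention are assumptions chosen for this example; the exact interpolation grid used in practice may differ), the bilinear supersampling and the coordinate conversion may be written as:

```python
import numpy as np

def supersample_bilinear(img, n=2):
    """Upsample a grayscale image by factor n with bilinear interpolation.

    The sample grid is chosen so that supersampled pixel (n*x, n*y) falls
    exactly on original pixel (x, y), matching x_S = n * x_O."""
    h, w = img.shape
    ys = np.minimum(np.arange(n * h) / n, h - 1.0)  # positions in I_O coords
    xs = np.minimum(np.arange(n * w) / n, w - 1.0)
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 2)
    dy = (ys - y0)[:, None]
    dx = (xs - x0)[None, :]
    tl = img[np.ix_(y0, x0)]      # four neighbours of each sample point
    tr = img[np.ix_(y0, x0 + 1)]
    bl = img[np.ix_(y0 + 1, x0)]
    br = img[np.ix_(y0 + 1, x0 + 1)]
    return (1 - dy) * ((1 - dx) * tl + dx * tr) + dy * ((1 - dx) * bl + dx * br)

def to_supersampled(x_o, y_o, n=2):
    """Map a coordinate from the original image I_O to the supersampled I_S."""
    return n * x_o, n * y_o
```

With this grid, integer pixel (x_O, y_O) of I_O lands exactly at pixel (n·x_O, n·y_O) of I_S, which keeps the coordinate conversion exact at original pixel centers.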
S23, cropping an image window near the rough corner position. Let (x_B, y_B) be the integer pixel coordinate in the image I_S nearest to (x_S, y_S) on its lower right. The window size is taken as 48×48 pixels, i.e., the half side length of the window is a = 24, and the x and y ranges of the window are [x_B − a, x_B + a − 1] and [y_B − a, y_B + a − 1].
S24, calculating response values in the central area of the window and generating a response map. The response value of a pixel is calculated as follows. Line integrals are computed along n_D = 8 line segments with equal angular spacing centered on the pixel; taking the vertically downward direction as 0° and the horizontally rightward direction as 90°, the integration directions are k·180°/n_D for k = 0, 1, …, n_D − 1, i.e., 0°, 22.5°, …, 157.5°. Each line segment is 25 pixels long, so the half-length of the line integral is l_B = (25 − 1)/2 = 12. The line integral is computed along each of the 8 segments: 25 sampling points spaced 1 pixel apart are taken on a segment, with the central sampling point at the pixel center and 12 sampling points on either side of it. The mean of the pixel values at all sampling points on one segment is taken as the line integral value; if a sampling point lies at a sub-pixel position, its value is obtained by bilinear interpolation. The variance of the line integral results is taken as the response value of the pixel. To keep the integration range inside the image window, response values are only calculated for the pixels of the square area at the center of the window, whose half side length is b = a − l_B = 12. The size of the generated response map is therefore 2b×2b = 24×24.
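A minimal NumPy sketch of the response value computation of S24 (the function names are illustrative; the parameters follow the description above: n_D = 8 directions, 25 samples per segment, bilinear sub-pixel sampling, variance of the directional means):

```python
import numpy as np

def bilinear(img, x, y):
    """Sample a grayscale image at sub-pixel position (x, y)."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * img[y0, x0] + dx * (1 - dy) * img[y0, x0 + 1]
            + (1 - dx) * dy * img[y0 + 1, x0] + dx * dy * img[y0 + 1, x0 + 1])

def corner_response(img, cx, cy, n_d=8, l_b=12):
    """Variance of the n_d directional line-integral means centred on (cx, cy).

    0 deg points vertically downward and 90 deg horizontally rightward,
    as in S24; each segment carries 2*l_b + 1 = 25 samples, 1 pixel apart."""
    integrals = []
    for k in range(n_d):
        theta = k * np.pi / n_d                  # 0, 22.5, ..., 157.5 degrees
        dx, dy = np.sin(theta), np.cos(theta)
        samples = [bilinear(img, cx + t * dx, cy + t * dy)
                   for t in range(-l_b, l_b + 1)]
        integrals.append(np.mean(samples))
    return float(np.var(integrals))
```

At a checkerboard corner the 8 directional means differ strongly (segments along the edges straddle black and white), so the variance is large; on a flat patch all means coincide and the response is essentially zero.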
S31, Gaussian blurring of the central area of the response map. The central area of the response map is smoothed with a Gaussian kernel of size (2l_G + 1)×(2l_G + 1) = 9×9 with standard deviation σ_x = σ_y = 9 in the x and y directions; the half-length of the Gaussian kernel is l_G = 4. To keep the Gaussian kernel inside the response map, only the central square area of the response map is blurred, with half side length c = b − l_G = 8.
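Restricting the blur to the central square area amounts to a "valid" convolution; a plain NumPy sketch of S31 (function names illustrative, kernel sizes as described):

```python
import numpy as np

def gaussian_kernel(l_g, sigma):
    """Normalized (2*l_g+1) x (2*l_g+1) Gaussian kernel."""
    ax = np.arange(-l_g, l_g + 1)
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def blur_center(resp, l_g=4, sigma=9.0):
    """'Valid' Gaussian blur: only pixels whose kernel fits inside resp.

    A 2b x 2b response map yields a 2c x 2c blurred map with c = b - l_g."""
    k = gaussian_kernel(l_g, sigma)
    h, w = resp.shape
    out = np.empty((h - 2 * l_g, w - 2 * l_g))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(resp[y:y + 2*l_g + 1, x:x + 2*l_g + 1] * k)
    return out
```

For the sizes of this embodiment (b = 12, l_G = 4) the 24×24 response map shrinks to the 16×16 blurred central area, i.e. 2c = 16.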
S32, fitting the area near the corner in the response map to an elliptic paraboloid. An image coordinate system C_M is established in the blurred response map. A 7×7 sampling window centered on the pixel with the largest value in the blurred response map is taken, the bright spot is fitted to an elliptic paraboloid expressed by the general equation z = ax² + bxy + cy² + dx + ey + f according to the positions of the pixels (x, y coordinates, expressed in C_M) and their pixel values (z coordinate), and the parameters a, b, c, d, e and f of the equation are estimated by the least squares method.
S33, calculating the accurate corner position from the elliptic paraboloid parameters. Once the parameters of the elliptic paraboloid equation are obtained, the projection of its maximum point onto the image plane is found by setting ∂z/∂x = ∂z/∂y = 0, which gives: x_M = (be − 2cd)/(4ac − b²), y_M = (bd − 2ae)/(4ac − b²).
This point, estimated by the elliptic paraboloid fit, is the local maximum position of the response map expressed in the coordinate system of the blurred response map, and is taken as the accurate position of the corner point in that coordinate system.
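S32 and S33 can be sketched as a least-squares fit followed by the closed-form stationary point obtained from ∂z/∂x = ∂z/∂y = 0 (NumPy; function names illustrative):

```python
import numpy as np

def fit_paraboloid(patch, x0, y0):
    """Least-squares fit of z = a x^2 + b x y + c y^2 + d x + e y + f over a
    (2k+1) x (2k+1) patch whose central pixel sits at integer (x0, y0)."""
    k = patch.shape[0] // 2
    ys, xs = np.mgrid[-k:k + 1, -k:k + 1]
    xs = (xs + x0).ravel().astype(float)
    ys = (ys + y0).ravel().astype(float)
    A = np.column_stack([xs**2, xs * ys, ys**2, xs, ys, np.ones_like(xs)])
    coeffs, *_ = np.linalg.lstsq(A, patch.ravel(), rcond=None)
    return coeffs

def paraboloid_peak(coeffs):
    """Stationary point of the fitted surface (gradient of z set to zero)."""
    a, b, c, d, e, f = coeffs
    den = 4 * a * c - b * b
    return (b * e - 2 * c * d) / den, (b * d - 2 * a * e) / den
```

For a downward-opening paraboloid (a < 0, c < 0, 4ac − b² > 0) the stationary point is the maximum, i.e. the sub-pixel position of the bright spot.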
S34, converting the accurate corner position coordinates back to the original image coordinate system. First, the accurate corner position in the coordinate system of the blurred response map is converted into the coordinate system C_S of the image I_S and denoted (x_S*, y_S*); the conversion relation is: x_S* = x_M + x_B − c, y_S* = y_M + y_B − c.
It is then converted into the original image coordinate system C_O and denoted (x_O*, y_O*); the conversion relation is: x_O* = x_S*/n, y_O* = y_S*/n.
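A tiny sketch of the back-conversion chain of S34 (the origin convention of the blurred response map, placed at (x_B − c, y_B − c) in I_S, and the division by n are assumptions chosen to be consistent with the relations used in this example):

```python
def to_original(x_m, y_m, x_b, y_b, c=8, n=2):
    """Map a sub-pixel peak from the blurred response map back to I_O.

    x_m, y_m : peak position in the blurred response map coordinate system C_M
    x_b, y_b : integer anchor pixel of the cropped window in I_S
    c        : half side length of the blurred central area
    n        : supersampling factor"""
    x_s, y_s = x_m + x_b - c, y_m + y_b - c   # C_M -> C_S
    return x_s / n, y_s / n                    # C_S -> C_O
```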
Through the above steps, the accurate positions (image points) of the corner points can be extracted from a group of calibration plate images taken by the camera, and camera calibration can be performed by combining the known world coordinates (object points) of the corner points on the calibration plate to obtain the intrinsic, extrinsic and distortion parameters of the camera. The parameters involved in modeling the camera distortion include 6 radial distortion parameters k_1, …, k_6, 2 tangential distortion parameters p_1, p_2 and 4 thin prism parameters s_1, …, s_4. Using these parameters, the object points can be re-projected into the image plane; the difference between a re-projected point and the position of the previously extracted image point is called the re-projection error, which reflects the accuracy of the corner extraction result. In this embodiment, the accuracy of the present method and of the image-gradient-based corner position extraction function cornerSubPix of OpenCV is compared by calculating the re-projection error.
Two groups of camera calibration experiments are performed with two groups of calibration plate images taken by the camera. The rough corner positions in the images are obtained with the findChessboardCorners function of OpenCV, and the accurate corner positions are obtained with the method described in this embodiment and with the image-gradient-based corner position extraction function cornerSubPix of OpenCV, respectively. After calibration, the sum of the corner re-projection errors in each image of each calibration experiment is calculated. Since the accuracy of the cornerSubPix function depends on its search window size, this parameter was adjusted so as to minimize the total re-projection error. The re-projection errors of the images in the two calibration experiments are shown in Fig. 7(a) and Fig. 7(b). When the method of this embodiment is used, the sum of the re-projection errors of each image in each calibration experiment is smaller than the result obtained with the image-gradient-based corner position extraction function cornerSubPix of OpenCV, so the total re-projection error of each calibration experiment is smaller, which shows that the corner extraction method of this embodiment has higher accuracy.
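For reference, the re-projection error itself is straightforward to compute once the calibration parameters are known; the following is a distortion-free pinhole sketch (NumPy; a real comparison would use the full distortion model, e.g. via cv2.projectPoints):

```python
import numpy as np

def reproject(obj_pts, K, R, t):
    """Project Nx3 world points through a pinhole model (no distortion).

    K is the 3x3 intrinsic matrix (zero skew assumed here), R and t the
    rotation matrix and translation vector of the extrinsic parameters."""
    cam = obj_pts @ R.T + t            # world frame -> camera frame
    uv = cam[:, :2] / cam[:, 2:3]      # perspective division
    return uv @ K[:2, :2].T + K[:2, 2]  # apply focal lengths and principal point

def rms_reprojection_error(img_pts, obj_pts, K, R, t):
    """RMS distance between detected image points and re-projected points."""
    d = reproject(obj_pts, K, R, t) - img_pts
    return float(np.sqrt(np.mean(np.sum(d * d, axis=1))))
```

A smaller RMS value indicates that the extracted image points agree better with the calibrated camera model, which is the criterion used in the comparison above.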
Finally, it should be noted that the above embodiments and descriptions only illustrate the technical solution of the present invention and are not limiting. Those skilled in the art will understand that various modifications and equivalent substitutions may be made without departing from the spirit and scope of the present invention as defined by the appended claims.
Claims (6)
1. A method for extracting corner points in a checkerboard calibration object image, characterized by comprising the following steps:
S10, acquiring the rough estimated positions of all corner points from the checkerboard calibration object image;
S20, supersampling the checkerboard calibration object image to obtain a supersampled image, converting the rough estimated corner positions obtained in S10 into the image coordinate system of the supersampled image, and then cropping with a preset image window, thereby obtaining an original corner point image corresponding to each corner point;
S30, generating a response map corresponding to each original corner point image, and extracting the accurate position of the corner point according to the response map;
In S30, generating a response map corresponding to each original corner image includes:
for each pixel position in the central area of the original corner point image, calculating the response value of that pixel position from its multi-directional line integral results, and traversing all pixel positions in the central area to calculate the response value of each, thereby generating a response map;
The calculating the response value of each pixel position according to the multi-directional line integral result of the pixel position comprises the following steps:
forming an integration circle of radius l_B centered on each pixel position, selecting n_D diameters that divide the integration circle equally, taking the direction of each diameter as one integration direction, performing line integration over the sampling points on each diameter to obtain the line integral result in that direction, traversing all diameters to obtain the multi-directional line integral results, and taking the variance of the multi-directional line integral results as the response value of the current pixel position;
In S30, extracting the accurate position of the corner according to the response diagram includes:
first performing Gaussian blur on the current response map to obtain a blurred response map, fitting the central bright spot of the blurred response map to an elliptic paraboloid expressed by a general equation, calculating the sub-pixel extremum position of the bright spot according to the fitted elliptic paraboloid equation, converting the sub-pixel extremum position of the bright spot from the image coordinate system of the blurred response map to the image coordinate system of the supersampled image, and then converting it into the image coordinate system of the checkerboard calibration object image, thereby obtaining the accurate position of the corner point corresponding to the response map;
The calculating the sub-pixel extremum position of the bright spot according to the elliptic paraboloid equation obtained by fitting comprises the following steps:
calculating the projection point of the paraboloid maximum point in the image plane of the blurred response map according to the fitted elliptic paraboloid equation, and taking it as the sub-pixel extremum position of the bright spot;
S40, repeating step S30 to extract the corner positions of the remaining original corner point images, thereby obtaining the accurate positions of all corner points.
2. The method for extracting corner points in a checkerboard calibration object image according to claim 1, characterized in that in S20, the preset image window is a square image window, and the distance between the window center and the rough corner position is no more than 0.5 pixel in each of the x and y directions.
3. The method for extracting corner points in a checkerboard calibration object image according to claim 1, characterized in that in S30, the sizes of the original corner point image (2a×2a) and of the response map (2b×2b) and the line integral scale satisfy a = b + l_B and 2l_B + 1 = L, where a is half the side length of the original corner point image, b is half the side length of the response map, l_B is the half-length of the line integral, and L is the total length of the line integral.
4. The method for extracting corner points in a checkerboard calibration object image according to claim 1, characterized in that performing Gaussian blur on the current response map to obtain a blurred response map comprises:
performing Gaussian blur on a response map of size 2b×2b with a square Gaussian kernel of size (2l_G+1)×(2l_G+1) to obtain a blurred response map of size 2c×2c whose image center coincides with that of the response map, the sizes satisfying b = c + l_G, where b is half the side length of the response map, c is half the side length of the blurred response map, and l_G is the half-length of the Gaussian kernel.
5. The method for extracting corner points in a checkerboard calibration object image according to claim 1, characterized in that in S20, the supersampling is implemented by interpolation.
6. A storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the method for extracting corner points in a checkerboard calibration object image according to any one of claims 1 to 5.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202411282575.7A CN119359818B (en) | 2024-09-13 | 2024-09-13 | Extraction method of corner points in checkerboard calibration object image |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN119359818A CN119359818A (en) | 2025-01-24 |
| CN119359818B true CN119359818B (en) | 2025-10-17 |
Family
ID=94305200
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202411282575.7A Active CN119359818B (en) | 2024-09-13 | 2024-09-13 | Extraction method of corner points in checkerboard calibration object image |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN119359818B (en) |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN103927750A (en) * | 2014-04-18 | 2014-07-16 | 上海理工大学 | Detection method of checkboard grid image angular point sub pixel |
| CN109801300A (en) * | 2017-11-16 | 2019-05-24 | 北京百度网讯科技有限公司 | Coordinate extraction method, device, equipment and the computer readable storage medium of X-comers |
Family Cites Families (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108876749A (en) * | 2018-07-02 | 2018-11-23 | 南京汇川工业视觉技术开发有限公司 | A kind of lens distortion calibration method of robust |
| CN114445499B (en) * | 2020-10-19 | 2025-11-04 | 深圳市光鉴科技有限公司 | Automatic extraction method, system, equipment and media for chessboard corner points |
- 2024-09-13: CN application CN202411282575.7A, patent CN119359818B/en, status Active
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN111260731B (en) | Self-adaptive detection method for checkerboard sub-pixel level corner points | |
| CN113160339B (en) | Projector calibration method based on Molaque law | |
| CN113920205B (en) | Calibration method of non-coaxial camera | |
| CN112465912B (en) | Stereo camera calibration method and device | |
| CN110211043B (en) | Registration method based on grid optimization for panoramic image stitching | |
| CN109272574B (en) | Construction method and calibration method of linear array rotary scanning camera imaging model based on projection transformation | |
| CN105096317B (en) | A kind of high-performance camera full automatic calibration method in complex background | |
| CN111968182B (en) | A calibration method for nonlinear model parameters of binocular camera | |
| CN110738273B (en) | Image feature point matching method, device, equipment and storage medium | |
| CN108288294A (en) | A kind of outer ginseng scaling method of a 3D phases group of planes | |
| CN106887023A (en) | For scaling board and its scaling method and calibration system that binocular camera is demarcated | |
| CN105389808A (en) | Camera self-calibration method based on two vanishing points | |
| CN104751458B (en) | A kind of demarcation angular-point detection method based on 180 ° of rotation operators | |
| CN109035170A (en) | Adaptive wide-angle image correction method and device based on single grid chart subsection compression | |
| CN106952262B (en) | Ship plate machining precision analysis method based on stereoscopic vision | |
| CN117576219B (en) | Camera calibration device and calibration method for single-shot image captured by wide-angle fisheye lens | |
| CN109285192A (en) | The binocular camera shooting scaling method of holophotal system | |
| CN107133986A (en) | A kind of camera calibration method based on two-dimensional calibrations thing | |
| CN110458951B (en) | Modeling data acquisition method and related device for power grid pole tower | |
| CN113344795A (en) | Rapid image splicing method based on prior information | |
| CN111126418A (en) | An Oblique Image Matching Method Based on Plane Perspective Projection | |
| CN113706635B (en) | Long-focus camera calibration method based on point feature and line feature fusion | |
| CN118674789A (en) | High-robustness high-precision camera calibration plate and corner detection method | |
| CN120259316B (en) | Method, device, equipment and storage medium for detecting center of calibration plate | |
| CN116051643A (en) | A method and system for synthesizing 3D gluing trajectory based on contours in a multi-coordinate system |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||