
CN117036475A - Point cloud construction method and system, equipment and storage media based on binocular matching - Google Patents


Info

Publication number
CN117036475A
CN117036475A (application number CN202310993745.1A)
Authority
CN
China
Prior art keywords
image
pixel
point
coordinate
sub
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310993745.1A
Other languages
Chinese (zh)
Inventor
李正罡
张旭堂
于波
张华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Geling Jingrui Vision Co ltd
Original Assignee
Shenzhen Geling Jingrui Vision Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Geling Jingrui Vision Co ltd filed Critical Shenzhen Geling Jingrui Vision Co ltd
Priority to CN202310993745.1A
Publication of CN117036475A


Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75Determining position or orientation of objects or cameras using feature-based methods involving models
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30244Camera pose

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)

Abstract


This application provides a point cloud construction method and system, a device, and a storage medium based on binocular matching, belonging to the field of three-dimensional measurement technology. The method includes: acquiring a first image; performing image filtering on the first image to obtain a second image; performing mask map calculation on the second image to obtain a third image; for each pixel point in the left image of the third image, reconstructing initial three-dimensional coordinates from the first pixel point coordinates; projecting the initial three-dimensional coordinates onto the right image of the third image to obtain second pixel point coordinates; determining a first epipolar line within a preset neighborhood range of the second pixel point coordinates and taking all sub-pixel points on the first epipolar line as matching candidate points; performing coordinate prediction according to a matching strategy to obtain the sub-pixel-level two-dimensional coordinates of the pixel point in the right image; obtaining the target three-dimensional coordinates of the pixel point; and obtaining target point cloud data. This application can improve the accuracy of point cloud construction and thereby obtain a high-precision point cloud.

Description

Point cloud construction method, system, equipment and storage medium based on binocular matching
Technical Field
The present application relates to the field of three-dimensional measurement technologies, and in particular, to a method and a system for constructing a point cloud based on binocular matching, an electronic device, and a storage medium.
Background
With the development of science, technology and industry, the application of three-dimensional measurement technology in the aspects of automatic production, quality control, robot vision, reverse engineering, intelligent manufacturing, biomedical engineering and the like is increasingly important. The rapid acquisition of the high-precision three-dimensional point cloud has important significance for improving the efficiency of manufacturing and processing.
In the related art, point clouds are mostly constructed with a monocular vision device using phase profilometry, and point clouds built this way are often of low precision.
In addition, in the related art, point cloud construction is also performed with a binocular vision camera by feature-point matching, and this approach often yields point clouds of low accuracy because the matching accuracy itself is low.
Disclosure of Invention
The embodiment of the application mainly aims to provide a point cloud construction method and system based on binocular matching, electronic equipment and a storage medium, aiming at improving the accuracy of point cloud construction and further obtaining high-precision point cloud.
In order to achieve the above object, a first aspect of an embodiment of the present application provides a method for constructing a point cloud based on binocular matching, where the method includes:
acquiring a first image, wherein the first image comprises a left image and a right image, and the left image corresponds one-to-one to the right image;
performing image filtering on the first image to obtain a second image, wherein the second image comprises a left image of the second image and a right image of the second image;
performing mask map calculation on the second image based on a preset mask calculation mode to obtain a third image, wherein the third image comprises a left image of the third image and a right image of the third image;
for each pixel point in the left image of the third image, reconstructing the initial three-dimensional coordinates of the pixel point according to the first pixel point coordinates of the pixel point on the left image of the third image;
for each pixel point in the left image of the third image, projecting the initial three-dimensional coordinates of the pixel point onto the right image of the third image to obtain the second pixel point coordinates of the pixel point on the right image of the third image;
for each pixel point in the left image of the third image, determining a first epipolar line within a preset neighborhood range based on the second pixel point coordinates, calculating the unwrapped phase values of all sub-pixel points on the first epipolar line, and taking all the sub-pixel points as matching candidate points;
performing coordinate prediction on the pixel points of the left image of the third image according to a preset matching strategy to obtain the sub-pixel-level two-dimensional coordinates of the pixel points in the right image;
for each pixel point in the left image of the third image, obtaining the target three-dimensional coordinates of the pixel point according to pre-acquired calibration parameters, the first pixel point coordinates, and the sub-pixel-level two-dimensional coordinates;
and obtaining target point cloud data based on the target three-dimensional coordinates of all the pixel points.
In some embodiments, the image filtering the first image to obtain a second image includes:
performing Gaussian filtering on the first image to obtain a first filtered image;
and performing guided filtering on the first filtering image to obtain the second image.
In some embodiments, the mask calculation mode includes a gray threshold calculation mode and an edge calculation mode, and the performing mask map calculation on the second image based on the preset mask calculation mode to obtain a third image includes:
acquiring the measured object characteristics and background characteristics of the first image;
selecting the gray threshold calculation mode or the edge calculation mode to perform mask map calculation on the second image based on the measured object features and the background features to obtain a mask image;
And taking the mask image as the third image.
In some embodiments, the measured object feature includes a first reflected light intensity and a first surface flatness of the measured object of the first image, the background feature includes a second reflected light intensity and a second surface flatness of an image background of the first image, and selecting the gray threshold calculation mode or the edge calculation mode to perform mask image calculation on the second image based on the measured object feature and the background feature, where obtaining the mask image includes:
if the difference value between the first reflected light intensity and the second reflected light intensity is larger than a preset first threshold value, selecting the gray threshold value calculation mode to perform mask map calculation on the second image, and obtaining the mask image;
and if the difference value between the first surface flatness and the second surface flatness is smaller than a preset second threshold value, selecting the edge calculation mode to perform mask map calculation on the second image to obtain the mask image.
In some embodiments, for each pixel in the left image of the third image, projecting the initial three-dimensional coordinates of the pixel onto the right image of the third image to obtain second pixel coordinates of the pixel on the right image of the third image, including:
acquiring the initial three-dimensional coordinates and a binocular calibration result, wherein the binocular calibration result comprises an extrinsic matrix and an intrinsic matrix;
multiplying the initial three-dimensional coordinates by the extrinsic matrix to obtain the camera coordinate system coordinates in the right image corresponding to the initial three-dimensional coordinates;
and multiplying the camera coordinate system coordinates by the intrinsic matrix to obtain the pixel coordinates on the right image, wherein if the obtained pixel coordinates do not exist in the right image, the pixel point that does not exist in the right image is discarded.
In some embodiments, the determining, for each pixel in the left image of the third image, the first epipolar line based on the preset neighborhood range of the second pixel coordinate, calculating unwrapped phase values of all sub-pixel points on the first epipolar line, and taking all sub-pixel points as matching candidate points includes:
obtaining epipolar lines according to preset epipolar constraint and the first pixel point coordinates;
intercepting the epipolar line according to the preset neighborhood range and the second pixel point coordinate to obtain the first epipolar line;
determining the coordinates of all sub-pixel points according to the first epipolar line;
and calculating the unwrapped phase values of all the sub-pixel points according to a preset interpolation method, and taking all the sub-pixel points as the matching candidate points.
In some embodiments, the performing coordinate prediction on the pixel point of the left image of the third image according to a preset matching policy to obtain a sub-pixel level two-dimensional coordinate of the pixel point in the right image includes:
for each pixel point in a left image of the third image, acquiring a phase value of the pixel point, and taking the phase value as a target phase value;
if the first candidate point and the second candidate point which are adjacent to each other exist in the matching candidate points, obtaining a two-dimensional coordinate of a sub-pixel level of the target phase value through the first candidate point and the second candidate point, wherein the phase value of the first candidate point is smaller than the target phase value, and the phase value of the second candidate point is larger than the target phase value;
if a third candidate point and a fourth candidate point which are spaced by 1 sub-pixel exist in the matching candidate points, obtaining a sub-pixel level two-dimensional coordinate of the target phase value by selecting 3 pixel points in the minimum 4-pixel neighborhood based on the third candidate point and the fourth candidate point, wherein the phase value of the third candidate point is smaller than the target phase value, and the phase value of the fourth candidate point is larger than the target phase value.
To achieve the above object, a second aspect of the embodiments of the present application provides a point cloud building system based on binocular matching, the system including:
the first image acquisition module is used for acquiring a first image, wherein the first image comprises a left image and a right image, and the left image corresponds to the right image one by one;
the second image acquisition module is used for carrying out image filtering on the first image to obtain a second image, wherein the second image comprises a left image of the second image and a right image of the second image;
the third image acquisition module is used for carrying out mask image calculation on the second image based on a preset mask calculation mode to obtain a third image, wherein the third image comprises a left image of the third image and a right image of the third image;
an initial three-dimensional coordinate acquisition module, configured to reconstruct, for each pixel point in a left image of the third image, an initial three-dimensional coordinate of the pixel point according to a first pixel point coordinate of the pixel point on the left image of the third image;
a second pixel coordinate acquiring module, configured to project, for each pixel in a left image of the third image, an initial three-dimensional coordinate of the pixel onto a right image of the third image, to obtain a second pixel coordinate of the pixel in the right image of the third image;
the matching candidate point acquisition module is used for determining, for each pixel point in the left image of the third image, a first epipolar line within a preset neighborhood range based on the second pixel point coordinates of the pixel point, taking the second pixel point coordinates as the initial point, calculating the unwrapped phase values of all sub-pixel points on the first epipolar line, and taking all the sub-pixel points as matching candidate points;
the coordinate prediction module is used for predicting the coordinates of the pixel points of the left image of the third image according to a preset matching strategy to obtain sub-pixel level two-dimensional coordinates of the pixel points in the right image;
the target three-dimensional coordinate acquisition module is used for acquiring target three-dimensional coordinates of each pixel point in the left image of the third image according to the pre-acquired calibration parameters, the first pixel point coordinates and the sub-pixel level two-dimensional coordinates;
and the target point cloud data acquisition module is used for acquiring target point cloud data based on the target three-dimensional coordinates of all the pixel points.
To achieve the above object, a third aspect of the embodiments of the present application provides an electronic device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the method of the first aspect when executing the computer program.
To achieve the above object, a fourth aspect of the embodiments of the present application proposes a computer-readable storage medium storing a computer program which, when executed by a processor, implements the method of the first aspect.
According to the point cloud construction method and system based on binocular matching, the electronic device, and the storage medium provided by the embodiments of the present application, a first image is acquired, the first image comprising a left image and a right image in one-to-one correspondence, so that high-precision image data can be obtained with the left and right cameras. Further, image filtering is performed on the first image to obtain a second image; the filtering removes external light and noise generated by the system and improves image quality. Further, mask map calculation is performed on the second image based on a preset mask calculation mode to obtain a third image comprising a left image of the third image and a right image of the third image; the mask calculation filters out pixels that do not need to be reconstructed or whose light intensity is insufficient, safeguarding the accuracy of the subsequently reconstructed point cloud and thereby improving the accuracy of point cloud construction. Further, for each pixel point in the left image of the third image, the initial three-dimensional coordinates of the pixel point are reconstructed from its first pixel point coordinates on the left image of the third image; the initial three-dimensional coordinates are projected onto the right image of the third image to obtain the second pixel point coordinates of the pixel point on the right image; a first epipolar line is determined within a preset neighborhood range of the second pixel point coordinates, the unwrapped phase values of all sub-pixel points on the first epipolar line are calculated, and all the sub-pixel points are taken as matching candidate points; coordinate prediction is then performed on the pixel points of the left image according to a preset matching strategy to obtain the sub-pixel-level two-dimensional coordinates of the pixel points in the right image. Because the initial three-dimensional coordinates are reconstructed by phase profilometry using the left camera and the projector, back-projected into the right image as the initial search position, and the search is confined to a preset neighborhood around that position, the matching time can be greatly shortened. Further, the target three-dimensional coordinates of each pixel point are obtained according to the pre-acquired calibration parameters, the first pixel point coordinates, and the sub-pixel-level two-dimensional coordinates; and target point cloud data are obtained based on the target three-dimensional coordinates of all the pixel points, thereby yielding a high-precision point cloud.
Drawings
FIG. 1 is a flow chart of a point cloud construction method based on binocular matching provided by an embodiment of the application;
FIG. 2 is a schematic structural diagram of a projector-camera model according to an embodiment of the present application;
fig. 3 is a flowchart of step S102 in fig. 1;
fig. 4 is a flowchart of step S103 in fig. 1;
fig. 5 is a flowchart of step S402 in fig. 4;
fig. 6 is a flowchart of step S105 in fig. 1;
fig. 7 is a flowchart of step S106 in fig. 1;
fig. 8 is a schematic diagram of a process for implementing pixel matching in the point cloud construction method according to the embodiment of the present application;
fig. 9 is a flowchart of step S107 in fig. 1;
FIG. 10 is a schematic diagram of an implementation process of a point cloud construction method based on binocular matching according to an embodiment of the present application;
FIG. 11 is a schematic structural diagram of a point cloud construction system based on binocular matching according to an embodiment of the present application;
fig. 12 is a schematic hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
It should be noted that although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from that shown. The terms "first", "second", and the like in the description, the claims, and the drawings are used to distinguish similar objects and are not necessarily used to describe a particular sequence or chronological order.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the application only and is not intended to be limiting of the application.
With the development of science, technology and industry, the application of three-dimensional measurement technology in the aspects of automatic production, quality control, robot vision, reverse engineering, intelligent manufacturing, biomedical engineering and the like is increasingly important. The rapid acquisition of the high-precision three-dimensional point cloud has important significance for improving the efficiency of manufacturing and processing.
In the related art, point clouds are mostly constructed with a monocular vision device using phase profilometry, and point clouds built this way are often of low precision.
In addition, in the related art, point cloud construction is also performed with a binocular vision camera by feature-point matching, and this approach often yields point clouds of low accuracy because the matching accuracy itself is low.
Based on the above, the embodiment of the application provides a point cloud construction method and system based on binocular matching, electronic equipment and a storage medium, aiming at improving the accuracy of point cloud construction and further obtaining high-precision point cloud.
The embodiment of the application provides a point cloud construction method and a system based on binocular matching, electronic equipment and a storage medium, and specifically describes the point cloud construction method based on binocular matching in the embodiment of the application through the following embodiment.
Fig. 1 is a flowchart of a point cloud construction method based on binocular matching according to an embodiment of the present application, and the method in fig. 1 may include, but is not limited to, steps S101 to S109.
Step S101, acquiring a first image, wherein the first image comprises a left image and a right image, and the left image corresponds to the right image one by one;
step S102, performing image filtering on the first image to obtain a second image, wherein the second image comprises a left image of the second image and a right image of the second image;
Step S103, performing mask map calculation on the second image based on a preset mask calculation mode to obtain a third image, wherein the third image comprises a left image of the third image and a right image of the third image;
step S104, reconstructing initial three-dimensional coordinates of the pixel points according to the first pixel point coordinates of the pixel points on the left image of the third image for each pixel point in the left image of the third image;
step S105, for each pixel point in the left image of the third image, projecting the initial three-dimensional coordinates of the pixel point onto the right image of the third image to obtain the second pixel point coordinates of the pixel point in the right image of the third image;
step S106, for each pixel point in the left image of the third image, determining a first epipolar line within a preset neighborhood range based on the second pixel point coordinates, calculating the unwrapped phase values of all sub-pixel points on the first epipolar line, and taking all the sub-pixel points as matching candidate points;
step S107, carrying out coordinate prediction on the pixel points of the left image of the third image according to a preset matching strategy to obtain sub-pixel level two-dimensional coordinates of the pixel points in the right image;
step S108, for each pixel point in the left image of the third image, obtaining the target three-dimensional coordinates of the pixel point according to the pre-acquired calibration parameters, the first pixel point coordinates, and the sub-pixel-level two-dimensional coordinates;
Step S109, obtaining target point cloud data based on target three-dimensional coordinates of all pixel points.
In steps S101 to S109 shown in the embodiment of the present application, a first image is acquired, the first image comprising a left image and a right image in one-to-one correspondence, so that high-precision image data can be obtained with the left and right cameras. Further, image filtering is performed on the first image to obtain a second image; the filtering removes external light and noise generated by the system and improves image quality. Further, mask map calculation is performed on the second image based on a preset mask calculation mode to obtain a third image comprising a left image of the third image and a right image of the third image; the mask calculation filters out pixels that do not need to be reconstructed or whose light intensity is insufficient, safeguarding the accuracy of the subsequently reconstructed point cloud and thereby improving the accuracy of point cloud construction. Further, for each pixel point in the left image of the third image, the initial three-dimensional coordinates of the pixel point are reconstructed from its first pixel point coordinates on the left image of the third image; the initial three-dimensional coordinates are projected onto the right image of the third image to obtain the second pixel point coordinates of the pixel point on the right image; a first epipolar line is determined within a preset neighborhood range of the second pixel point coordinates, the unwrapped phase values of all sub-pixel points on the first epipolar line are calculated, and all the sub-pixel points are taken as matching candidate points; coordinate prediction is then performed on the pixel points of the left image according to a preset matching strategy to obtain the sub-pixel-level two-dimensional coordinates of the pixel points in the right image. Because the initial three-dimensional coordinates are reconstructed by phase profilometry using the left camera and the projector, back-projected into the right image as the initial search position, and the search is confined to a preset neighborhood around that position, the matching time can be greatly shortened. Further, the target three-dimensional coordinates of each pixel point are obtained according to the pre-acquired calibration parameters, the first pixel point coordinates, and the sub-pixel-level two-dimensional coordinates; and target point cloud data are obtained based on the target three-dimensional coordinates of all the pixel points, thereby yielding a high-precision point cloud.
In step S101 of some embodiments, the first image is acquired by a projector-camera model. Referring to fig. 2, which is a schematic structural diagram of the projector-camera model provided in an embodiment of the present application: in the figure, 1 is the left camera, 2 is the projector (optical engine), 3 is the right camera, and 4 is the background plate. The left and right cameras are high-resolution cameras and together form a binocular camera for collecting images of the object to be measured; the projector projects the fringe images required by the preset modulated phase profilometry; the inclined stripe area on the background plate is the projector's projection area. The model works as follows: before acquisition, the intrinsic and extrinsic calibration of the cameras is completed by the Zhang Zhengyou calibration method; fringe images are projected onto the background plate by the projector, and the left and right cameras acquire images of the measured object at the same frequency. Correspondingly, images acquired by the left camera are left images, images acquired by the right camera are right images, and left and right images acquired at the same moment correspond one-to-one. The first image may consist of multiple sets of left and right images acquired at the same frequency. Compared with common passive measurement techniques, which can only obtain a sparse three-dimensional point cloud of feature points, and with some active measurement techniques, which can only obtain a low-precision point cloud, adopting high-resolution cameras allows high-precision depth values to be obtained.
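As a concrete illustration of this calibration step, the following is a minimal sketch using OpenCV's implementation of Zhang's method. The function name `calibrate_stereo`, the board geometry, and the square size are illustrative assumptions, not part of the patent:

```python
import cv2
import numpy as np

def calibrate_stereo(left_images, right_images, pattern=(9, 6), square_mm=10.0):
    """Sketch of intrinsic + extrinsic binocular calibration (Zhang's method).

    left_images / right_images: grayscale views of a chessboard calibration
    board; pattern is its inner-corner count. All parameter values are
    illustrative assumptions."""
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square_mm

    obj_pts, l_pts, r_pts = [], [], []
    for l_img, r_img in zip(left_images, right_images):
        ok_l, c_l = cv2.findChessboardCorners(l_img, pattern)
        ok_r, c_r = cv2.findChessboardCorners(r_img, pattern)
        if ok_l and ok_r:                      # keep only views seen by both cameras
            obj_pts.append(objp)
            l_pts.append(c_l)
            r_pts.append(c_r)

    size = left_images[0].shape[::-1]          # (width, height)
    _, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, l_pts, size, None, None)
    _, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, r_pts, size, None, None)
    # Extrinsics R, T between the two cameras, keeping the intrinsics fixed.
    _, K1, d1, K2, d2, R, T, E, F = cv2.stereoCalibrate(
        obj_pts, l_pts, r_pts, K1, d1, K2, d2, size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    return K1, d1, K2, d2, R, T
```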
Referring to fig. 3, in step S102 of some embodiments, the method for constructing a point cloud based on binocular matching may further include, but is not limited to, steps S301 to S302:
step S301, performing Gaussian filtering on the first image to obtain a first filtered image;
step S302, performing guided filtering on the first filtered image to obtain a second image.
In step S301 of some embodiments, Gaussian filtering is a linear smoothing filter that weight-averages the whole image: the value of each pixel is obtained by a weighted average of itself and the other pixel values in its neighborhood. Specifically, each pixel in the first image is scanned with a preset template, and the weighted average gray value of the pixels in the neighborhood determined by the template replaces the value of the pixel at the template's center, thereby applying Gaussian filtering to the first image and obtaining the first filtered image. The first filtered images correspond one-to-one to the first images. Gaussian noise in the image can be eliminated in this way.
In step S302 of some embodiments, guided filtering performs the filtering using the information of a guide image: the processing is decomposed into two parts, a guide image and an image to be filtered; the weight of each pixel point is calculated using the guide image, and the image to be filtered is processed according to those weights. Specifically, guided filtering is applied to the first filtered image: the first filtered image is decomposed into a guide image and an image to be filtered, the weight of each pixel point is calculated with the guide image, and the image to be filtered is processed according to the weights to obtain the second image. Noise points can thus be removed while the detail information of the image is preserved.
Through the above steps S301 to S302, noise points caused by external light or by the camera itself can be removed by the double filtering, and a higher-quality image is obtained than with a single filter.
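As an illustration of this double-filtering step, here is a minimal sketch with OpenCV. The kernel size, radius, and eps are illustrative, and using the Gaussian-filtered image as its own guide is one common (assumed) choice; `cv2.ximgproc.guidedFilter` requires the opencv-contrib build:

```python
import cv2

def double_filter(img, ksize=5, radius=8, eps=100.0):
    """Gaussian filtering then guided filtering (steps S301-S302).

    eps scales with the image's intensity range (8-bit assumed here);
    all parameter values are illustrative."""
    first = cv2.GaussianBlur(img, (ksize, ksize), 0)   # suppress Gaussian noise
    # Self-guided filtering: the first filtered image serves as the guide,
    # removing residual noise while preserving edges and detail.
    return cv2.ximgproc.guidedFilter(guide=first, src=first,
                                     radius=radius, eps=eps)
```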
Referring to fig. 4, the mask calculation method includes a gray threshold calculation method and an edge calculation method, and in step S103 of some embodiments, the binocular matching-based point cloud construction method may further include, but is not limited to, steps S401 to S403:
step S401, acquiring the measured object features and background features of the first image;
step S402, selecting the gray threshold calculation mode or the edge calculation mode to perform mask map calculation on the second image based on the measured object features and the background features, to obtain a mask image;
step S403, taking the mask image as the third image.
In step S401 of some embodiments, the measured object features include bright metal, dark metal, reflective material, light-absorbing material, a rough surface of the measured object, a smooth surface of the measured object, and the like; the background features include a highly reflective background, a light-absorbing background, a flat background surface, a rough background surface, and the like. First, the measured object area and the background area in the first image are identified; then general image processing techniques are used to extract the features of the measured object and of the background, thereby obtaining the measured object features and the background features.
In step S402 of some embodiments, referring to fig. 5, the measured object features include a first reflected light intensity and a first surface flatness of the measured object of the first image, the background features include a second reflected light intensity and a second surface flatness of the image background of the first image, and the method for constructing a point cloud based on binocular matching may further include, but is not limited to, steps S501 to S502:
step S501, if the difference value between the first reflected light intensity and the second reflected light intensity is larger than a preset first threshold value, selecting a gray threshold value calculation mode to perform mask map calculation on the second image, and obtaining a mask image;
step S502, if the difference value between the first surface flatness and the second surface flatness is smaller than a preset second threshold value, selecting an edge calculation mode to perform mask map calculation on the second image, and obtaining a mask image.
In step S501 of some embodiments, the reflected light intensity may be obtained by a spectrometer, or may be obtained by an illuminometer, or may be obtained by another instrument capable of measuring the reflected light intensity, which is not limited herein. The preset first threshold value is specifically set according to actual conditions. The mask image is an image obtained by mask calculation. When the difference value between the first reflected light intensity and the second reflected light intensity is larger than a first threshold value, namely, the reflected light intensity of the measured object is larger than the reflected light intensity of the background, a calculation mode of a gray value threshold value is selected to calculate a mask image of the second image, and a mask image is obtained. Specifically, the gray value threshold segmentation is a binarization process, in which the gray of the second image is classified into different levels, one level is selected from the different levels as a threshold, and the image is converted into a binary image, that is, a mask image.
In step S502 of some embodiments, the surface flatness is measured via the surface level difference: an apparent level difference can be obtained with a ruler, an inconspicuous level difference with a laser plane interferometer, or with any other instrument capable of measuring surface flatness, which is not limited herein. The preset second threshold is set according to the actual situation. When the difference between the first surface flatness and the second surface flatness is smaller than the second threshold, that is, the surface flatness of the measured object is lower than that of the background, the edge calculation mode is selected to perform mask map calculation on the second image to obtain the mask image. Specifically, the edge calculation mode is an edge-based segmentation mode, i.e., the image is segmented by finding the boundaries between different areas: a convolution is performed on the obtained image with a Sobel convolution kernel, and threshold segmentation is then performed on the convolved image.
The mask map can be calculated in a targeted manner through the above steps S501 to S502, so as to filter out the pixels that do not need to be reconstructed or whose light intensity is too low to guarantee accuracy.
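As a sketch of the two mask calculation modes, assuming OpenCV and 8-bit grayscale input; the threshold values, and the use of Otsu's method when no gray threshold is given, are assumptions:

```python
import cv2
import numpy as np

def mask_by_gray_threshold(img, thresh=None):
    """Gray-threshold mask (step S501): binarize the image. Otsu's method
    picks the level automatically when no threshold is given (assumption)."""
    if thresh is None:
        _, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    else:
        _, mask = cv2.threshold(img, thresh, 255, cv2.THRESH_BINARY)
    return mask

def mask_by_edges(img, edge_thresh=50.0):
    """Edge-based mask (step S502): Sobel convolution, then threshold the
    gradient magnitude; edge_thresh is illustrative."""
    gx = cv2.Sobel(img, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(img, cv2.CV_32F, 0, 1)
    mag = cv2.magnitude(gx, gy)
    _, mask = cv2.threshold(mag, edge_thresh, 255, cv2.THRESH_BINARY)
    return mask.astype(np.uint8)
```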
In step S403 of some embodiments, the third image includes a left image of the third image and a right image of the third image, the left image and the right image being in one-to-one correspondence. The mask image is taken as a third image.
It should be noted that pixels needing no reconstruction fall mainly into two categories. One category is determined by the business attributes of the target requirement: usually the background part does not need to be reconstructed (a reconstructed background point cloud may even interfere with the target point cloud data), together with pixels belonging to regions of no interest on the object to be measured. The other category comprises pixels whose precision cannot be guaranteed even if reconstructed, such as pixels in strongly exposed areas and pixels in areas computed to have overly strong background light; since such points are distorted, reconstructing them would damage the overall precision. Neither category of pixels needs to be reconstructed.
Through the steps S401 to S403, a better mask image can be obtained by adopting a corresponding mask calculation mode according to the measured object features and the background features.
In step S104 of some embodiments, the conversion relationship between the left camera and the light plane is calibrated in advance, each pixel point in the left image of the third image is acquired, and the initial three-dimensional coordinates of the pixel point are reconstructed according to the first pixel point coordinates of the pixel point on the left image of the third image and the conversion relationship. Specifically, the initial three-dimensional coordinates are reconstructed by phase profilometry. Suppose $\Omega_w$ denotes the world coordinate system and $\Omega_c$ the camera coordinate system; the positional relationship between $\Omega_w$ and $\Omega_c$ can be described by a rotation matrix $R_w$ and a translation matrix $T_w$. If the coordinates of a point $P$ in the world coordinate system are $(X, Y, Z)$ and its coordinates in the camera coordinate system are $(X_c, Y_c, Z_c)$, the relationship between $(X, Y, Z)$ and $(X_c, Y_c, Z_c)$ is given by formula (1):

$$
\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix}
= R_w \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} + T_w
\tag{1}
$$

where $R_w = \begin{bmatrix} r_{w1} & r_{w2} & r_{w3} \\ r_{w4} & r_{w5} & r_{w6} \\ r_{w7} & r_{w8} & r_{w9} \end{bmatrix}$ is a unit orthogonal rotation matrix whose entries $r_{w1}, \dots, r_{w9}$ are obtained at camera calibration, and $T_w = \begin{bmatrix} t_x \\ t_y \\ t_z \end{bmatrix}$ is the translation matrix whose entries $t_x$, $t_y$, $t_z$ are obtained at camera calibration.

From the similar-triangle and phase-equality relations, the camera coordinate system coordinates $(X_c, Y_c, Z_c)$ are obtained from the phase value, as shown in formula (2), in which $a_1, a_2, \dots, a_8$ are parameters to be calibrated; in the calculation of these parameters, $\theta_0$ is the phase value at the origin of the camera coordinate system, $\lambda_0$ is the grating pitch, and $l$ is the distance between the origin of the world coordinate system and the center of projection.

The phase value of each pixel is obtained when the first image is acquired; therefore $(X_c, Y_c, Z_c)$ can be obtained from the relationship between the phase value and the camera coordinate system coordinates, and $(X, Y, Z)$ can then be obtained from the relationship between the camera coordinate system and the world coordinate system, thereby realizing the reconstruction of the initial three-dimensional coordinates of the pixel point.
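As a concrete illustration, the following is a minimal NumPy sketch of this reconstruction chain under stated assumptions: `phase_to_camera_coords` is a hypothetical stand-in for formula (2), whose explicit form through the calibrated parameters $a_1, \dots, a_8$ is not reproduced here, while `camera_to_world` inverts formula (1) exactly:

```python
import numpy as np

def camera_to_world(p_c, R_w, T_w):
    """Invert formula (1): X_c = R_w @ X_w + T_w  =>  X_w = R_w^T @ (X_c - T_w).

    R_w is a unit orthogonal rotation matrix, so its inverse is its transpose."""
    return R_w.T @ (np.asarray(p_c, dtype=float) - np.asarray(T_w, dtype=float))

def reconstruct_initial_xyz(phase, R_w, T_w, phase_to_camera_coords):
    # phase_to_camera_coords is a hypothetical callable standing in for
    # formula (2): it maps an unwrapped phase value to (X_c, Y_c, Z_c)
    # using the calibrated parameters a_1..a_8.
    p_c = phase_to_camera_coords(phase)
    return camera_to_world(p_c, R_w, T_w)
```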
Referring to fig. 6, in step S105 of some embodiments, the method for constructing a point cloud based on binocular matching may further include, but is not limited to, steps S601 to S603:
step S601, acquiring the initial three-dimensional coordinates and a binocular calibration result, wherein the binocular calibration result comprises an extrinsic matrix and an intrinsic matrix;
step S602, multiplying the initial three-dimensional coordinates by the extrinsic matrix to obtain the camera coordinate system coordinates in the right image corresponding to the initial three-dimensional coordinates;
step S603, multiplying the camera coordinate system coordinates by the intrinsic matrix to obtain the pixel coordinates on the right image, wherein if the obtained pixel coordinates do not exist in the right image, the pixel point that does not exist in the right image is discarded.
In step S601 of some embodiments, the initial three-dimensional coordinates are coordinates in the world coordinate system, and the binocular calibration result is obtained through camera calibration. The binocular calibration result specifically comprises an extrinsic matrix and an intrinsic matrix. The extrinsic parameters describe the relative pose between the world coordinate system and the camera coordinate system, such as the position and orientation of the camera; the intrinsic parameters relate to the camera's own characteristics, such as its focal length and pixel size. Correspondingly, the extrinsic matrix is the matrix formed from the extrinsic parameters, used to transform between the world coordinate system and the camera coordinate system, and is denoted T; the intrinsic matrix is the matrix formed from the intrinsic parameters, used to transform between the camera coordinate system and the image pixel coordinate system, and is denoted A.
In step S602 of some embodiments, since the extrinsic matrix can be used for transformation of the world coordinate system and the camera coordinate system, by multiplying the initial three-dimensional coordinates by the extrinsic matrix, the camera coordinate system coordinates in the right image corresponding to the initial three-dimensional coordinates can be obtained.
In step S603 of some embodiments, since the intrinsic matrix can be used for the transformation between the camera coordinate system and the image pixel coordinate system, the projected pixel coordinates of the initial three-dimensional coordinates in the right image are obtained by multiplying the camera coordinate system coordinates by the intrinsic matrix. If the obtained pixel coordinates do not exist in the right image, point cloud construction is abandoned for the left-image pixel point corresponding to those pixel coordinates. Absence can arise in several ways, including the case where the calculated pixel coordinates in the right image fall outside the acquisition range of the right camera, and the case where the pixel at the calculated coordinates in the right image was discarded when the mask map was calculated.
Specifically, the overall calculation of steps S601 to S603 is given by formula (3):

$$
Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
= A\,T \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}
\tag{3}
$$

where $A$ is the intrinsic matrix; $T$ is the extrinsic matrix; $Z_c$ is the $Z$ value of the camera coordinate system coordinates; $(X, Y, Z, 1)^{\top}$ is the homogeneous world coordinate; and $(u, v, 1)^{\top}$ is the homogeneous pixel coordinate (the left-hand side may equivalently be written with the generic homogeneous labels $(U, V, W)$, which play the same role as $X, Y, Z$).
Through the above steps S601 to S603, the initial three-dimensional coordinates can be back-projected into the right image to obtain the second pixel point coordinates corresponding to the left-image pixel point. This point serves as the initial search position for pixel matching, which shortens the matching time; and if the calculated second pixel point coordinates do not exist in the right image, point cloud reconstruction is abandoned for the corresponding left-image pixel point, which speeds up point cloud reconstruction while safeguarding its accuracy.
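A minimal sketch of the back-projection in steps S601 to S603, assuming A is the right camera's 3×3 intrinsic matrix and T is its 3×4 extrinsic matrix [R | t]; the bounds check implements the rule of discarding pixels whose projection does not exist in the right image:

```python
import numpy as np

def project_to_right_image(p_w, A, T, width, height):
    """Back-project a world point onto the right image (formula (3)).

    Returns (u, v) pixel coordinates, or None if the projection falls
    outside the right image (such points are discarded)."""
    p_h = np.append(np.asarray(p_w, dtype=float), 1.0)  # homogeneous world coordinate
    p_cam = T @ p_h                                     # camera coordinate system coordinate
    uvw = A @ p_cam                                     # formula (3): Z_c * (u, v, 1)
    if uvw[2] <= 0:                                     # behind the camera
        return None
    u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]
    if not (0 <= u < width and 0 <= v < height):        # not present in the right image
        return None
    return u, v
```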
Referring to fig. 7, in step S106 of some embodiments, the method for constructing a point cloud based on binocular matching may further include, but is not limited to, steps S701 to S704:
step S701, obtaining the epipolar line according to the preset epipolar constraint and the first pixel point coordinates;
step S702, clipping the epipolar line according to the preset neighborhood range and the second pixel point coordinates to obtain the first epipolar line;
step S703, determining the coordinates of all sub-pixel points according to the first epipolar line;
step S704, calculating the unwrapped phase values of all the sub-pixel points according to a preset interpolation method, and taking all the sub-pixel points as matching candidate points.
The epipolar constraint describes the constraint formed by the image points and the camera optical centers when the same point is projected onto two images from different perspectives. It is described with reference to FIG. 8, which is a schematic diagram of the pixel matching process in the point cloud construction method according to the embodiment of the present application. The L plane is the left image collected by the left camera and the R plane is the right image collected by the right camera; $O_L$ is the camera optical center of the left image and $O_R$ that of the right image. The line connecting $O_L$ and $O_R$ is called the baseline; it intersects the L plane at $e_L$ and the R plane at $e_R$, and $e_L$ and $e_R$ are called the epipoles. $P$ is a coordinate point in the world coordinate system, and the plane $O_L O_R P$ is called the epipolar plane. The intersection line $e_L a_L$ of the epipolar plane with plane L is the epipolar line of the left image, and the intersection line $e_R a_R$ with plane R is the epipolar line of the right image. If $P$ is unknown but $a_L$ is known, then the projection point $a_R$ of $P$ on plane R must lie on the epipolar line $e_R a_R$.
In step S701 of some embodiments, an epipolar constraint formula can be obtained from the epipolar constraint principle, as shown in formula (4). With the first pixel point coordinates known, the equation of the epipolar line $e_R a_R$ can be obtained from this formula:

$$
u_2^{\top} \left(K_2^{-1}\right)^{\top} t^{\wedge} R\, K_1^{-1}\, u_1 = 0
\tag{4}
$$

where $u_1$ is the homogeneous pixel coordinate of $a_L$; $u_2$ is the homogeneous pixel coordinate of $a_R$, and $u_2^{\top}$ is its transpose; $K_1$ is the intrinsic matrix of the left camera; $K_2$ is the intrinsic matrix of the right camera, and $\left(K_2^{-1}\right)^{\top}$ is the transpose of $K_2^{-1}$; $t^{\wedge}$ is the skew-symmetric matrix of the translation matrix $t$; and $R$ is the rotation matrix.
Prior to step S702 of some embodiments, the preset neighborhood range needs to be determined. After the cameras are calibrated, a test is performed with a pair of corresponding left and right pixel points, called the original left point and the original right point: a new right-point coordinate corresponding to the original left point is calculated from the original left point's coordinate using the matching strategy of this application, and the deviation between the original right-point coordinate and the new right-point coordinate is obtained. This deviation is taken as the distance from the center to a side of the neighborhood range; that is, if the deviation is 10 pixels, the preset neighborhood range is a square centered on the second pixel point whose four sides are each 10 pixels from the center. Multiple pairs of corresponding left and right pixel points may be used, in which case the mean of the deviations, or their mode, may be taken as the neighborhood range. Specifically, referring to FIG. 8, the square $a_1 a_2 a_3 a_4$ is the preset neighborhood range centered on $a_R$.

In step S702 of some embodiments, the first epipolar line is the line segment obtained by clipping the epipolar line to the preset neighborhood range. Specifically, taking the second pixel point coordinates as the midpoint of the preset neighborhood range, the epipolar line is clipped by the preset neighborhood range to obtain a line segment, and this segment is taken as the first epipolar line. Referring to FIG. 8, the epipolar line $e_R a_R$ is clipped by the preset neighborhood square $a_1 a_2 a_3 a_4$ to obtain the segment $b_1 b_7$, and the segment $b_1 b_7$ is the first epipolar line.

In step S703 of some embodiments, a sub-pixel is a pixel position lying between two actual physical pixels. The coordinates of all sub-pixel points on the first epipolar line are acquired according to a preset sub-pixel precision, which is set according to the actual situation. Referring to FIG. 8, the white circles indicate sub-pixel points: sub-pixel points $b_1$ through $b_7$ lie on the first epipolar line, giving the sub-pixel point sequence $\{b_1, b_2, b_3, b_4, b_5, b_6, b_7\}$.
In step S704 of some embodiments, the preset interpolation method may be bi-quadratic spline interpolation, or nearest-neighbor interpolation, linear interpolation, bilinear interpolation, cubic spline interpolation, or the like. The unwrapped phase value is the absolute phase value recovered from the wrapped phase. Assuming bi-quadratic spline interpolation is adopted, the 9 coefficients in formula (5) are solved from the integer pixel points in the 3×3 neighborhood around the sub-pixel coordinate to be calculated, the unwrapped phase value of the sub-pixel coordinate is then obtained from formula (5), and all the sub-pixel points are taken as matching candidates. Formula (5) is:

$$
G(x, y) = \sum_{k=1}^{3} \sum_{l=1}^{3} a_{ijkl}\,(x - x_i)^{k-1}\,(y - y_j)^{l-1}
\tag{5}
$$

where $(x, y)$ is the sub-pixel coordinate to be calculated, $x$ being the abscissa and $y$ the ordinate; $G(x, y)$ is the unwrapped phase value of the sub-pixel point to be calculated; $(x_i, y_j)$ is the integer pixel coordinate at global row $i$ and global column $j$ used as the interpolation base; and $a_{ijkl}$ are coefficients, where $i$ is the global row index of the whole image, $j$ is the global column index of the whole image, the ranges of $i$ and $j$ are determined by the image size, $k$ indexes the rows of the 3×3 integer-pixel neighborhood with range $[1, 3]$, $l$ indexes its columns with range $[1, 3]$, and $i, j, k, l$ are all integers.

Specifically, 9 coefficients exist for each pixel point; they are solved from constraint equations established from conditions such as the phase values of the pixel points, the partial derivatives of those phase values, the first derivative values of the pixel points, and the continuity of the first derivative at the junctions between a pixel point and its surrounding neighborhood points. Assuming the whole picture has a resolution of 1920×1080, 1920×1080×9 coefficients are obtained from the constraint equations, with the global row index in the range $[1, 1080]$ and the global column index in the range $[1, 1920]$. For example, having solved for the 1920×1080×9 coefficients, to obtain the unwrapped phase value of the sub-pixel coordinate $(16.3, 21.6)$, first round to the nearest integer pixel coordinate $(16, 22)$, then take the 9 coefficients of that integer pixel to calculate the sub-pixel's unwrapped phase value. Specifically, assuming $(16, 22)$ lies at row 8, column 9 of the whole image, $i = 8$, $j = 9$, and

$$
G(16.3, 21.6) = \sum_{k=1}^{3} \sum_{l=1}^{3} a_{89kl}\,(16.3 - 16)^{k-1}\,(21.6 - 21)^{l-1}.
$$
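A minimal sketch of evaluating formula (5) for one sub-pixel point, assuming the 9 coefficients of the base integer pixel $(x_i, y_j)$ have already been solved and are supplied as a 3×3 array indexed by $(k, l)$:

```python
import numpy as np

def unwrapped_phase_at_subpixel(x, y, xi, yj, a):
    """Evaluate formula (5) at the sub-pixel coordinate (x, y).

    (xi, yj): integer pixel used as the interpolation base.
    a: 3x3 array of the 9 coefficients a_{ijkl} for that pixel,
       with the global indices i, j already fixed."""
    dx = np.array([(x - xi) ** (k - 1) for k in (1, 2, 3)])
    dy = np.array([(y - yj) ** (l - 1) for l in (1, 2, 3)])
    # sum over k, l of a[k, l] * (x - xi)^(k-1) * (y - yj)^(l-1)
    return dx @ np.asarray(a, dtype=float) @ dy
```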
Through the steps S701 to S704, the sub-pixel points on the polar line in the preset neighborhood range can be used as the matching candidate points, and compared with global matching, the matching range is reduced, so that the matching speed is improved.
Referring to fig. 9, in step S107 of some embodiments, the method for constructing a point cloud based on binocular matching may further include, but is not limited to, steps S901 to S903:
step S901, for each pixel point in the left image of the third image, acquiring a phase value of the pixel point, and taking the phase value as a target phase value;
step S902, if adjacent first candidate points and second candidate points exist in the matching candidate points, obtaining two-dimensional coordinates of sub-pixel levels of target phase values through the first candidate points and the second candidate points, wherein the phase values of the first candidate points are smaller than the target phase values, and the phase values of the second candidate points are larger than the target phase values;
In step S903, if there are a third candidate point and a fourth candidate point separated by 1 subpixel in the matching candidate points, the two-dimensional coordinates of the subpixel level of the target phase value are obtained by selecting 3 pixels in the minimum 4-pixel neighborhood based on the third candidate point and the fourth candidate point, wherein the phase value of the third candidate point is smaller than the target phase value, and the phase value of the fourth candidate point is larger than the target phase value.
In step S901 of some embodiments, the phase represents the phase information of each pixel in the image. For each pixel point in the left image of the third image, the phase value of the pixel point is acquired; this phase value is an absolute phase value, which is then taken as the target phase value. The reason is that the phase is periodic, so several pixel points may share the same wrapped phase value; using the absolute phase value as the target phase value avoids this by ensuring that only one pixel point corresponds to the phase value.
In step S902 of some embodiments, the first candidate point and the second candidate point are sub-pixel points. If the phase value of the first candidate point is smaller than the target phase value, the phase value of the second candidate point is larger than the target phase value, and the two points are adjacent, the sub-pixel level two-dimensional coordinate of the target phase value can be obtained from them. Assume the target phase value is 30 and the unwrapped phase value sequence of the matching candidate points contains 20, 40 and 50. The sequence is first scanned for the point closest to the target phase value: the sub-pixel point with unwrapped phase value 20 is the first candidate point and the sub-pixel point with unwrapped phase value 40 is the second candidate point. Assuming the sub-pixel coordinate of the first candidate point is (52, 61.5) and that of the second candidate point is (53, 62.5), an inverse linear interpolation over these two sub-pixel coordinates yields the sub-pixel level two-dimensional coordinate of the target phase value, as in the sketch below.
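A minimal sketch of the inverse linear interpolation, reusing the bracketing phases and sub-pixel coordinates from the example above; the function name is an illustrative assumption:

```python
import numpy as np

def inverse_linear_interp(target, phase1, coord1, phase2, coord2):
    """Invert linear interpolation: find the sub-pixel coordinate where the
    unwrapped phase equals `target`, given two bracketing candidate points
    with phase1 < target < phase2. coord1/coord2 are (x, y) coordinates."""
    t = (target - phase1) / (phase2 - phase1)   # fraction along the segment
    return (1.0 - t) * np.asarray(coord1) + t * np.asarray(coord2)

# Example from the text: target phase 30 between phases 20 and 40
print(inverse_linear_interp(30.0, 20.0, (52.0, 61.5), 40.0, (53.0, 62.5)))
# -> approximately (52.5, 62.0)
```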
It should be noted that, on the same physical surface, the phase values increase or decrease monotonically, and repeated jumps in the phase values generally do not occur. If the unwrapped phase value sequence of the matching candidate points is 20, 40, 25, 60 and the target phase value is 30, the unwrapped phase value closest to the target in the sequence is 25; the unwrapped phase value 40 immediately before 25 is then taken, and it is checked whether one of the two values is smaller than the target phase value and the other larger. If so, the sub-pixel point with unwrapped phase value 25 is taken as the first candidate point and the sub-pixel point with unwrapped phase value 40 as the second candidate point.
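A minimal sketch of the bracketing search implied by steps S902 to S903, assuming masked-out points are represented as None; the simple left-to-right scan is an illustrative assumption and does not reproduce every detail of the text:

```python
def find_bracketing_pair(phases, target):
    """Scan the unwrapped-phase sequence of the candidate points for a pair
    that brackets `target`. Points filtered out by the mask map appear as
    None. Returns the (i, j) indices of a bracketing pair that is adjacent
    (gap 1) or separated by one masked point (gap 2), or None if no pair
    exists."""
    valid = [(i, p) for i, p in enumerate(phases) if p is not None]
    for (i, p), (j, q) in zip(valid, valid[1:]):
        gap = j - i                  # 1 = adjacent, 2 = one masked point between
        if gap <= 2 and min(p, q) < target < max(p, q):
            return i, j
    return None

# Examples from the text: an adjacent pair, and a pair split by a masked point
print(find_bracketing_pair([20, 40, 50], 30))        # -> (0, 1)
print(find_bracketing_pair([20, None, 40, 50], 30))  # -> (0, 2)
```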
In step S903 of some embodiments, if the phase value of the third candidate point is smaller than the target phase value, the phase value of the fourth candidate point is larger than the target phase value, and the two points are separated by 1 sub-pixel, the sub-pixel level two-dimensional coordinate of the target phase value is obtained by selecting 3 pixel points in the minimum 4-pixel neighborhood. Assume the target phase value is 30 and the unwrapped phase value sequence of the matching candidate points is 20, x, 40, 50, where the point corresponding to x may be a point filtered out by the mask map. Although 20 and 40 both differ from 30 by 10, the sequence is searched from left to right for the unwrapped phase value closest to the target, so the sub-pixel point with unwrapped phase value 20 is determined to be the nearest point first; 40 is then found, and since 30 lies between 20 and 40, the sub-pixel point with unwrapped phase value 20 is taken as the third candidate point and the sub-pixel point with unwrapped phase value 40 as the fourth candidate point. The minimum 4-pixel neighborhood is generally extended backwards, so the sub-pixel point with unwrapped phase value 50 is taken as a fifth candidate point. The sub-pixel coordinates of the third, fourth and fifth candidate points are then obtained, and an inverse quadratic interpolation over these 3 sub-pixel coordinates yields the sub-pixel level two-dimensional coordinate of the target phase value. If the target phase value is 30 and the unwrapped phase value sequence is 20, x, 40, y, 50, the sub-pixel point with unwrapped phase value 20, the sub-pixel point with unwrapped phase value 40, and the sub-pixel point corresponding to the unwrapped phase value immediately before 20 are selected for the inverse quadratic interpolation, where the points corresponding to x and y may be points filtered out by the mask map.
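A minimal sketch of the inverse quadratic interpolation over 3 sub-pixel points, assuming the points are parameterized by their index positions along the epipolar line (e.g. positions 0, 2, 3 for the sequence 20, x, 40, 50); this parameterization and the coordinate values in the usage example are illustrative assumptions:

```python
import numpy as np

def inverse_quadratic_interp(target, ts, phases, coords):
    """Fit phase(t) = a*t**2 + b*t + c through three samples, solve
    phase(t) = target for t within the sampled span, then evaluate the
    (x, y) coordinate at that t with the same quadratic fit.

    ts:     three parameter values along the epipolar line, e.g. [0, 2, 3]
    phases: the three unwrapped phase values
    coords: three (x, y) sub-pixel coordinates
    """
    a, b, c = np.polyfit(ts, phases, 2)
    roots = [r.real for r in np.roots([a, b, c - target])
             if abs(r.imag) < 1e-9 and min(ts) <= r.real <= max(ts)]
    if not roots:
        return None  # target phase not reachable within the sampled span
    t = roots[0]
    coords = np.asarray(coords, dtype=float)
    x = np.polyval(np.polyfit(ts, coords[:, 0], 2), t)
    y = np.polyval(np.polyfit(ts, coords[:, 1], 2), t)
    return np.array([x, y])

# Sequence 20, x, 40, 50 with target 30: valid points sit at positions 0, 2, 3
coords = [(52.0, 61.5), (54.0, 63.5), (55.0, 64.5)]
print(inverse_quadratic_interp(30.0, [0, 2, 3], [20.0, 40.0, 50.0], coords))
# -> approximately (53.0, 62.5)
```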
If a pixel point satisfies none of the conditions in steps S902 to S903, the search matching for this pixel point of the left image of the third image is abandoned, i.e., the three-dimensional reconstruction of this pixel point is abandoned.
Through steps S901 to S903, the matching strategy can quickly find the sub-pixel level two-dimensional coordinate in the right image corresponding to a pixel point of the left image of the third image, thereby improving both the accuracy and the efficiency of matching.
In step S108 of some embodiments, for each pixel point in the left image of the third image, the target three-dimensional coordinate of the pixel point is obtained from the pre-acquired calibration parameters, the first pixel point coordinate and the sub-pixel level two-dimensional coordinate. Because the first pixel point coordinate and the sub-pixel level two-dimensional coordinate are more accurate, the resulting target three-dimensional coordinate is more accurate than the initial three-dimensional coordinate, which improves the accuracy of the point cloud construction.
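A minimal sketch of such a triangulation, assuming OpenCV is available and that 3×4 projection matrices P = K[R | t] for the two cameras have been assembled from the pre-acquired calibration parameters; the use of cv2.triangulatePoints is an illustrative choice, not a statement about the original implementation:

```python
import numpy as np
import cv2

def triangulate_point(P_left, P_right, pt_left, pt_right):
    """Triangulate one target 3D coordinate from the first pixel point
    coordinate in the left image and the matched sub-pixel level
    two-dimensional coordinate in the right image.

    P_left, P_right: 3x4 projection matrices K @ [R | t] of the two cameras.
    pt_left, pt_right: (x, y) image coordinates.
    """
    pts_l = np.asarray(pt_left, dtype=float).reshape(2, 1)
    pts_r = np.asarray(pt_right, dtype=float).reshape(2, 1)
    X_h = cv2.triangulatePoints(P_left, P_right, pts_l, pts_r)  # 4x1 homogeneous
    return (X_h[:3] / X_h[3]).ravel()                           # dehomogenize
```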
In step S109 of some embodiments, in computer graphics, a point cloud is a data structure made up of a set of discrete three-dimensional coordinate points. The target point cloud data may be in the form of a point representation, a voxel representation, or a graph representation. In a graph representation, the target three-dimensional coordinates can be converted into a point cloud by traversing them, in particular with the Visualization Toolkit (VTK). The present application adopts high-resolution cameras, so the point spacing of the obtained point cloud can reach 50 μm, and point spacings within 50 μm can be obtained through the sub-pixel calculation described above.
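A minimal sketch of the conversion with VTK's Python bindings, assuming an (N, 3) array of target three-dimensional coordinates; the vertex-glyph step is an illustrative choice so the cloud can be rendered or written out directly:

```python
import numpy as np
import vtk

def points_to_polydata(xyz: np.ndarray) -> vtk.vtkPolyData:
    """Wrap an (N, 3) array of target three-dimensional coordinates as a
    VTK point cloud."""
    points = vtk.vtkPoints()
    for x, y, z in np.asarray(xyz, dtype=float):
        points.InsertNextPoint(x, y, z)
    polydata = vtk.vtkPolyData()
    polydata.SetPoints(points)
    glyph = vtk.vtkVertexGlyphFilter()   # adds one vertex cell per point, so
    glyph.SetInputData(polydata)         # the cloud is directly renderable
    glyph.Update()
    return glyph.GetOutput()
```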
A specific application example of the point cloud construction method based on binocular matching according to the embodiment of the present application is described below with reference to fig. 10.
First, a first image is acquired; the implementation is similar to step S101 above. Then, image preprocessing is performed, similar to steps S301 to S302. The image mask map is then recalculated, similar to steps S401 to S403 and S501 to S502. Next, the left camera and the optical machine compute the initial three-dimensional coordinate of the first pixel point using phase profilometry, similar to step S104. The initial three-dimensional coordinate is then back-projected into the right camera pixel coordinate system to obtain the second pixel point coordinate, similar to steps S601 to S603, as shown in the projection sketch below. All sub-pixel matching candidate points within the preset neighborhood range of the second pixel point coordinate are calculated, similar to steps S701 to S704. The sub-pixel level two-dimensional coordinate corresponding to the first pixel point coordinate is then obtained according to the matching strategy, similar to steps S901 to S903. The target three-dimensional coordinate is computed from the binocular camera calibration parameters, and all first pixel points are traversed to obtain all target three-dimensional data, similar to step S108. Finally, the target point cloud data are obtained, similar to step S109. For brevity, the details are not repeated here.
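A minimal sketch of the back-projection into the right camera pixel coordinate system, assuming an extrinsic rotation R, translation t and intrinsic matrix K taken from the binocular calibration result; the function and parameter names are illustrative assumptions, the default 1920×1080 size reuses the resolution from the earlier example, and pixel points falling outside the right image are discarded as described above:

```python
import numpy as np

def project_to_right_image(X, R, t, K, width=1920, height=1080):
    """Back-project an initial 3D coordinate X into the right camera's pixel
    coordinate system: x_cam = R @ X + t, then pixel = K @ x_cam, dehomogenized.
    Returns (u, v), or None if the point falls outside the right image."""
    X = np.asarray(X, dtype=float).ravel()
    x_cam = np.asarray(R) @ X + np.asarray(t, dtype=float).ravel()
    u, v, w = np.asarray(K) @ x_cam
    u, v = u / w, v / w
    if not (0.0 <= u < width and 0.0 <= v < height):
        return None   # discard pixel points that do not exist in the right image
    return np.array([u, v])
```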
Referring to fig. 11, the embodiment of the present application further provides a point cloud construction system based on binocular matching, which can implement the above-mentioned point cloud construction method based on binocular matching, where the system includes:
a first image obtaining module 1101, configured to obtain a first image, where the first image includes a left image and a right image, and the left image corresponds to the right image one by one;
a second image obtaining module 1102, configured to perform image filtering on the first image to obtain a second image, where the second image includes a left image of the second image and a right image of the second image;
a third image obtaining module 1103, configured to perform mask map calculation on the second image based on a preset mask calculation mode to obtain a third image, where the third image includes a left image of the third image and a right image of the third image;
an initial three-dimensional coordinate acquisition module 1104, configured to reconstruct, for each pixel point in the left image of the third image, an initial three-dimensional coordinate of the pixel point according to a first pixel point coordinate of the pixel point on the left image of the third image;
a second pixel coordinate acquiring module 1105, configured to project, for each pixel in the left image of the third image, an initial three-dimensional coordinate of the pixel onto the right image of the third image, to obtain a second pixel coordinate of the pixel on the right image of the third image;
The matching candidate point obtaining module 1106 is configured to determine, for each pixel point in the left image of the third image, a first epipolar line based on a preset neighborhood range of the coordinates of the second pixel point, calculate unwrapped phase values of all sub-pixel points on the first epipolar line, and use all sub-pixel points as matching candidate points;
the coordinate prediction module 1107 is configured to perform coordinate prediction on a pixel point of a left image of the third image according to a preset matching policy, so as to obtain a sub-pixel level two-dimensional coordinate of the pixel point in the right image;
the target three-dimensional coordinate acquisition module 1108 is configured to obtain, for each pixel point in the left image of the third image, a target three-dimensional coordinate of the pixel point according to a calibration parameter, a first pixel point coordinate, and a sub-pixel level two-dimensional coordinate that are acquired in advance;
the target point cloud data acquisition module 1109 is configured to obtain target point cloud data based on target three-dimensional coordinates of all the pixel points.
The specific implementation manner of the point cloud construction system based on binocular matching is basically the same as the specific embodiment of the point cloud construction method based on binocular matching, and is not described herein again.
The embodiment of the application also provides electronic equipment, which comprises a memory and a processor, wherein the memory stores a computer program, and the processor realizes the point cloud construction method based on binocular matching when executing the computer program. The electronic equipment can be any intelligent terminal including a tablet personal computer, a vehicle-mounted computer and the like.
Referring to fig. 12, fig. 12 illustrates a hardware structure of an electronic device according to another embodiment, the electronic device includes:
the processor 1201 may be implemented by a general-purpose CPU (Central Processing Unit), a microprocessor, an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits, and is configured to execute relevant programs to implement the technical solutions provided by the embodiments of the present application;
the memory 1202 may be implemented in the form of a Read Only Memory (ROM), a static storage device, a dynamic storage device, or a Random Access Memory (RAM). The memory 1202 may store an operating system and other application programs. When the technical solutions provided by the embodiments of the present application are implemented by software or firmware, the relevant program codes are stored in the memory 1202 and invoked by the processor 1201 to execute the point cloud construction method based on binocular matching of the embodiments of the present application;
an input/output interface 1203 for implementing information input and output;
the communication interface 1204 is configured to implement communication interaction between the present device and other devices, and may implement communication in a wired manner (e.g., USB, network cable, etc.) or in a wireless manner (e.g., mobile network, WiFi, Bluetooth, etc.);
A bus 1205 for transferring information between various components of the device such as the processor 1201, memory 1202, input/output interface 1203, and communication interface 1204;
wherein the processor 1201, the memory 1202, the input/output interface 1203 and the communication interface 1204 enable communication connection between each other inside the device via a bus 1205.
The embodiment of the application also provides a computer readable storage medium which stores a computer program, and the computer program realizes the point cloud construction method based on binocular matching when being executed by a processor.
The memory, as a non-transitory computer readable storage medium, may be used to store non-transitory software programs as well as non-transitory computer executable programs. In addition, the memory may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory optionally includes memory remotely located relative to the processor, the remote memory being connectable to the processor through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
According to the point cloud construction method and system based on binocular matching, the electronic device and the storage medium, a first image is acquired, where the first image comprises a left image and a right image in one-to-one correspondence, so that high-precision image data can be obtained through the left and right cameras. Further, the first image is image-filtered to obtain a second image; the filtering removes ambient light and noise generated by the system and improves image quality. Further, mask map calculation is performed on the second image based on a preset mask calculation mode to obtain a third image comprising a left image of the third image and a right image of the third image; the mask calculation filters out pixels that do not need to be reconstructed or whose light intensity is insufficient, which guarantees the accuracy of the subsequently reconstructed point cloud and thereby improves the accuracy of the point cloud construction. Further, for each pixel point in the left image of the third image, the initial three-dimensional coordinate of the pixel point is reconstructed from its first pixel point coordinate on the left image of the third image; the initial three-dimensional coordinate is projected onto the right image of the third image to obtain the second pixel point coordinate; a first epipolar line is determined within a preset neighborhood range of the second pixel point coordinate, the unwrapped phase values of all sub-pixel points on the first epipolar line are calculated, and all sub-pixel points are taken as matching candidate points; coordinate prediction is then performed on the pixel point according to a preset matching strategy to obtain its sub-pixel level two-dimensional coordinate in the right image. Because the initial three-dimensional coordinate is reconstructed by phase profilometry with the left camera and the optical machine and then back-projected into the right image as the initial search position, the subsequent search is confined to a preset neighborhood of that position, which greatly shortens the matching time. Further, the target three-dimensional coordinate of the pixel point is obtained from the pre-acquired calibration parameters, the first pixel point coordinate and the sub-pixel level two-dimensional coordinate, and target point cloud data are obtained from the target three-dimensional coordinates of all pixel points, thereby producing a high-precision point cloud.
The embodiment described in the embodiments of the present application is for more clearly describing the technical solution of the embodiments of the present application, and does not constitute a limitation on the technical solution provided by the embodiments of the present application, and as a person skilled in the art can know that, with the evolution of technology and the appearance of new application scenarios, the technical solution provided by the embodiments of the present application is applicable to similar technical problems.
It will be appreciated by persons skilled in the art that the technical solutions shown in the figures do not limit the embodiments of the present application, and may include more or fewer steps than shown, or may combine certain steps, or different steps.
The system embodiments described above are merely illustrative, in that the units illustrated as separate components may or may not be physically separate, i.e., may be located in one place, or may be distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
Those of ordinary skill in the art will appreciate that all or some of the steps of the methods, systems, functional modules/units in the devices disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof.
The terms "first," "second," "third," "fourth," and the like in the description of the application and in the above figures, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that in the present application, "at least one (item)" means one or more, and "a plurality" means two or more. "And/or" describes the association relationship of associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate: only A exists, only B exists, or both A and B exist, where A and B may be singular or plural. The character "/" generally indicates that the associated objects are in an "or" relationship. "At least one of the following items" or similar expressions refers to any combination of these items, including any combination of a single item or plural items. For example, at least one of a, b or c may indicate: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", where a, b and c may each be single or plural.
In the several embodiments provided by the present application, it should be understood that the disclosed systems and methods may be implemented in other ways. For example, the system embodiments described above are merely illustrative, e.g., the division of the above elements is merely a logical functional division, and there may be additional divisions in actual implementation, e.g., multiple elements or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some interface, system or unit indirect coupling or communication connection, which may be in electrical, mechanical or other form.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods of the various embodiments of the present application. The aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other various media capable of storing a program.
The preferred embodiments of the present application have been described above with reference to the accompanying drawings, and are not thereby limiting the scope of the claims of the embodiments of the present application. Any modifications, equivalent substitutions and improvements made by those skilled in the art without departing from the scope and spirit of the embodiments of the present application shall fall within the scope of the claims of the embodiments of the present application.

Claims (10)

1. A point cloud construction method based on binocular matching, characterized by comprising the following steps:
acquiring a first image, wherein the first image comprises a left image and a right image, and the left image corresponds to the right image one by one;
performing image filtering on the first image to obtain a second image, wherein the second image comprises a left image of the second image and a right image of the second image;
performing mask map calculation on the second image based on a preset mask calculation mode to obtain a third image, wherein the third image comprises a left image of the third image and a right image of the third image;
reconstructing initial three-dimensional coordinates of each pixel point in a left image of the third image according to first pixel point coordinates of the pixel point on the left image of the third image;
for each pixel point in the left image of the third image, projecting the initial three-dimensional coordinate of the pixel point onto the right image of the third image to obtain a second pixel point coordinate of the pixel point on the right image of the third image;
determining a first epipolar line in a preset neighborhood range based on the coordinates of the second pixel point for each pixel point in the left image of the third image, calculating unwrapped phase values of all sub-pixel points on the first epipolar line, and taking all sub-pixel points as matching candidate points;
Carrying out coordinate prediction on the pixel points of the left image of the third image according to a preset matching strategy to obtain sub-pixel level two-dimensional coordinates of the pixel points in the right image;
aiming at each pixel point in the left image of the third image, obtaining a target three-dimensional coordinate of the pixel point according to a calibration parameter, the first pixel point coordinate and the sub-pixel level two-dimensional coordinate which are obtained in advance;
and obtaining target point cloud data based on the target three-dimensional coordinates of all the pixel points.
2. The method of claim 1, wherein the performing image filtering on the first image to obtain a second image includes:
performing Gaussian filtering on the first image to obtain a first filtered image;
and performing guided filtering on the first filtering image to obtain the second image.
3. The method of claim 1, wherein the mask calculation mode includes a gray threshold calculation mode and an edge calculation mode, the performing mask map calculation on the second image based on a preset mask calculation mode to obtain a third image includes:
acquiring the measured object characteristics and background characteristics of the first image;
Selecting the gray threshold calculation mode or the edge calculation mode to perform mask map calculation on the second image based on the measured object features and the background features to obtain a mask image;
and taking the mask image as the third image.
4. The method of constructing a point cloud according to claim 3, wherein the measured object features include a first reflected light intensity and a first surface flatness of the measured object of the first image, the background features include a second reflected light intensity and a second surface flatness of an image background of the first image, and selecting the gray threshold calculation mode or the edge calculation mode to perform mask map calculation on the second image based on the measured object features and the background features, and obtaining a mask image includes:
if the difference value between the first reflected light intensity and the second reflected light intensity is larger than a preset first threshold value, selecting the gray threshold value calculation mode to perform mask map calculation on the second image, and obtaining the mask image;
and if the difference value between the first surface flatness and the second surface flatness is smaller than a preset second threshold value, selecting the edge calculation mode to perform mask map calculation on the second image to obtain the mask image.
5. The point cloud construction method according to claim 1, wherein for each pixel point in a left image of the third image, projecting an initial three-dimensional coordinate of the pixel point onto a right image of the third image to obtain a second pixel point coordinate of the pixel point on the right image of the third image, comprises:
acquiring the initial three-dimensional coordinate and a binocular calibration result, wherein the binocular calibration result comprises an extrinsic parameter matrix and an intrinsic parameter matrix;
multiplying the initial three-dimensional coordinate by the extrinsic parameter matrix to obtain a camera coordinate system coordinate, in the right image, corresponding to the initial three-dimensional coordinate;
and multiplying the camera coordinate system coordinate by the intrinsic parameter matrix to obtain the pixel coordinate of the right image, wherein if the obtained pixel coordinate does not exist in the right image, the pixel point that does not exist in the right image is discarded.
6. The method according to claim 1, wherein the determining a first epipolar line for each pixel point in the left image of the third image based on the preset neighborhood range of the second pixel point coordinates, calculating unwrapped phase values of all sub-pixel points on the first epipolar line, and taking all sub-pixel points as matching candidate points includes:
Obtaining epipolar lines according to preset epipolar constraint and the first pixel point coordinates;
intercepting the epipolar line according to the preset neighborhood range and the second pixel point coordinate to obtain the first epipolar line;
determining coordinates of all sub-pixel points according to the first polar line;
and calculating the unwrapped phase values of all the sub-pixel points according to a preset interpolation method, and taking all the sub-pixel points as the matching candidate points.
7. The method for constructing a point cloud according to any one of claims 1 to 6, wherein the performing coordinate prediction on the pixel point of the left image of the third image according to a preset matching policy to obtain a sub-pixel level two-dimensional coordinate of the pixel point in the right image includes:
for each pixel point in a left image of the third image, acquiring a phase value of the pixel point, and taking the phase value as a target phase value;
if the first candidate point and the second candidate point which are adjacent to each other exist in the matching candidate points, obtaining a two-dimensional coordinate of a sub-pixel level of the target phase value through the first candidate point and the second candidate point, wherein the phase value of the first candidate point is smaller than the target phase value, and the phase value of the second candidate point is larger than the target phase value;
If a third candidate point and a fourth candidate point which are spaced by 1 sub-pixel exist in the matching candidate points, obtaining a sub-pixel level two-dimensional coordinate of the target phase value by selecting 3 pixel points in the minimum 4-pixel neighborhood based on the third candidate point and the fourth candidate point, wherein the phase value of the third candidate point is smaller than the target phase value, and the phase value of the fourth candidate point is larger than the target phase value.
8. A point cloud construction system based on binocular matching, the system comprising:
the first image acquisition module is used for acquiring a first image, wherein the first image comprises a left image and a right image, and the left image corresponds to the right image one by one;
the second image acquisition module is used for carrying out image filtering on the first image to obtain a second image, wherein the second image comprises a left image of the second image and a right image of the second image;
the third image acquisition module is used for carrying out mask image calculation on the second image based on a preset mask calculation mode to obtain a third image, wherein the third image comprises a left image of the third image and a right image of the third image;
An initial three-dimensional coordinate acquisition module, configured to reconstruct, for each pixel point in a left image of the third image, an initial three-dimensional coordinate of the pixel point according to a first pixel point coordinate of the pixel point on the left image of the third image;
a second pixel coordinate acquiring module, configured to project, for each pixel in a left image of the third image, an initial three-dimensional coordinate of the pixel onto a right image of the third image, to obtain a second pixel coordinate of the pixel in the right image of the third image;
the matching candidate point acquisition module is used for determining, for each pixel point in the left image of the third image, a first epipolar line within a preset neighborhood range based on the second pixel point coordinate of the pixel point, taking the second pixel point coordinate as an initial point, calculating unwrapped phase values of all sub-pixel points on the first epipolar line, and taking all the sub-pixel points as matching candidate points;
the coordinate prediction module is used for predicting the coordinates of the pixel points of the left image of the third image according to a preset matching strategy to obtain sub-pixel level two-dimensional coordinates of the pixel points in the right image;
The target three-dimensional coordinate acquisition module is used for acquiring target three-dimensional coordinates of each pixel point in the left image of the third image according to the pre-acquired calibration parameters, the first pixel point coordinates and the sub-pixel level two-dimensional coordinates;
and the target point cloud data acquisition module is used for acquiring target point cloud data based on the target three-dimensional coordinates of all the pixel points.
9. An electronic device comprising a memory storing a computer program and a processor implementing the binocular matching-based point cloud construction method of any of claims 1 to 7 when the computer program is executed.
10. A computer readable storage medium storing a computer program, wherein the computer program when executed by a processor implements the binocular matching-based point cloud construction method of any of claims 1 to 7.
CN202310993745.1A 2023-08-08 2023-08-08 Point cloud construction method and system, equipment and storage media based on binocular matching Pending CN117036475A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310993745.1A CN117036475A (en) 2023-08-08 2023-08-08 Point cloud construction method and system, equipment and storage media based on binocular matching

Publications (1)

Publication Number Publication Date
CN117036475A true CN117036475A (en) 2023-11-10

Family

ID=88634706

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310993745.1A Pending CN117036475A (en) 2023-08-08 2023-08-08 Point cloud construction method and system, equipment and storage media based on binocular matching

Country Status (1)

Country Link
CN (1) CN117036475A (en)


Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016037486A1 (en) * 2014-09-10 2016-03-17 深圳大学 Three-dimensional imaging method and system for human body
WO2019230813A1 (en) * 2018-05-30 2019-12-05 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ Three-dimensional reconstruction method and three-dimensional reconstruction device
CN110044301A (en) * 2019-03-29 2019-07-23 易思维(天津)科技有限公司 Three-dimensional point cloud computing method based on monocular and binocular mixed measurement
CN110852979A (en) * 2019-11-12 2020-02-28 广东省智能机器人研究院 Point cloud registration and fusion method based on phase information matching
CN112595263A (en) * 2020-12-17 2021-04-02 天津大学 Binocular vision three-dimensional point cloud reconstruction measuring method for sinusoidal grating and speckle mixed pattern projection
CN113505626A (en) * 2021-03-15 2021-10-15 南京理工大学 Rapid three-dimensional fingerprint acquisition method and system
CN113074634A (en) * 2021-03-25 2021-07-06 苏州天准科技股份有限公司 Rapid phase matching method, storage medium and three-dimensional measurement system
CN113916153A (en) * 2021-10-12 2022-01-11 深圳市其域创新科技有限公司 Active and passive combined structured light three-dimensional measurement method
CN114111637A (en) * 2021-11-25 2022-03-01 天津工业大学 A 3D reconstruction method based on virtual binocular fringe structured light

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HUANG, Yijie: "Research on Key Technologies and Applications of Stereo Matching Based on Deep Learning", China Master's Theses Full-text Database, 15 February 2022 (2022-02-15) *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118505500A (en) * 2024-07-19 2024-08-16 柏意慧心(杭州)网络科技有限公司 Point cloud data splicing method, point cloud data splicing device, medium and electronic equipment

Similar Documents

Publication Publication Date Title
CN109186491B (en) Parallel multi-line laser measurement system and method based on homography matrix
CN112233249B (en) B spline surface fitting method and device based on dense point cloud
CN110487216B (en) Fringe projection three-dimensional scanning method based on convolutional neural network
CN106709947B (en) Three-dimensional human body rapid modeling system based on RGBD camera
WO2018127007A1 (en) Depth image acquisition method and system
CN111028295A (en) A 3D imaging method based on encoded structured light and binocular
CN112381847B (en) Pipeline end space pose measurement method and system
JP2024507089A (en) Image correspondence analysis device and its analysis method
CN111023994B (en) Grating three-dimensional scanning method and system based on multiple measurement
CN116129037B (en) Visual touch sensor, three-dimensional reconstruction method, system, equipment and storage medium thereof
CN111123242A (en) Combined calibration method based on laser radar and camera and computer readable storage medium
CN109520480B (en) Distance measurement method and distance measurement system based on binocular stereo vision
CN116222425A (en) Three-dimensional reconstruction method and system based on multi-view three-dimensional scanning device
CN117474753A (en) Point cloud splicing method based on binocular structured light system and related products
CN110375675B (en) Binocular grating projection measurement method based on space phase expansion
CN111833392B (en) Marking point multi-angle scanning method, system and device
CN111739071A (en) Rapid iterative registration method, medium, terminal and device based on initial value
CN112802184B (en) Three-dimensional point cloud reconstruction method, three-dimensional point cloud reconstruction system, electronic equipment and storage medium
CN110942506A (en) A kind of object surface texture reconstruction method, terminal equipment and system
CN117036475A (en) Point cloud construction method and system, equipment and storage media based on binocular matching
CN114877826B (en) Binocular stereo matching three-dimensional measurement method, system and storage medium
CN118518009B (en) Calibration parameter determining method, calibration method, medium and equipment
CN106228593A (en) A kind of image dense Stereo Matching method
JP2000171214A (en) Corresponding point retrieving method and three- dimensional position measuring method utilizing same
CN109373901B (en) Method for calculating center position of hole on plane

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination