
CN107505324B - 3D scanning device and scanning method based on binocular collaborative laser

Info

Publication number
CN107505324B
CN107505324B (application number CN201710681112.1A)
Authority
CN
China
Prior art keywords
camera
image
images
laser
cutting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710681112.1A
Other languages
Chinese (zh)
Other versions
CN107505324A (en)
Inventor
Li Jie (李杰)
Original Assignee
Li Jie (李杰)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Li Jie (李杰)
Priority to CN201710681112.1A priority Critical patent/CN107505324B/en
Publication of CN107505324A publication Critical patent/CN107505324A/en
Application granted granted Critical
Publication of CN107505324B publication Critical patent/CN107505324B/en

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01N: INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00: Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84: Systems specially adapted for particular applications
    • G01N21/88: Investigating the presence of flaws or contamination
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01B: MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00: Measuring arrangements characterised by the use of optical techniques
    • G01B11/24: Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01N: INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00: Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84: Systems specially adapted for particular applications
    • G01N21/88: Investigating the presence of flaws or contamination
    • G01N21/8851: Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G01N2021/8887: Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges, based on image processing techniques

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • General Health & Medical Sciences (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention relates to a 3D scanning device and scanning method based on binocular collaborative laser. The invention aims to solve the problems that existing point cloud scanning schemes have complex and cumbersome structures and that the point clouds they generate have low precision, making it difficult to meet the requirements of industrial production. The 3D scanning device based on binocular cooperative laser includes: a binocular camera, a stepping motor, a laser and a motor controller. The binocular camera comprises a left camera, a right camera and a binocular connecting fixing piece; the left camera and the right camera are connected through the binocular connecting fixing piece. The laser is mounted on the stepping motor, and the stepping motor is connected with the motor controller through a connecting piece. The motor controller controls the stepping motor to move, the stepping motor drives the laser to move, the laser emits laser light onto the scanned object, and the binocular camera photographs the scanned object. The invention is used in the field of 3D scanning.

Description

3D scanning device and scanning method based on binocular collaborative laser
Technical Field
The invention relates to a 3D scanning device and a scanning method.
Background
3D point cloud data is the basis of intelligent workpiece identification, grabbing and defect detection; good-quality 3D point cloud data speeds up identification and suits a wider range of requirements.
The common point cloud scanning schemes currently fall into three main types. The first is translation scanning with a single camera paired with a laser line. The second shoots pictures of the scanned object with a calibrated binocular (multi-view) structure and then processes the pictures to generate point cloud data. The third is an area-array structured light scheme: a camera paired with a projector establishes a system structure similar to a binocular pair, and the corresponding point cloud data are obtained through projection and shooting.
The point cloud generated by the single-camera-plus-laser scheme has high precision, but the scanning speed is slow, and during scanning an encoder must emit signals according to the position of the moving structure in order to trigger the camera to take pictures. The photos are then processed; because the relative position of the camera and the scanned object changes during scanning, the position of each frame of data must be converted with a corresponding algorithm. Overall, this scheme is precise, but its structure is complex and cumbersome.
In the binocular (multi-view) scheme, pictures of the scanned object are shot directly with the two cameras. The left and right images are processed to obtain corresponding feature point pairs, the 3D positions of the feature points are computed directly from the calibrated parameters and the parallax of the feature point pairs, and the remaining positions are calculated and filled in. This scheme offers high scanning speed, high calculation speed, unified calculation coordinates and a relatively simple use process. However, because mismatches occur when feature point pairs are selected from the images, the point cloud generated by the binocular scheme has low precision and can hardly meet the requirements of industrial production.
The structured light scheme is essentially a variation of the binocular scheme: a projector projects structured light carrying a given code while a camera photographs the corresponding structured light. All the photos are then decoded against the projected patterns, so that every pixel in the photos carries a unique code. This artificially supplies corresponding feature point pairs to the binocular system, so the scheme keeps all the advantages of the binocular scheme and achieves high calculation accuracy. However, the projected structured light is sensitive to external light sources and to reflections off the structure of the scanned object, shadow occlusion occurs during scanning, and the generated point cloud has low precision. If the scanned object is a stack of metal workpieces, the scanning effect degrades greatly.
Disclosure of Invention
The invention aims to solve the problems that existing point cloud scanning schemes have complex and cumbersome structures and that the point clouds they generate have low precision, making it difficult to meet the requirements of industrial production, and provides a 3D scanning device and scanning method based on binocular cooperative laser.
3D scanning device based on binocular cooperation laser includes: the system comprises a binocular camera, a stepping motor, a laser and a motor controller;
the binocular camera comprises a left camera, a right camera and a binocular connecting fixing piece;
the left camera and the right camera are connected through a binocular connecting fixing piece;
the stepping motor is provided with a laser and is connected with the motor controller through a connecting piece;
the motor controller controls the stepping motor to move, the stepping motor drives the laser to move, the laser emits laser to a scanned object, and the binocular camera shoots the scanned object.
The specific process of the 3D scanning method based on the binocular collaborative laser comprises the following steps:
step one, a motor controller controls a stepping motor to move, the stepping motor drives a laser to rotate, and the laser emits laser to a scanned object;
step two, calibrating the current binocular camera, and acquiring the calibrated internal reference matrix and distortion matrix of each of the left and right cameras in the binocular camera, the binocular calibration matrix and the relative position matrix;
step three, adjusting the aperture size and the exposure time of the binocular camera so that only the laser lines are visible in the left and right camera images;
step four, calibrating the left camera image and the right camera image which can only see the laser line in the step three by using the calibration matrix obtained in the step two, and cutting the calibrated images, namely cutting rectangular effective images at the same position and the same size from the scanned object images obtained by the left camera and the right camera to obtain cut images of the left camera and the right camera;
step five, converting the cut images of the left and right cameras into gray images through gray level conversion, respectively calculating the pixel point values of each row in the gray images corresponding to the left and right cameras, where the position of the maximum point of each row's pixel values is the average value of that row's highest-brightness positions; obtaining from these averages the average position of each camera's laser line in its image, and cutting an image of specified length from each of the left and right camera images centered on that average position, to obtain the central point positions of the laser lines of the cut images of the left and right cameras;
step six, carrying out one-to-one correspondence on line numbers of central point positions of laser lines of the left and right camera cutting images obtained in the step five, namely, the X-axis position of a first line of pixels in the left camera cutting image corresponds to the X-axis position of a first line of pixels in the right camera cutting image, the X-axis position of a second line in the left camera cutting image corresponds to the X-axis position of a second line in the right camera cutting image until the X-axis position of the Nth line in the left camera cutting image corresponds to the X-axis position of the Nth line in the right camera cutting image, and calculating the three-dimensional coordinates of each pair of corresponding points relative to a left camera coordinate system by using corresponding point position differences, corresponding point line numbers and calibrated respective internal reference matrixes of the left and right cameras;
n is the total line number of the cut images of the left camera and the right camera, and the value is a positive integer;
the corresponding point position difference is the X-axis position value in the left camera minus the X-axis position value in the right camera;
the origin is the optical center of the left camera, the Y-axis points in the upward direction of the binocular camera's image field of view, the X-axis points in the rightward direction of the image field of view, and the Z-axis is perpendicular to the XY plane;
step seven, a 3D coordinate point container is created, and the three-dimensional coordinates of each pair of corresponding points obtained by calculation in the step six relative to the left camera coordinate system are placed into the 3D coordinate point container;
step eight, repeatedly executing the step four to the step six, and putting the obtained three-dimensional coordinates of each pair of corresponding points relative to the left camera coordinate system into the container created in the step seven; and obtaining complete point cloud until the scanned object is completely scanned.
The invention has the beneficial effects that:
the invention uses the laser line to assist the positioning of the corresponding point, the laser line starts to scan from the designated position, simultaneously the binocular camera starts to shoot, the central point positions of the laser lines in the image in the left camera and the right camera are processed, and a line consisting of 3D points corresponding to the laser lines is obtained by using binocular distance measurement. And continuously moving and scanning the laser line, continuously photographing by the camera, and repeating the processing process to finally obtain the 3D point cloud model formed by a plurality of lines. The camera position is fixed, the coordinate systems of all frames are the same, splicing errors do not exist, dependency relationship does not exist between frame data and frame data, and final model errors of single-frame data error accumulation are reduced. The invention has low dependence on the camera frame rate and the motor rotating speed, and can also process the frame skipping condition. And generating the position of the point cloud coordinate system relative to the optical center of the left camera in the binocular camera.
The invention mainly uses binocular ranging, which accurately measures the distance between a specified point in the binocular images and the camera, so symmetrical points in the left and right cameras are easy to obtain. Although the binocular pair is used together with the laser, the two cameras remain fixed relative to each other; therefore no encoder is needed for signalling, the laser line position of each captured frame during scanning does not need to be known, and the structure is simple. The high-intensity laser projection resists interference from external light sources, so the generated point cloud has high precision and meets the requirements of industrial production. If the current scanning process feels too slow, an auxiliary laser can be added to the structure to obtain another laser line, or a prism can split the current laser line into two; this raises the scanning speed by 50%.
Two MV-GE132GM-T industrial cameras are used (resolution 1280 × 1024, frame rate 92 FPS); a hardware circuit board transmits the same trigger signal to both cameras, guaranteeing that the images they acquire are synchronized. For scanning a 30 × 30 cm bin, the scan time is about 5 s and the scan accuracy is about x: 0.6 mm, y: 0.3 mm, z: 0.15 mm. If a faster scan speed is required in industrial production, the structure can be equipped with an auxiliary laser, reducing the scan time to 2.5-3 s.
Drawings
FIG. 1 is a front view of a binocular coordinated laser-based 3D scanning device according to the present invention;
FIG. 2 is a side view of a binocular coordinated laser based 3D scanning apparatus of the present invention;
FIG. 3a is a schematic view of the laser line positioning principle 1 of the present invention;
FIG. 3b is a schematic view of the laser line positioning principle 2 of the present invention;
FIG. 4 is a front view of the scanning system of the present invention;
FIG. 5 is a side view of a scanning system of the present invention;
FIG. 6 is a schematic view of the laser line position at the beginning of scanning according to the present invention;
FIG. 7 is a schematic diagram of the laser line position at the end of scanning according to the present invention.
Detailed Description
The first embodiment is as follows: the present embodiment is described with reference to fig. 1 and 2, and the binocular coordinated laser based 3D scanning apparatus of the present embodiment includes: the system comprises a binocular camera 1, a stepping motor 2, a laser 3 and a motor controller 4;
the binocular camera 1 comprises a left camera 1-1, a right camera 1-2 and a binocular connecting fixing piece 1-3;
the left camera 1-1 is connected with the right camera 1-2 through a binocular connecting fixing piece 1-3;
a laser 3 is arranged on the stepping motor 2;
the motor controller 4 is in signal connection with the stepping motor 2 and controls the stepping motor 2 to move; the stepping motor 2 drives the laser 3 to move, the laser 3 emits laser light onto the scanned object 5, and the binocular camera 1 photographs the scanned object 5.
A binocular camera: the main measuring tool, used for three-dimensional coordinate measurement.
A laser line generator: used to emit the laser line and assist positioning.
A stepping motor: used to rotate the laser line generator so that the laser line sweeps the model surface at a constant speed.
An instrument support: used to build the mechanical environment for test scanning; it can be disregarded in a real environment.
Connecting piece: the mechanical part that connects the camera, the motor and the laser line generator.
The second embodiment is as follows: the embodiment is described with reference to fig. 4 and 5, and the specific process of the binocular collaborative laser-based 3D scanning method of the embodiment is as follows:
step one, the scanning program PtCreator is developed in the C++ language with the OPENCV library; the running platform is Windows;
the scanning program mainly comprises five parts: a serial port part, a camera SDK part, a calibration part, a calculation part and a human-computer interaction part.
The serial port part is mainly responsible for communicating with the motor controller and controlling operations such as advancing, stopping and resetting the stepping motor. The main operation code is as follows:
(Code listing reproduced only as images in the published patent; not available as text.)
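The listing cannot be recovered from the text, but a minimal sketch of what such a serial port part could look like on Windows follows; the COM port name, baud rate and command strings are illustrative assumptions, not taken from the patent.

    // Minimal sketch of a Win32 serial-port wrapper for motor control.
    // Port name, baud rate and command strings are hypothetical.
    #include <windows.h>
    #include <string>

    class MotorSerial {
    public:
        bool open(const char* port = "COM3") {              // assumed port name
            h_ = CreateFileA(port, GENERIC_READ | GENERIC_WRITE,
                             0, nullptr, OPEN_EXISTING, 0, nullptr);
            if (h_ == INVALID_HANDLE_VALUE) return false;
            DCB dcb = { sizeof(DCB) };
            GetCommState(h_, &dcb);
            dcb.BaudRate = CBR_9600;                        // assumed baud rate
            dcb.ByteSize = 8;
            dcb.Parity   = NOPARITY;
            dcb.StopBits = ONESTOPBIT;
            return SetCommState(h_, &dcb) != 0;
        }
        // Send one command string, e.g. "FWD", "STOP" or "RESET" (hypothetical).
        bool send(const std::string& cmd) {
            DWORD written = 0;
            return WriteFile(h_, cmd.data(), (DWORD)cmd.size(), &written, nullptr) != 0;
        }
        ~MotorSerial() { if (h_ != INVALID_HANDLE_VALUE) CloseHandle(h_); }
    private:
        HANDLE h_ = INVALID_HANDLE_VALUE;
    };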
The camera SDK part is mainly responsible for communicating with the cameras: setting the camera image size and exposure time, and acquiring camera images. The main operation code is as follows:
(Code listing reproduced only as images in the published patent; not available as text.)
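This listing likewise survives only as images. Below is a hedged sketch of the responsibilities described: the vendor calls are hidden behind a hypothetical ICamera interface, since the actual MindVision SDK function names do not appear in the text.

    // Sketch of the camera-SDK responsibilities described above. The ICamera
    // interface is hypothetical; a real implementation would forward each call
    // to the vendor SDK (here, MindVision).
    #include <opencv2/opencv.hpp>

    struct ICamera {
        virtual bool open(int index) = 0;                // connect to a camera
        virtual bool setImageSize(int w, int h) = 0;     // e.g. 1280 x 1024
        virtual bool setExposureUs(double us) = 0;       // exposure time
        virtual bool grab(cv::Mat& frame) = 0;           // fetch one image
        virtual ~ICamera() = default;
    };

    // Typical acquisition for a hardware-triggered stereo pair: both cameras
    // receive the same trigger signal, so the two frames fetched here
    // correspond to the same instant.
    bool grabPair(ICamera& left, ICamera& right, cv::Mat& l, cv::Mat& r) {
        return left.grab(l) && right.grab(r);
    }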
The calibration part's main functions are calibrating the binocular cameras, acquiring the internal parameters of the left and right cameras and their relative position, and saving the calibration data locally for the calculation module to use. The main operation code is as follows (the listing does not appear in the published text):
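A minimal sketch of what this part could look like with OPENCV's binocular calibration module (the chessboard inner-corner count and square size are assumptions):

    // Sketch of binocular calibration with OpenCV; board geometry is assumed.
    #include <opencv2/opencv.hpp>
    #include <vector>

    void calibrateStereo(const std::vector<cv::Mat>& leftViews,
                         const std::vector<cv::Mat>& rightViews) {
        const cv::Size board(9, 6);     // assumed inner-corner count
        const float square = 25.0f;     // assumed square size (mm)
        const cv::Size imgSize = leftViews[0].size();

        std::vector<cv::Point3f> boardPts;
        for (int y = 0; y < board.height; ++y)
            for (int x = 0; x < board.width; ++x)
                boardPts.emplace_back(x * square, y * square, 0.f);

        std::vector<std::vector<cv::Point3f>> obj;
        std::vector<std::vector<cv::Point2f>> ptsL, ptsR;
        for (size_t i = 0; i < leftViews.size(); ++i) {
            std::vector<cv::Point2f> cl, cr;
            if (cv::findChessboardCorners(leftViews[i], board, cl) &&
                cv::findChessboardCorners(rightViews[i], board, cr)) {
                ptsL.push_back(cl); ptsR.push_back(cr); obj.push_back(boardPts);
            }
        }

        cv::Mat Kl, Dl, Kr, Dr, R, T, E, F;
        std::vector<cv::Mat> rv, tv;
        cv::calibrateCamera(obj, ptsL, imgSize, Kl, Dl, rv, tv);  // left intrinsics
        cv::calibrateCamera(obj, ptsR, imgSize, Kr, Dr, rv, tv);  // right intrinsics
        // R, T: relative position of the right camera w.r.t. the left.
        cv::stereoCalibrate(obj, ptsL, ptsR, Kl, Dl, Kr, Dr, imgSize,
                            R, T, E, F, cv::CALIB_FIX_INTRINSIC);

        cv::FileStorage fs("stereo_calib.yml", cv::FileStorage::WRITE);
        fs << "Kl" << Kl << "Dl" << Dl << "Kr" << Kr << "Dr" << Dr
           << "R" << R << "T" << T;    // saved locally for the calculation module
    }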
The calculation part is mainly responsible for processing the images and converting the laser line positions in the left and right images into 3D point coordinates. The main operation code is as follows:
(Code listing reproduced only as images in the published patent; not available as text.)
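As the listing is unavailable as text, what follows is a minimal sketch of the core conversion this part is described as performing, assuming rectified images and the standard pinhole model; the function and parameter names are illustrative, not taken from the patent.

    // Sketch: convert per-row laser-line positions in the rectified left and
    // right images into 3D points in the left camera coordinate system.
    #include <opencv2/opencv.hpp>
    #include <vector>

    std::vector<cv::Point3f> linesTo3D(const std::vector<float>& xl,   // left x per row
                                       const std::vector<float>& xr,   // right x per row
                                       double f,             // rectified focal length (px)
                                       double cx, double cy, // principal point (px)
                                       double T) {           // baseline length
        std::vector<cv::Point3f> pts;
        for (size_t row = 0; row < xl.size(); ++row) {
            if (xl[row] <= 0.f || xr[row] <= 0.f) continue;  // discarded rows
            double d = xl[row] - xr[row];                    // disparity
            if (d <= 0.0) continue;
            double Z = f * T / d;                            // depth
            double X = (xl[row] - cx) * Z / f;
            double Y = ((double)row - cy) * Z / f;
            pts.emplace_back((float)X, (float)Y, (float)Z);
        }
        return pts;
    }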
The human-computer interaction part mainly receives user input and calls the corresponding code to complete the corresponding work; for example, when the user clicks the start-scanning button, the program controls the motor to start rotating while directing the cameras to acquire images and the calculation part to compute the point cloud.
The motor controller 4 controls the stepping motor 2 to move, the stepping motor 2 drives the laser 3 to rotate, and the laser 3 emits laser to a scanned object;
Adjust the aperture and focal length of the left and right cameras in the binocular camera 1 structure, using the matching camera image-viewing software (the specific software depends on the camera model; common industrial cameras ship with corresponding image-browsing software, for example the cameras used in the experiments use the 'MindVision demonstration program') to observe the left and right camera images of the scanned object in real time. When the image of the scanned object is sharp and its brightness matches what the human eye perceives on the object, the brightness and focal length are appropriate;
the image should be clearly visible: an oversized aperture washes out parts of the image, while an undersized aperture leaves parts too dark to make out;
step two, start the scanning program PtCreator (test software written according to the principle of this patent) and use the calibration function in PtCreator to calibrate the current binocular camera 1 (the binocular calibration method uses the open-source OPENCV binocular calibration module with the required black-and-white grid calibration plate), acquiring the calibrated internal reference matrix and distortion matrix of each of the left and right cameras in the binocular camera 1, the binocular calibration matrix and the relative position matrix;
step three, adjust the aperture size (by manual physical adjustment) and the exposure time (via the camera's companion program or the camera control module in the scanning program PtCreator) of the binocular camera so that, as far as possible, only the laser lines are visible in the left and right camera images and everything else is black; if the camera in use has neither a physical aperture adjustment nor a corresponding SDK, this step is omitted.
Step four, calibrating the left camera image and the right camera image which can only see the laser line in the step three by using the calibration matrix obtained in the step two, and cutting the calibrated images, namely cutting rectangular effective images at the same position and the same size from the scanned object images obtained by the left camera and the right camera to obtain cut images of the left camera and the right camera;
(The four sides of the calibrated left and right camera images carry irregular black edges, produced by camera distortion and by errors in mounting the two cameras; they interfere with the subsequent calculation. These regions are called non-effective images and the remaining parts effective images; to keep the position and size of the left and right images consistent, some of the effective image may also be cut away during cropping.)
Step five, converting the cut images of the left and right cameras into gray images through gray level conversion, respectively calculating the pixel point values of each row in the gray images corresponding to the left and right cameras, where the position of the maximum point of each row's pixel values is the average value of that row's highest-brightness positions; obtaining from these averages the average position of each camera's laser line in its image, and cutting an image of specified length from each of the left and right camera images centered on that average position, to obtain the central point positions of the laser lines of the cut images of the left and right cameras;
after a camera image is loaded into computer memory it is converted by gray scale conversion into a gray image. A gray image contains only achromatic colors (what is commonly understood as a black-and-white photo); the value of each pixel in the gray image is the brightness value of that point, stored as a number in the range 0-255.
The position of highest brightness is found simply by comparing the brightness values of the pixels in each row of the image, giving the position of the point with the maximum brightness value.
Each row yields one or more such values, and the gray centroid method is then used to compute the exact position of the brightest point.
Each row thus contributes one value; with many rows there are many values, and storing them all gives the center positions along one laser line.
Step six, carrying out one-to-one correspondence between the line numbers of the laser line central point positions of the left and right camera cut images obtained in step five: the X-axis position of the first line of pixels in the left camera cut image corresponds to the X-axis position of the first line of pixels in the right camera cut image, the X-axis position of the second line in the left camera cut image corresponds to the X-axis position of the second line in the right camera cut image, and so on until the X-axis position of the Nth line in the left camera cut image corresponds to the X-axis position of the Nth line in the right camera cut image (an image with N lines yields N pairs of corresponding points); the three-dimensional coordinates of each pair of corresponding points relative to the left camera coordinate system are then calculated from the corresponding point position difference, the corresponding point line number and the calibrated internal reference matrices of the left and right cameras. The specific principle of binocular ranging is as follows:
let f be the focal length of the cameras, T the distance between the two optical centers (the baseline), and Xl and Xr the imaging positions of a point in space in the left and right cameras (after calibration); the measured distance Z is then Z = f·T/(Xl − Xr);
f, T and the camera distortion coefficients are obtained through binocular calibration, and processing the images yields Xl and Xr.
In summary, the accuracy of the measured distance depends on the selection of corresponding points; in the invention the corresponding points are selected with the laser-line-assisted positioning method;
based on this basic principle of binocular ranging, the OPENCV class library can compute the three-dimensional coordinates of corresponding points directly from the position difference, the line number and the back-projection matrix (namely the relative position matrix); for the specific process refer to the '3D point coordinate calculation' part of the program introduction, and to the sketch after the coordinate definitions below;
n is the total line number of the cut images of the left camera and the right camera, and the value is a positive integer;
the corresponding point position difference is the X-axis position value in the left camera minus the X-axis position value in the right camera;
the origin is the optical center of the left camera, the Y-axis points in the upward direction of the binocular camera's image field of view, the X-axis points in the rightward direction of the image field of view, and the Z-axis is perpendicular to the XY plane and points away from the camera;
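As referenced above, here is a minimal sketch of the OPENCV route, using the reprojection matrix Q produced by cv::stereoRectify; it is equivalent to the Z = f·T/(Xl − Xr) formula, and the function name reproject is illustrative.

    // Sketch: OpenCV's 4x4 reprojection matrix Q maps (x, y, disparity)
    // directly to 3D coordinates in the left camera frame.
    #include <opencv2/opencv.hpp>
    #include <vector>

    std::vector<cv::Point3f> reproject(const std::vector<cv::Point3f>& xyd,
                                       const cv::Mat& Q) {
        // Each input element is (x, y, d): column, row and disparity of one
        // corresponding-point pair in the rectified left image.
        std::vector<cv::Point3f> xyz;
        cv::perspectiveTransform(xyd, xyz, Q);   // applies the 4x4 matrix Q
        return xyz;
    }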
Step seven, create a 3D coordinate point container (an empty array) and put the three-dimensional coordinates (3D coordinate points) of each pair of corresponding points computed in step six, relative to the left camera coordinate system, into the 3D coordinate point container (in total, as many coordinate points as the image height).
The container is a place to hold data. Suppose each frame of data yields 100 points and 100 frames are scanned in total, giving 10000 points; scattered, these would be hard to manage. An empty array able to hold 10000 points can be created first, and the 100 points obtained each time put into it, which makes transfer and storage convenient;
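In code, the container described here is simply a dynamically sized array; a sketch (linesTo3D is the illustrative helper from the calculation sketch above):

    #include <opencv2/opencv.hpp>
    #include <vector>

    std::vector<cv::Point3f> cloud;   // the "container": an empty array
    // cloud.reserve(10000);          // e.g. 100 points/frame x 100 frames
    // For each processed frame:
    //   std::vector<cv::Point3f> pts = linesTo3D(...);
    //   cloud.insert(cloud.end(), pts.begin(), pts.end());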
step eight, repeatedly executing the step four to the step six, and putting the obtained three-dimensional coordinates (3D coordinate points) of each pair of corresponding points relative to the left camera coordinate system into the container created in the step seven; and obtaining complete point cloud until the scanned object is completely scanned.
The motor moves to the termination position (determined by the scanning range, and the scanning can be finished when the edge is scanned);
The specific scanning process is controlled by the user, but the whole scan must meet certain requirements. For example, in the double-laser-line scanning of this scheme, when scanning starts one laser line is on the left side of the scanned object and the other is in the middle, as shown in fig. 6 and 7;
when the scan ends, the left laser line should have moved to the middle position (note: this position should lie to the right of where the right laser line started), while the right laser line is to the right of the scanned object.
Thus, the scanning object can be ensured to be completed, and the obtained point cloud is the complete point cloud.
Creating a pcd-format file according to the number and the positions of the points in the 3D coordinate point container, and writing data of all the points into the created pcd-format file for storage;
and (3) opening the saved file in the pcd format by using third-party software (CloudCompare), checking the quality of the generated 3D point cloud, and verifying the scanning effect.
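A minimal sketch of such a writer for the ASCII variant of the pcd format (the file path is arbitrary):

    // Sketch: write the point container to an ASCII .pcd file that
    // CloudCompare can open.
    #include <cstdio>
    #include <vector>
    #include <opencv2/opencv.hpp>

    void writePcd(const char* path, const std::vector<cv::Point3f>& cloud) {
        FILE* f = std::fopen(path, "w");
        if (!f) return;
        std::fprintf(f,
            "# .PCD v0.7 - Point Cloud Data file format\n"
            "VERSION 0.7\nFIELDS x y z\nSIZE 4 4 4\nTYPE F F F\nCOUNT 1 1 1\n"
            "WIDTH %zu\nHEIGHT 1\nVIEWPOINT 0 0 0 1 0 0 0\nPOINTS %zu\nDATA ascii\n",
            cloud.size(), cloud.size());
        for (const cv::Point3f& p : cloud)
            std::fprintf(f, "%f %f %f\n", p.x, p.y, p.z);
        std::fclose(f);
    }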
Principle of finding corresponding points
The corresponding points exist as two-dimensional points on the imaging planes. Correspondence is established first transversely and then longitudinally: transversely using the epipolar plane, and longitudinally using the laser line for assisted positioning.
Transverse epipolar-plane principle: the epipolar geometry functions provided by OPENCV are used; the effect is that after the images are rectified, the rows of the left and right camera images are aligned one to one.
The auxiliary schematic diagram of the longitudinal laser line is shown in fig. 3a and 3 b:
and (3) positioning the central line position of the laser line by using a gray level light band method, wherein the centers of the laser lines of the corresponding lines in the left camera and the right camera are the corresponding points of each line. And arranging a laser line to scan from left to right to obtain a 3D point cloud model of the scanning area.
The closer a point is to the image plane, the greater its parallax in the left and right cameras; the farther from the image plane, the smaller the parallax.
The third concrete implementation mode: this embodiment differs from the second embodiment in that: the internal reference matrix in step two is a 3 × 3 matrix recording the internal parameters of a binocular camera, comprising the focal lengths in the X and Y directions and the position of the optical center in the image shot by the camera;
the distortion matrix is a 1 × 5 matrix recording the distortion of a binocular camera: its radial and tangential distortion parameters;
the binocular calibration matrix means that, after the left and right camera images are calibrated with their respective calibration matrices, the two cameras' images are consistent in height, i.e. the same photographed object sits at the same height in the left and right camera images;
the binocular calibration data actually consists of six matrices: a left camera calibration matrix, a left camera intrinsic matrix, a right camera calibration matrix and a right camera intrinsic matrix, which are obtained directly from calibration, plus a left and a right mapping matrix. The left camera calibration matrix together with the left camera intrinsic matrix yields the left camera mapping matrix, and likewise for the right camera. A mapping matrix has the same size as the image and maps each image position to a new position; the two mapping matrices are what is finally used, but they are large (for a 2048 × 2048 image the matrix is equally large), whereas the calibration matrix (3 × 3) and the intrinsic matrix (3 × 4) are small. The four small matrices are therefore stored, and the two mapping matrices are computed when needed and used to calibrate the images; in the principle description the four stored matrices can be ignored and the mapping matrices treated simply as intermediate calculation variables.
Because of installation errors of the binocular camera, the same point sits at different heights in the left and right camera images; after calibration with the calibration matrices, points at the same position have consistent heights in both images, and at the same time the distortions of the left and right camera images are corrected by the calibration matrices.
The relative position matrix is a space position transformation matrix of the right camera coordinate system relative to the left camera coordinate system;
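In OPENCV terms, and assuming the patent's 3 × 3 calibration matrices correspond to the rectification matrices R1/R2 and its 3 × 4 intrinsic matrices to the projection matrices P1/P2 of cv::stereoRectify, the six matrices relate as in this sketch:

    // Sketch: four small matrices from rectification, two large per-pixel
    // mapping matrix pairs computed on demand.
    #include <opencv2/opencv.hpp>

    void buildRectifyMaps(const cv::Mat& Kl, const cv::Mat& Dl,   // left intrinsics
                          const cv::Mat& Kr, const cv::Mat& Dr,   // right intrinsics
                          const cv::Mat& R, const cv::Mat& T,     // relative position
                          cv::Size size, cv::Mat maps[4], cv::Mat& Q) {
        cv::Mat Rl, Rr, Pl, Pr;   // 3x3 rectification and 3x4 projection matrices
        cv::stereoRectify(Kl, Dl, Kr, Dr, size, R, T, Rl, Rr, Pl, Pr, Q);
        // The image-sized mapping matrices, computed when needed:
        cv::initUndistortRectifyMap(Kl, Dl, Rl, Pl, size, CV_32FC1, maps[0], maps[1]);
        cv::initUndistortRectifyMap(Kr, Dr, Rr, Pr, size, CV_32FC1, maps[2], maps[3]);
        // Applied with: cv::remap(srcL, dstL, maps[0], maps[1], cv::INTER_LINEAR);
    }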
other steps and parameters are the same as those in the second embodiment.
The fourth concrete implementation mode: this embodiment differs from the second or third embodiment in that: in step five, the clipped images of the left and right cameras are converted into gray images through gray scale conversion; the pixel point values of each row in the gray images corresponding to the left and right cameras are calculated respectively, the position of the maximum point of each row's pixel values being the average value of that row's highest-brightness positions; the average position of each camera's laser line in its image is obtained from these averages, and an image of specified length is clipped from each of the left and right camera images centered on that average position, obtaining the central point positions of the laser lines of the clipped images of the left and right cameras;
when the laser strikes an object's surface it looks like a line, but under magnification the brightness is found to follow a Gaussian distribution, the brightest region spanning 2-5 pixels; the positions written here are mean values, computed with the gray centroid (gray-scale center-of-gravity) method;
each image has only one clipping position (only the X direction is processed, because the Y direction aligns automatically once image calibration is performed); Cl denotes the left image clipping position and Cr the right image clipping position. The corresponding point data are then (left image laser position + Cl) and (right image laser position + Cr). If the abscissa of one or both points of a corresponding pair in the left and right images is 0, the pair is discarded;
the specific process is as follows:
1) converting the cut images of the left camera and the right camera into gray images through gray level conversion;
2) calculating the pixel point values of the first row in the gray images corresponding to the left and right cameras respectively; the position of the maximum point of the pixel values is the average of the positions with the highest brightness in that row, which is the laser line position of the first row;
if M maximum points exist, calculating the maximum span of the M maximum point positions, wherein the maximum span is the difference between the X-axis position of the Mth maximum point and the X-axis position of the first maximum point;
if the maximum span is smaller than the limit value (specified by a user, related to the width of the laser line, generally the specified value is 8-10), the position of the maximum point of the pixel point value is (the X-axis position of the Mth maximum point + the X-axis position of the first maximum point)/2, namely the position of the first row of laser lines;
if the maximum span is larger than or equal to the limit value, abandoning the position of the maximum span, and setting the position as 0, namely the position of the first row of laser lines;
m is positive integer;
the limiting value is 8-10;
Assume several maxima exist: the first X position is 100, the second 104, the third 150 and the fourth 152; the maximum span is 152 − 100 = 52, which is greater than the limit value, so the position is set to 0. If one X position is 100, the second is 101, the third is 102 and there are no other maxima, the maximum span is 102 − 100 = 2, which is smaller than the limit value, so the position is (100 + 102)/2 = 101.
The above processes a single image; the left and right images are both processed in this way.
3) If the gray images corresponding to the left and right cameras have N rows, repeat step 2) for each row of the gray images, computing the N laser line positions, and combine them into a one-dimensional matrix P = {P0, P1, P2, ..., Pn}, where Pn denotes the laser line position of row n;
performing median filtering on the one-dimensional matrix P, filtering out zero values, and then averaging the remaining positions to obtain the average position Pf of each camera's laser line in its image;
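A minimal sketch of steps 2) and 3) under the assumptions above (8-bit gray images; a simple 3-tap median filter stands in for the unspecified filter window):

    // Sketch: per-row brightest-point detection with the span check,
    // then median filtering and averaging of the per-row positions.
    #include <opencv2/opencv.hpp>
    #include <algorithm>
    #include <vector>

    std::vector<float> rowPeaks(const cv::Mat& gray, int spanLimit = 9) {
        std::vector<float> pos(gray.rows, 0.f);        // 0 marks a discarded row
        for (int y = 0; y < gray.rows; ++y) {
            const uchar* row = gray.ptr<uchar>(y);
            uchar maxV = 0;
            std::vector<int> maxXs;                    // all columns at the max value
            for (int x = 0; x < gray.cols; ++x) {
                if (row[x] > maxV)       { maxV = row[x]; maxXs.assign(1, x); }
                else if (row[x] == maxV) { maxXs.push_back(x); }
            }
            int span = maxXs.back() - maxXs.front();   // maximum span
            if (maxV > 0 && span < spanLimit)          // limit value, e.g. 8-10
                pos[y] = 0.5f * (maxXs.front() + maxXs.back());
        }
        return pos;
    }

    // Average position Pf: median-filter, drop zeros, then take the mean.
    float averagePosition(const std::vector<float>& p) {
        std::vector<float> m(p);
        for (size_t i = 1; i + 1 < p.size(); ++i) {
            float a[3] = { p[i - 1], p[i], p[i + 1] };
            std::sort(a, a + 3);
            m[i] = a[1];
        }
        double sum = 0; int n = 0;
        for (float v : m) if (v > 0.f) { sum += v; ++n; }
        return n ? (float)(sum / n) : 0.f;
    }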
4) Crop an image of the specified length from each of the left and right camera images (the specific length is related to the overall width of the image and is generally one twentieth of it), centered on the average position Pf of that camera's laser line. If the distance from Pf to the leftmost edge of the image is less than the specified length, the crop start shifts right (toward the right of the image field of view) and the current crop start position is set to 0;
if the distance from Pf to the rightmost edge of the image is less than the specified length, the crop position shifts left (toward the left of the image field of view) and the current crop start position is the source image width minus the crop width;
if Pf lies in the middle of the image, the current crop start position is Pf minus half the specified crop length;
recording the starting position Cl of the left image cutting; the starting position Cr of the right image cutting;
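The crop-start logic of step 4) reduces to a small clamping function; a sketch (it assumes the crop width does not exceed the source width, and its return value is Cl for the left image or Cr for the right):

    // Sketch: choose the crop start so the window stays inside the source image.
    int cropStart(float Pf, int srcWidth, int cropWidth) {
        int start = (int)(Pf - cropWidth / 2);   // centered on Pf
        if (start < 0) start = 0;                // Pf too close to the left edge
        if (start + cropWidth > srcWidth)        // Pf too close to the right edge
            start = srcWidth - cropWidth;
        return start;                            // recorded as Cl (or Cr)
    }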
5) Because the brightness of the laser line in the image follows a Gaussian distribution, the precise laser line positions of the left and right cropped images are computed from the cropped images obtained in step 4) using the gray centroid method (an equivalent gray-center variant can also be used; the results differ little). The positions computed above were used only for cropping and are rough; the positions computed here reach sub-pixel accuracy;
adding the corresponding cutting initial position Cl to the obtained laser line position of the left camera cutting image to obtain the central point position of the laser line of the left camera cutting image;
If both the left and right images are processed, then for corresponding left and right images of n rows we obtain n pairs of corresponding points, distributed in the left and right camera cropped images as:
{PTl0(Pl0, 0), PTr0(Pr0, 0)}, {PTl1(Pl1, 1), PTr1(Pr1, 1)}, ..., {PTln(Pln, n), PTrn(Prn, n)},
where PTlk is the k-th left point and PTrk the k-th right point (k = 0, 1, ..., n); Plk is the abscissa of PTlk and k its ordinate, and Prk is the abscissa of PTrk and k its ordinate.
note that the X coordinate calculated here is relative to the source image;
adding the corresponding cutting initial position Cr to the laser line position of the obtained right camera cutting image to obtain the central point position of the laser line of the right camera cutting image;
If the X-axis coordinate of one or both points of a pair among the laser line central points of the left and right camera cropped images is 0, that laser line central point pair is discarded.
Other steps and parameters are the same as those in the second or third embodiment.
The fifth concrete implementation mode: this embodiment is different from one of the second to fourth embodiments in that: in step five, the specified width is determined by the image size and can be set to one tenth of the size of the cropped left and right camera images;
other steps and parameters are the same as those of one of the second to fourth embodiments.
The sixth specific implementation mode: the present embodiment is different from one of the second to fifth embodiments in that: the central point position of the laser line is relative to the source image, not to the intercepted (cropped) image; the cropped image is used only to speed up image processing;
and the source images are left and right camera images which are not cut in the fourth step.
Other steps and parameters are the same as those of one of the second to fifth embodiments.
The following examples were used to demonstrate the beneficial effects of the present invention:
the first embodiment is as follows:
the 3D scanning device and method based on binocular cooperative laser in the embodiment are specifically prepared according to the following steps:
Two MV-GE132GM-T industrial cameras are used (resolution 1280 × 1024, frame rate 92 FPS); a hardware circuit board transmits the same trigger signal to both cameras, guaranteeing that the images they acquire are synchronized. For scanning a 30 × 30 cm bin, the scan time is about 5 s and the scan accuracy is about x: 0.6 mm, y: 0.3 mm, z: 0.15 mm. If a faster scan speed is required in industrial production, the structure can be equipped with an auxiliary laser, reducing the scan time to 2.5-3 s. If higher precision is required, cameras with higher resolution and frame rate can be substituted.
The present invention is capable of other embodiments and its several details are capable of modifications in various obvious respects, all without departing from the spirit and scope of the present invention.

Claims (3)

1. The 3D scanning method based on the binocular collaborative laser is characterized by comprising the following steps: the specific process of the 3D scanning method based on the binocular collaborative laser comprises the following steps:
step one, a motor controller (4) controls a stepping motor (2) to move, the stepping motor (2) drives a laser (3) to rotate, and the laser (3) emits laser to a scanned object;
step two, calibrating the current binocular camera (1), and acquiring the calibrated internal reference matrix and distortion matrix of each of the left and right cameras in the binocular camera (1), the binocular calibration matrix and the relative position matrix;
step three, adjusting the aperture size and the exposure time of the binocular camera (1) so that only the laser lines are visible in the left and right camera images;
step four, calibrating the left camera image and the right camera image which can only see the laser line in the step three by using the calibration matrix obtained in the step two, and cutting the calibrated images, namely cutting rectangular effective images at the same position and the same size from the scanned object images obtained by the left camera and the right camera to obtain cut images of the left camera and the right camera;
step five, converting the cut images of the left and right cameras into gray images through gray level conversion, respectively calculating the pixel point values of each row in the gray images corresponding to the left and right cameras, where the position of the maximum point of each row's pixel values is the average value of that row's highest-brightness positions; obtaining from these averages the average position of each camera's laser line in its image, and cutting an image of the specified width from each of the left and right camera images centered on that average position, to obtain the central point positions of the laser lines of the cut images of the left and right cameras;
step six, carrying out one-to-one correspondence on line numbers of central point positions of laser lines of the left and right camera cutting images obtained in the step five, namely, the X-axis position of a first line of pixels in the left camera cutting image corresponds to the X-axis position of a first line of pixels in the right camera cutting image, the X-axis position of a second line in the left camera cutting image corresponds to the X-axis position of a second line in the right camera cutting image until the X-axis position of the Nth line in the left camera cutting image corresponds to the X-axis position of the Nth line in the right camera cutting image, and calculating the three-dimensional coordinates of each pair of corresponding points relative to a left camera coordinate system by using corresponding point position differences, corresponding point line numbers and calibrated respective internal reference matrixes of the left and right cameras;
n is the total line number of the cut images of the left camera and the right camera, and the value is a positive integer;
the corresponding point position difference is the X-axis position value in the left camera minus the X-axis position value in the right camera;
the origin is the optical center of the left camera, the Y-axis points in the upward direction of the binocular camera's image field of view, the X-axis points in the rightward direction of the image field of view, and the Z-axis is perpendicular to the XY plane;
step seven, a 3D coordinate point container is created, and the three-dimensional coordinates of each pair of corresponding points obtained by calculation in the step six relative to the left camera coordinate system are placed into the 3D coordinate point container;
step eight, repeatedly executing the step four to the step six, and putting the obtained three-dimensional coordinates of each pair of corresponding points relative to the left camera coordinate system into the container created in the step seven; until the scanning object is completely scanned, obtaining complete point cloud;
the internal reference matrix in step two is a 3 × 3 matrix recording the internal parameters of the binocular camera, comprising the focal lengths in the X-axis and Y-axis directions of the binocular camera (1) and the position of the optical center in the image shot by the binocular camera;
the distortion matrix is a 1 × 5 matrix recording the distortion of the binocular camera: its radial and tangential distortion parameters;
the relative position matrix is a space position transformation matrix of the right camera coordinate system relative to the left camera coordinate system;
in step five, the clipped images of the left and right cameras are converted into gray images through gray scale conversion; the pixel point values of each row in the gray images corresponding to the left and right cameras are calculated respectively, the position of the maximum point of each row's pixel values being the average value of that row's highest-brightness positions; the average position of each camera's laser line in its image is obtained from these averages, and an image of the specified width is clipped from each of the left and right camera images centered on that average position, obtaining the central point positions of the laser lines of the clipped images of the left and right cameras; the specific process is as follows:
1) converting the cut images of the left camera and the right camera into gray images through gray level conversion;
2) calculating the pixel point values of the first row in the gray images corresponding to the left and right cameras respectively; the position of the maximum point of the pixel values is the average of the positions with the highest brightness in that row, which is the laser line position of the first row;
if M maximum points exist, calculating the maximum span of the M maximum point positions, wherein the maximum span is the difference between the X-axis position of the Mth maximum point and the X-axis position of the first maximum point;
if the maximum span is smaller than the limit value, the position of the maximum point of the pixel point value is (Mth maximum point X-axis position + first maximum point X-axis position)/2, namely the position of the first row of laser lines;
if the maximum span is larger than or equal to the limit value, abandoning the position of the maximum span, and setting the position as 0, namely the position of the first row of laser lines;
m is positive integer;
the limiting value is 8-10;
3) if the gray images corresponding to the left and right cameras have N rows, repeating step 2) for each row of the gray images, computing the N laser line positions, and combining them into a one-dimensional matrix P = {P0, P1, P2, ..., Pn}, where Pn denotes the laser line position of row n;
performing median filtering on the one-dimensional matrix P, filtering out a value of 0, and then averaging the rest positions to obtain the average positions Pf of respective laser lines of the left camera and the right camera in the image;
4) cutting images of the specified width from the left and right camera images centered on the average position Pf of each camera's laser line; if the distance from Pf to the leftmost side of the image is less than the specified width, the cutting start shifts right and the current cutting start position is set to 0, obtaining a cut image;
if the distance from Pf to the rightmost side of the image is less than the specified width, the cutting position shifts left and the current cutting start position is the source image width minus the cutting width, obtaining a cut image;
if Pf lies in the middle of the image, the current cutting start position is Pf minus half the specified cutting width, obtaining a cut image;
recording the starting position Cl of the left image cutting; the starting position Cr of the right image cutting;
5) calculating the laser line positions of the cut images of the left camera and the right camera by using a gray center method for the cut images obtained in the step 4);
adding the corresponding cutting initial position Cl to the obtained laser line position of the left camera cutting image to obtain the central point position of the laser line of the left camera cutting image;
adding the corresponding cutting initial position Cr to the laser line position of the obtained right camera cutting image to obtain the central point position of the laser line of the right camera cutting image;
if the X-axis coordinate of one or both points of a pair among the laser line central points of the left and right camera cut images is 0, that laser line central point pair is discarded.
2. The binocular coordinated laser-based 3D scanning method according to claim 1, wherein: and the specified width in the step five is set to be one tenth of the size of the clipped image of the left camera and the right camera.
3. The binocular coordinated laser-based 3D scanning method according to claim 2, wherein: the central point of the laser line is positioned relative to the source image in the fifth step;
and the source images are left and right camera images which are not cut in the fourth step.
CN201710681112.1A 2017-08-10 2017-08-10 3D scanning device and scanning method based on binocular collaborative laser Active CN107505324B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710681112.1A CN107505324B (en) 2017-08-10 2017-08-10 3D scanning device and scanning method based on binocular collaborative laser

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710681112.1A CN107505324B (en) 2017-08-10 2017-08-10 3D scanning device and scanning method based on binocular collaborative laser

Publications (2)

Publication Number Publication Date
CN107505324A CN107505324A (en) 2017-12-22
CN107505324B true CN107505324B (en) 2020-06-16

Family

ID=60690689

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710681112.1A Active CN107505324B (en) 2017-08-10 2017-08-10 3D scanning device and scanning method based on binocular collaborative laser

Country Status (1)

Country Link
CN (1) CN107505324B (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108534708B (en) * 2018-03-30 2020-04-24 深圳积木易搭科技技术有限公司 Binocular three-dimensional scanner assembly and scanning method
CN109590231A (en) * 2018-12-19 2019-04-09 上海易持自动系统有限公司 A kind of non-regular shape material image measurement measuring and controlling device and method
CN111738971B (en) * 2019-03-19 2024-02-27 北京伟景智能科技有限公司 Circuit board stereoscopic scanning detection method based on line laser binocular stereoscopic vision
CN111452036B (en) * 2019-03-19 2023-08-04 北京伟景智能科技有限公司 Workpiece grabbing method based on line laser binocular stereoscopic vision
US10565737B1 (en) 2019-07-09 2020-02-18 Mujin, Inc. Method and system for performing automatic camera calibration for a scanning system
CN112304951A (en) * 2019-08-01 2021-02-02 唐山英莱科技有限公司 Visual detection device and method for high-reflection welding seam through binocular single-line light path
CN110631543B (en) * 2019-09-17 2024-06-14 中国地质大学(武汉) Device and method for monitoring deep deformation of landslide with circular sliding surface placed in shallow layer of ground surface
CN110595391B (en) * 2019-09-26 2024-06-18 桂林电子科技大学 Reticle structured light binocular vision scanning device
CN111479053B (en) * 2020-03-25 2021-07-16 清华大学 Software control system and method for scanning light field multicolor microscopy imaging
CN112285125A (en) * 2020-11-11 2021-01-29 安徽锦希自动化科技有限公司 Detection device for collecting dust deposition degree on solar panel
CN112770046B (en) * 2020-12-21 2022-04-01 深圳市瑞立视多媒体科技有限公司 Generation method of control SDK of binocular USB camera and control method of binocular USB camera
CN113048908B (en) * 2021-03-08 2022-04-26 中国海洋大学 Submarine landform detection image generation system based on laser scanning
CN114111574B (en) * 2021-11-23 2024-01-09 西安理工大学 High-temperature red-hot target binocular line laser vision three-dimensional measurement method
CN115628689A (en) * 2022-10-20 2023-01-20 湖南航天远望科技有限公司 Calibration device and method for camera attitude installation error
CN118710733A (en) * 2024-07-09 2024-09-27 中建三局集团有限公司 A multi-camera extrinsic parameter calibration device and method based on laser line scanning

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07294443A (en) * 1994-04-25 1995-11-10 Central Japan Railway Co Ballast condition inspection device for road shoulders
WO1999001988A1 (en) * 1997-07-02 1999-01-14 Ericsson, Inc. Three-dimensional imaging and display system
CN102012217A (en) * 2010-10-19 2011-04-13 南京大学 A three-dimensional geometric shape measurement method for large-shaped objects based on binocular vision

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103940369A (en) * 2014-04-09 2014-07-23 大连理工大学 Quick morphology vision measuring method in multi-laser synergic scanning mode
CN104390584B (en) * 2014-05-22 2018-04-06 北京中天荣泰科技发展有限公司 Binocular vision laser calibration measurement apparatus and measuring method
CN105157602A (en) * 2015-07-13 2015-12-16 西北农林科技大学 Remote three-dimensional scanning system and method based on machine vision
CN105300316B (en) * 2015-09-22 2017-10-13 大连理工大学 Optical losses rapid extracting method based on grey scale centre of gravity method
CN105698699B (en) * 2016-01-26 2017-12-19 大连理工大学 A kind of Binocular vision photogrammetry method based on time rotating shaft constraint

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07294443A (en) * 1994-04-25 1995-11-10 Central Japan Railway Co Ballast condition inspection device for road shoulders
WO1999001988A1 (en) * 1997-07-02 1999-01-14 Ericsson, Inc. Three-dimensional imaging and display system
CN102012217A (en) * 2010-10-19 2011-04-13 南京大学 A three-dimensional geometric shape measurement method for large-shaped objects based on binocular vision

Also Published As

Publication number Publication date
CN107505324A (en) 2017-12-22

Similar Documents

Publication Publication Date Title
CN107505324B (en) 3D scanning device and scanning method based on binocular collaborative laser
CN112804507B (en) Projector correction method, projector correction system, storage medium, and electronic device
US8743349B2 (en) Apparatus and method to correct image
US10237532B2 (en) Scan colorization with an uncalibrated camera
US9858684B2 (en) Image processing method and apparatus for calibrating depth of depth sensor
CN109405765A (en) A kind of high accuracy depth calculation method and system based on pattern light
CN107462184A (en) The parameter recalibration method and its equipment of a kind of structured light three-dimensional measurement system
CN108340405B (en) Robot three-dimensional scanning system and method
CN106949845A (en) Two-dimensional laser galvanometer scanning system and scaling method based on binocular stereo vision
JP2003130621A (en) Method and system for measuring three-dimensional shape
CN106408556A (en) Minimal object measurement system calibration method based on general imaging model
CN106548489A (en) The method for registering of a kind of depth image and coloured image, three-dimensional image acquisition apparatus
CN105306922B (en) Acquisition methods and device of a kind of depth camera with reference to figure
CN107241592B (en) Imaging device and imaging method
CN111637834B (en) Three-dimensional data measuring device and method
CN111402411A (en) Scattered object identification and grabbing method based on line structured light
KR102706337B1 (en) Machine vision system with computer-generated virtual reference object
JP2023546739A (en) Methods, apparatus, and systems for generating three-dimensional models of scenes
CN107564051B (en) A kind of depth information collection method and system
US20200088508A1 (en) Three-dimensional information generating device and method capable of self-calibration
CN113298886A (en) Calibration method of projector
KR20200046789A (en) Method and apparatus for generating 3-dimensional data of moving object
CN112361982A (en) Method and system for extracting three-dimensional data of large-breadth workpiece
US20240291952A1 (en) Calibration method
Huang et al. Line laser based researches on a three-dimensional measuring system

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Li Jie

Inventor before: Wang Xing

CB03 Change of inventor or designer information
TA01 Transfer of patent application right

Effective date of registration: 20180321

Address after: 150000 Nantong street, Nangang District, Harbin, Heilongjiang Province, No. 145-11

Applicant after: Li Jie

Address before: 150000 Harbin City, Heilongjiang 150000

Applicant before: Wang Xing

TA01 Transfer of patent application right
GR01 Patent grant
GR01 Patent grant