WO2019100933A1 - Method, device and system for three-dimensional measurement - Google Patents
- Publication number
- WO2019100933A1 (application PCT/CN2018/114016)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- camera
- feature point
- abscissa
- ordinate
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/24—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
- G01B11/25—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
Definitions
- The present invention generally relates to methods, apparatus, and systems for three-dimensional measurement, and more particularly to three-dimensional measurement methods, apparatus, and systems based on computer vision techniques.
- Three-dimensional measurement based on computer vision and three-dimensional reconstruction based on three-dimensional measurement are widely used in industry, security, transportation and entertainment.
- Industrial robots need three-dimensional spatial information to sense the real world and make decisions.
- Security monitoring adds 3D scene information to improve target recognition accuracy.
- Autonomous driving, drones, and the like need to sense the positions of surrounding objects in real time.
- Building restoration, cultural relic restoration, and the like require three-dimensional reconstruction of buildings and relics, especially reconstruction based on high-density true-color point clouds.
- Characters formed by three-dimensional reconstruction are widely used in the movie, animation, and game industries.
- Three-dimensional virtual characters formed at least in part by three-dimensional reconstruction are also widely used in the VR and AR industries.
- Three-dimensional reconstruction based on a binocular camera recovers the three-dimensional coordinates of an object by calculating the positional deviation between the image points that correspond to the same object point in the two images. Its core steps are selecting feature points in the images and finding/screening groups of feature points on different images that may correspond to the same object point (i.e., feature point matching).
- A common feature of the above methods for screening matching feature point groups is that they decide which feature point on one image is most similar to the feature point to be matched on another image. One problem with this is that, in images of complex scenes, the mismatch rate is often very high. For example, in an image of a periodic structure, two image points that do not belong to the same object point, or that lie on different periods of the same structure, may well have the largest normalized cross-correlation of their neighborhoods or the smallest pixel difference, while the true corresponding point may sit where the normalized cross-correlation is only second largest or the pixel difference second smallest. When there is no other dimension along which to verify that a match is valid, such a match error persists and propagates.
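To make this ambiguity concrete, here is a minimal pure-Python sketch (with hypothetical gray-level patches, not data from the patent) of zero-mean normalized cross-correlation: in a roughly periodic texture, a candidate one period away can score essentially as high as the true correspondence, so similarity alone cannot disambiguate.

```python
import math

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two equal-length gray-level patches."""
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

# Hypothetical 1-D gray-level patches cut from a roughly periodic texture.
template   = [10, 50, 90, 50, 10, 50, 90]
candidate1 = [12, 48, 91, 49, 11, 52, 88]   # the true correspondence
candidate2 = [11, 49, 90, 51, 10, 49, 92]   # one period shifted, yet scores just as high

scores = [ncc(template, candidate1), ncc(template, candidate2)]
```

Both scores come out above 0.99, so a similarity threshold cannot tell the candidates apart; this is the match error the parallax ratio relationship is meant to catch.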
- Another problem with the above-described existing methods of screening matching feature point groups is that the amount of calculation is very large, resulting in excessive processing time and/or making the required computing device difficult to miniaturize and costly.
- According to an aspect of the present invention, there is provided a three-dimensional measurement method based on a parallax ratio relationship, comprising: receiving a first image, a second image, and a third image from a first camera, a second camera, and a third camera, respectively.
- The first camera, the second camera, and the third camera have the same focal length and mutually parallel optical axes, and their optical centers are disposed on the same plane perpendicular to the optical axes. The method further comprises: extracting feature points in the first image, the second image, and the third image; and matching the feature points in the first image, the second image, and the third image, the matching including screening matching feature point groups based on the following parallax ratio relationship: for the same object point, the first parallax d 1 generated in a first direction between the first image and the second image and the second parallax d 2 generated in a second direction between the second image and the third image satisfy d 1 : d 2 = D 1 : D 2 .
- According to another aspect, there is provided a three-dimensional measuring apparatus comprising: a processor; and a memory storing program instructions which, when executed by the processor, cause the processor to perform the following operations: receiving a first image, a second image, and a third image; extracting feature points in the first image, the second image, and the third image, respectively; and matching the feature points in the first image, the second image, and the third image.
- The matching includes screening matching feature point groups based on the following coordinate relationship: the difference between the abscissa of the feature point in the first image and the abscissa of the feature point in the second image has a predetermined proportional relationship with the difference between the abscissa of the feature point in the second image and the abscissa of the feature point in the third image, and the feature points in the first image and the third image have the same ordinate. The operations further include calculating the three-dimensional coordinates of the object point corresponding to each matched feature point group.
- a three-dimensional measuring apparatus for use with a camera array for three-dimensional measurement.
- The camera array includes at least a first camera, a second camera, and a third camera, which have the same focal length and mutually parallel optical axes, and whose optical centers are arranged on the same plane perpendicular to the optical axes.
- The three-dimensional measuring apparatus includes a processing unit that receives a first image, a second image, and a third image from the first camera, the second camera, and the third camera, respectively, and is configured to perform the following processing: extracting feature points in the first image, the second image, and the third image; and matching the feature points in the first image, the second image, and the third image, the matching including screening matching feature point groups based on the following parallax ratio relationship: for the same object point, the first parallax d 1 generated in the first direction between the first image and the second image and the second parallax d 2 generated in the second direction between the second image and the third image satisfy d 1 : d 2 = D 1 : D 2 .
- a three-dimensional measurement system based on a parallax ratio relationship
- The system comprises a camera array and any one of the above three-dimensional measurement devices, the camera array including at least a first camera, a second camera, and a third camera, which have the same focal length and mutually parallel optical axes, and whose optical centers are arranged on the same plane perpendicular to the optical axes.
- FIGS. 1A, 1B, and 1C show the relationship between binocular parallax and the relative positions of the camera optical centers;
- FIG. 3 is a schematic structural block diagram showing an example of a three-dimensional measuring system of the present invention.
- Figure 4 shows a schematic overall flow chart of the three-dimensional measuring method of the present invention
- FIG. 5 illustrates an example of a camera array that can be used in conjunction with the three-dimensional measurement method according to the first embodiment of the present invention
- FIGS. 6A, 6B, and 6C show the parallax ratio relationships between the cameras shown in FIG. 5;
- FIG. 7 is a view schematically showing an example of an image obtained by the camera array shown in FIG. 5 and illustrating coordinate difference values of image points;
- FIG. 8 is a schematic flow chart showing a three-dimensional measuring method according to a first embodiment of the present invention.
- FIG. 9 shows an example of a process of screening a feature point group which can be used for the three-dimensional measurement method according to the first embodiment of the present invention.
- FIG. 10 is a view schematically comparing feature point matching based on similarity calculation and feature point matching based on coordinate relationship in the three-dimensional measurement method according to the first embodiment of the present invention.
- Figure 11 is a flow chart showing an example of a three-dimensional measuring method according to a first embodiment of the present invention.
- FIG. 12 shows an example of a camera array that can be used in combination with the three-dimensional measurement method according to the second embodiment of the present invention and shows its parallax ratio relationship;
- Figure 13 is a view schematically showing an example of an image obtained by the camera array shown in Figure 12 and illustrating coordinate difference values of image points;
- FIG. 14 shows an example of a process of screening a feature point group which can be used for the three-dimensional measurement method according to the second embodiment of the present invention
- FIG. 15 shows an example of a camera array that can be used in combination with the three-dimensional measurement method according to the third embodiment of the present invention and shows its parallax ratio relationship;
- FIG. 16 is a view schematically showing an example of an image obtained by the camera array shown in FIG. 15 and illustrating coordinate difference values of image points;
- FIG. 17 shows an example of a process of screening a feature point group that can be used in the three-dimensional measurement method according to the third embodiment of the present invention.
- Figure 18 is a flowchart showing one example of a three-dimensional measuring method according to a third embodiment of the present invention.
- FIG. 19 illustrates an example of a camera array arrangement that can be used in a three-dimensional measurement system in accordance with an embodiment of the present invention.
- O l and O r denote the optical centers of the left and right cameras, respectively (the optical centers of the camera lenses), and I l and I r denote the image planes of the left and right cameras (hereinafter the left image and the right image, respectively).
- The image plane of a camera is determined by the position of the photosensitive surface of the image sensor it contains, such as a CCD or CMOS sensor, and is typically located at the focal length f from the optical center.
- A camera usually forms an inverted real image of the object, i.e., the image plane lies on the opposite side of the optical center from the object. However, for convenience of illustration and analysis, the image planes are shown at symmetric positions on the same side as the object. It should be understood that this does not change the parallax relationships discussed in this application.
- Both cameras for binocular vision have the same focal length.
- The optical centers O l and O r of the left and right cameras are separated by a distance D (also referred to as the "baseline"), and the corresponding optical axes Z l and Z r are parallel to each other.
- Take the optical center O l of the left camera as the origin of the camera coordinate system, and the direction of the optical axes of the left and right cameras as the Z direction.
- the optical centers of the left and right cameras are located in the same plane (ie, the XY plane) perpendicular to the optical axis.
- Take the direction of the line connecting the optical centers O l and O r as the X direction of the camera coordinate system, and the direction perpendicular to both the optical axis and the optical-center line as the Y direction.
- The camera coordinate system may also be set in other ways, for example with the optical center O r of the right camera as the origin; a differently set camera coordinate system does not affect the parallax ratio relationship discussed below.
- the image planes corresponding to the respective cameras have image plane coordinate systems set in the same manner.
- The point at which the camera's optical axis intersects the image plane is the origin of the image plane coordinate system, and the directions parallel to the X and Y axes of the camera coordinate system are the x-axis and y-axis of the image plane coordinate system, respectively.
- The image coordinate system can also be set in other ways, for example using a corner of the photosensitive surface of the image sensor as the origin; different settings of the image coordinate system do not affect the parallax ratio relationship discussed below, which is described in the image plane coordinate system.
- FIG. 1A shows the relationship between the binocular parallax and the optical center distance in the direction of the line connecting the camera optical centers.
- Figure 1A shows the projection of the entire imaging system on the XZ plane.
- the same object point P[X, Y, Z] in space is imaged by the left and right cameras to the image points P l and P r , respectively .
- From the similar triangles in FIG. 1A, the parallax d = x l − x r satisfies d / D = f / Z, i.e., d = D · f / Z, where x l and x r are the x-axis coordinates of the image points P l and P r in the left image plane I l and the right image plane I r , respectively.
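The binocular relation above can be illustrated with a short sketch (hypothetical numbers; all lengths are assumed to be in the same unit):

```python
def depth_from_disparity(x_l, x_r, baseline, focal_length):
    """Depth Z from the binocular relation d / D = f / Z, i.e. Z = f * D / d,
    where d = x_l - x_r is the parallax along the baseline direction."""
    d = x_l - x_r
    if d <= 0:
        raise ValueError("disparity must be positive for a point in front of the cameras")
    return focal_length * baseline / d

# Illustrative values: a 100 mm baseline, 8 mm focal length, and a 2 mm
# disparity give a depth of 400 mm.
Z = depth_from_disparity(x_l=1.5, x_r=-0.5, baseline=100.0, focal_length=8.0)
```

Note that the depth is inversely proportional to the parallax: halving d doubles Z, which is why disparity errors matter most for distant points.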
- FIG. 1B shows the binocular parallax in a direction perpendicular to the line connecting the camera optical centers.
- FIG. 1B schematically illustrates the projection of the entire imaging system on the YZ plane.
- In FIG. 1C, the direction of the parallax to be examined is set as the X direction, and the line connecting the optical centers of the left and right cameras does not coincide with the X direction; in other words, the direction of the parallax to be examined can be at an arbitrary angle to the line connecting the camera optical centers.
- In FIG. 2, the first camera, the second camera, and the third camera have the same focal length f and mutually parallel optical axes (not shown), and their respective optical centers O 1 , O 2 and O 3 are located in the same plane perpendicular to the optical axes.
- For the same object point at depth Z, the first parallax generated in the first direction between the images obtained by the first camera and the second camera is d 1 = D 1 · f / Z, where D 1 is the offset of the optical center O 2 of the second camera relative to the optical center O 1 of the first camera in the first direction A; similarly, the second parallax generated in the second direction between the images obtained by the second camera and the third camera is d 2 = D 2 · f / Z, where D 2 is the offset of the optical center O 3 of the third camera relative to O 2 in the second direction B. It follows that d 1 : d 2 = D 1 : D 2 regardless of the depth Z.
- The first direction A is parallel to the plane containing the camera optical centers and is not perpendicular to the line connecting the optical centers of the first camera and the second camera; the second direction B is parallel to that plane and is not perpendicular to the line connecting the optical centers of the second camera and the third camera.
- Although the first direction is different from the second direction in FIG. 2, the first direction may also be the same as the second direction.
- FIG. 3 shows a schematic structural block diagram of one example of a three-dimensional measurement system 10 in accordance with the present invention.
- FIG. 4 shows a schematic overall flow diagram of a three dimensional measurement method 100 of the present invention.
- In FIG. 3, the three-dimensional measurement system 10 includes a camera array CA. The camera array CA includes at least a first camera C 1 , a second camera C 2 , and a third camera C 3 , which have the same focal length and mutually parallel optical axes, and whose optical centers are arranged on the same plane perpendicular to the optical axes.
- Each camera preferably has the same aperture, ISO, shutter time, image sensor, and the like; more preferably, the cameras are of exactly the same model.
- The three-dimensional measurement system 10 includes a processing unit 11 that receives images from the camera array, including the first, second, and third images from the first camera C 1 , the second camera C 2 , and the third camera C 3 , and performs processing based on these images to achieve three-dimensional measurement.
- the camera array CA can include additional cameras, and the processing unit 11 can receive images from these additional cameras and process them.
- the three-dimensional measurement system 10 can include a control unit 12 operative to control the first camera, the second camera, and the third camera to acquire images simultaneously.
- The control unit 12 can also control camera parameters such as the zoom factor. For example, for a more distant scene, the three cameras need to be identically adjusted to a longer focal length.
- The control unit 12 may be connected with the first camera C 1 , the second camera C 2 , and the third camera C 3 in a wired or wireless manner to realize the above control.
- the control unit 12 can be in communication with the processing unit 11 to receive information from the processing unit 11 to generate the control signals for controlling the camera to acquire images simultaneously, or to operate independently to achieve the above control.
- the three-dimensional measurement method 100 based on the parallax ratio relationship of the present invention is implemented based on, for example, the camera array CA of the three-dimensional measurement system 10.
- the three-dimensional measurement method 100 includes:
- S110: receive a first image, a second image, and a third image from the first camera, the second camera, and the third camera, respectively;
- S120: extract feature points in the first image, the second image, and the third image;
- S130: match the feature points in the first image, the second image, and the third image, the matching including screening matching feature point groups based on the parallax ratio relationship;
- S140: calculate the three-dimensional coordinates of the object points corresponding to the matched feature point groups.
- Considering image noise and camera installation errors, the two ratios need not be exactly equal; they can be considered equal in the engineering sense when d 1 : d 2 falls within (1 ± k) · D 1 : D 2 , where k represents an allowable relative deviation.
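The engineering-sense equality can be sketched as a simple tolerance test. This is a hedged illustration (function name and cross-multiplied form are my own choices, not the patent's implementation), assuming positive disparities and baseline offsets:

```python
def ratios_match(d1, d2, D1, D2, k=0.05):
    """True if d1 : d2 equals D1 : D2 within an allowable relative deviation k,
    i.e. (1 - k) * (D1 / D2) <= d1 / d2 <= (1 + k) * (D1 / D2).
    Cross-multiplied to avoid dividing by small disparities; assumes
    d2, D1, D2 > 0."""
    lo = (1.0 - k) * D1 * d2
    hi = (1.0 + k) * D1 * d2
    return lo <= d1 * D2 <= hi

# D1 : D2 = 2 : 1, so d1 should be about twice d2.
ok  = ratios_match(10.1, 5.0, D1=2.0, D2=1.0)   # within 5 percent -> accepted
bad = ratios_match(12.0, 5.0, D1=2.0, D2=1.0)   # 20 percent off  -> rejected
```

A candidate feature point group failing this test can be discarded without any similarity computation, which is where the computational saving comes from.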
- the processing unit 11 of the three-dimensional measurement system 10 is configured to perform the above-described processes s110 to s140 in the three-dimensional measurement method 100.
- Processing unit 11 may be implemented by a processor and a memory storing program instructions that, when executed by the processor, cause the processor to perform the operations of processes s110-s140 described above.
- the processing unit 11 can constitute a three-dimensional measuring device 20 according to the invention.
- the image may be received directly from the first camera, the second camera, and the third camera, or may be received via other units or devices.
- For example, a grayscale gradient can be used to find points with large gray-level changes as feature points;
- the SIFT algorithm can be used to find SIFT feature points;
- a corner detection algorithm such as the Harris algorithm can be used to find image corners as feature points. It should be understood that the present invention is not limited to any specific method of extracting feature points.
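As a sketch of the first option above, here is a minimal gray-gradient detector over a 2-D list of gray values. The central-difference scheme, threshold, and toy image are illustrative assumptions, not part of the patent:

```python
def gradient_feature_points(img, threshold):
    """Pick interior pixels whose gray-level gradient magnitude (central
    differences) exceeds `threshold`; `img` is a 2-D list of gray values.
    Returns (u, v) = (column, row) coordinates."""
    h, w = len(img), len(img[0])
    points = []
    for v in range(1, h - 1):
        for u in range(1, w - 1):
            gx = (img[v][u + 1] - img[v][u - 1]) / 2.0
            gy = (img[v + 1][u] - img[v - 1][u]) / 2.0
            if (gx * gx + gy * gy) ** 0.5 > threshold:
                points.append((u, v))
    return points

# A 4x4 image with one sharp vertical edge between columns 1 and 2:
# every interior pixel straddles the edge and is picked as a feature point.
img = [[0, 0, 200, 200],
       [0, 0, 200, 200],
       [0, 0, 200, 200],
       [0, 0, 200, 200]]
pts = gradient_feature_points(img, threshold=50)
```

Production systems would typically use SIFT or Harris corners instead; the point is only that any such detector yields the per-image point lists that the matching step consumes.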
- the processing s130 is configured to match the feature points in each image, and includes processing for filtering the matching feature point groups based on the parallax ratio relationship described above.
- the process of screening matching feature point groups based on the parallax ratio relationship in process s130 will be described in more detail below in conjunction with various embodiments.
- Processing s130 may further include the process of filtering the matched set of feature points in other manners.
- the process s130 may implement feature point matching only by processing based on a parallax ratio relationship.
- feature point matching may be implemented in processing s130 in conjunction with processing based on disparity ratio relationships and processing based on other matching/screening methods.
- Other ways of filtering matching feature point groups include, for example, applying a similarity calculation to a pixel or neighborhood pixel group to filter matching feature point groups.
- The similarity calculation includes, for example, the sum of squared pixel gray-level differences, the sum of squared zero-mean pixel gray-level differences, the sum of absolute pixel gray-level differences, the sum of absolute zero-mean pixel gray-level differences, and the normalized cross-correlation of neighborhood pixel groups.
- Optionally, the process s130 further includes, for the matching feature point groups screened based on the parallax ratio relationship, applying a similarity calculation to the pixels or neighborhood pixel groups to further screen the matching feature point groups.
- The similarity calculation may be applied only to two or more feature point groups that contain the same feature point, i.e., only when the matching result is not unique.
- Alternatively, matching feature point groups are first screened by applying a similarity calculation to the feature points or their neighborhood pixel groups, and it is then judged whether the groups obtained by the similarity screening satisfy the parallax ratio relationship, thereby further screening the matching feature point groups.
- Each of the matched feature point groups obtained by processing s130 includes one feature point from the first image, the second image, and the third image, respectively.
- the depth (Z coordinate) of the corresponding object point may be calculated based on any two feature points in a matched feature point group, and then the X, Y coordinates of the object point are calculated according to the similar triangle principle.
- a method of calculating a depth value based on two matched feature points and calculating an X, Y coordinate based on the depth value is known, and will not be described herein.
- In the processing s140, a depth value may be calculated from each of the three pairs of feature points in a matched feature point group, and the average taken as the depth value of the corresponding object point.
- Ideally the three calculated depth values are equal, but in reality, owing to image noise, they differ slightly; taking the average reduces the influence of noise and improves the accuracy of the depth value.
- Alternatively, only two of the pairs of feature points in the group may be selected to calculate depth values, whose average is taken as the depth value of the object point.
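The averaging described above can be sketched for the collinear arrangement of the first embodiment. The function name and the noise-free numbers are illustrative assumptions; baselines are D 1 between the first and second cameras and D 2 between the second and third:

```python
def depth_from_triplet(u1, u2, u3, D1, D2, f):
    """Average of the three pairwise depth estimates from one matched
    feature-point triple (collinear optical centres). Each pair gives
    Z = f * baseline / disparity; averaging damps image noise."""
    z12 = f * D1 / (u1 - u2)            # first-second pair, baseline D1
    z23 = f * D2 / (u2 - u3)            # second-third pair, baseline D2
    z13 = f * (D1 + D2) / (u1 - u3)     # first-third pair, baseline D1 + D2
    return (z12 + z23 + z13) / 3.0

# Noise-free example (f = 8, true Z = 400): the abscissa differences scale
# with the baselines, so all three estimates agree.
Z = depth_from_triplet(u1=3.0, u2=1.0, u3=0.0, D1=100.0, D2=50.0, f=8.0)
```

With noisy coordinates the three estimates would differ slightly and the mean would be the reported depth, exactly as the text describes.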
- the process s140 may further include calculating a color value of the object point corresponding to each matched feature point group, such as [R, G, B]. This color value can form voxel information together with the coordinates [X, Y, Z] of the object point, such as [X, Y, Z, R, G, B].
- voxel information of all object points can be combined to form a true color 3D model of the object or scene.
- the three-dimensional measurement method 100 and the three-dimensional measurement system 10 will be described in more detail below in connection with various embodiments, particularly in which a matching feature point group is screened based on a parallax ratio relationship.
- 5 to 11 illustrate a three-dimensional measurement system and a three-dimensional measurement method according to a first embodiment of the present invention.
- FIG. 5 shows a camera array CA in a three-dimensional measuring system according to a first embodiment of the present invention, wherein the first camera C 1 , the second camera C 2 , and the third camera C 3 have the same focal length and optical axes parallel to each other And the optical centers of the three cameras are arranged on the same straight line (X direction) perpendicular to the optical axis.
- In the arrangement (a), D 1 = D 2 ; in the arrangement (b), D 1 ≠ D 2 .
- FIGS. 6A, 6B, and 6C show the parallax ratio relationships between the three cameras shown in FIG. 5.
- Each of the image planes has an image plane coordinate system set in the same manner as described above.
- The x-axis direction of each image plane corresponds to the direction of the line on which the camera optical centers lie, and the y-axis is perpendicular to the x-axis.
- k represents the allowable relative deviation and may be, for example, 0.05 or 0.01.
- FIG. 7 schematically shows an example of the first image IM 1 , the second image IM 2 , and the third image IM 3 obtained by the camera array shown in FIG. 5, where the abscissa axis (u axis) of each image corresponds to the direction of the line on which the camera optical centers lie. The ideal situation is shown in FIG. 7 by superimposing the first image IM 1 , the second image IM 2 , and the third image IM 3 .
- Due to manufacturing errors, mounting errors, and changes in camera internal and/or external parameters during use, the first, second, and third images obtained directly from the first camera, the second camera, and the third camera usually deviate from the ideal situation described above. The images can be brought closer to the ideal by physically calibrating or adjusting the cameras and/or by correcting the images with a computation program.
- Therefore, the three-dimensional measurement method may include correction processing of the first image, the second image, and the third image, such that in the corrected images the points corresponding to the optical axes of the first camera, the second camera, and the third camera have the same abscissa and ordinate, and the abscissa directions of all three images correspond to the direction of the straight line on which the optical centers lie, the abscissa and ordinate directions being perpendicular to each other.
- The correction processing is performed before the process s130 shown in FIG. 4, i.e., before the feature points are matched, and preferably after the images are received and before the feature points are extracted, i.e., between processes s110 and s120 shown in FIG. 4.
- The three-dimensional measurement method according to embodiments of the present invention is not limited to including the above-described correction processing; for example, in some applications, correction may be achieved by physical adjustment of the camera array without image correction.
- A correction unit 13 may further be included in the three-dimensional measurement system 10 according to the present invention; the correction unit 13 receives images from the camera array CA, generates a correction matrix based at least in part on the images, and provides the correction matrix to the processing unit 11.
- the correction matrix when applied to the first image, the second image, and the third image by the processing unit 11, implements the above-described correction processing in the three-dimensional measurement method.
- the three-dimensional measuring device 20 can include the correcting unit 13.
- Processing unit 11 and correction unit 13 may be implemented based on the same processor and memory or different processors and memories.
- The processing of matching feature points is implemented by screening matching feature point groups based on the following coordinate relationship: the difference s 1 = u 1 − u 2 between the abscissa of the feature point P 1 [u 1 , v 1 ] in the first image and the abscissa of the feature point P 2 [u 2 , v 2 ] in the second image, and the difference s 2 = u 2 − u 3 between the abscissa of P 2 [u 2 , v 2 ] and the abscissa of the feature point P 3 [u 3 , v 3 ] in the third image, satisfy s 1 : s 2 = D 1 : D 2 .
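A minimal sketch of turning this coordinate relationship into a prediction (the function name is an assumption): given candidate points in the second and third images, the expected abscissa of the matching point in the first image follows directly from s 1 : s 2 = D 1 : D 2.

```python
def expected_u1(u2, u3, D1, D2):
    """Expected abscissa of the matching point in the first image, from
    s1 : s2 = D1 : D2 with s1 = u1 - u2 and s2 = u2 - u3, i.e.
    u1 = u2 + (D1 / D2) * (u2 - u3)."""
    return u2 + (D1 / D2) * (u2 - u3)

# With D1 = D2 the relation reduces to u1 - u2 = u2 - u3, so only additions
# and subtractions are needed.
u_e = expected_u1(u2=1.0, u3=0.0, D1=100.0, D2=100.0)
```

Because u_e is computed per candidate pair with a handful of arithmetic operations, the screening cost per pair is tiny compared to a neighborhood similarity computation.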
- The other processing of the three-dimensional measurement method 200 is the same as the corresponding processing in the three-dimensional measurement method 100 described above.
- process 300 includes:
- S310: select a first feature point and a second feature point from two of the first image, the second image, and the third image, the ordinates of the first feature point and the second feature point lying within a predetermined range relative to a target ordinate;
- S320: calculate the expected abscissa of the third feature point based on the coordinate relationship;
- S330: search for the third feature point in the remaining image based on the expected position formed by the expected abscissa and the target ordinate.
- Process 300 can be used to scan the images line by line, for example in a certain order of ordinates (increasing or decreasing), to screen matching feature point groups.
- Note that the first feature point is selected from the second image in the above description merely as an example; the present invention is not limited as to which image the first feature point is selected from.
- For example, the numbers I, J, and K of feature points meeting the ordinate requirement in the respective images may first be compared; the first feature point is selected in the image with the fewest such feature points, the second feature point in the image with the second fewest, and the third feature point is finally searched for in the image with the most feature points.
- The feature points can be searched for and selected within the predetermined ordinate range from v t − Δ to v t + Δ, where Δ is an integer greater than or equal to 0 that can be determined according to, for example, the installation and use conditions of the camera array or the quality of the images, preferably 0 ≤ Δ ≤ 2.
- Based on the coordinate relationship, the expected position [u e , v t ] of the feature point P 1 can be obtained, and based on this expected position it is searched in the first image IM 1 whether the feature point P 1 exists.
- Considering image noise and camera installation errors, an abscissa tolerance Δu and an ordinate tolerance Δv can be set, and the feature point P 1 is searched for in the first image IM 1 within the range [u e − Δu, u e + Δu] × [v t − Δv, v t + Δv] (see, for example, the dashed box in image IM 1 of FIG. 7).
- Alternatively, only one of the abscissa tolerance and the ordinate tolerance may be set; details are not repeated here.
- If the feature point P 1 is found, the feature points P 1 , P 2 , P 3 are taken as one matching feature point group. If the expected feature point P 1 is not found, the selected first feature point (P 2 ) and second feature point (P 3 ) cannot form a match, and this round of screening ends; the next second feature point can then be selected and the above processes s320 and s330 repeated. After traversing all second feature points, a new first feature point is selected and the above process is similarly repeated.
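The screening round described above can be sketched as follows. The point lists, tolerances, and function name are hypothetical; the prediction step assumes the collinear-centre case with s1 : s2 = D1 : D2:

```python
def screen_matches(pts1, pts2, pts3, D1, D2, du=1.0, dv=1.0):
    """Screen matching triples: for each candidate pair (P2, P3) with
    near-equal ordinates, predict P1 from s1 : s2 = D1 : D2 and accept the
    triple if some point of the first image lies within the tolerance box
    [u_e - du, u_e + du] x [v_t - dv, v_t + dv]."""
    groups = []
    for (u2, v2) in pts2:
        for (u3, v3) in pts3:
            if abs(v2 - v3) > dv:
                continue                          # ordinate requirement fails
            u_e = u2 + (D1 / D2) * (u2 - u3)      # expected abscissa in image 1
            v_t = v2                              # target ordinate
            for (u1, v1) in pts1:
                if abs(u1 - u_e) <= du and abs(v1 - v_t) <= dv:
                    groups.append(((u1, v1), (u2, v2), (u3, v3)))
    return groups

# Hypothetical feature points (u, v); with D1 = D2 only the point at u = 4
# in the first image is geometrically consistent with the pair below.
pts1 = [(4.0, 2.0), (9.0, 2.0)]
pts2 = [(2.0, 2.0)]
pts3 = [(0.0, 2.0)]
matches = screen_matches(pts1, pts2, pts3, D1=100.0, D2=100.0)
```

The geometrically inconsistent point at u = 9 is rejected without any similarity computation, illustrating how the coordinate relationship prunes candidates before (or instead of) neighborhood matching.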
- FIG. 9 shows only one example of a process of filtering matching feature point groups based on the coordinate relationship.
- The coordinate relationship can also be used to verify candidate matching feature point groups that have already been obtained by other means (for example, by similarity calculation on the pixels or neighborhood pixel groups).
- the processing based on the parallax ratio relationship arising between the three cameras is thus simplified to processing based on the coordinate relationship of the corresponding feature points in the images, by which feature point matching/screening of matches is performed.
- the former mainly involves addition and subtraction of coordinates together with a small number of multiplications (when D 1 : D 2 = 1 : 1, only addition and subtraction operations are required), whereas the latter usually requires dense multiplication operations, such as convolutions of matrices; in comparison, the former therefore greatly reduces the computation of the matching process.
- such a significant reduction in the amount of computation is of great significance for three-dimensional measurement and reconstruction based on high-definition images and for real-time three-dimensional measurement and reconstruction, and makes the latter feasible.
- FIG. 10(a) shows a certain feature point P l1 in one image obtained by, for example, a binocular camera, and three candidate feature points P r1 , P r2 , P r3 in another image that are very similar in their own attributes and neighborhood attributes. According to the uniqueness requirement of feature point matching in 3D reconstruction, such a non-unique matching result can only be abandoned.
- FIG. 10 shows that for images from the first camera, the second camera, and the third camera arranged as shown in FIG.
- feature point matching based on the coordinate relationship can help eliminate erroneous results produced by similarity-based matching and improve the correctness rate of matching; at the same time, since a unique matching result can be obtained with greater probability, matching failures caused by non-unique results are avoided, so feature point matching based on the parallax ratio relationship/the above coordinate relationship also contributes to improving the matching rate, thereby contributing to the realization of a high-density feature point cloud.
- FIG. 11 is a flow chart showing an example of a three-dimensional measurement method according to the first embodiment of the present invention, namely three-dimensional measurement method 400.
- the three-dimensional measurement method 400 includes:
- S410 receiving a first image, a second image, and a third image from the first camera, the second camera, and the third camera, respectively;
- S420 performing correction processing on the first image, the second image, and the third image, as discussed above, such that the points in the first image, the second image, and the third image corresponding to the optical axes of the first camera, the second camera, and the third camera have the same abscissa and ordinate, and the abscissa directions of the first image, the second image, and the third image all correspond to the direction of the straight line on which the optical centers lie;
- S430 extracting feature points in the first image, the second image, and the third image;
- S441 screening matching feature point groups based on the parallax ratio relationship discussed above;
- S442 further screening, based on similarity calculation of pixels or neighborhood pixel groups, the matching feature point groups obtained by process s441; and
- S450 Calculate the three-dimensional coordinates of the object points corresponding to the matching feature point group.
- the process s441 can be implemented as, for example, the process 300 shown in FIG. 9, but is not limited thereto.
- Process s420 may be a correction process implemented using any correction method, including a correction process implemented by applying a certain correction matrix to the image.
- the parallax ratio relationship can also be applied to the correction processing; for example, the correction matrix used for the correction processing can be generated based on the parallax ratio relationship.
- the process of generating the correction matrices based on the parallax ratio relationship may include: extracting feature points in the first image, the second image, and the third image respectively, for example extracting a sparse feature point lattice using the SIFT algorithm; matching feature points across the first image, the second image, and the third image to obtain a plurality of matching feature point groups, for example using the RANSAC algorithm; establishing an overdetermined system of equations from the coordinates of the feature points of each matching feature point group in each image, according to the requirement that, after a correction matrix is applied to each image, the feature points in each matching feature point group satisfy the parallax ratio relationship; and solving the overdetermined system of equations by, for example, the least squares method to obtain the correction matrices.
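- as a rough sketch of the least-squares step above, one may model the correction of each image as a 2x3 affine matrix (with the first image fixed as reference) and require that corrected matching points share one ordinate and satisfy the parallax ratio; this is an illustrative formulation rather than the patented one, and the ratio constraint alone leaves a common horizontal offset undetermined, so the result is one valid correction rather than a unique one:

```python
import numpy as np

def correction_matrices(groups, D1, D2):
    """Estimate 2x3 affine correction matrices for the second and third
    images (the first image is kept as reference) by least squares, so
    that each corrected matching group shares one ordinate and satisfies
    the parallax ratio (u1 - u2')/(u2' - u3') = D1/D2.

    groups: list of ((u1, v1), (u2, v2), (u3, v3)) matched points.
    """
    A, b = [], []
    for (u1, v1), (u2, v2), (u3, v3) in groups:
        # unknowns x = [a2 b2 c2 d2 e2 f2 a3 b3 c3 d3 e3 f3], where
        # u2' = a2*u2 + b2*v2 + c2 and v2' = d2*u2 + e2*v2 + f2, etc.
        row = [0.0] * 12
        r = row[:]; r[3:6] = [u2, v2, 1.0]        # v2' = v1
        A.append(r); b.append(v1)
        r = row[:]; r[9:12] = [u3, v3, 1.0]       # v3' = v1
        A.append(r); b.append(v1)
        r = row[:]                                # parallax ratio constraint:
        r[0:3] = [(D1 + D2) * u2, (D1 + D2) * v2, D1 + D2]
        r[6:9] = [-D1 * u3, -D1 * v3, -D1]        # (D1+D2)*u2' - D1*u3'
        A.append(r); b.append(D2 * u1)            #   = D2*u1
    x, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return x[:6].reshape(2, 3), x[6:].reshape(2, 3)
```

Each matching group contributes three linear equations, so a modest number of well-spread groups already makes the system overdetermined, as described above.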
- the matching feature point groups are first screened by process s441, so that the number of feature points entering the subsequent similarity-based feature point matching process s442 is greatly reduced, which can significantly reduce the amount of computation in the feature point matching process.
- the combination of processes s441 and s442 also helps to improve the correctness rate and matching rate of feature point matching, so that a higher-density feature point cloud can be obtained.
- FIG. 12 shows a camera array CA in a three-dimensional measurement system according to a second embodiment of the present invention, wherein the first camera C 1 , the second camera C 2 , and the third camera C 3 have the same focal length and mutually parallel optical axes (Z direction), the optical centers O 1 , O 2 of the first camera C 1 and the second camera C 2 are aligned in the X direction, and the optical centers O 2 , O 3 of the second camera C 2 and the third camera C 3 are aligned in the Y direction.
- the shift of the optical center of the second camera with respect to the optical center of the first camera in the X direction is D 1
- the shift of the optical center of the third camera with respect to the optical center of the second camera in the Y direction is D 2 .
- FIG. 13 schematically shows an example of the first image IM 1 , the second image IM 2 , and the third image IM 3 obtained by the camera array shown in FIG. 12, wherein the abscissa axis (u axis) of each image corresponds to the direction of the line on which the optical centers of the first camera and the second camera lie (X direction), and the ordinate axis (v axis) corresponds to the direction of the line on which the optical centers of the second camera and the third camera lie (Y direction).
- the three-dimensional measurement method may include correction processing of the first image, the second image, and the third image, the correction processing being such that the points in the first image, the second image, and the third image corresponding to the optical axes of the first camera, the second camera, and the third camera have the same abscissa and ordinate, the abscissa directions of the first image, the second image, and the third image correspond to the direction of the line connecting the optical centers of the first camera and the second camera, and the ordinate directions correspond to the direction of the line connecting the optical centers of the second camera and the third camera.
- the process of screening the matching feature point group based on the parallax ratio relationship in the process s130 shown in FIG. 4 is implemented to include:
- the matching feature point groups are screened based on the following coordinate relationship: the difference s 3 = u 1 - u 2 between the abscissa of the feature point P 1 [u 1 , v 1 ] in the first image and the abscissa of the feature point P 2 [u 2 , v 2 ] in the second image, and the difference s 4 = v 2 - v 3 between the ordinate of the feature point P 2 [u 2 , v 2 ] in the second image and the ordinate of the feature point P 3 [u 3 , v 3 ] in the third image, satisfy s 3 : s 4 = D 1 : D 2 .
- FIG. 14 shows an example of a process of screening matching feature point groups based on the above-described coordinate relationship, process 500.
- process 500 includes:
- S510 selecting a first feature point and a second feature point in the first image and the second image respectively, the ordinates of the first feature point and the second feature point being within a predetermined range with respect to a target ordinate;
- S540 calculating an expected ordinate of the third feature point in the third image that matches the first feature point and the second feature point, such that the difference between the ordinate of the second feature point and the expected ordinate of the third feature point equals the second difference s 4 calculated from the relationship above; and
- S550 Search for the third feature point in the third image based on the expected position formed by the expected ordinate of the third feature point and the abscissa of the second feature point.
- the operation of selecting feature points within the predetermined range with respect to the target ordinate in process s510 may refer to the operations described above for process s310 of process 300.
- similarly to process 300, process s550 of process 500 may also set a tolerance range with respect to the expected position; details are not described herein again.
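- the expected position used in processes s540 and s550 can be sketched as follows; a minimal illustration assuming the rectified L-shaped array of FIG. 12 (optical-center shift D1 along X between the first and second cameras, D2 along Y between the second and third), the relation s 3 : s 4 = D 1 : D 2 , and u 3 = u 2 after correction; names are illustrative:

```python
def expected_p3(p1, p2, D1, D2):
    """Expected position of the third feature point in the third image,
    given candidate first and second feature points p1 = (u1, v1) and
    p2 = (u2, v2) with v1 == v2 after correction."""
    (u1, _v1), (u2, v2) = p1, p2
    s3 = u1 - u2                  # horizontal parallax between IM1 and IM2
    v3_e = v2 - (D2 / D1) * s3    # so that s4 = v2 - v3_e = (D2/D1) * s3
    return (u2, v3_e)             # abscissa equals that of p2

def within_tolerance(expected, candidate, du=1.0, dv=1.0):
    """Check a candidate point against the expected position."""
    return (abs(candidate[0] - expected[0]) <= du
            and abs(candidate[1] - expected[1]) <= dv)
```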
- the first camera and the third camera each have a paired positional relationship with respect to the second camera, so that in process s510, where the first feature point and the second feature point are selected in the first image and the second image with their ordinates within a predetermined range with respect to a target ordinate, the first and second feature points are in effect selected first in the two horizontally aligned images.
- alternatively, starting from a feature point in the second image, feature points in the first image having the same ordinate as that feature point and feature points in the third image having the same abscissa may be searched for, and the next search order may be determined from the numbers of feature points found in the first image and the third image. For example, when the number of feature points having the same ordinate in the first image is greater than the number of feature points having the same abscissa in the third image, a feature point may be selected from the third image first, and the expected position of the matching feature point in the first image may then be calculated and searched. Process s510 is intended to cover this situation.
- the process 500 can be used to traverse the feature points in the second image, for example row by row or column by column, to find matching feature points in the first image and the third image, thereby screening the matching feature point groups.
- in the feature point matching process, the three-dimensional measurement method according to the second embodiment of the present invention greatly reduces the amount of computation of the matching process relative to matching based on similarity calculation of pixels or neighborhood pixel groups, which helps to improve the spatial precision and real-time performance of the three-dimensional measurement.
- the three-dimensional measurement method according to the present embodiment may be combined with a feature point matching method based on similarity calculation of pixels or neighborhood pixel groups; in this case, the feature point matching (screening of matching feature point groups) based on the parallax ratio relationship can effectively eliminate mismatching results of the similarity-based feature point matching and improve the correctness rate of matching, and also helps to avoid matching failures due to non-unique matching results, thereby helping to improve the matching rate and obtain a high-density feature point cloud.
- FIGS. 15 to 18 illustrate a three-dimensional measurement system and a three-dimensional measurement method according to a third embodiment of the present invention.
- FIG. 15 shows a camera array CA in a three-dimensional measurement system according to a third embodiment of the present invention, wherein the first camera C 1 , the second camera C 2 , and the third camera C 3 have the same focal length and mutually parallel optical axes (Z direction); the first camera C 1 , the second camera C 2 , and the third camera C 3 are arranged in a triangle, with their optical centers O 1 , O 2 , O 3 on the same plane perpendicular to the optical axes, and the optical centers O 1 , O 2 of the first camera C 1 and the second camera C 2 are aligned in the X direction.
- the shift of the optical center of the second camera with respect to the optical center of the first camera in the X direction is D 1
- the shift of the optical center of the third camera with respect to the optical center of the second camera in the X direction is D 2 .
- FIG. 16 schematically shows an example of the first image IM 1 , the second image IM 2 , and the third image IM 3 obtained by the camera array shown in FIG. 15, wherein the abscissa axis (u axis) of each image corresponds to the direction of the line on which the optical centers of the first camera and the third camera lie (X direction).
- the same object point P[X, Y, Z] forms image points P 1 [u 1 , v 1 ], P 2 [u 2 , v 2 ], and P 3 [u 3 , v 3 ] in the images IM 1 , IM 2 , and IM 3 , respectively.
- the three-dimensional measurement method may include correction processing of the first image, the second image, and the third image, the correction processing being such that the points in the first image, the second image, and the third image corresponding to the optical axes of the first camera, the second camera, and the third camera have the same abscissa and ordinate, and the abscissa directions of the first image, the second image, and the third image correspond to the direction of the line connecting the optical centers of the first camera and the third camera.
- the process of screening the matching feature point group based on the parallax ratio relationship in the process s130 shown in FIG. 4 is implemented to include:
- the matching feature point groups are screened based on the following coordinate relationship: the difference s 5 = u 1 - u 2 between the abscissa of the feature point P 1 [u 1 , v 1 ] in the first image and the abscissa of the feature point P 2 [u 2 , v 2 ] in the second image, and the difference s 6 = u 2 - u 3 between the abscissa of the feature point P 2 [u 2 , v 2 ] in the second image and the abscissa of the feature point P 3 [u 3 , v 3 ] in the third image, satisfy s 5 : s 6 = D 1 : D 2 , and the matched feature points in the first image and the third image have the same ordinate.
- FIG. 17 shows an example of a process of screening a matching feature point group based on the above coordinate relationship, process 600.
- process 600 includes:
- S610 respectively selecting a first feature point and a second feature point in the first image and the third image, wherein the ordinate of the second feature point is within a predetermined range with respect to the ordinate of the first feature point;
- S620 calculating an expected abscissa of the third feature point in the second image that matches the first feature point and the second feature point; and
- S630 Search for the third feature point in the second image based on the expected abscissa.
- the operation of selecting feature points within the predetermined range in process s610 may refer to the operations described above for process s310 of process 300. Further, similarly to process 300, process s630 of process 600 may set a tolerance range with respect to the expected abscissa (see the broken-line range of the feature point P 2 in FIG. 16); details are not repeated here.
- compared with the process 300 shown in FIG. 9, the ordinate range of the third feature point in the second image is not constrained in process 600, so that when matching feature points are searched for in the second image based on the expected abscissa, the resulting matching result is more likely to be non-unique than in the first embodiment and the second embodiment.
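- the calculation of the expected abscissa in process 600 can be sketched as follows; a minimal illustration assuming the relation s 5 : s 6 = D 1 : D 2 described above, solved for u 2 (names are illustrative):

```python
def expected_u2(u1, u3, D1, D2):
    """Expected abscissa of the matching point in the second image.

    From (u1 - u2) / (u2 - u3) = D1 / D2:
        D2*u1 - D2*u2 = D1*u2 - D1*u3
        u2 = (D2*u1 + D1*u3) / (D1 + D2)
    """
    return (D2 * u1 + D1 * u3) / (D1 + D2)

def search_second_image(im2_pts, u1, u3, D1, D2, du=1.0):
    """Search the second image by expected abscissa only; since the
    ordinate is not constrained here, the result may be non-unique."""
    u_e = expected_u2(u1, u3, D1, D2)
    return [p for p in im2_pts if abs(p[0] - u_e) <= du]
```

With D 1 = D 2 the expected abscissa is simply the midpoint of u 1 and u 3 .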
- FIG. 18 shows an example of a three-dimensional measurement method according to a third embodiment of the present invention, namely three-dimensional measurement method 700.
- the three-dimensional measurement method 700 includes:
- S710 receiving a first image, a second image, and a third image from the first camera, the second camera, and the third camera, respectively;
- S720 performing correction processing on the first image, the second image, and the third image, as discussed above, such that the points in the first image, the second image, and the third image corresponding to the optical axes of the first camera, the second camera, and the third camera have the same abscissa and ordinate, and the abscissa directions of the first image, the second image, and the third image correspond to the direction of the line connecting the optical centers of the first camera and the third camera;
- S730 extracting feature points in the first image, the second image, and the third image;
- S741 screening matching feature point groups based on similarity calculation of pixels or neighborhood pixel groups;
- S742 screening the matching feature point groups based on the coordinate relationship described above, namely that the difference s 5 = u 1 - u 2 between the abscissas of the feature points P 1 [u 1 , v 1 ] and P 2 [u 2 , v 2 ] and the difference s 6 = u 2 - u 3 between the abscissas of the feature points P 2 [u 2 , v 2 ] and P 3 [u 3 , v 3 ] satisfy s 5 : s 6 = D 1 : D 2 ; and
- S750 Calculate the three-dimensional coordinates of the object points corresponding to the matching feature point group.
- the process s720 may be a correction process implemented by using any correction method, including a correction process implemented by applying a certain correction matrix to the image.
- the three-dimensional measurement method 700 is not limited to using similarity calculation of pixels or neighborhood pixel groups in process s741; that process may be replaced with other existing or later-emerging processing for screening matching feature point groups/feature point matching.
- the process s742 can be implemented as, for example, the process 600 shown in FIG. 17, but is not limited thereto.
- the three-dimensional measurement method 700 according to the third embodiment of the present invention can help improve the correct rate and matching rate of feature point matching by employing the process s742, and contribute to obtaining a higher density feature point cloud.
- FIG. 19 shows further arrangements of camera arrays that can be used in a three-dimensional measurement system in accordance with the present invention.
- the camera array can include three cameras arranged in an equilateral triangle, and can be further expanded to more than three cameras forming a plurality of equilateral triangles, such as the honeycomb arrangement shown in the lower right corner of FIG. 19.
- a camera array comprising three cameras arranged in a right triangle may be further expanded into a variety of other forms, such as rectangular (including square), T-shaped, cross-shaped, diagonal, and expanded forms in units of these shapes.
- the three-dimensional measurement method according to the present invention can be implemented based on images from three cameras or more than three cameras in a camera array.
- while the three-dimensional measurement systems and three-dimensional measurement methods according to the first, second, and third embodiments of the present invention are respectively described above, those skilled in the art should understand that these embodiments, or features therein, may be combined to form different technical solutions.
- for example, when the camera array includes first, second, third, and fourth cameras, the three-dimensional measurement method according to the present invention may perform the feature point matching processing based on the parallax ratio relationship proposed by the present invention separately for the set of images from the first, second, and third cameras and for the set of images from the first, third, and fourth cameras, and combine the matching results of the two sets of images to determine the final matching feature point groups in the images from the first and third cameras, so as to calculate the spatial positions of the corresponding object points.
- the three-dimensional measurement system 10 may further include, in addition to the camera array CA, a projection unit 14.
- the projection unit 14 is for projecting a projection pattern onto a shooting area of the camera array CA, which can be captured by a camera in the camera array CA.
- the projection pattern can add more feature points in the shooting area; in some applications, it can make the feature points more evenly distributed or compensate for a lack of feature points in certain areas.
- the projected pattern can include dots, lines, or a combination thereof.
- the dots may be enlarged to form larger dots or dots having a specific shape, and the lines may be widened to form stripe patterns or patterns having other shape characteristics.
- preferably, the projection pattern includes lines, and the extending direction of the lines is not parallel to the direction of the optical center connection line of at least two of the first camera, the second camera, and the third camera, which helps provide more feature points usable by the matching processing based on the parallax ratio relationship in the three-dimensional measurement method according to the present invention.
- the projected pattern can also be encoded by features such as color, intensity, shape, distribution, and the like.
- feature points with the same encoding in the images obtained by the cameras are necessarily matching points; the encoding thus adds a matching dimension that improves the matching rate and matching accuracy.
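- the effect of encoding on matching can be sketched as follows; a hypothetical representation in which each detected feature point carries its decoded pattern code, and each code is assumed to appear at most once per image:

```python
def match_by_code(pts_a, pts_b):
    """Pair feature points across two images by their pattern code.

    pts_a, pts_b: iterables of (u, v, code) tuples; only points that
    share a code are candidate matches, which prunes the search and
    avoids non-unique similarity-based matches.
    """
    index_b = {code: (u, v) for u, v, code in pts_b}
    return [((u, v), index_b[code])
            for u, v, code in pts_a if code in index_b]
```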
- the projection unit 14 may be configured to include a light source and an optical element for forming a projection pattern based on illumination light from the light source, such as a diffractive optical element or grating or the like.
- the illumination light emitted by the light source includes light having wavelengths within the operating wavelength ranges of the first camera, the second camera, and the third camera; it may be monochromatic light or multi-color light, and may include visible light and/or non-visible light, such as infrared light.
- the projection unit 14 can be configured to be able to adjust the projection direction to selectively project a projection pattern to different regions depending on different shooting scenes.
- the projection unit 14 may be configured to enable sequential single pattern projection or sequential multi-pattern projection, or to project different patterns according to different shooting scenes.
- the three-dimensional measurement system 10 can also include a sensor 15 for detecting at least a portion of the pattern features projected by the projection unit 14 to obtain additional information that can be used for three-dimensional measurements.
- for example, the camera array CA operates at visible and infrared wavelengths, the projection unit 14 projects a projection pattern at infrared wavelengths, and the sensor 15 is an infrared sensor or an infrared camera. In this case, the information acquired by the camera array CA includes both the image information formed by visible light and the projected pattern, which can provide more feature points for three-dimensional measurement based on binocular vision, while the projected pattern obtained by the sensor 15 can be used for measurement based on other three-dimensional measurement techniques, such as measurement based on structured light.
- the information obtained by the sensor 15 can be transmitted, for example, to the correction unit 13, which can use the three-dimensional measurement results obtained based on the information from the sensor 15 and other three-dimensional measurement techniques for calibration and/or correction of the camera array or the images it obtains.
- the three-dimensional measurement system 10 can be implemented as an integrated system or as a distributed system.
- the camera array CA in the three-dimensional measurement system 10 can be mounted on one device, and the processing unit 11 can be implemented based on an internet server to be physically separate from the camera array CA.
- the projection unit 14 and the sensor 15 may be mounted together with the camera array CA or may be provided independently.
- the control unit 12 and the correction unit 13 may be implemented together with the processing unit 11 by the same processor and associated memory or the like (which may be formed as part of the three-dimensional measuring device 20 represented by the dashed box in FIG. 3), or may be implemented separately.
- the correction unit 13 can be implemented by a processor integrated with the camera array CA and an associated memory or the like.
- the three-dimensional measuring system according to the present invention is implemented as a three-dimensional measuring device based on a mobile phone and an external camera module.
- the camera module consists of three cameras of the same model.
- the centers of the three cameras are in a straight line, and the distances between the centers of adjacent cameras are equal.
- the optical axes of the cameras are parallel to each other and face the same, all perpendicular to the line where the camera center is located.
- the camera module is connected to the phone via WiFi and/or data cable.
- the phone can control the camera module's three cameras to shoot simultaneously (photos and/or videos) at equal magnification.
- Photos and/or videos captured by the camera module are transmitted to the phone via WiFi and/or data lines.
- the mobile phone corrects the image frames in the photo and/or video by the correction application.
- during the correction process, a checkerboard placed in front of the camera module can be utilized, the grid size of the checkerboard being known. Corrections based on an auxiliary correction tool such as a checkerboard are known in the art and will not be described again.
- the method for correction in this application example is also not limited to this particular method.
- the camera module is used to capture the scenes and objects to be modeled.
- the mobile phone can further integrate or externally connect a projection module for projecting stripes onto the object to be photographed, the direction of the stripes being not parallel to the line on which the centers of the three cameras lie.
- a processing unit (consisting of a processing chip and a storage unit of the mobile phone) integrated in the mobile phone extracts feature points and feature areas common to the three camera photos.
- the feature points also include new feature points and feature regions formed on the object by the stripes projected by the projection module.
- the matching feature point group having the image coordinate symmetry relationship and the similar attribute is selected.
- a plurality of depth values are calculated based on each matched feature point group, and the depth values are averaged. Then, the three-dimensional space coordinates of the object point corresponding to the feature point group are calculated and fused with color information to form voxel information. Finally, a true-color point cloud of the entire object or scene, or a three-dimensional model reconstructed based on the point cloud, is displayed on the mobile phone.
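- the depth averaging just described can be sketched as follows; a minimal illustration assuming rectified images from three collinear cameras with spacings D1 and D2, a pixel focal length f, and principal point (cx, cy) of the first camera; all parameter names are illustrative:

```python
def object_point(p1, p2, p3, f, D1, D2, cx, cy):
    """Average the depth values from the three camera pairs, then
    compute the 3D coordinates of the object point seen at p1."""
    (u1, v1), (u2, _v2), (u3, _v3) = p1, p2, p3
    z12 = f * D1 / (u1 - u2)              # depth from cameras 1-2
    z23 = f * D2 / (u2 - u3)              # depth from cameras 2-3
    z13 = f * (D1 + D2) / (u1 - u3)       # depth from cameras 1-3
    Z = (z12 + z23 + z13) / 3.0           # averaged depth value
    X = Z * (u1 - cx) / f                 # back-project through camera 1
    Y = Z * (v1 - cy) / f
    return (X, Y, Z)
```

Averaging the three pairwise depth estimates reduces the influence of localization noise in any single image pair.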
- the handset can transmit the received photos and/or videos from the camera module to the cloud server.
- the server can quickly correct the photos, extract feature points, match the corresponding matching feature point groups having the image coordinate symmetry relationship and similar attributes, calculate the three-dimensional coordinates of the object points corresponding to the matched feature point groups, and fuse them with color information to form voxel information.
- the true color point cloud result or the three-dimensional model reconstructed based on the point cloud is transmitted back to the mobile phone for display.
- the system can also be integrated with TOF to solve the problems of featureless image regions (such as large white walls) or occlusion.
- the fusion of the two point clouds not only ensures that large object targets can be sampled, but also is compatible with the surface details of the objects, and can form a three-dimensional point cloud with more comprehensive spatial sampling and higher resolution.
- the "forward camera + TOF" fusion 3D sampling module can scan a face with pixel-level precision, whereas current TOF alone can only scan the approximate surface contour.
- the "backward camera + TOF" 3D module complements the TOF system's shortcoming of limited projection distance.
- the three-dimensional point cloud formed by the TOF is transformed and projected through the spatial coordinate systems to calculate the initial parallaxes of the corresponding sampling points on the image, which can accelerate the image matching process.
- the basic process of fusing the image 3D measurement system with TOF is as follows: 1. the TOF generates a 3D point cloud; 2. the three-dimensional coordinates of the point cloud are converted into the reference camera coordinate system through the conversion between the TOF coordinate system and the reference camera coordinate system, obtaining the three-dimensional coordinates of the corresponding points; 3. according to the projection equation of the camera, the three-dimensional coordinates of the corresponding sampling points in the reference camera coordinate system are converted into two-dimensional image coordinates and parallax; 4. the two-dimensional coordinates and parallax of the sampling points in the reference camera image are used to find the initial positions of the corresponding two-dimensional coordinates in the other camera images; 5.
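- steps 1 to 4 above can be sketched as follows; a minimal illustration assuming known TOF-to-camera extrinsics (R, t), an intrinsic matrix K of the reference camera, and a baseline between the reference camera and another camera; names are illustrative:

```python
import numpy as np

def initial_disparities(tof_pts, R, t, K, baseline):
    """Transform TOF points into the reference camera frame (step 2),
    project them with the intrinsic matrix K (step 3), and derive an
    initial disparity d = f * baseline / Z per point (step 4).

    tof_pts: (N, 3) array of points in the TOF coordinate system.
    Returns (N, 2) pixel coordinates and (N,) initial disparities.
    """
    pts_cam = tof_pts @ R.T + t          # step 2: TOF -> camera frame
    Z = pts_cam[:, 2]
    proj = pts_cam @ K.T                 # step 3: projection equation
    uv = proj[:, :2] / proj[:, 2:3]      # pixel coordinates
    d = K[0, 0] * baseline / Z           # step 4: initial disparity
    return uv, d
```

The initial disparity narrows the search range for matching in the other camera images, which is what accelerates the matching process.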
- the system can also be integrated with structured light to solve the problem of images without features such as large white walls.
- Structured light can form high-density features on the surface of the object and is flexible to use: it can merely add feature points to the surface of the object (allowing the pattern arrangements at different positions to repeat), with the three-dimensional point cloud generated by matching in the multi-camera system; or it can project coded patterns (guaranteeing that the pattern code or neighborhood pattern code of each point is unique), with the three-dimensional point cloud calculated from the triangulation principle between the camera and the projection device.
- a combination of "camera + TOF + structured light" can be formed, which can produce a high-density, ultra-high-resolution 3D point cloud usable in applications such as VR/AR games or movies that require highly realistic 3D virtual objects.
- the three-dimensional measuring system according to the present invention is realized as a device for automatic driving of a vehicle.
- the camera module is mounted on the front of the vehicle and includes three cameras of the same model.
- the centers of the three cameras are on a straight line, with equal distances between the centers of adjacent cameras.
- the optical axes of the cameras are parallel to each other and face the same, all perpendicular to the line where the camera center is located.
- the camera module is connected to the onboard computer via a data cable.
- the on-board computer can control the camera module's three cameras to shoot simultaneously (photos and/or videos) at equal magnification.
- the photos and/or videos captured by the camera module are transmitted to the onboard computer via the data cable.
- the on-board computer generates correction matrices from multiple images (image frames of photos or videos) taken at the same time. After correction, the on-board computer extracts the feature points and feature regions shared by the images taken by the three cameras, selects the matching feature point groups having the image coordinate symmetry relationship and similar attributes, calculates the three-dimensional coordinates of the object points corresponding to the matched feature point groups, fuses them with color information, and outputs a true-color three-dimensional point cloud, which is pushed to the decision system of the vehicle's automatic driving.
- the system can also be integrated with lidar to solve the problems of feature-free images (such as a large white wall) or occlusion.
- Lidar is suitable for spatial sampling of large objects, even in the case of images without features.
- due to the spatial sampling rate limitation, however, a thin object such as a power pole several tens of meters away may be missed when its opening angle is smaller than the angular resolution of the lidar's spatial sampling.
- the image system is more sensitive to various features of the object, including edge features.
- fusing the two point clouds not only ensures that large targets can be sampled, but also captures object surface details and small objects, forming a three-dimensional point cloud with more comprehensive spatial sampling and higher resolution.
- the three-dimensional point cloud formed by the lidar is converted into initial parallax values for the corresponding sampling points on the image, which can accelerate the image matching process.
- the basic process of fusing the image three-dimensional measurement system with lidar is as follows: 1. the lidar generates a 3D point cloud; 2. using the transformation between the lidar coordinate system and the reference camera coordinate system, the lidar 3D coordinates are converted into the 3D coordinates of the corresponding points in the reference camera coordinate system; 3. according to the camera's projection equation, the 3D coordinates of the corresponding sampling points in the reference camera coordinate system are converted into 2D image coordinates and parallax; 4. the 2D coordinates and parallax of the sampling points on the reference camera image are used to find the initial positions of the corresponding 2D coordinates on the other cameras' images; 5.
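Steps 2 through 4 of this fusion pipeline can be sketched under a simple pinhole model; the function name, the rectified equal-spacing baseline, and the pixel-unit intrinsics below are illustrative assumptions, not part of the disclosure:

```python
import numpy as np

def lidar_to_initial_disparity(points_lidar, R, t, f, cx, cy, baseline):
    """Convert lidar points into reference-camera pixel coordinates and an
    initial disparity estimate (steps 2-4 of the fusion pipeline above).

    points_lidar : (N, 3) points in the lidar frame
    R, t         : rotation (3x3) and translation (3,) from the lidar frame
                   to the reference camera frame
    f, cx, cy    : focal length and principal point, in pixels
    baseline     : optical-center spacing D to the adjacent camera
    """
    # Step 2: lidar frame -> reference camera frame
    pts_cam = points_lidar @ R.T + t
    X, Y, Z = pts_cam[:, 0], pts_cam[:, 1], pts_cam[:, 2]

    # Step 3: pinhole projection to image coordinates, plus disparity d = f*D/Z
    u = f * X / Z + cx
    v = f * Y / Z + cy
    d = f * baseline / Z

    # Step 4: predicted initial abscissa on the adjacent camera's image
    u_other = u - d
    return u, v, d, u_other
```

The predicted position `u_other` then only needs to be refined locally, instead of searching the full disparity range.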
- the three-dimensional measurement system according to the present invention is realized as a device for performing aerial photography based on a drone and an external camera module.
- the camera module is mounted on the drone's onboard gimbal and includes five cameras of the same model.
- the five cameras are distributed in a cross shape, with their optical centers on the same plane.
- the distance between the centers of adjacent cameras in the longitudinal and lateral directions is equal.
- the optical axes of the cameras are parallel and face the same direction, all perpendicular to the plane on which the optical centers lie.
- the camera module is connected to the drone's onboard computer via a data cable.
- the onboard computer can control the camera module's five cameras to shoot simultaneously (photos and/or video) and to zoom by equal magnification.
- Photos and/or videos captured by the camera module are transmitted to the onboard computer via the data cable.
- the onboard computer generates a correction matrix from multiple images (photos or video frames) taken at the same time, and corrects the images with the correction matrix.
- the onboard computer can, in real time, extract the feature points and feature regions shared by the images of the five cameras of the camera module, select matched feature point groups that satisfy the image-coordinate symmetry relationship and have similar attributes, calculate the three-dimensional coordinates of the object points corresponding to the matched feature point groups, and, combined with color information, output a true-color 3D point cloud.
- the equal signs in the formulas of the present invention denote equality in the engineering sense, tolerating a certain deviation; that is, if the difference between the two sides of an equal sign is within a certain range, the two sides can be considered equal.
- the range of the deviation is, for example, plus or minus 5%, or plus or minus 1%.
Description
The present invention generally relates to methods, apparatus, and systems for three-dimensional measurement, and more particularly to three-dimensional measurement methods, apparatus, and systems based on computer vision techniques.
Three-dimensional measurement based on computer vision, and three-dimensional reconstruction built on such measurement, are widely used in industry, security, transportation, and entertainment. Industrial robots require three-dimensional spatial information to perceive the real world and make decisions. Adding three-dimensional scene information to security monitoring improves target recognition accuracy. Self-driving cars, drones, and the like need to sense the positions of surrounding objects in real time. Building restoration and cultural relic restoration require three-dimensional reconstruction of buildings and relics, in particular reconstruction based on high-density true-color point clouds. In addition, character models formed by three-dimensional reconstruction are widely used in the movie, animation, and game industries, and three-dimensional virtual characters formed at least in part by three-dimensional reconstruction are also widely used in the VR and AR industries.
Three-dimensional reconstruction based on binocular cameras has been developed for decades, but recovering a true-color three-dimensional point cloud with a high accuracy rate is not easy. Specifically, binocular three-dimensional reconstruction recovers the three-dimensional coordinates of an object by computing the positional deviation between image points that correspond to the same object point in the two images. Its core steps are selecting feature points in the images, and finding/screening groups of feature points on different images that may correspond to the same object point (i.e., feature point matching). Methods currently used to screen matched feature point groups include:
1) computing the sum of squared differences (SSD) of pixel grey levels between candidate corresponding points in different images, and finding the minimum;
2) computing the zero-mean sum of squared differences (ZSSD) of pixel grey levels between candidate corresponding points in different images, and finding the minimum;
3) computing the sum of absolute differences (SAD) of pixel grey levels between candidate corresponding points in different images, and finding the minimum;
4) computing the zero-mean sum of absolute differences (ZSAD) of pixel grey levels between candidate corresponding points in different images, and finding the minimum;
5) computing the normalized cross-correlation (NCC) between the neighborhoods of candidate corresponding points in different images, and finding the maximum;
6) computing the zero-mean normalized cross-correlation (ZNCC) between the neighborhoods of candidate corresponding points in different images, and finding the maximum.
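As a minimal illustration of two of these similarity costs, the following sketch computes SSD and ZNCC for a pair of grey-level patches (the function names and small-patch setup are assumptions for illustration, not part of the disclosure):

```python
import numpy as np

def ssd(patch_a, patch_b):
    # Sum of squared grey-level differences; the best match minimizes this.
    diff = patch_a.astype(np.float64) - patch_b.astype(np.float64)
    return np.sum(diff ** 2)

def zncc(patch_a, patch_b, eps=1e-12):
    # Zero-mean normalized cross-correlation; the best match maximizes
    # this, with 1.0 the largest possible value.
    a = patch_a.astype(np.float64) - patch_a.mean()
    b = patch_b.astype(np.float64) - patch_b.mean()
    denom = np.sqrt(np.sum(a ** 2) * np.sum(b ** 2)) + eps
    return np.sum(a * b) / denom
```

In a stereo matcher these costs are evaluated for every candidate point along the search range, which is what makes purely similarity-based matching expensive.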
A common feature of the above methods for screening matched feature point groups is that they search for the feature point on one image that is most similar to the feature point to be matched on another image. One problem is that the matching error rate is often very high when processing images of complex scenes. For example, in an image of a periodic structure, two image points that do not belong to the same object, or that lie on different periods of the same structure, may well have the largest normalized cross-correlation of their neighborhoods or the smallest pixel difference, while the true corresponding combination ranks only second in normalized cross-correlation or pixel difference. When there is no other dimension against which to verify a match, such matching errors persist and propagate. Another problem with these existing screening methods is that their computational cost is very high, leading to long processing times and/or computing equipment that is difficult to miniaturize and expensive.
Summary of the Invention
It is an object of the present invention to provide a three-dimensional measurement method, and an apparatus and a system for implementing the method, which at least partially solve the above problems in the prior art.
According to one aspect of the present invention, a three-dimensional measurement method based on a parallax ratio relationship is provided. The method includes: receiving a first image, a second image, and a third image from a first camera, a second camera, and a third camera, respectively, the first, second, and third cameras having the same focal length and mutually parallel optical axes, with their optical centers arranged on the same plane perpendicular to the optical axes; extracting feature points in the first, second, and third images respectively; matching the feature points in the first, second, and third images, the matching including screening matched feature point groups based on the following parallax ratio relationship: for the same object point, the first parallax d1 in a first direction, generated between the first image and the second image, and the second parallax d2 in a second direction, generated between the second image and the third image, satisfy d1:d2 = D1:D2, where D1 is the offset of the optical center of the first camera relative to the optical center of the second camera in the first direction, and D2 is the offset of the optical center of the second camera relative to the optical center of the third camera in the second direction, the first direction being a direction parallel to said plane and not perpendicular to the line connecting the optical centers of the first and second cameras, and the second direction being a direction parallel to said plane and not perpendicular to the line connecting the optical centers of the second and third cameras; and calculating the three-dimensional coordinates of the object points corresponding to the matched feature point groups.
According to another aspect of the present invention, a three-dimensional measurement apparatus is provided, which includes: a processor; and a memory storing program instructions which, when executed by the processor, cause the processor to perform the following operations: receiving a first image, a second image, and a third image; extracting feature points in the first, second, and third images respectively; matching the feature points in the first, second, and third images, the matching including screening matched feature point groups based on the following coordinate relationship: the difference between the abscissa of a feature point in the first image and the abscissa of a feature point in the second image is in a predetermined proportional relationship with the difference between the abscissa of said feature point in the second image and the abscissa of a feature point in the third image, and said feature points in the first and third images have the same ordinate; and calculating the three-dimensional coordinates of the object points corresponding to the matched feature point groups.
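A minimal sketch of this coordinate-relationship screen might look as follows, assuming rectified images and a known offset ratio D1/D2; the function name, the 5% default tolerance (equality in the engineering sense, as the description notes), and the one-pixel ordinate tolerance are illustrative assumptions:

```python
def is_matching_group(p1, p2, p3, ratio, tol=0.05):
    """Screen a candidate feature-point triple: p1, p2, p3 are (x, y)
    pixel coordinates of the candidate points in the first, second, and
    third images; `ratio` is the known offset ratio D1/D2."""
    d1 = p1[0] - p2[0]  # abscissa difference between images 1 and 2
    d2 = p2[0] - p3[0]  # abscissa difference between images 2 and 3
    if d2 == 0:
        return False
    # Equality in the engineering sense: d1/d2 within tol of D1/D2
    if abs(d1 / d2 - ratio) > tol * abs(ratio):
        return False
    # Feature points in the first and third images must share the same
    # ordinate (here: within one pixel)
    return abs(p1[1] - p3[1]) <= 1.0
```

A triple that fails this geometric test can be rejected without any similarity computation, which is what allows the screening to prune false matches that similarity scores alone would keep.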
According to still another aspect of the present invention, a three-dimensional measurement apparatus is provided for use with a camera array to perform three-dimensional measurement. The camera array includes at least a first camera, a second camera, and a third camera having the same focal length and mutually parallel optical axes, with their optical centers arranged on the same plane perpendicular to the optical axes. The three-dimensional measurement apparatus includes a processing unit that receives a first image, a second image, and a third image from the first, second, and third cameras respectively, and is configured to: extract feature points in the first, second, and third images respectively; match the feature points in the first, second, and third images, the matching including screening matched feature point groups based on the following parallax ratio relationship: for the same object point, the first parallax d1 in a first direction, generated between the first image and the second image, and the second parallax d2 in a second direction, generated between the second image and the third image, satisfy d1:d2 = D1:D2, where D1 is the offset of the optical center of the first camera relative to the optical center of the second camera in the first direction, and D2 is the offset of the optical center of the second camera relative to the optical center of the third camera in the second direction, the first direction being parallel to said plane and not perpendicular to the line connecting the optical centers of the first and second cameras, and the second direction being parallel to said plane and not perpendicular to the line connecting the optical centers of the second and third cameras; and calculate the three-dimensional coordinates of the object points corresponding to the matched feature point groups.
According to yet another aspect of the present invention, a three-dimensional measurement system based on a parallax ratio relationship is provided. The system includes a camera array and any one of the three-dimensional measurement apparatus described above, the camera array including at least a first camera, a second camera, and a third camera having the same focal length and mutually parallel optical axes, with their optical centers arranged on the same plane perpendicular to the optical axes.
Other features, objects, and advantages of the present invention will become more apparent from the following detailed description of non-limiting embodiments, read with reference to the accompanying drawings:
Fig. 1A, Fig. 1B, and Fig. 1C show the relationship between binocular parallax and the relative positions of the camera optical centers;
Fig. 2 shows the parallax ratio relationship of cameras arranged on the same plane perpendicular to the optical axes;
Fig. 3 shows a schematic structural block diagram of an example of the three-dimensional measurement system of the present invention;
Fig. 4 shows a schematic overall flowchart of the three-dimensional measurement method of the present invention;
Fig. 5 shows an example of a camera array that can be used with the three-dimensional measurement method according to the first embodiment of the present invention;
Fig. 6A, Fig. 6B, and Fig. 6C show the parallax ratio relationships between the cameras shown in Fig. 5;
Fig. 7 schematically shows an example of images obtained by the camera array shown in Fig. 5 and illustrates the coordinate differences of the image points;
Fig. 8 shows a schematic flowchart of the three-dimensional measurement method according to the first embodiment of the present invention;
Fig. 9 shows an example of the feature point group screening process that can be used in the three-dimensional measurement method according to the first embodiment of the present invention;
Fig. 10 schematically compares feature point matching based on similarity computation with the coordinate-relationship-based feature point matching of the three-dimensional measurement method according to the first embodiment of the present invention;
Fig. 11 shows a flowchart of an example of the three-dimensional measurement method according to the first embodiment of the present invention;
Fig. 12 shows an example of a camera array that can be used with the three-dimensional measurement method according to the second embodiment of the present invention and shows its parallax ratio relationship;
Fig. 13 schematically shows an example of images obtained by the camera array shown in Fig. 12 and illustrates the coordinate differences of the image points;
Fig. 14 shows an example of the feature point group screening process that can be used in the three-dimensional measurement method according to the second embodiment of the present invention;
Fig. 15 shows an example of a camera array that can be used with the three-dimensional measurement method according to the third embodiment of the present invention and shows its parallax ratio relationship;
Fig. 16 schematically shows an example of images obtained by the camera array shown in Fig. 12 and illustrates the coordinate differences of the image points;
Fig. 17 shows an example of the feature point group screening process that can be used in the three-dimensional measurement method according to the third embodiment of the present invention;
Fig. 18 shows a flowchart of an example of the three-dimensional measurement method according to the third embodiment of the present invention; and
Fig. 19 shows an example of a camera array arrangement that can be used in the three-dimensional measurement system according to an embodiment of the present invention.
The present invention will be described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the related invention and do not limit it. It should also be noted that, for ease of description, only the parts related to the invention are shown in the drawings.
In this application, unless otherwise stated, "left" and "right", "upper" and "lower" denote only relative positions in mutually perpendicular directions, and are not limited to the usual everyday notions of left, right, up, and down. Likewise, unless otherwise stated, "longitudinal" and "lateral" denote directions perpendicular to each other; the former is not limited to the vertical direction, and the latter is not limited to the horizontal direction.
It should be noted that, where no conflict arises, the embodiments in this application and the features in the embodiments may be combined with one another. The invention is described in detail below with reference to the drawings and in conjunction with the embodiments.
The relationship between binocular parallax and camera optical-center spacing is first described with reference to Fig. 1A, Fig. 1B, and Fig. 1C. In the figures, O_l and O_r denote the optical centers of the left and right cameras (the optical centers of the camera lenses), and I_l and I_r denote the image planes of the left and right cameras (hereinafter the left image plane and the right image plane). The image plane of a camera is determined by the position of the photosensitive surface of the camera's image sensor (for example a CCD or CMOS sensor), which is usually located at one focal length f from the optical center. A camera normally forms an inverted real image of an object, with the image plane on the opposite side of the optical center from the object; here, however, for convenience of illustration and analysis, the image planes are drawn at symmetric positions on the same side as the object, as shown. It should be understood that this does not change the parallax relationships discussed in this application.
The two cameras used for binocular vision have the same focal length. The optical centers O_l and O_r of the left and right cameras are separated by a distance D (also called the "baseline"), and the corresponding optical axes Z_l and Z_r are parallel to each other.
In Fig. 1A, Fig. 1B, and Fig. 1C, the optical center O_l of the left camera is taken as the origin of the camera coordinate system, and the direction of the optical axes of the left and right cameras as the Z direction. The optical centers of the left and right cameras lie in the same plane perpendicular to the optical axes (the XY plane). In Fig. 1A and Fig. 1B, the direction of the line connecting O_l and O_r is taken as the X direction of the camera coordinate system, and the direction perpendicular to that line and to the optical axes as the Y direction. It should be understood that the camera coordinate system can also be set up in other ways, for example with the optical center O_r of the right camera as the origin; different choices of camera coordinate system do not affect the parallax ratio relationship discussed below.
In this application, the image planes of the cameras carry image-plane coordinate systems defined in the same way. For example, in the illustrated examples of this application, the point where the camera's optical axis intersects the image plane (the image point of object points on the optical axis) is the origin of the image-plane coordinate system, and the directions parallel to the X and Y axes of the camera coordinate system are its x and y axes. It should be understood that the image-plane coordinate system can also be set up in other ways, for example with one corner of the image sensor's photosensitive surface as the origin; different choices of image-plane coordinate system do not affect the parallax ratio relationship discussed below.
Fig. 1A shows the relationship between the optical-center spacing and the binocular parallax in the direction of the line connecting the camera optical centers. Fig. 1A shows the projection of the entire imaging system onto the XZ plane. The same object point P[X, Y, Z] in space is imaged by the left and right cameras at image points P_l and P_r respectively. As shown more clearly in Fig. 1B, by triangle similarity, for the object point P the left and right cameras produce a parallax d = x_l - x_r in the image direction corresponding to the line connecting the optical centers (the x direction), with d/D = f/Z, where x_l and x_r are the x coordinates of the image points P_l and P_r in the left image plane I_l and the right image plane I_r respectively.
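The relation d/D = f/Z recovers depth directly from disparity. A one-line sketch (the function name and the example units, pixels for f and d and meters for D, are illustrative assumptions):

```python
def depth_from_disparity(d, D, f):
    """Recover depth Z from the x-direction disparity d = x_l - x_r,
    the baseline D, and the focal length f, using d/D = f/Z."""
    return f * D / d

# Example: f = 1000 (pixels), baseline D = 0.1 (m), disparity d = 25 (pixels)
Z = depth_from_disparity(25.0, 0.1, 1000.0)  # -> 4.0 meters
```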
Fig. 1B shows the binocular parallax in the direction perpendicular to the line connecting the camera optical centers. Fig. 1B schematically shows the projection of the entire imaging system onto the YZ plane. As shown in Fig. 1B, the object point P produces no parallax between the left and right cameras in the image direction perpendicular to the line connecting the optical centers (the y direction), i.e., d = y_l - y_r = 0.
Next, consider the relationship between the parallax in an arbitrary direction on the image plane and the camera optical-center offsets. For convenience of discussion, in Fig. 1C the direction of the parallax under consideration is taken as the X direction; the line connecting the optical centers of the left and right cameras is then not aligned with the X direction, in other words, the direction of the parallax under consideration may be arbitrary relative to the line connecting the optical centers. One may imagine an intermediate camera whose optical center O_m lies in the same plane perpendicular to the optical axes as O_l and O_r, aligned with the left camera along the X direction (sharing the same Y coordinate) and with the right camera along the Y direction (sharing the same X coordinate), as shown in Fig. 1C. Following the discussion of Fig. 1A, Fig. 1B, and Fig. 1C above, the parallax between the left camera and the intermediate camera in the X direction is d_lm = x_l - x_m = D_x * f/Z, where x_m is the x coordinate, on the intermediate camera's image plane, of the image point of the object point, and D_x equals the offset of the right camera relative to the left camera in the X direction, i.e., D_x = X_r - X_l. Since the intermediate camera and the right camera are aligned in the Y direction, there is no parallax between them in the X direction, i.e., d_mr = x_m - x_r = 0. In sum, the parallax produced between the left and right cameras in the X direction is d_x = x_l - x_r = d_lm + d_mr = D_x * f/Z. Similarly, the parallax produced between the left and right cameras in the Y direction is d_y = y_l - y_r = D_y * f/Z, where D_y equals the offset of the right camera relative to the left camera in the Y direction, i.e., D_y = Y_r - Y_l.
Fig. 2 extends the situation of Fig. 1C to three cameras arranged in the same plane perpendicular to the optical axes: a first camera C1, a second camera C2, and a third camera C3 have the same focal length f and mutually parallel optical axes (not shown), and their optical centers O1, O2, and O3 lie in the same plane perpendicular to the optical axes. By the analysis of Fig. 1C above, for the same object point, the first parallax in the first direction generated between the images obtained by the first and second cameras is d1 = D1 * f/Z, where D1 is the offset of the optical center O2 of the second camera relative to the optical center O1 of the first camera in the first direction A; and the second parallax in the second direction generated between the images obtained by the second and third cameras is d2 = D2 * f/Z, where D2 is the offset of the optical center O3 of the third camera relative to the optical center O2 of the second camera in the second direction B. Here the first direction A is parallel to the plane containing the optical centers and not perpendicular to the line connecting the optical centers of the first and second cameras, and the second direction B is parallel to that plane and not perpendicular to the line connecting the optical centers of the second and third cameras. Although Fig. 2 shows the first direction differing from the second direction, the two directions may also be the same. The inventors found that for three cameras arranged in the same plane perpendicular to the optical axes the following parallax ratio relationship holds: the first parallax d1 and the second parallax d2 satisfy d1:d2 = D1:D2. It should be noted that the equal sign in the above equation does not mean the invention is limited to strict equality; those skilled in the art will understand that when the two sides of the equal sign differ within a certain range, they can be considered equal in the engineering sense. For example, the relationship can be expressed as d1:d2 = (1+k) * D1:D2, where k denotes the allowed relative deviation, |k| << 1, and |k| may for example be 0.05 or 0.01.
Based on the above finding, a three-dimensional measurement system and a three-dimensional measurement method based on the parallax proportionality relation are proposed according to the present invention. FIG. 3 shows a schematic structural block diagram of one example of a three-dimensional measurement system 10 according to the present invention. FIG. 4 shows a schematic overall flowchart of the three-dimensional measurement method 100 of the present invention.
As shown in FIG. 3, the three-dimensional measurement system 10 includes a camera array CA that comprises at least a first camera C1, a second camera C2 and a third camera C3, which have the same focal length and mutually parallel optical axes, and whose optical centers are arranged in the same plane perpendicular to the optical axes. Preferably, the cameras have the same aperture, ISO, shutter time, image sensor, and so on. More preferably, the cameras are of exactly the same model.
The three-dimensional measurement system 10 includes a processing unit 11, which receives images from the camera array, including a first image, a second image and a third image from the first camera C1, the second camera C2 and the third camera C3, respectively, and processes these images to realize three-dimensional measurement. Of course, the camera array CA may include additional cameras, and the processing unit 11 may receive and process images from these additional cameras as well.
The three-dimensional measurement system 10 may include a control unit 12 operable to control the first camera, the second camera and the third camera to capture images synchronously. For three-dimensional reconstruction of dynamic scenes, such as traffic on a road or a person's momentary expression, unsynchronized capture results in a very high error rate in the matching results. The control unit 12 may also control the cameras to zoom by equal factors; for example, for a distant scene, the three cameras need to be adjusted identically to a longer focal length. The control unit 12 may be connected to the first camera C1, the second camera C2 and the third camera C3 by wire or wirelessly to realize the above control. The control unit 12 may communicate with the processing unit 11 and receive information from it to generate the control signals for synchronous image capture, or it may operate independently to realize the above control.
The three-dimensional measurement method 100 of the present invention, based on the parallax proportionality relation, is implemented with a camera array CA such as that of the three-dimensional measurement system 10. The three-dimensional measurement method 100 includes:
s110: receiving a first image, a second image and a third image from the first camera, the second camera and the third camera, respectively;

s120: extracting feature points in the first image, the second image and the third image;

s130: matching the feature points in the first image, the second image and the third image, the matching including screening matched feature point groups based on the following parallax proportionality relation: for the same object point, the first parallax d1 in the first direction arising between the first image and the second image and the second parallax d2 in the second direction arising between the second image and the third image satisfy d1 : d2 = D1 : D2, where D1 is the offset of the optical center of the first camera relative to the optical center of the second camera in the first direction, and D2 is the offset of the optical center of the second camera relative to the optical center of the third camera in the second direction, the first direction being parallel to the plane and not perpendicular to the line connecting the optical centers of the first and second cameras, and the second direction being parallel to the plane and not perpendicular to the line connecting the optical centers of the second and third cameras; and

s140: calculating the three-dimensional coordinates of the object point corresponding to each matched feature point group.
Likewise, the equal sign in the equation d1 : d2 = D1 : D2 does not limit the invention to strict equality; those skilled in the art will understand that, as long as the two sides differ within a certain range, they may be regarded as equal in the engineering sense, for example d1 : d2 = (1 + k) × D1 : D2, where k denotes the allowable relative deviation, |k| << 1, and |k| may be, for example, 0.05 or 0.01.
The processing unit 11 of the three-dimensional measurement system 10 is configured to perform the above processes s110 to s140 of the three-dimensional measurement method 100. The processing unit 11 may be implemented by a processor and a memory storing program instructions which, when executed by the processor, cause the processor to perform the operations of processes s110 to s140. The processing unit 11 may constitute a three-dimensional measurement apparatus 20 according to the present invention.
In process s110, the images may be received directly from the first camera, the second camera and the third camera, or via other units or devices.
In process s120, feature points are found in each image and described by a combination of attributes. For example, points with large gray-level changes may be found as feature points using gray-level gradients, SIFT feature points may be found using the SIFT algorithm, or corner points of an image may be found as feature points using a corner detection algorithm such as the Harris algorithm. It should be understood that the invention is not limited to any specific method of extracting feature points.
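As one illustration of the gray-level-gradient option mentioned above (a minimal sketch, not the patent's prescribed detector; the synthetic image and threshold are illustrative), points whose gradient magnitude exceeds a threshold can be collected as feature points:

```python
# Minimal sketch: pick points with large gray-level change as feature points.
# The image, threshold and gradient operator are illustrative choices.
import numpy as np

def gradient_feature_points(gray, thresh):
    """Return (row, col) coordinates whose gradient magnitude exceeds thresh."""
    gy, gx = np.gradient(gray.astype(float))   # central-difference gradients
    mag = np.hypot(gx, gy)                     # gray-level gradient magnitude
    rows, cols = np.nonzero(mag > thresh)
    return list(zip(rows.tolist(), cols.tolist()))

# Tiny synthetic image: a bright square on a dark background.
img = np.zeros((8, 8), dtype=np.uint8)
img[2:6, 2:6] = 255
pts = gradient_feature_points(img, thresh=100.0)
print(len(pts) > 0)   # edge pixels of the square are detected; flat regions are not
```

The interior of the square has zero gradient and is skipped, while its border pixels, where the gray level changes sharply, are returned as feature points.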
Process s130 matches the feature points across the images and includes screening matched feature point groups based on the above parallax proportionality relation. The screening based on the parallax proportionality relation in process s130 is described in more detail below in connection with different embodiments.
Process s130 may further include screening matched feature point groups in other ways. In other words, in some embodiments, process s130 may realize feature point matching solely through processing based on the parallax proportionality relation; in other embodiments, process s130 may combine processing based on the parallax proportionality relation with other matching/screening methods. Other ways of screening matched feature point groups include, for example, applying a similarity calculation to pixels or neighborhood pixel groups. Here, the similarity calculation includes computing at least one of: the sum of squared pixel gray-level differences, the zero-mean sum of squared pixel gray-level differences, the sum of absolute pixel gray-level differences, the zero-mean sum of absolute pixel gray-level differences, the normalized cross-correlation between neighborhoods, and the zero-mean normalized cross-correlation.
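As one concrete instance of these similarity measures (an illustrative sketch; the patent does not prescribe a particular implementation), the zero-mean normalized cross-correlation (ZNCC) of two equal-sized neighborhoods can be computed as:

```python
# Minimal sketch: zero-mean normalized cross-correlation (ZNCC) of two patches.
import numpy as np

def zncc(a, b):
    """ZNCC of two equal-shaped patches; 1.0 means perfectly correlated."""
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

p = np.array([[10, 20], [30, 40]])
q = p * 2 + 5          # same pattern, different gain and offset
print(round(zncc(p, q), 6))  # → 1.0 (ZNCC is invariant to gain and offset)
```

Zero-mean measures of this kind are robust to brightness offsets between cameras, which is why they are common choices for the neighborhood similarity screening mentioned above.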
For example, in some embodiments, process s130 further includes applying a similarity calculation on pixels or neighborhood pixel groups to the matched feature point groups screened out based on the parallax proportionality relation, to screen them further. In some examples, the similarity calculation may be applied only to two or more feature point groups that share a feature point, i.e., only where the matching result is not unique.
In other embodiments, in process s130, matched feature point groups are first screened by, for example, applying a similarity calculation to the feature points or their neighborhood pixel groups; then, for each matched feature point group obtained by this similarity screening, it is judged whether the group satisfies the parallax proportionality relation, thereby further screening the matched feature point groups.
Each matched feature point group obtained by process s130 contains one feature point from each of the first image, the second image and the third image. In process s140, the depth (Z coordinate) of the corresponding object point may be calculated from any two feature points of a matched feature point group, and the X and Y coordinates of the object point may then be calculated according to the principle of similar triangles. Methods of calculating a depth value from two matched feature points and of calculating the X and Y coordinates from the depth value are known and are not described further here.
In some preferred embodiments, in process s140, a depth value is calculated from each pair of feature points in a matched feature point group, and the average of these values is taken as the depth value of the corresponding object point. In theory the calculated depth values should be equal, but in practice, owing to image noise, the three values differ slightly; taking their average reduces the influence of noise and improves the accuracy of the depth value. In some other embodiments, two pairs of feature points of the feature point group may be selected to calculate depth values, whose average is taken as the depth value of the object point.
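The depth-from-disparity step just described can be sketched as follows (illustrative numbers; a collinear arrangement as in FIG. 5 is assumed, so the three pairings have baselines D1, D2 and D1 + D2, each giving an estimate Z = D × f / d that is then averaged):

```python
# Minimal sketch: depth from pairwise disparities of one matched triple,
# averaging the three estimates as described above. Values are illustrative.

f = 0.008            # focal length (m), hypothetical
D1, D2 = 0.10, 0.10  # optical-center offsets (m), symmetric arrangement
Z_true = 4.0

# Ideal disparities for an object point at depth Z_true (d = D * f / Z).
d12 = D1 * f / Z_true               # camera 1 vs 2
d23 = D2 * f / Z_true               # camera 2 vs 3
d13 = (D1 + D2) * f / Z_true        # camera 1 vs 3 (collinear arrangement)

# Each pair gives a depth estimate Z = D * f / d; average to suppress noise.
estimates = [D1 * f / d12, D2 * f / d23, (D1 + D2) * f / d13]
Z = sum(estimates) / len(estimates)
print(Z)  # ≈ 4.0

# X, Y then follow from similar triangles, e.g. X = x_image * Z / f.
```

With noise-free inputs the three estimates coincide; with real, noisy disparities they differ slightly and the average is the more stable value, as the paragraph above explains.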
For three-dimensional measurement aimed at three-dimensional reconstruction, process s140 may further include calculating a color value, e.g. [R, G, B], of the object point corresponding to each matched feature point group. This color value can be combined with the object point's coordinates [X, Y, Z] to form voxel information, e.g. [X, Y, Z, R, G, B]. In three-dimensional reconstruction, the voxel information of all object points can be combined to form a true-color three-dimensional model of the object or scene.
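Forming the voxel can be sketched as below. Note that averaging the colors sampled at the three matched image points is an illustrative choice of ours; the patent only states that a color value is computed per matched group:

```python
# Minimal sketch: forming a voxel [X, Y, Z, R, G, B] for one matched group.
# Averaging the per-view color samples is an illustrative choice, not
# something the patent prescribes.

def make_voxel(xyz, colors):
    """xyz: (X, Y, Z); colors: per-view (R, G, B) samples at the matched points."""
    n = len(colors)
    rgb = tuple(sum(c[i] for c in colors) / n for i in range(3))
    return (*xyz, *rgb)

voxel = make_voxel((1.0, 2.0, 4.0),
                   [(100, 110, 120), (102, 112, 118), (98, 108, 122)])
print(voxel)  # → (1.0, 2.0, 4.0, 100.0, 110.0, 120.0)
```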
The three-dimensional measurement method 100 and the three-dimensional measurement system 10, and in particular the screening of matched feature point groups based on the parallax proportionality relation, are described in more detail below in connection with different embodiments.
FIGS. 5 to 11 illustrate a three-dimensional measurement system and a three-dimensional measurement method according to a first embodiment of the present invention.
FIG. 5 shows the camera array CA in the three-dimensional measurement system according to the first embodiment of the present invention, in which the first camera C1, the second camera C2 and the third camera C3 have the same focal length and mutually parallel optical axes, and the optical centers of the three cameras are arranged on the same straight line (the X direction) perpendicular to the optical axes. In arrangement (a) of FIG. 5, D1 = D2; in arrangement (b), D1 ≠ D2.
FIGS. 6A, 6B and 6C show the parallax proportionality relation among the three cameras of FIG. 5. As shown, the same object point P[X, Y, Z] in space is imaged by the first camera C1, the second camera C2 and the third camera C3 at point P1 on the first image plane I1, point P2 on the second image plane I2 and point P3 on the third image plane I3, respectively. Each image plane has an image-plane coordinate system set up in the same way, as described above. In FIG. 6, the x-axis of each image plane corresponds to the direction of the line containing the optical centers, and the y-axis is perpendicular to the x-axis. As shown more clearly in FIG. 6C, in the x-axis direction, the first parallax d1 = x1 − x2 = D1 × f / Z arising between the first and second cameras and the second parallax d2 = x2 − x3 = D2 × f / Z arising between the second and third cameras satisfy d1 : d2 = D1 : D2. Likewise, the relation may be regarded as holding when d1 : d2 = (1 + k) × D1 : D2, where k denotes the allowable relative deviation, |k| << 1, and |k| may be, for example, 0.05 or 0.01.
FIG. 7 schematically shows an example of the first image IM1, the second image IM2 and the third image IM3 obtained by the camera array of FIG. 5, in which the abscissa axis (u-axis) of each image corresponds to the direction of the line containing the optical centers. Ideally, as shown by the virtual image IM′ obtained in FIG. 7 by superimposing the first image IM1, the second image IM2 and the third image IM3, the corresponding image points P1[u1, v1], P2[u2, v2] and P3[u3, v3] of the same object point P[X, Y, Z] in images IM1, IM2 and IM3 have the same ordinate, i.e. v1 = v2 = v3, and the abscissa difference s1 = u1 − u2 = d1 / dp of image points P1[u1, v1] and P2[u2, v2] and the abscissa difference s2 = u2 − u3 = d2 / dp of image points P2[u2, v2] and P3[u3, v3] satisfy s1 : s2 = d1 : d2 = D1 : D2, where dp is the side length of a pixel unit of the image sensor.
Owing to manufacturing errors, mounting errors, and changes in camera intrinsic and/or extrinsic parameters arising during use, the first, second and third images actually obtained directly from the first, second and third cameras generally deviate from the ideal situation described above. The images can be brought close to the ideal situation by physical calibration or adjustment of the cameras and/or by correcting the images computationally. Accordingly, the three-dimensional measurement method according to an embodiment of the present invention may include a correction process applied to the first, second and third images, which makes the points in the first, second and third images that correspond to the optical axes of the first, second and third cameras have the same abscissa and ordinate, and makes the abscissa directions of the first, second and third images all correspond to the direction of the straight line, the abscissa and ordinate directions being perpendicular to each other.
The correction process is performed before process s130 of FIG. 4, i.e. before the matching of feature points, and preferably after the images are received and before the feature points are extracted, i.e. between processes s110 and s120 of FIG. 4. It should be noted, however, that the three-dimensional measurement method according to embodiments of the present invention is not limited to including this correction process; for example, in some applications the correction may be achieved by physical adjustment of the camera array, without correcting the images.
Accordingly, as shown in FIG. 3, the three-dimensional measurement system 10 according to the present invention may further include a correction unit 13 that receives images from the camera array CA, generates a correction matrix based at least in part on those images, and provides the correction matrix to the processing unit 11. The correction matrix, when applied by the processing unit 11 to the first, second and third images, realizes the above correction process of the three-dimensional measurement method. As shown in FIG. 3, the three-dimensional measurement apparatus 20 may include the correction unit 13. The processing unit 11 and the correction unit 13 may be implemented on the same processor and memory or on different processors and memories.
Based on a camera array with the arrangement shown in FIG. 5, as shown in FIG. 8, in process s230 of the three-dimensional measurement method 200 according to the first embodiment of the present invention, the screening of matched feature point groups based on the parallax proportionality relation in process s130 of FIG. 4 is implemented as screening based on the following coordinate relationships: the difference s1 = u1 − u2 between the abscissa of the feature point P1[u1, v1] in the first image and the abscissa of the feature point P2[u2, v2] in the second image and the difference s2 = u2 − u3 between the abscissa of that feature point P2[u2, v2] in the second image and the abscissa of the feature point P3[u3, v3] in the third image satisfy s1 : s2 = D1 : D2, and the feature points in the first, second and third images have the same ordinate, i.e. v1 = v2 = v3. The other processes of the three-dimensional measurement method 200 are the same as the corresponding processes of the three-dimensional measurement method 100 described above with reference to FIG. 4 and are not repeated here.
FIG. 9 shows one example, process 300, of screening matched feature point groups based on the above coordinate relationships s1 : s2 = D1 : D2 and v1 = v2 = v3. As shown, process 300 includes:
s310: selecting a first feature point and a second feature point in two of the first, second and third images, the ordinates of the first and second feature points lying within a predetermined range of a target ordinate;

s320: calculating, for the third of the first, second and third images, the expected abscissa of a third feature point that matches the first and second feature points, such that the difference s1 = u1 − u2 and the difference s2 = u2 − u3 satisfy s1 : s2 = D1 : D2; and

s330: searching for the third feature point in that third image based on the expected position formed by the expected abscissa and the target ordinate.
It should be noted that the equal sign in s1 : s2 = D1 : D2 does not limit the invention to strict equality; those skilled in the art will understand that, as long as the two sides differ within a certain range, they may be regarded as equal in the engineering sense, for example s1 : s2 = (1 + k) × D1 : D2, where k denotes the allowable relative deviation, |k| << 1, and |k| may be, for example, 0.05 or 0.01.
Process 300 can be used, for example, to scan the images row by row in a certain order of ordinate (increasing or decreasing) to screen matched feature point groups. Process 300 is described below in more detail with reference to FIG. 7, taking as an example the case where the first, second and third cameras are arranged at equal intervals (i.e. D1 = D2). It should be understood that the following description is merely illustrative and not limiting.
In process s310, among the J feature points P2[u2(j), v2(j)] (1 ≤ j ≤ J) in the second image IM2 having the target ordinate vt, one feature point P2[u2, v2] (the first feature point) is selected, its ordinate satisfying v2(j) = vt. The first image IM1 and the third image IM3 are then searched for feature points P1[u1(i), v1(i)] and P3[u3(k), v3(k)] having the target ordinate, i.e. v1(i) = v3(k) = vt, yielding I qualifying feature points P1[u1(i), v1(i)], 1 ≤ i ≤ I, in the first image IM1 and K qualifying feature points P3[u3(k), v3(k)], 1 ≤ k ≤ K, in the third image IM3 (if I or K is zero, the search ends and a new first feature point is selected). Assuming I ≥ K, in order to reduce the number of searches, the search starts from the third image IM3, i.e. one feature point P3[u3, v3] (the second feature point) is selected in the third image IM3 (if I < K, the search starts from the first image instead).
The selection of the first feature point from the second image in the above description is merely exemplary; the invention is not limited as to which image the first feature point is selected from. For example, in other examples, the numbers I, J and K of feature points meeting the ordinate requirement in the respective images may first be compared, the first feature point selected in the image with the fewest such feature points, the second feature point in the image with the next fewest, and the third feature point finally searched for in the image with the most.
Taking image error/noise into account, in process s310 the feature points may be searched for and selected within a predetermined ordinate range of vt − ε to vt + ε, where ε is an integer greater than or equal to 0 that may be determined according to, for example, the mounting and usage conditions of the camera array or the quality of the images; preferably, 0 ≤ ε ≤ 2.
Then, in process s320, symmetry is used to calculate the abscissa position (expected abscissa ue) in the first image IM1 at which a feature point P1 (the third feature point) matching feature point P2[u2, v2] and feature point P3[u3, v3] is expected to appear, such that the difference s1 = ue − u2 and the difference s2 = u2 − u3 satisfy s1 : s2 = D1 : D2 = 1, i.e. ue = 2u2 − u3. It can be seen that in this case the abscissas of the feature points have a symmetric coordinate relationship.
Next, in process s330, from the expected abscissa ue of the expected feature point P1 and the target ordinate vt, the expected position [ue, vt] of feature point P1 is obtained, and the first image IM1 is searched around this expected position for the presence of feature point P1. Again taking image error/noise into account, in some examples an abscissa tolerance εu and an ordinate tolerance εv may be set, and feature point P1 searched for in the first image IM1 within the range [ue − εu to ue + εu, vt − εv to vt + εv] (see, for example, the range indicated by the dashed line in image IM1 of FIG. 7). Preferably, 0 ≤ εu ≤ 3 and 0 ≤ εv ≤ 3. In other examples, only one of the abscissa tolerance and the ordinate tolerance may be set; details are not repeated here.
If the expected feature point P1 is found in process s330, feature points P1, P2 and P3 form one matched feature point group. If the expected feature point P1 is not found, the selected first feature point (P2) and second feature point (P3) cannot be a match, and this round of screening ends; the next second feature point may then be selected and processes s320 and s330 repeated. After all second feature points have been traversed, a new first feature point is selected and the above processing is similarly repeated.
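Process 300 as just walked through can be sketched as follows (a minimal illustration with hypothetical point lists and tolerances; the symmetric case D1 : D2 = 1 is assumed, so the expected abscissa is ue = 2·u2 − u3):

```python
# Minimal sketch of process 300 for the symmetric case D1:D2 = 1
# (u_e = 2*u2 - u3). Feature-point lists and tolerances are illustrative.

def match_triples(pts1, pts2, pts3, v_t, eps=0, eps_u=1, eps_v=1):
    """pts1/pts2/pts3: lists of (u, v) feature points of images IM1, IM2, IM3."""
    groups = []
    on_row = lambda pts: [p for p in pts if abs(p[1] - v_t) <= eps]
    for (u2, v2) in on_row(pts2):              # first feature point (in IM2)
        for (u3, v3) in on_row(pts3):          # second feature point (in IM3)
            u_e = 2 * u2 - u3                  # expected abscissa in IM1 (s320)
            for (u1, v1) in pts1:              # search around [u_e, v_t] (s330)
                if abs(u1 - u_e) <= eps_u and abs(v1 - v_t) <= eps_v:
                    groups.append(((u1, v1), (u2, v2), (u3, v3)))
    return groups

# One object point imaged at u1=120, u2=110, u3=100 on row v=50:
g = match_triples([(120, 50)], [(110, 50)], [(100, 50)], v_t=50)
print(g)  # → [((120, 50), (110, 50), (100, 50))]
```

For the general case D1 : D2 = 1 : R discussed next, the only change is the expected-abscissa line, which becomes `u_e = round((1 + R) * u2 - R * u3)` since image coordinates are integers.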
Process 300 has been described above taking the case of cameras arranged on a straight line with a symmetric distribution (D1 : D2 = 1) as an example. For the case D1 : D2 = 1 : R with R ≠ 1, the operation of process 300 is similar, except that when calculating the expected abscissa of the third feature point, taking the specific situation discussed above as an example, ue = (1 + R) × u2 − R × u3; since image coordinates are integers, the calculated value must be rounded to obtain the required expected abscissa, i.e. ue = round{(1 + R) × u2 − R × u3}, where "round" denotes the rounding operation.
FIG. 9 shows only one example of screening matched feature point groups based on the coordinate relationships. Where candidate matched feature point groups have already been obtained by other means (for example, by similarity calculation on pixels or neighborhood pixel groups), the feature point groups satisfying the coordinate relationships s1 : s2 = D1 : D2 and v1 = v2 = v3 may also be screened by verifying whether the abscissas of the feature points of each candidate group satisfy s1 : s2 = D1 : D2 within a certain tolerance and whether the ordinates of the feature points lie within the allowed range.
In the three-dimensional measurement method 200 according to the first embodiment of the present invention, the processing based on the parallax proportionality relation among the three cameras is reduced to processing based on the coordinate relationships of corresponding feature points in the images. In matching feature points and screening matched feature point groups, compared with the traditional method of matching feature points by similarity calculation on pixels or neighborhood pixel groups, the former mainly performs additions and subtractions on coordinates and a small number of multiplications (in the case of D1 : D2 = 1, only additions and subtractions are needed), whereas the latter usually requires intensive multiplication, such as convolution of matrices; the former therefore greatly reduces the amount of computation in the matching process. This significant reduction in computation is of great importance for three-dimensional measurement and reconstruction based on high-definition images and for real-time three-dimensional measurement and reconstruction, and makes the latter feasible.
FIG. 10 schematically compares feature point matching based on similarity calculation with feature point matching based on the coordinate relationship s1 : s2 = D1 : D2 in the three-dimensional measurement method 200 according to the first embodiment. FIG. 10(a) shows that, for a certain feature point Pl1 in one image obtained by, for example, a binocular camera, three candidate feature points Pr1, Pr2 and Pr3 whose own attributes and neighborhood attributes are very similar to it are found in the other image. Under the uniqueness requirement for feature point matching in three-dimensional reconstruction, such a non-unique matching result can only be abandoned. Moreover, it is conceivable that, owing to image noise and other causes, the incorrectly matching feature point Pr3 may exhibit greater similarity to feature point Pl1 than the correctly matching feature point Pr1, so that screening feature point groups based on similarity calculation on pixels or neighborhood pixel groups may yield wrong matching results. FIG. 10(b) shows that, for images from the first, second and third cameras arranged as in FIG. 5, the unique matched feature point group (P11, P21, P31) can be screened out based on the coordinate relationship s1 : s2 = D1 : D2, excluding the other two candidate feature points P32 and P33 with which matches might occur on the basis of attribute similarity. It should be noted that the equal sign in s1 : s2 = D1 : D2 does not limit the invention to strict equality, which might otherwise make it impossible to find matched feature point groups; those skilled in the art will understand that, as long as the two sides differ within a certain range, they may be regarded as equal in the engineering sense, for example s1 : s2 = (1 + k) × D1 : D2, where k denotes the allowable relative deviation, |k| << 1, and |k| may be, for example, 0.05 or 0.01.
It can be seen that feature point matching based on the coordinate relationships helps to eliminate wrong results produced by similarity-based matching, improving the matching accuracy. At the same time, because a unique matching result can be obtained with higher probability, matching failures caused by non-unique results are avoided, so feature point matching based on the parallax proportionality relation and the above coordinate relationships also helps to raise the matching rate and thus to achieve a high-density feature point cloud.
FIG. 11 is a flowchart showing one example of a three-dimensional measurement method according to the first embodiment of the present invention, three-dimensional measurement method 400. The three-dimensional measurement method 400 includes:
s410:接收分别来自第一相机、第二相机和第三相机的第一图像、第二图像和第三图像;S410: receiving a first image, a second image, and a third image from the first camera, the second camera, and the third camera, respectively;
s420:对第一图像、第二图像和第三图像进行校正处理,如以上 所讨论的,该校正处理使得第一图像、第二图像和第三图像中对应于第一相机、第二相机和第三相机的光轴的点具有相同的横坐标和纵坐标,并且第一图像、第二图像和第三图像的横坐标方向都对应于所述直线的方向;S420: performing a correction process on the first image, the second image, and the third image, as discussed above, such that the first image, the second image, and the third image correspond to the first camera, the second camera, and The points of the optical axis of the third camera have the same abscissa and ordinate, and the abscissa directions of the first image, the second image, and the third image all correspond to the direction of the straight line;
s430:在第一图像、第二图像和第三图像中提取特征点;S430: extract feature points in the first image, the second image, and the third image;
s440:对第一图像、第二图像和第三图像中的特征点进行匹配,该匹配包括:S440: Matching feature points in the first image, the second image, and the third image, the matching includes:
s441:基于以下坐标关系筛选匹配特征点组:第一图像中的特征点P 1[u 1,v 1]的横坐标和第二图像中的特征点P 2[u 2,v 2]的横坐标的差值s 1=u 1-u 2与第二图像中的所述特征点P 2[u 2,v 2]的横坐标和第三图像中的特征点P 3[u 3,v 3]的横坐标的差值s 2=u 2-u 3满足s 1:s 2=D 1:D 2,并且第一图像、第二图像和第三图像中的所述特征点具有相同的纵坐标,即v 1=v 2=v 3;以及 S441: Filter matching feature point groups based on the following coordinate relationship: the abscissa of the feature points P 1 [u 1 , v 1 ] in the first image and the horizontal of the feature points P 2 [u 2 , v 2 ] in the second image The difference s 1 = u 1 -u 2 of the coordinates and the abscissa of the feature point P 2 [u 2 , v 2 ] in the second image and the feature point P 3 [u 3 , v 3 in the third image The difference s 2 = u 2 - u 3 of the abscissa satisfies s 1 : s 2 = D 1 : D 2 , and the feature points in the first image, the second image, and the third image have the same vertical Coordinates, ie v 1 = v 2 = v 3 ;
s442:对于通过处理s441获得的匹配特征点组,基于对像素或邻域的像素群的相似性计算,进一步进行筛选;S442: further performing screening based on the similarity calculation of the pixel group of the pixel or the neighborhood for the matching feature point group obtained by processing s441;
s450:计算匹配特征点组对应的物点的三维坐标。S450: Calculate the three-dimensional coordinates of the object points corresponding to the matching feature point group.
Process s441 may be implemented as, for example, process 300 shown in FIG. 9, but is not limited thereto.
Process s420 may be a correction process implemented by any correction method, including one implemented by applying a certain correction matrix to the images.
According to the present invention, the parallax proportionality relation can also be applied in the correction process; for example, the correction matrix used for the correction process can be generated based on the parallax proportionality relation. As one example, generating the matrix for the correction process based on the parallax proportionality relation may include: extracting feature points in each of the first, second and third images, for example a sparse lattice of feature points via the SIFT algorithm; matching the feature points of the first, second and third images to obtain a plurality of matched feature point groups, for example via the RANSAC algorithm; using the coordinates of the feature points of the matched groups in each image, establishing an overdetermined system of equations from the parallax proportionality relation that the feature points of each matched group must satisfy after the correction matrix is applied to each image; and solving the overdetermined system, for example by least squares, to obtain the correction matrix.
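The least-squares step can be illustrated as follows. This is a sketch under deliberately simplified assumptions, not the patent's actual correction model: here each image is corrected only by an unknown row offset (b2 for IM2, b3 for IM3) relative to IM1, and the constraint v1 = v2 + b2 = v3 + b3 over many matched groups yields an overdetermined linear system:

```python
# Sketch: solving an overdetermined system by least squares to estimate
# correction parameters. Simplified illustrative model, not the patent's:
# unknown row offsets b2, b3 align images IM2, IM3 to IM1 so that
# v1 = v2 + b2 = v3 + b3 holds for every matched feature point group.
import numpy as np

rng = np.random.default_rng(0)
n = 50
v1 = rng.uniform(0, 480, n)             # ordinates of matched points in IM1
v2 = v1 - 3.0 + rng.normal(0, 0.2, n)   # IM2 rows offset by -3 (plus noise)
v3 = v1 + 1.5 + rng.normal(0, 0.2, n)   # IM3 rows offset by +1.5 (plus noise)

# Equations: b2 = v1 - v2 (n rows) and b3 = v1 - v3 (n rows) -> A @ [b2, b3] = y
A = np.vstack([np.column_stack([np.ones(n), np.zeros(n)]),
               np.column_stack([np.zeros(n), np.ones(n)])])
y = np.concatenate([v1 - v2, v1 - v3])
(b2, b3), *_ = np.linalg.lstsq(A, y, rcond=None)
print(b2, b3)  # ≈ 3.0 and -1.5
```

A full correction matrix would also involve rotation, scale and abscissa terms, but the structure is the same: each matched group contributes equations, and the redundant system is solved in the least-squares sense.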
In the three-dimensional measurement method 400, matched feature point groups are first screened by process s441, so that the number of feature points entering the subsequent similarity-based matching process s442 is greatly reduced, significantly reducing the computation involved in feature point matching. In addition, the combined use of processes s441 and s442 also helps to improve the accuracy and the rate of feature point matching, enabling a higher-density feature point cloud to be obtained.
FIGS. 12 to 14 illustrate a three-dimensional measurement system and a three-dimensional measurement method according to a second embodiment of the present invention.
FIG. 12 shows the camera array CA in the three-dimensional measurement system according to the second embodiment of the present invention, in which the first camera C1, the second camera C2 and the third camera C3 have the same focal length and mutually parallel optical axes (the Z direction); the optical centers O1 and O2 of the first camera C1 and the second camera C2 are aligned along the X direction, and the optical centers O2 and O3 of the second camera C2 and the third camera C3 are aligned along the Y direction. The offset of the optical center of the second camera relative to that of the first camera in the X direction is D1, and the offset of the optical center of the third camera relative to that of the second camera in the Y direction is D2.
Referring to the discussion above in connection with FIG. 1C, and as shown in FIG. 12, the first disparity produced between the first camera and the second camera in the x-axis direction is d1 = x1 - x2 = D1 × f / Z, and the second disparity produced between the second camera and the third camera in the y-axis direction is d2 = y2 - y3 = D2 × f / Z, so that d1 : d2 = D1 : D2. Likewise, those skilled in the art will understand that when the two sides of the equation differ within a certain range, they can still be regarded as equal in the engineering sense. For example, this can be expressed as d1 : d2 = (1 + k) × D1 : D2, where k denotes the allowable relative deviation, |k| << 1, and |k| can be, for example, 0.05 or 0.01.
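This engineering-sense equality can be written as a small numerical check (function and parameter names are illustrative, not from the patent):

```python
def ratios_equal(d1, d2, D1, D2, k=0.05):
    """Engineering-sense equality d1:d2 = D1:D2, allowing relative deviation k.

    Equivalent to d1:d2 = (1+e)*D1:D2 with |e| <= k, assuming d2, D1, D2
    are nonzero.
    """
    e = (d1 * D2) / (d2 * D1) - 1.0
    return abs(e) <= k

# Example: with D1 = 50 mm, D2 = 100 mm, f/Z ratio such that d1 = 25 and
# d2 = 50 pixels, the ideal ratio d1:d2 = D1:D2 holds exactly.
```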
FIG. 13 schematically shows an example of the first image IM1, the second image IM2 and the third image IM3 obtained by the camera array shown in FIG. 12, where the abscissa axis (u axis) of the images corresponds to the direction of the line through the optical centers of the first and second cameras (x-axis direction), and the ordinate axis (v axis) corresponds to the direction of the line through the optical centers of the second and third cameras (y direction). Ideally, as shown by the imaginary image IM' obtained in FIG. 13 by superimposing the first image IM1, the second image IM2 and the third image IM3, the image points P1[u1, v1] and P2[u2, v2] of the same object point P[X, Y, Z] in images IM1 and IM2 have the same ordinate, i.e. v1 = v2; the image points P2[u2, v2] and P3[u3, v3] in images IM2 and IM3 have the same abscissa, i.e. u2 = u3; the abscissa difference of the image points P1[u1, v1] and P2[u2, v2] is s3 = u1 - u2 = d1 / dp; and the ordinate difference of the image points P2[u2, v2] and P3[u3, v3] is s4 = v2 - v3 = d2 / dp, so that s3 : s4 = d1 : d2 = D1 : D2, where dp is the side length of a pixel unit of the image sensor.
Likewise, when the two sides of the equation differ within a certain range, they can still be regarded as equal in the engineering sense. For example, this can be expressed as s3 : s4 = (1 + k) × D1 : D2, where k denotes the allowable relative deviation, |k| << 1, and |k| can be, for example, 0.05 or 0.01.
The three-dimensional measurement method according to this embodiment may include a correction process applied to the first, second and third images. The correction process makes the points in the first, second and third images that correspond to the optical axes of the first, second and third cameras have the same abscissa and ordinate, and makes the abscissa direction of the three images correspond to the direction of the line connecting the optical centers of the first and second cameras and their ordinate direction correspond to the direction of the line connecting the optical centers of the second and third cameras.
Based on a camera array arranged as shown in FIG. 12, in the three-dimensional measurement method according to the second embodiment of the present invention, the process of screening matched feature point groups based on the disparity ratio relationship in process s130 of FIG. 4 is implemented to include screening matched feature point groups based on the following coordinate relationship: the difference s3 = u1 - u2 between the abscissa of a feature point P1[u1, v1] in the first image and the abscissa of a feature point P2[u2, v2] in the second image, and the difference s4 = v2 - v3 between the ordinate of the feature point P2[u2, v2] in the second image and the ordinate of a feature point P3[u3, v3] in the third image, satisfy s3 : s4 = D1 : D2, with v1 = v2 and u2 = u3. The other processes of the three-dimensional measurement method according to this embodiment may be the same as or similar to the corresponding processes of the three-dimensional measurement method 100 described above with reference to FIG. 4, and are not repeated here.
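This screening rule can be sketched as a predicate over a candidate group. The sketch assumes feature points are plain (u, v) pixel coordinates and uses a per-axis pixel tolerance `tol` in place of strict equality and a relative deviation `k` for the ratio; all names are illustrative.

```python
def is_candidate_group(p1, p2, p3, D1, D2, k=0.05, tol=1.0):
    """Screen a candidate matched feature point group (p1, p2, p3), with
    pi = (ui, vi), for the L-shaped array of FIG. 12:
      v1 == v2 and u2 == u3 (within tol pixels), and
      s3 : s4 == D1 : D2 within relative deviation k,
    where s3 = u1 - u2 and s4 = v2 - v3.
    """
    (u1, v1), (u2, v2), (u3, v3) = p1, p2, p3
    if abs(v1 - v2) > tol or abs(u2 - u3) > tol:
        return False
    s3, s4 = u1 - u2, v2 - v3
    if s4 == 0:
        return False
    return abs((s3 * D2) / (s4 * D1) - 1.0) <= k
```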
FIG. 14 shows an example, process 500, of screening matched feature point groups based on the above coordinate relationship. As shown in FIG. 14, process 500 includes:
s510: selecting a first feature point and a second feature point in the first image and the second image respectively, the ordinates of the first and second feature points lying within a predetermined range of a target ordinate;
s520: calculating the difference s3 = u1 - u2 between the abscissa of the first feature point and the abscissa of the second feature point;
s530: calculating the difference s4 such that s3 : s4 = D1 : D2;
s540: calculating the expected ordinate of a third feature point in the third image that matches the first and second feature points, such that the difference between the ordinate of the second feature point and the expected ordinate of the third feature point equals the second difference s4 calculated above; and
s550: searching the third image for the third feature point based on the expected position formed by the expected ordinate of the third feature point and the abscissa of the second feature point.
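Steps s520 to s550 can be sketched as follows. This is only an illustration, assuming feature points are plain (u, v) tuples and that the search of s550 is a linear scan over the third image's feature list with a per-axis tolerance; names are illustrative, not the patent's implementation.

```python
def expected_third_position(p1, p2, D1, D2):
    """Steps s520-s540 for the L-shaped array of FIG. 12:
    s3 = u1 - u2, s4 chosen so that s3:s4 = D1:D2, expected ordinate
    v3 = v2 - s4, expected abscissa u3 = u2 (since ideally u2 == u3)."""
    (u1, v1), (u2, v2) = p1, p2
    s3 = u1 - u2
    s4 = s3 * D2 / D1
    return (u2, v2 - s4)

def search_third(points3, expect, tol=1.0):
    """Step s550: scan the third image's feature list near the expected
    position, within tol pixels per axis."""
    ue, ve = expect
    return [p for p in points3 if abs(p[0] - ue) <= tol and abs(p[1] - ve) <= tol]
```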
For the operation in process s510 of selecting feature points within a predetermined range of a target coordinate, reference may be made to the operations described above for process s310 of process 300. In addition, similarly to process 300, a tolerance range may also be set around the expected position in process s550 of process 500, which is not repeated here.
It should be noted that, since the abscissa and ordinate of an image in the present application denote two mutually perpendicular directions, in this embodiment the first camera and the third camera actually have equivalent positional relationships with respect to the second camera. Therefore, in process s510, "selecting a first feature point and a second feature point in the first image and the second image, the ordinates of the first and second feature points lying within a predetermined range of a target ordinate" is not limited to first selecting the first and second feature points in the two horizontally aligned images.
In some preferred implementations, after a feature point is selected in the second image, the first image may be searched for feature points with the same ordinate as that feature point and the third image for feature points with the same abscissa, and the subsequent search order may then be decided based on the numbers of feature points found in the first and third images. For example, when the number of feature points with the same ordinate in the first image is greater than the number of feature points with the same abscissa in the third image, a feature point may next be selected from the third image, and the expected position of the matching feature point in the first image may then be calculated and searched. Process s510 is intended to cover this situation.
Process 500 can be used to traverse, in a certain order (for example row by row or column by column), the feature points in, for example, the second image, and to screen the matching feature points in the first and third images, thereby screening the matched feature point groups.
From the viewpoint of technical effect, similarly to the three-dimensional measurement method according to the first embodiment of the present invention, the three-dimensional measurement method according to the second embodiment greatly reduces the amount of computation in the feature point matching procedure compared with feature point matching methods based on similarity computation over pixels or neighborhood pixel groups, which helps to improve the spatial accuracy and real-time performance of the three-dimensional measurement. Meanwhile, the method according to this embodiment can be combined with such a similarity-based feature point matching method; in that case, the disparity-ratio-based feature point matching (screening of matched feature point groups) can effectively eliminate the false matches produced by similarity-based matching, improving the matching accuracy, and also helps to avoid matching failures caused by non-unique matching results, thereby improving the matching rate and yielding a higher-density feature point cloud.
FIGS. 15 to 18 illustrate a three-dimensional measurement system and a three-dimensional measurement method according to a third embodiment of the present invention.
FIG. 15 shows a camera array CA in a three-dimensional measurement system according to the third embodiment of the present invention, in which the first camera C1, the second camera C2 and the third camera C3 have the same focal length and mutually parallel optical axes (Z direction); the three cameras are arranged in a triangle, with their optical centers O1, O2, O3 lying in the same plane perpendicular to the optical axes, and the optical centers O1, O2 of the first and second cameras aligned along the X direction. The offset of the optical center of the second camera relative to that of the first camera in the X direction is D1, and the offset of the optical center of the third camera relative to that of the second camera in the X direction is D2.
Referring to the discussion above in connection with FIG. 1C, and as shown in FIG. 15, in the x-axis direction the first disparity produced between the first camera and the second camera is d1 = x1 - x2 = D1 × f / Z, and the second disparity produced between the second camera and the third camera is d2 = x2 - x3 = D2 × f / Z, so that d1 : d2 = D1 : D2.
FIG. 16 schematically shows an example of the first image IM1, the second image IM2 and the third image IM3 obtained by the camera array shown in FIG. 15, where the abscissa axis (u axis) of the images corresponds to the direction of the line through the optical centers of the first and third cameras (x-axis direction). Ideally, as shown in FIG. 16, the same object point P[X, Y, Z] yields image points P1[u1, v1], P2[u2, v2], P3[u3, v3] in images IM1, IM2, IM3 respectively, as shown in the imaginary image IM', where the corresponding image points in images IM1 and IM3 have the same ordinate, i.e. v1 = v3, the abscissa difference of the image points P1[u1, v1] and P2[u2, v2] is s5 = u1 - u2 = d1 / dp, and the abscissa difference of the image points P2[u2, v2] and P3[u3, v3] is s6 = u2 - u3 = d2 / dp, so that s5 : s6 = d1 : d2 = D1 : D2, where dp is the side length of a pixel unit of the image sensor. Likewise, when the two sides of the equation differ within a certain range, they can still be regarded as equal in the engineering sense. For example, this can be expressed as s5 : s6 = (1 + k) × D1 : D2, where k denotes the allowable relative deviation, |k| << 1, and |k| can be, for example, 0.05 or 0.01.
The three-dimensional measurement method according to this embodiment may include a correction process applied to the first, second and third images. The correction process makes the points in the first, second and third images that correspond to the optical axes of the first, second and third cameras have the same abscissa and ordinate, and makes the abscissa direction of all three images correspond to the direction of the line connecting the optical centers of the first and third cameras.
Based on a camera array arranged as shown in FIG. 15, in the three-dimensional measurement method according to the third embodiment of the present invention, the process of screening matched feature point groups based on the disparity ratio relationship in process s130 of FIG. 4 is implemented to include screening matched feature point groups based on the following coordinate relationship: the difference s5 = u1 - u2 between the abscissa of a feature point P1[u1, v1] in the first image and the abscissa of a feature point P2[u2, v2] in the second image, and the difference s6 = u2 - u3 between the abscissa of the feature point P2[u2, v2] in the second image and the abscissa of a feature point P3[u3, v3] in the third image, satisfy s5 : s6 = D1 : D2, and the feature points in the first and third images have the same ordinate, i.e. v1 = v3. The other processes of the three-dimensional measurement method according to this embodiment are the same as the corresponding processes of the three-dimensional measurement method 100 described above with reference to FIG. 4, and are not repeated here.
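This screening rule for the array of FIG. 15 can likewise be sketched as a predicate. As before, the sketch assumes (u, v) pixel coordinates, a per-axis pixel tolerance `tol` in place of strict ordinate equality, and a relative deviation `k` for the ratio; all names are illustrative.

```python
def is_candidate_group_collinear(p1, p2, p3, D1, D2, k=0.05, tol=1.0):
    """Screen a candidate matched feature point group (p1, p2, p3), with
    pi = (ui, vi), for the array of FIG. 15:
      v1 == v3 (within tol pixels), and
      s5 : s6 == D1 : D2 within relative deviation k,
    where s5 = u1 - u2 and s6 = u2 - u3.
    """
    (u1, v1), (u2, v2), (u3, v3) = p1, p2, p3
    if abs(v1 - v3) > tol:
        return False
    s5, s6 = u1 - u2, u2 - u3
    if s6 == 0:
        return False
    return abs((s5 * D2) / (s6 * D1) - 1.0) <= k
```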
FIG. 17 shows an example, process 600, of screening matched feature point groups based on the above coordinate relationship. As shown in FIG. 17, process 600 includes:
s610: selecting a first feature point and a second feature point in the first image and the third image respectively, the ordinate of the second feature point lying within a predetermined range of the ordinate of the first feature point;
s620: calculating the expected abscissa of a third feature point in the second image that satisfies the following relationship with the first and second feature points: the difference s5 and the difference s6 satisfy s5 : s6 = D1 : D2; and
s630: searching the second image for the third feature point based on the expected abscissa.
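Because s5 = u1 - u2 and s6 = u2 - u3 are both linear in u2, the expected abscissa of step s620 has a closed form: D2 × (u1 - u2) = D1 × (u2 - u3) gives u2 = (D2 × u1 + D1 × u3) / (D1 + D2). A minimal sketch (names illustrative):

```python
def expected_abscissa_u2(u1, u3, D1, D2):
    """Step s620: expected abscissa of the third feature point (in the
    second image), from D2*(u1 - u2) = D1*(u2 - u3), i.e. s5:s6 = D1:D2
    holding exactly:
        u2 = (D2*u1 + D1*u3) / (D1 + D2)
    """
    return (D2 * u1 + D1 * u3) / (D1 + D2)
```

When D1 = D2, the expected abscissa is simply the midpoint of u1 and u3.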
For the operation in process s610 of selecting feature points within a predetermined range of a target coordinate, reference may be made to the operations described above for process s310 of process 300. In addition, similarly to process 300, a tolerance range may also be set around the expected abscissa in process s630 of process 600 (see the range indicated by the dashed lines around feature point P2 in FIG. 16), which is not repeated here.
It should be noted that, compared with process 300 shown in FIG. 9, process 600 does not constrain the ordinate range of the third feature point in the second image; therefore, when the second image is searched for matching feature points based on the expected abscissa, the resulting matches are more likely to be non-unique than in the first and second embodiments. In view of this, it is preferable to combine the three-dimensional measurement method according to the third embodiment of the present invention with a feature point matching method based on similarity computation over pixels or neighborhood pixel groups.
FIG. 18 shows an example of a three-dimensional measurement method according to the third embodiment of the present invention, three-dimensional measurement method 700. As shown, the three-dimensional measurement method 700 includes:
s710: receiving a first image, a second image and a third image from the first camera, the second camera and the third camera respectively;
s720: performing a correction process on the first, second and third images; as discussed above, the correction process makes the points in the three images that correspond to the optical axes of the first, second and third cameras have the same abscissa and ordinate, and makes the abscissa direction of the three images correspond to the direction of the line connecting the optical centers of the first and third cameras;
s730: extracting feature points in the first, second and third images;
s740: matching the feature points in the first, second and third images, the matching including:
s741: screening matched feature point groups based on similarity computation over pixels or neighborhood pixel groups; and
s742: screening matched feature point groups based on the following coordinate relationship: the difference s5 = u1 - u2 between the abscissa of a feature point P1[u1, v1] in the first image and the abscissa of a feature point P2[u2, v2] in the second image, and the difference s6 = u2 - u3 between the abscissa of the feature point P2[u2, v2] in the second image and the abscissa of a feature point P3[u3, v3] in the third image, satisfy s5 : s6 = D1 : D2, and the feature points in the first and third images have the same ordinate, i.e. v1 = v3; and
s750: calculating the three-dimensional coordinates of the object points corresponding to the matched feature point groups.
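The matching stage s740 can be sketched as a brute-force enumeration over feature point triples. This is only an illustration (a real implementation would index feature points by image row to avoid the triple loop), with `similar` standing in for whatever pixel/neighborhood similarity test is used in s741; all names are illustrative.

```python
def match_feature_points(pts1, pts2, pts3, D1, D2, similar, k=0.05, tol=1.0):
    """Sketch of process s740 for the array of FIG. 15: keep a triple only
    if it passes the similarity screen (s741) and the coordinate-relationship
    screen (s742). `similar(p, q)` is a caller-supplied predicate."""
    groups = []
    for p1 in pts1:
        for p2 in pts2:
            for p3 in pts3:
                if not (similar(p1, p2) and similar(p2, p3)):        # s741
                    continue
                (u1, v1), (u2, v2), (u3, v3) = p1, p2, p3
                if abs(v1 - v3) > tol:                               # v1 == v3
                    continue
                s5, s6 = u1 - u2, u2 - u3
                if s6 == 0 or abs((s5 * D2) / (s6 * D1) - 1.0) > k:  # s742
                    continue
                groups.append((p1, p2, p3))
    return groups
```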
Process s720 may be a correction process implemented with any correction method, including a correction process implemented by applying a certain correction matrix to the images.
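When the correction matrix is a 3×3 planar transform applied to pixel coordinates (an assumption for illustration; the patent does not fix the matrix's form), applying it follows the usual homogeneous-coordinate pattern:

```python
import numpy as np

def apply_correction(H, uv):
    """Apply a 3x3 correction matrix H to pixel coordinates.
    uv is an (N, 2) array; returns the corrected (N, 2) coordinates after
    the usual homogeneous normalization."""
    uv1 = np.hstack([uv, np.ones((len(uv), 1))])  # lift to homogeneous coords
    w = uv1 @ H.T
    return w[:, :2] / w[:, 2:3]                   # divide out the scale
```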
In addition, it should be understood that the three-dimensional measurement method 700 is not limited to using similarity computation over pixels or neighborhood pixel groups in process s741; this may instead be replaced with other existing, or later developed, processes for screening matched feature point groups or matching feature points.
Process s742 may be implemented, for example, as process 600 shown in FIG. 17, but is not limited thereto.
By adopting process s742, the three-dimensional measurement method 700 according to the third embodiment of the present invention can help improve the accuracy and the matching rate of feature point matching, and helps to obtain a higher-density feature point cloud.
Although in the example shown in FIG. 18 the process of screening matched feature point groups satisfying the coordinate relationship s5 : s6 = D1 : D2 is arranged after the process of screening matched feature point groups based on similarity computation over pixels or neighborhood pixel groups, as discussed above in connection with process s130 of the three-dimensional measurement method 100 shown in FIG. 4, the order of these two processes in the three-dimensional measurement method according to the third embodiment may be swapped: the matched feature point groups satisfying the coordinate relationship s5 : s6 = D1 : D2 are screened first to obtain candidate matched feature point groups, and similarity computation is then applied to these candidates to screen the matched feature point groups further. In a three-dimensional measurement method implemented this way, the screening based on the coordinate relationship s5 : s6 = D1 : D2 greatly reduces the number of feature points entering the similarity-based screening process, so the amount of computation in the matching procedure can be effectively reduced; at the same time, as described above, it also helps to improve the accuracy and the matching rate of feature point matching and to obtain a higher-density feature point cloud.
FIG. 19 shows further arrangements of the camera array that can be used in the three-dimensional measurement system according to the present invention. As shown, the camera array may include three cameras arranged in an equilateral triangle, and may be further extended to more than three cameras arranged to form a plurality of equilateral triangles, such as the honeycomb arrangement shown in the lower right corner of FIG. 19. In addition, a camera array including three cameras arranged in a right triangle can be further extended into various other forms, such as a rectangle (including a square), a T shape, a cross, a diagonal, and extended forms built from these shapes as units.
The three-dimensional measurement method according to the present invention can be implemented based on images from three or more cameras in a camera array. Although the three-dimensional measurement systems and methods according to the first, second and third embodiments of the present invention have been described separately above, those skilled in the art will understand that these embodiments, or features thereof, can be combined to form different technical solutions. For example, in some other embodiments the camera array includes first, second, third and fourth cameras, and the three-dimensional measurement method according to the present invention may perform the disparity-ratio-based feature point matching proposed herein separately on a group of images from the first, second and third cameras and on a group of images from the first, third and fourth cameras, and combine the matching results of the two groups of images to determine the final matched feature point groups in the first and third cameras, so as to calculate the spatial positions of the corresponding object points. Such a technical solution still adopts the general inventive concept of the present invention, and the scope of the present application is intended to cover such technical solutions.
Next, returning to FIG. 3, the three-dimensional measurement system 10 according to the present invention is further described. As shown in FIG. 3, in addition to adopting different camera arrays CA, the three-dimensional measurement system 10 may further include a projection unit 14. The projection unit 14 projects a projection pattern onto the shooting area of the camera array CA, and the projection pattern can be captured by the cameras of the camera array CA. The projection pattern can add more feature points to the shooting area; in some applications it can also make the feature points more evenly distributed or compensate for the lack of feature points in some regions.
The projection pattern may include dots, lines, or combinations thereof. As their size increases, dots may also form larger spots or spots with specific shapes, and lines may form wider strips or strip patterns with other shape features. Preferably, the projection pattern includes lines whose extension direction is not parallel to the direction of the line connecting the optical centers of at least two of the first, second and third cameras; this helps provide more feature points usable in the disparity-ratio-based matching process of the three-dimensional measurement method according to the present invention.
The projection pattern can also be encoded by features such as color, intensity, shape and distribution. With an encoded projection pattern, feature points with the same code in the images obtained by the cameras are necessarily matching points. This adds a matching dimension and improves both the matching rate and the matching accuracy.
The projection unit 14 may be configured to include a light source and an optical element, such as a diffractive optical element or a grating, for forming the projection pattern from the illumination light of the light source. The illumination light emitted by the light source includes light with wavelengths within the working wavelength ranges of the first, second and third cameras; it may be monochromatic or polychromatic, and may include visible and/or invisible light, for example infrared light. In some applications the projection unit 14 may be configured with an adjustable projection direction, so as to selectively project the projection pattern onto different regions according to different shooting scenes. In addition, the projection unit 14 may be configured to project a single pattern or multiple patterns in time sequence, or to project different patterns for different shooting scenes.
In some embodiments, the three-dimensional measurement system 10 may further include a sensor 15 for detecting at least part of the pattern features projected by the projection unit 14, so as to obtain additional information usable for three-dimensional measurement. For example, in some applications the camera array CA works at visible and infrared wavelengths, the projection unit 14 projects the projection pattern at an infrared wavelength, and the sensor 15 is an infrared sensor or infrared camera. In this case, the information captured by the camera array CA includes both the image information formed by visible light and the projection pattern, which can provide more feature points for three-dimensional measurement based on binocular vision, while the projection pattern obtained by the sensor 15 can be used for measurement based on other three-dimensional measurement techniques, for example structured-light measurement. In the three-dimensional measurement system 10 according to the present invention, the information obtained by the sensor 15 can for example be sent to the correction unit 13, and the correction unit 13 can use the three-dimensional measurement results obtained from the information of the sensor 15 and from other three-dimensional measurement techniques for calibrating and/or correcting the camera array or the images it obtains.
It should be understood that the three-dimensional measurement system 10 according to the present invention can be implemented either as an integrated system or as a distributed system. For example, the camera array CA of the system may be mounted on one device while the processing unit 11 is implemented on an Internet server, thus physically separated from the camera array CA. The projection unit 14 and the sensor 15 may be mounted together with the camera array CA or arranged independently, and the situation is similar for the control unit 12. In addition, the correction unit 13 may be implemented together with the processing unit 11 by the same processor and associated memory (in which case it may form part of the three-dimensional measurement apparatus 20 represented by the dashed box in FIG. 3), or implemented separately; for example, the correction unit 13 may be implemented by a processor and associated memory integrated with the camera array CA.
Several application examples of the three-dimensional measurement system and measurement method according to the present invention are described below.
[Application Example 1]
In this application example, the three-dimensional measurement system according to the present invention is implemented as a three-dimensional measurement apparatus based on a mobile phone and an external camera module.
The camera module includes three cameras of the same model. The centers of the three cameras lie on a straight line, the distances between adjacent camera centers are equal, and the optical axes of the cameras are parallel to each other, face the same direction, and are perpendicular to the line on which the camera centers lie.
The camera module is connected to the phone via WiFi and/or a data cable. The phone can control the three cameras of the module to shoot synchronously (photos and/or video) and to zoom by equal factors.
The photos and/or videos captured by the camera module are transmitted to the phone via WiFi and/or the data cable. The phone corrects the image frames of the photos and/or videos with a correction app. The correction procedure may, for example, use a checkerboard of known grid size placed in front of the camera module. Correction based on auxiliary tools such as a checkerboard is known in the art and is not described further here; the correction method in this application example is also not limited to this particular method.
After the correction is completed, the camera module is used to shoot the scene and objects to be modeled. A projection module may be further integrated into or externally connected to the phone, for projecting stripes onto the object being photographed, with the stripe direction not parallel to the direction of the line connecting the three camera centers. A processing unit integrated in the phone (composed of the phone's processing chip and storage unit) extracts the feature points and feature regions common to the three camera photos. The feature points include not only the feature points inherent to the photographed object in the images, but also the new feature points and feature regions formed on the object by the stripes projected by the projection module. Matched feature point groups that simultaneously have the image coordinate symmetry relationship and similar attributes (similar attributes are judged by similarity computation over pixels and neighborhood pixel groups) are screened. Multiple depth values are calculated from each matched feature point group and averaged. Then the three-dimensional space coordinates of the object point corresponding to the feature point group are calculated and fused with the color information to form voxel information. Finally, the phone displays a true-color point cloud of the whole object or scene, or a three-dimensional model reconstructed from that point cloud.
In a variant, the phone can transmit the photos and/or videos received from the camera module to a cloud server. The server can rapidly correct the photos, extract feature points, match the corresponding matched feature point groups having the image coordinate symmetry relationship and similar attributes, calculate the three-dimensional coordinates of the object points corresponding to the matched feature point groups, fuse them with the color information to form voxel information, and send the true-color point cloud result, or a three-dimensional model reconstructed from the point cloud, back to the phone for display.
The system can also be fused with TOF to handle featureless images (such as a large white wall) or occlusion problems. The fusion of the two point clouds ensures both that large object targets can be sampled and that object surface details are preserved, forming a three-dimensional point cloud with more complete spatial sampling and higher resolution. On the phone side, for example, a 3D sampling module fusing "front camera + TOF" can scan a face with pixel-level accuracy, whereas current TOF alone can only scan a rough surface contour; a 3D module fusing "rear camera + TOF" in turn makes up for the limited projection distance of the TOF system. In addition, by transforming the three-dimensional point cloud formed by TOF through coordinate system conversion and projection transformation, the initial disparities of the corresponding sampling points in the images can be calculated, which can accelerate the image matching process.
The basic process of fusing the image-based three-dimensional measurement system with the TOF sensor is as follows:

1. The TOF sensor generates a three-dimensional point cloud.
2. Using the transformation between the TOF coordinate system and the reference camera coordinate system, convert the three-dimensional coordinates of the TOF point cloud into the three-dimensional coordinates of the corresponding points in the reference camera coordinate system.
3. According to the projection equation of the camera, convert the three-dimensional coordinates of the corresponding sampling points in the reference camera coordinate system into two-dimensional image coordinates and disparities.
4. Using the two-dimensional coordinates and disparity of each sampling point in the reference camera image, find the initial two-dimensional position of that point in the other camera images.
5. Taking each initial position as the center, set a search neighborhood and locate the corresponding point in the other camera images to pixel-level or sub-pixel-level accuracy.
6. Calculate the precise disparity of each matched point from the refined matching result, and then convert it into the three-dimensional coordinates of the matched point.
7. Partition the images according to the initial matching result; the region bounded by adjacent corresponding matched points is a corresponding region to be matched.
8. Within each corresponding region to be matched, match the remaining feature points across the multiple camera images.
9. Output the fusion of the TOF point cloud and the three-dimensional point cloud formed by image feature point matching, both in the reference camera coordinate system.
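Steps 1 to 4 of this process amount to a rigid transform followed by a pinhole projection. The Python sketch below illustrates them under assumed conventions (an intrinsic matrix K, a rotation R and translation t from the TOF frame to the reference camera frame, and a rectified horizontal baseline); the function name and parameters are illustrative, not taken from the specification.

```python
import numpy as np

def tof_to_image_disparity(points_tof, R, t, K, baseline):
    """Transform TOF points into the reference camera frame, project
    them with the pinhole model, and predict the disparity each point
    should have in a rectified stereo pair (d = f * B / Z)."""
    # Step 2: TOF coordinate system -> reference camera coordinate system.
    pts_cam = (R @ points_tof.T).T + t
    X, Y, Z = pts_cam[:, 0], pts_cam[:, 1], pts_cam[:, 2]
    # Step 3: pinhole projection to pixel coordinates.
    u = K[0, 0] * X / Z + K[0, 2]
    v = K[1, 1] * Y / Z + K[1, 2]
    # Predicted disparity for a rectified horizontal baseline.
    d = K[0, 0] * baseline / Z
    return np.stack([u, v, d], axis=1)
```

Each returned (u, v, d) triple is the initial position and disparity that step 4 uses to seed the search in the other camera images.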
The present system can also be fused with structured light to handle featureless images (such as a large white wall). Structured light can form high-density features on the surface of an object, and its use is relatively flexible: it can merely add feature points to the object surface (allowing the projected pattern to repeat at different positions), with the three-dimensional point cloud produced by multi-camera matching; or it can project a coded pattern (ensuring that the code of each point, or of its neighborhood, is unique), with the three-dimensional point cloud solved by the triangulation principle between the camera and the projection device.
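As a concrete illustration of the triangulation principle mentioned above, the sketch below recovers a three-dimensional point from a pixel and its disparity, assuming a pinhole model with a rectified horizontal baseline (when a coded projector is used, it plays the role of the second camera). All names here are illustrative.

```python
def triangulate_from_disparity(u, v, d, K, baseline):
    """Rectified-stereo triangulation sketch: depth Z = f * B / d,
    then back-project the pixel into the reference camera frame."""
    fx, fy = K[0][0], K[1][1]
    cx, cy = K[0][2], K[1][2]
    Z = fx * baseline / d      # depth from disparity
    X = (u - cx) * Z / fx      # back-projected abscissa
    Y = (v - cy) * Z / fy      # back-projected ordinate
    return (X, Y, Z)
```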
In addition, a combined "camera + TOF + structured light" scheme can be formed, yielding an ultra-high-density, ultra-high-resolution three-dimensional point cloud suitable for application scenarios such as VR/AR games or films that require highly realistic three-dimensional virtual objects.
[Application Example 2]
In this application example, the three-dimensional measurement system according to the present invention is implemented as a device for autonomous driving of a vehicle.
The camera module is mounted at the front of the vehicle and comprises three cameras of the same model. The centers of the three cameras lie on a straight line, with equal distances between adjacent camera centers; the optical axes of the cameras are parallel to each other, face the same direction, and are all perpendicular to the line on which the camera centers lie. The camera module is connected to the on-board computer via a data cable. The on-board computer can control the three cameras of the module to shoot synchronously (photos and/or video) and to zoom by the same factor.
The photos and/or videos captured by the camera module are transmitted to the on-board computer via the data cable. The on-board computer generates a rectification matrix from multiple images (photos, or frames of video) taken at the same time. After rectification, the on-board computer extracts the feature points and feature regions shared by the images from the three cameras, selects the matched feature point groups that simultaneously satisfy the image coordinate symmetry relationship and have similar attributes, calculates the three-dimensional coordinates of the object points corresponding to the matched feature point groups, fuses them with color information, and outputs a true-color three-dimensional point cloud, which is pushed to the decision system for autonomous driving of the vehicle.
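For three identical, equally spaced collinear cameras such as the rig described above, the abscissas of a correctly matched feature point form an arithmetic progression across the three rectified images, and its ordinates coincide. A minimal sketch of such an "image coordinate symmetry" test follows; the function name and tolerance are illustrative, not from the specification.

```python
def is_symmetric_match(p_left, p_mid, p_right, tol=1.0):
    """Check the symmetry relation for a candidate feature point group
    from three equally spaced collinear cameras: equal ordinates, and
    equal left-mid and mid-right disparities (within a tolerance)."""
    (xl, yl), (xm, ym), (xr, yr) = p_left, p_mid, p_right
    rows_agree = abs(yl - ym) <= tol and abs(ym - yr) <= tol
    disparity_symmetric = abs((xl - xm) - (xm - xr)) <= tol
    return rows_agree and disparity_symmetric
```

Candidate groups failing this test can be discarded before the (more expensive) attribute comparison and triangulation.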
The present system can also be fused with lidar to handle featureless images (such as a large white wall) or occlusions. Lidar is well suited to spatial sampling of large objects and can form a spatial point cloud even when the image has no features. It is, however, limited by its spatial sampling rate: a utility pole tens of meters away, for example, may be missed when its angular extent is smaller than the lidar's angular sampling resolution. The image system, by contrast, is sensitive to the various features of an object, including edge features. Fusing the two point clouds ensures that large targets are sampled while remaining compatible with surface detail and with small objects, yielding a three-dimensional point cloud with more complete spatial sampling and higher resolution. In addition, spatially converting the lidar point cloud into initial disparities for the corresponding sampling points on the image can accelerate the image matching process.
The basic process of fusing the image-based three-dimensional measurement system with the lidar is as follows:

1. The lidar generates a three-dimensional point cloud.
2. Using the transformation between the lidar coordinate system and the reference camera coordinate system, convert the three-dimensional coordinates of the lidar point cloud into the three-dimensional coordinates of the corresponding points in the reference camera coordinate system.
3. According to the projection equation of the camera, convert the three-dimensional coordinates of the corresponding sampling points in the reference camera coordinate system into two-dimensional image coordinates and disparities.
4. Using the two-dimensional coordinates and disparity of each sampling point in the reference camera image, find the initial two-dimensional position of that point in the other camera images.
5. Taking each initial position as the center, set a search neighborhood and locate the corresponding point in the other camera images to pixel-level or sub-pixel-level accuracy.
6. Calculate the precise disparity of each matched point from the refined matching result, and then convert it into the three-dimensional coordinates of the matched point.
7. Partition the images according to the initial matching result; the region bounded by adjacent corresponding matched points is a corresponding region to be matched.
8. Within each corresponding region to be matched, match the remaining feature points across the multiple camera images.
9. Output the fusion of the lidar point cloud and the three-dimensional point cloud formed by image feature point matching, both in the reference camera coordinate system.
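Step 5 of this process, the neighborhood search around the lidar-predicted position, can be sketched as follows. The sum-of-absolute-differences cost is a stand-in for whatever similarity measure an implementation would actually use, and the names are illustrative.

```python
import numpy as np

def refine_match(ref_patch, other_img, u0, v0, radius=3):
    """Search a small neighborhood around the predicted position
    (u0, v0) in another camera image and return the pixel whose
    surrounding patch best matches the reference patch."""
    h = ref_patch.shape[0] // 2
    best_cost, best_uv = None, (u0, v0)
    for dv in range(-radius, radius + 1):
        for du in range(-radius, radius + 1):
            u, v = u0 + du, v0 + dv
            patch = other_img[v - h:v + h + 1, u - h:u + h + 1]
            cost = np.abs(patch.astype(float) - ref_patch.astype(float)).sum()
            if best_cost is None or cost < best_cost:
                best_cost, best_uv = cost, (u, v)
    return best_uv
```

The refined position then yields the precise disparity used in step 6; a sub-pixel variant would interpolate the cost surface around the minimum instead of returning an integer pixel.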
[Application Example 3]
In this application example, the three-dimensional measurement system according to the present invention is implemented as a device for aerial photography based on a drone and an external camera module.
The camera module is mounted on the drone's on-board gimbal and comprises five cameras of the same model. The five cameras are arranged in a cross, with their centers in the same plane; the distances between adjacent camera centers in the longitudinal and lateral directions are equal, and the optical axes of the cameras are parallel to each other, face the same direction, and are all perpendicular to the plane of the camera centers. The camera module is connected to the drone's on-board computer via a data cable. The on-board computer can control the five cameras of the module to shoot synchronously (photos and/or video) and to zoom by the same factor.
The photos and/or videos captured by the camera module are transmitted to the on-board computer via the data cable. The on-board computer generates a rectification matrix from multiple images (photos, or frames of video) taken at the same time, and the images are rectified by this matrix. The on-board computer can display in real time the feature points and feature regions shared by the images of the five cameras, extract the corresponding matched feature point groups that satisfy the image coordinate symmetry relationship and have similar attributes, calculate the three-dimensional coordinates of the object points corresponding to the matched feature point groups, fuse them with color information, and output a true-color three-dimensional point cloud.
It will be readily understood that the equal signs in the formulas of the present invention all denote equality in the engineering sense, and a certain deviation is tolerable. That is, when the two sides of an equal sign differ within a certain range, they may be regarded as equal. The range of deviation is, for example, plus or minus 5%, or plus or minus 1%.
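A small helper makes this convention concrete: two quantities are treated as equal when their relative difference stays within the tolerated deviation. The function name and its default tolerance are illustrative.

```python
def engineering_equal(a, b, rel_tol=0.05):
    """'Equal in the engineering sense': true when the difference
    between the two sides is within a relative tolerance (the text
    suggests, for example, 5% or 1%)."""
    scale = max(abs(a), abs(b))
    if scale == 0:
        return True  # both sides exactly zero
    return abs(a - b) <= rel_tol * scale
```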
The above description is only a preferred embodiment of the present application and an explanation of the technical principles employed. Those skilled in the art should understand that the scope of the invention involved in the present application is not limited to technical solutions formed by the specific combination of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example technical solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the present application.
Claims (51)
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201711164746.6 | 2017-11-21 | ||
| CN201711164746.6A CN109813251B (en) | 2017-11-21 | 2017-11-21 | Method, device and system for three-dimensional measurement |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2019100933A1 true WO2019100933A1 (en) | 2019-05-31 |
Family
ID=66599669
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2018/114016 Ceased WO2019100933A1 (en) | 2017-11-21 | 2018-11-05 | Method, device and system for three-dimensional measurement |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN109813251B (en) |
| WO (1) | WO2019100933A1 (en) |
Families Citing this family (10)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110440712B (en) * | 2019-08-26 | 2021-03-12 | 英特维科技(苏州)有限公司 | Self-adaptive large-field-depth three-dimensional scanning method and system |
| CN111457859B (en) * | 2020-03-06 | 2022-12-09 | 奥比中光科技集团股份有限公司 | Alignment calibration method and system for 3D measuring device and computer readable storage medium |
| CN112129262B (en) * | 2020-09-01 | 2023-01-06 | 珠海一微半导体股份有限公司 | Visual ranging method and visual navigation chip of multi-camera group |
| CN112033352B (en) * | 2020-09-01 | 2023-11-07 | 珠海一微半导体股份有限公司 | Multi-camera ranging robot and visual ranging method |
| CN112505980A (en) * | 2020-12-11 | 2021-03-16 | 深圳博升光电科技有限公司 | Three-dimensional camera and 3D detection equipment |
| CN113503830B (en) * | 2021-07-05 | 2023-01-03 | 无锡维度投资管理合伙企业(有限合伙) | Aspheric surface shape measuring method based on multiple cameras |
| CN115317747B (en) * | 2022-07-28 | 2023-04-07 | 北京大学第三医院(北京大学第三临床医学院) | Automatic trachea cannula navigation method and computer equipment |
| CN118279402A (en) * | 2022-12-30 | 2024-07-02 | 比亚迪股份有限公司 | Calibration method and storage medium of panoramic surround view system, electronic device, and vehicle |
| CN116524160B (en) * | 2023-07-04 | 2023-09-01 | 应急管理部天津消防研究所 | Product consistency auxiliary verification system and method based on AR identification |
| CN119413076B (en) * | 2025-01-06 | 2025-04-01 | 西安爱德华测量设备股份有限公司 | Measurement system and measurement method of three-coordinate measuring machine based on image processing |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2002063240A1 (en) * | 2001-02-02 | 2002-08-15 | Snap-On Technologies Inc | Method and apparatus for mapping system calibration |
| CN101680756A (en) * | 2008-02-12 | 2010-03-24 | 松下电器产业株式会社 | Compound eye imaging device, distance measurement device, parallax calculation method and distance measurement method |
| US20110122228A1 (en) * | 2009-11-24 | 2011-05-26 | Omron Corporation | Three-dimensional visual sensor |
| CN104101293A (en) * | 2013-04-07 | 2014-10-15 | 鸿富锦精密工业(深圳)有限公司 | Measurement machine station coordinate system unification system and method |
| CN104897065A (en) * | 2015-06-09 | 2015-09-09 | 河海大学 | Measurement system for surface displacement field of shell structure |
Family Cites Families (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6859549B1 (en) * | 2000-06-07 | 2005-02-22 | Nec Laboratories America, Inc. | Method for recovering 3D scene structure and camera motion from points, lines and/or directly from the image intensities |
| KR101926563B1 (en) * | 2012-01-18 | 2018-12-07 | 삼성전자주식회사 | Method and apparatus for camera tracking |
| DE102012112322B4 (en) * | 2012-12-14 | 2015-11-05 | Faro Technologies, Inc. | Method for optically scanning and measuring an environment |
| CN103292710B (en) * | 2013-05-27 | 2016-01-06 | 华南理工大学 | A kind of distance measurement method applying binocular vision vision range finding principle |
| US10257494B2 (en) * | 2014-09-22 | 2019-04-09 | Samsung Electronics Co., Ltd. | Reconstruction of three-dimensional video |
| CN106813595B (en) * | 2017-03-20 | 2018-08-31 | 北京清影机器视觉技术有限公司 | Three-phase unit characteristic point matching method, measurement method and three-dimensional detection device |
- 2017-11-21: CN CN201711164746.6A patent/CN109813251B/en active Active
- 2018-11-05: WO PCT/CN2018/114016 patent/WO2019100933A1/en not_active Ceased
Cited By (25)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110706334B (en) * | 2019-09-26 | 2023-05-09 | 华南理工大学 | Three-dimensional reconstruction method for industrial part based on three-dimensional vision |
| CN110706334A (en) * | 2019-09-26 | 2020-01-17 | 华南理工大学 | Three-dimensional reconstruction method for industrial part based on trinocular vision |
| CN112016570A (en) * | 2019-12-12 | 2020-12-01 | 天目爱视(北京)科技有限公司 | Three-dimensional model generation method used in background plate synchronous rotation acquisition |
| CN112016570B (en) * | 2019-12-12 | 2023-12-26 | 天目爱视(北京)科技有限公司 | Three-dimensional model generation method for background plate synchronous rotation acquisition |
| CN113358020A (en) * | 2020-03-05 | 2021-09-07 | 青岛海尔工业智能研究院有限公司 | Machine vision detection system and method |
| CN111368745A (en) * | 2020-03-06 | 2020-07-03 | 上海眼控科技股份有限公司 | Frame number image generation method and device, computer equipment and storage medium |
| CN111612728A (en) * | 2020-05-25 | 2020-09-01 | 北京交通大学 | A 3D point cloud densification method and device based on binocular RGB images |
| CN112102404A (en) * | 2020-08-14 | 2020-12-18 | 青岛小鸟看看科技有限公司 | Object detection tracking method and device and head-mounted display equipment |
| CN112102404B (en) * | 2020-08-14 | 2024-04-30 | 青岛小鸟看看科技有限公司 | Object detection and tracking method, device and head mounted display device |
| CN112183436A (en) * | 2020-10-12 | 2021-01-05 | 南京工程学院 | Highway visibility detection method based on eight-neighborhood gray scale contrast of pixel points |
| CN112183436B (en) * | 2020-10-12 | 2023-11-07 | 南京工程学院 | Highway visibility detection method based on eight-neighbor gray contrast of pixels |
| CN112381874B (en) * | 2020-11-04 | 2023-12-12 | 北京大华旺达科技有限公司 | Calibration method and device based on machine vision |
| CN112381874A (en) * | 2020-11-04 | 2021-02-19 | 北京大华旺达科技有限公司 | Calibration method and device based on machine vision |
| CN113310420A (en) * | 2021-04-22 | 2021-08-27 | 中国工程物理研究院上海激光等离子体研究所 | Method for measuring distance between two targets through image |
| CN113487679A (en) * | 2021-06-29 | 2021-10-08 | 哈尔滨工程大学 | Visual ranging signal processing method for automatic focusing system of laser marking machine |
| CN113487686A (en) * | 2021-08-02 | 2021-10-08 | 固高科技股份有限公司 | Calibration method and device for multi-view camera, multi-view camera and storage medium |
| CN114087991A (en) * | 2021-11-28 | 2022-02-25 | 中国船舶重工集团公司第七一三研究所 | Underwater target measuring device and method based on line structured light |
| CN115082621B (en) * | 2022-06-21 | 2023-01-31 | 中国科学院半导体研究所 | Three-dimensional imaging method, device and system, electronic equipment and storage medium |
| CN115082621A (en) * | 2022-06-21 | 2022-09-20 | 中国科学院半导体研究所 | A three-dimensional imaging method, device, system, electronic device and storage medium |
| CN116503570A (en) * | 2023-06-29 | 2023-07-28 | 聚时科技(深圳)有限公司 | Image three-dimensional reconstruction method and related device |
| CN116503570B (en) * | 2023-06-29 | 2023-11-24 | 聚时科技(深圳)有限公司 | Three-dimensional reconstruction method and related device for image |
| CN117611752A (en) * | 2024-01-22 | 2024-02-27 | 卓世未来(成都)科技有限公司 | A method and system for generating 3D models of digital humans |
| CN117611752B (en) * | 2024-01-22 | 2024-04-02 | 卓世未来(成都)科技有限公司 | Method and system for generating 3D model of digital person |
| CN120151493A (en) * | 2025-05-09 | 2025-06-13 | 安徽玄视界控股有限责任公司 | A binocular vision naked eye 3D image generation method |
| CN120976450A (en) * | 2025-10-22 | 2025-11-18 | 长春工程学院 | Methods and Systems for 3D Reconstruction of Trees from High-Resolution Remote Sensing Imagery |
Also Published As
| Publication number | Publication date |
|---|---|
| CN109813251A (en) | 2019-05-28 |
| CN109813251B (en) | 2021-10-01 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2019100933A1 (en) | Method, device and system for three-dimensional measurement | |
| CN110036410B (en) | Apparatus and method for obtaining distance information from view | |
| CN110363838B (en) | Optimization method for 3D reconstruction of large field of view images based on multi-spherical camera model | |
| US20180051982A1 (en) | Object-point three-dimensional measuring system using multi-camera array, and measuring method | |
| CN106033614B (en) | A kind of mobile camera motion object detection method under strong parallax | |
| US11348271B2 (en) | Image processing device and three-dimensional measuring system | |
| KR102801383B1 (en) | Image restoration method and device | |
| US11803982B2 (en) | Image processing device and three-dimensional measuring system | |
| JP7489253B2 (en) | Depth map generating device and program thereof, and depth map generating system | |
| CN106170086B (en) | Method and device thereof, the system of drawing three-dimensional image | |
| CN107809610A (en) | Camera parameter set calculating apparatus, camera parameter set calculation method and program | |
| CN103873773B (en) | Primary-auxiliary synergy double light path design-based omnidirectional imaging method | |
| CN108805921A (en) | Image-taking system and method | |
| KR20190108721A (en) | Multi-view capturing apparatus and method using single 360-degree camera and planar mirrors | |
| CN116258759B (en) | A stereo matching method, device and equipment | |
| CN114332373B (en) | Magnetic circuit fall detection method and system for overcoming reflection of metal surface of relay | |
| CN112804515A (en) | Omnidirectional stereoscopic vision camera configuration system and camera configuration method | |
| CN110708532A (en) | Universal light field unit image generation method and system | |
| CN110827230A (en) | Method and device for improving RGB image quality by TOF | |
| CN212163540U (en) | Omnidirectional stereoscopic vision camera configuration system | |
| Ye et al. | Iterative Closest Point Algorithm Based on Point Cloud Curvature and Density Characteristics | |
| JP2022183954A (en) | Information processing device, information processing method and information processing program | |
| CN119289899B (en) | Three-dimensional scanning device, three-dimensional scanning method, three-dimensional measuring device, system and method | |
| CN120510349B (en) | Space alignment method based on traditional camera and event camera combined camera device | |
| JP2006078291A (en) | Omnidirectional three-dimensional measuring device |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 18880684 Country of ref document: EP Kind code of ref document: A1 |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| 122 | Ep: pct application non-entry in european phase |
Ref document number: 18880684 Country of ref document: EP Kind code of ref document: A1 |
|
| 32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 09/02/2021) |
|