CN106408609A - Parallel mechanism end motion pose detection method based on binocular vision - Google Patents
Parallel mechanism end motion pose detection method based on binocular vision
- Publication number
- CN106408609A CN106408609A CN201610819289.9A CN201610819289A CN106408609A CN 106408609 A CN106408609 A CN 106408609A CN 201610819289 A CN201610819289 A CN 201610819289A CN 106408609 A CN106408609 A CN 106408609A
- Authority
- CN
- China
- Prior art keywords
- matching
- parallel mechanism
- point
- points
- model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20021—Dividing image into blocks, subimages or windows
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20172—Image enhancement details
- G06T2207/20182—Noise reduction or smoothing in the temporal domain; Spatio-temporal filtering
Abstract
The invention discloses a binocular-vision-based method for detecting the end motion pose of a parallel mechanism. First, the captured images of the parallel mechanism undergo wavelet-transform-based denoising preprocessing. Then, the Harris-SIFT algorithm performs feature matching on the images: feature points are extracted with the Harris operator and matched with SIFT feature descriptors. Next, a new purification algorithm refines the matching results by taking sample points block by block and verifying a temporary model with an extra point taken in advance. Finally, the purified end feature-point pairs are substituted into the binocular vision model, and the three-dimensional pose of the mechanism end is obtained through coordinate transformation. The method greatly reduces image-processing time, and the proposed purification algorithm further raises the matching accuracy, so that both the real-time performance and the precision of end pose detection are improved.
Description
Technical Field
The invention relates to a machine-vision-based pose detection system, in particular to a binocular-vision-based method for detecting the end motion pose of a parallel mechanism.
Background Art
In parallel-mechanism control, the end motion pose is an important parameter reflecting the motion state of the mechanism. Measuring it accurately avoids the errors introduced by solving the kinematic model and thus facilitates high-performance control of the mechanism. Compared with other detection means, machine vision is non-contact, widely applicable, and cost-effective; it is especially suitable for parallel mechanisms, whose ends have multiple degrees of freedom, follow complex trajectories, and are difficult to measure directly. Visual pose detection systems are usually divided into monocular, binocular, and multi-camera systems. Binocular algorithms are more robust and accurate than monocular ones while remaining considerably faster than multi-camera vision, so binocular vision is the most widely used. For end pose detection of a parallel mechanism, the difficulty of a binocular approach lies in processing the captured images quickly and accurately enough to recover the end pose information. Stereo matching is the most critical stage: its speed and accuracy directly determine those of the whole pose detection system. However, binocular algorithms are complex, and external factors such as illumination changes, background reflections, and noise make accurate image matching difficult, degrading both the speed and the precision of the visual detection system.
The paper "A Novel Stereo Vision Detection System for Parallel Robot Pose" (Wu Difei, Ding Yongsheng, et al. Computer Engineering and Applications, 2007, 43(33):190-192) built a binocular detection system based on the scale-invariant feature transform (SIFT) for parallel-robot pose detection and demonstrated its robustness and feasibility in simulation. However, the SIFT matching stage generates many high-dimensional feature vectors, increasing algorithmic complexity and hurting real-time performance. The paper "Harris-SIFT Algorithm and Its Application in Binocular Stereo Vision" (Zhao Qinjun, Zhao Dongbiao, et al. Journal of University of Electronic Science and Technology of China, 2010, 39(4):546-550) combined Harris saliency with SIFT descriptors into a Harris-SIFT algorithm for binocular matching and verified its effectiveness experimentally. The method improves the real-time performance of matching, but because Harris corners carry less information than SIFT feature points, the Harris-SIFT algorithm still suffers from false and erroneous matches.
To address the mismatches of the Harris-SIFT algorithm, a purification algorithm can be applied to the matching results. Conventional purification algorithms compute a target model of the matched point pairs from a data set containing outliers and then test the model with the remaining points to obtain valid samples; the key is therefore to find the correct target model quickly. Obtaining that model raises two common problems: when a random sample is drawn, two candidate points may lie so close together that they are effectively one point, making the estimated target model inaccurate; and every randomly drawn sample set requires a search for the support set of its candidate model, so that too much time is spent searching for support sets in data with many erroneous observations.
Summary of the Invention
In view of the above problems of binocular vision detection, the present invention proposes a binocular-vision-based method for detecting the end motion pose of a parallel mechanism. The method introduces a new matching scheme: image feature points are matched with a fast Harris-SIFT algorithm, and a new purification algorithm refines the matching results. This resolves the poor real-time performance and low precision of detection caused by the complexity of traditional binocular algorithms and the difficulty of accurate spatial matching.
The technical solution adopted by the invention comprises the following steps.
A binocular-vision-based method for detecting the end motion pose of a parallel mechanism comprises the following steps:
Step 1) Capture original images of the parallel mechanism with a binocular camera, and apply wavelet-transform-based denoising preprocessing to the captured mechanism images.
Step 2) Extract image features of the mechanism with a Harris-operator corner extraction method.
Step 3) Perform feature matching on the parallel-mechanism images with the Harris-SIFT algorithm: feature points are first extracted with the Harris operator and then matched using SIFT feature descriptors.
Step 4) To remedy the false and erroneous matches of the Harris-SIFT algorithm, further purify the matching results with a new purification algorithm, improved by block-wise point sampling and early verification of a temporary model. Block-wise sampling: the image is divided into blocks and one matching point is taken at random from each block. Early verification: 9 matching pairs are selected at random; 8 of them determine the parameters of a temporary model, and the 9th verifies its correctness, so that the image model is determined quickly.
Step 5) According to the binocular vision principle, compute the three-dimensional coordinates of the end feature points extracted in the preceding image processing, compute the end attitude angles of the parallel mechanism, and finally obtain the end pose of the parallel mechanism.
Further, in step 2), the Harris corner extraction can be implemented directly with an OpenCV library function. In Harris feature-point detection, the second-order moment matrix of a pixel can be expressed as

M = [ Ix^2   Ixy  ]
    [ Ixy    Iy^2 ]

where Ix and Iy denote the gradients of the pixel's grey level in the x and y directions and Ixy is the second-order mixed partial derivative. The Harris response function of the pixel is

R = det M - k(trace M)^2

where det M is the determinant of matrix M, trace M is its trace, and k is an empirical constant, usually between 0.04 and 0.06. A threshold t is set and the response R is computed with the Harris operator above; if R < t, the point is taken as a feature point.
Further, in step 4), the purification algorithm computes a target model of the matched point pairs from a data set containing outliers, then tests the model with the remaining points to obtain valid data samples. The specific steps are:
4.1) According to the coordinate bounds of the matching points in the first image, divide the image evenly into b×b blocks (b > 3); randomly select 9 distinct blocks and one matching point in each, forming a random sample set S of 9 matched pairs.
4.2) Randomly select 8 matched pairs from S and compute a temporary candidate model F.
4.3) Check whether the 9th point belongs to the support set of the temporary candidate model F. If it does, F becomes the candidate model; otherwise reselect 9 matched pairs and repeat 4.1) and 4.2).
4.4) Test all matched pairs against the candidate model F with threshold k, obtaining the number m of support-set pairs of the candidate model.
4.5) If m ≥ threshold t, the target model F is obtained; otherwise reselect 9 matched pairs and repeat 4.1) to 4.4).
4.6) Optimise the target model F to finally determine the target model H.
4.7) Each time a random sample set is selected, a counter is incremented; if no model parameters are found after k repetitions, the procedure terminates.
Further, in step 5), the three-dimensional coordinates (X, Y, Z) of the end feature point of the parallel mechanism are obtained from the projection theorem, and the end attitude angles are then obtained from the camera rotation matrix R. The end position of the parallel mechanism is

Pw = (A^T A)^-1 A^T B

where (u1, v1) and (u2, v2) are the image coordinates of the projections of the end feature point in the left and right cameras, and m_ij^l, m_ij^r denote row i, column j of the left and right projection matrices.
At the same time, the end attitude angles of the parallel mechanism are

α = arctan(R32 / R33),  β = arctan(-R31 / sqrt(R32^2 + R33^2)),  γ = arctan(R21 / R11)

where Rij (i = 1, 2, 3; j = 1, 2, 3) denotes row i, column j of the rotation matrix R; α is the pitch angle about the X axis, β the roll angle about the Y axis, and γ the heading angle about the Z axis, all in rad.
Further, step 5) also comprises: for three-dimensional reconstruction, the camera parameters of the binocular vision model must first be obtained; they can be acquired through camera calibration. Zhang Zhengyou's planar calibration method is used to calibrate the cameras of the binocular vision system.
Further, the procedure of Zhang Zhengyou's planar calibration method is: first fix the camera and photograph the calibration board; input the board image into the computer and obtain the pixel coordinates of every grid point on the board; load the board's known three-dimensional coordinates into the computer; and substitute the pixel and three-dimensional coordinates of the board into the calibration model to solve for the camera's intrinsic and extrinsic parameters, the left and right cameras being calibrated separately.
By adopting the above technical solution, the binocular-vision-based method for detecting the end motion pose of a parallel mechanism proposed by the invention has the following beneficial effects:
1. In the stereo matching stage, the invention uses the Harris-SIFT algorithm: image feature points are extracted with the computationally simple Harris operator and then matched with SIFT feature descriptors, so that the matching results are both real-time and stable.
2. To address the erroneous and false matches of the matching algorithm, the invention proposes a new purification algorithm: block-wise point sampling yields more accurate image-model parameters, and early verification of a temporary model with an extra point quickly determines the image model. This purifies the Harris-SIFT matching results and removes its mismatched points, raising the matching accuracy while preserving real-time performance, and thereby improving the precision of the three-dimensional end pose detection.
Brief Description of the Drawings
The invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
Fig. 1 is a flow chart of the proposed binocular-vision-based end pose detection system for a parallel mechanism.
Fig. 2 shows the denoising effect of the wavelet-transform method on a parallel-mechanism image, where a is the noisy image and b is the denoised image.
Fig. 3 shows the extracted Harris feature points.
Fig. 4 shows the stereo matching of feature points based on the Harris-SIFT algorithm.
Fig. 5 shows the matching results after purification.
Fig. 6 is the binocular vision imaging schematic of the method.
Fig. 7 shows the fixed and moving coordinate systems established on the mechanism platform.
Fig. 8 shows the tracking errors of the end pose parameters obtained by comparing the experimental results of the method with laser rangefinder and electronic compass measurements.
Detailed Description
The invention provides a binocular-vision-based method for detecting the end pose of a parallel mechanism. To solve the poor real-time performance of binocular matching, the Harris-SIFT algorithm is used for stereo matching; to address the false and erroneous matches of Harris-SIFT, a purification algorithm is proposed to remove its mismatched points, raising matching accuracy while preserving real-time performance, so that both the real-time performance and the precision of pose detection improve. The specific steps are as follows.
1. Referring to Fig. 1, original images of the parallel mechanism are captured by the left and right binocular cameras. The cameras acquire image information of the mechanism in real time and should be positioned so that the end of the mechanism stays within the field of view throughout its motion.
2. Referring to Figs. 1 and 2, the original images captured by the left and right cameras undergo wavelet-transform-based denoising preprocessing. The human-machine interface of the detection system is developed with the Visual Studio IDE, programmed in C/C++, and combined with OpenCV (the open-source cross-platform computer vision library) to implement the functions of each module. Denoising preprocesses the original mechanism image by wavelet threshold denoising (WaveDec()): a threshold (g_HighPassFilterValue) is set, the high-frequency wavelet coefficients are thresholded, and the image is finally reconstructed by inverse wavelet transform (WaveRec()).
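The wavelet-threshold step above can be sketched in a few lines. The text's implementation uses C++/OpenCV routines referred to as WaveDec()/WaveRec(); the pure-Python single-level Haar transform, the threshold value, and the toy signal below are illustrative assumptions, not the patent's code.

```python
# Single-level 1-D Haar wavelet denoising: decompose, hard-threshold the
# high-frequency (detail) coefficients, reconstruct. (Illustrative sketch;
# a real image pipeline would run this per row/column over several levels.)

def haar_decompose(row):
    """One level of a 1-D Haar transform: returns (approximation, detail)."""
    approx = [(row[2*i] + row[2*i+1]) / 2 for i in range(len(row) // 2)]
    detail = [(row[2*i] - row[2*i+1]) / 2 for i in range(len(row) // 2)]
    return approx, detail

def haar_reconstruct(approx, detail):
    row = []
    for a, d in zip(approx, detail):
        row += [a + d, a - d]
    return row

def denoise_row(row, threshold):
    """Zero out small detail coefficients, then rebuild the signal."""
    approx, detail = haar_decompose(row)
    detail = [d if abs(d) > threshold else 0.0 for d in detail]
    return haar_reconstruct(approx, detail)

# A flat signal with one small noise spike: thresholding smooths the spike.
noisy = [10.0, 10.0, 10.0, 10.4, 10.0, 10.0, 10.0, 10.0]
clean = denoise_row(noisy, threshold=0.5)
```

Without thresholding, the decompose/reconstruct pair is a lossless round trip; with it, only sub-threshold high-frequency detail is discarded.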
3. Referring to Figs. 1 and 3, image features of the mechanism are extracted with the Harris-operator corner extraction method. To guarantee fast and stable feature extraction, note that the parallel-mechanism image is rigid and contains many corners, and that Harris corner detection is not only simple and stable to compute but also insensitive to illumination and rigid geometric deformation; the Harris corner detection algorithm is therefore used to extract feature points from the mechanism images. Harris feature-point extraction can be implemented directly with the OpenCV library function cornerHarris(). In Harris feature-point detection, the second-order moment matrix of a pixel can be expressed as

M = [ Ix^2   Ixy  ]
    [ Ixy    Iy^2 ]                                                  (1)

where Ix and Iy denote the gradients of the pixel's grey level in the x and y directions and Ixy is the second-order mixed partial derivative. Therefore, the Harris response function of the pixel is

R = det M - k(trace M)^2                                             (2)

where det M is the determinant of matrix M, trace M is its trace, and k is an empirical constant, usually between 0.04 and 0.06. A threshold t is set and the response R is computed with the Harris operator above; if R < t, the point is taken as a feature point.
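The response R = det M - k(trace M)^2 can be illustrated numerically. The patent calls OpenCV's cornerHarris(); the plain 3×3 window sum, the synthetic image, and the probe positions below are simplifying assumptions made for the sketch.

```python
# Minimal Harris-response computation: central-difference gradients Ix, Iy,
# second-moment matrix M summed over a 3x3 window, then R = det - k*trace^2.

def harris_response(img, x, y, k=0.04):
    Sxx = Syy = Sxy = 0.0
    for j in range(y - 1, y + 2):
        for i in range(x - 1, x + 2):
            ix = (img[j][i+1] - img[j][i-1]) / 2.0  # gradient in x
            iy = (img[j+1][i] - img[j-1][i]) / 2.0  # gradient in y
            Sxx += ix * ix
            Syy += iy * iy
            Sxy += ix * iy
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    return det - k * trace * trace

# Bright square on a dark background: its corner responds far more strongly
# than a flat region.
img = [[0] * 8 for _ in range(8)]
for j in range(3, 8):
    for i in range(3, 8):
        img[j][i] = 10
r_corner = harris_response(img, 3, 3)  # at the square's corner
r_flat = harris_response(img, 1, 1)    # in the flat background
```

At the corner both eigenvalues of M are large, so det dominates and R is large and positive; in the flat region all gradients vanish and R is zero.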
4. Referring to Figs. 1 and 4, stereo matching of feature points is implemented with the Harris-SIFT algorithm. Harris feature points replace the extrema of SIFT; a dominant orientation is defined for each feature point, feature-vector descriptors are generated from the points, and matching is performed by the Euclidean distance of the SIFT matching algorithm. Organically combining Harris corner detection with the SIFT scale-invariant feature transform gives the resulting feature points the common characteristics of both algorithms, improving not only the real-time performance of the SIFT operator but also the stability of the feature points. Feature matching based on Harris-SIFT must be implemented in code; the program uses the following formulas:

D(x) = D + (∂D/∂x)^T Δx + (1/2) Δx^T (∂²D/∂x²) Δx                    (3)

Δx̂ = -(∂²D/∂x²)^-1 (∂D/∂x)                                           (4)

Tr(H)² / Det(H) < (r+1)² / r                                          (5)

H = [ Dxx  Dxy ]
    [ Dxy  Dyy ]                                                      (6)

m(x,y) = sqrt( (L(x+1,y) - L(x-1,y))² + (L(x,y+1) - L(x,y-1))² ),
θ(x,y) = arctan( (L(x,y+1) - L(x,y-1)) / (L(x+1,y) - L(x-1,y)) )      (7)

Here formula (3) is the Taylor expansion of the DoG operator of feature point x in scale space, D(x) being the scale-space function and Δx the offset of x; formula (4) gives the offset of the refined feature-point position; formula (5) is the criterion for a non-edge point, with r a set threshold, H the 2×2 Hessian matrix, Tr(H) its trace and Det(H) its determinant; formula (6) is the 2×2 Hessian matrix, where Dxx, Dyy and Dxy denote the second-order partial derivatives of the feature point's DoG operator along the x, y and xy directions; formula (7) gives the Gaussian gradient magnitude m(x,y) and orientation θ(x,y) at the feature point, with L(x,y) the scale-space value at the feature point.
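The descriptor-matching step (nearest neighbour by Euclidean distance) can be sketched as below. The 4-D descriptors, the 0.8 ratio and the demo data are illustrative assumptions; real SIFT descriptors are 128-D, and the source does not state which acceptance test it uses.

```python
# Match descriptors between the left and right images by Euclidean distance,
# accepting a match only if its best distance is clearly smaller than the
# second-best (a ratio test, as commonly used with SIFT descriptors).
import math

def match_descriptors(desc_left, desc_right, ratio=0.8):
    matches = []
    for i, d1 in enumerate(desc_left):
        dists = sorted((math.dist(d1, d2), j) for j, d2 in enumerate(desc_right))
        best, second = dists[0], dists[1]
        if best[0] < ratio * second[0]:
            matches.append((i, best[1]))  # (left index, right index)
    return matches

left = [[1.0, 0.0, 0.0, 0.0],
        [0.0, 1.0, 0.0, 0.0],
        [0.5, 0.5, 0.0, 0.0]]   # ambiguous descriptor: should be rejected
right = [[0.95, 0.05, 0.0, 0.0],
         [0.1, 0.9, 0.0, 0.0]]
matches = match_descriptors(left, right)
```

The first two left descriptors match their obvious counterparts; the third is nearly equidistant from both right descriptors, so the ratio test discards it rather than risk a false match.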
5. Referring to Figs. 1 and 5, the matching results are purified with the new purification algorithm and mismatched pairs are removed. To remedy the false and erroneous matches of the Harris-SIFT algorithm, the invention further purifies the matching results so as to remove its mismatched points, raising matching accuracy while preserving real-time performance and thereby improving the precision of the three-dimensional end pose detection. During purification, the key is to find the correct target model quickly; the target model is obtained by optimising the candidate model with the largest number of inliers. To compute candidate-model parameters quickly, the invention randomly selects 9 matched pairs, of which 8 determine the temporary model parameters while the 9th verifies the correctness of the temporary model, so that the target model is determined quickly. At the same time, the invention divides the image into blocks and takes one matching point at random from each block, which avoids the inaccurate fundamental matrix that results when two matching points lie so close together that they are treated as one point. The specific steps for obtaining the target model are:
[1] According to the coordinate bounds of the matching points in the first image, divide the image evenly into b×b blocks (b > 3); randomly select 9 distinct blocks and one matching point in each, forming a random sample set S of 9 matched pairs.
[2] Randomly select 8 matched pairs from S and compute a temporary candidate model F.
[3] Check whether the 9th point belongs to the support set of the temporary candidate model F. If it does, F becomes the candidate model; otherwise reselect 9 pairs of matching points and repeat [1] and [2].
[4] Test all matched pairs against the candidate model F with threshold k, obtaining the number m of support-set pairs.
[5] If m ≥ threshold t, the target model F is obtained; otherwise reselect 9 pairs of matching points and repeat [1] to [4].
[6] Optimise the target model F to finally determine the target model H.
[7] Each time a random sample set is selected, a counter is incremented; if no model parameters are found after k repetitions, the procedure terminates.
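The control flow of steps [1] to [5] can be sketched as follows. The patent fits an 8-point image model (a fundamental-matrix-style estimate); here a simple mean-translation model stands in for it so the blocked sampling and early 9th-point verification stay visible. The block count, tolerances and demo data are illustrative assumptions.

```python
# Blocked-sampling purification: bucket matches into a b x b grid, draw one
# match from each of 9 distinct blocks, fit a temporary model on 8 of them,
# pre-verify it with the 9th, then count the full support set.
import random

def purify(matches, img_w, img_h, b=4, inlier_tol=2.0, support_thresh=8,
           max_iters=100, seed=0):
    rng = random.Random(seed)
    blocks = {}                       # step [1]: grid cells by left-image point
    for m in matches:
        (x, y), _ = m
        key = (min(int(x * b / img_w), b - 1), min(int(y * b / img_h), b - 1))
        blocks.setdefault(key, []).append(m)
    for _ in range(max_iters):
        if len(blocks) < 9:
            return None
        sample = [rng.choice(blocks[k]) for k in rng.sample(sorted(blocks), 9)]
        fit, probe = sample[:8], sample[8]
        # step [2]: temporary model (mean translation as a stand-in)
        dx = sum(r[0] - l[0] for l, r in fit) / 8.0
        dy = sum(r[1] - l[1] for l, r in fit) / 8.0
        def err(m):
            (lx, ly), (rx, ry) = m
            return abs(rx - lx - dx) + abs(ry - ly - dy)
        if err(probe) > inlier_tol:   # step [3]: early check with the 9th pair
            continue
        support = [m for m in matches if err(m) <= inlier_tol]
        if len(support) >= support_thresh:   # steps [4]-[5]
            return support
    return None

# 16 matches consistent with a (+5, +3) shift, plus two gross mismatches.
good = [((x, y), (x + 5, y + 3)) for x in range(0, 100, 25)
        for y in range(0, 100, 25)]
bad = [((10.0, 10.0), (90.0, 90.0)), ((70.0, 20.0), (0.0, 0.0))]
inliers = purify(good + bad, img_w=100, img_h=100)
```

A sample contaminated by a mismatch skews the temporary model, so the 9th pair fails the early check and the expensive full support count is skipped; only a clean sample survives, and the returned support set contains exactly the consistent matches.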
6、参见图1和图6,根据双目视觉原理实现三维重建,将前期图像处理提取到的末端匹配点对带入双目视觉模型,最终获得并联机构末端位姿信息。三维重建时,首先需要得到所建双目视觉模型的摄像机参数,该摄像机参数可通过摄像机标定获得,本发明采用张正友标定法对双目视觉系统进行摄像机标定,根据张正友平面标定流程:首先固定相机,然后对标定板进行拍照,再将标定板图像输入计算机并获取标定板上每个格点的像素坐标,接着把标定板的已知三维坐标载入计算机,通过将标定板的像素坐标和三维坐标代入标定模型,求解出摄像机的内外参数,经过左右摄像机的分别标定,再根据式(8)便可获得双目视觉系统的摄像机参数:6. Referring to Figure 1 and Figure 6, realize 3D reconstruction according to the binocular vision principle, bring the end matching point pairs extracted from the previous image processing into the binocular vision model, and finally obtain the terminal pose information of the parallel mechanism. During three-dimensional reconstruction, firstly, the camera parameters of the built binocular vision model need to be obtained. The camera parameters can be obtained through camera calibration. The present invention adopts the Zhang Zhengyou calibration method to perform camera calibration on the binocular vision system. According to the Zhang Zhengyou plane calibration process: first, fix the camera , then take pictures of the calibration plate, input the image of the calibration plate into the computer and obtain the pixel coordinates of each grid point on the calibration plate, and then load the known three-dimensional coordinates of the calibration plate into the computer, by combining the pixel coordinates of the calibration plate and the three-dimensional The coordinates are substituted into the calibration model, and the internal and external parameters of the camera are solved. After the calibration of the left and right cameras, the camera parameters of the binocular vision system can be obtained according to formula (8):
where Rl, Tl and Rr, Tr denote the rotation and translation matrices of the left and right cameras, respectively.
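Equation (8) itself is not reproduced in the extracted text. The sketch below uses the standard composition of stereo-rig extrinsics from the separately calibrated cameras, which is an assumption about the convention intended here:

```python
# Composing the stereo-rig extrinsics from the separately calibrated left and
# right cameras.  Assumes the common convention x_l = Rl*Xw + Tl and
# x_r = Rr*Xw + Tr, which gives x_r = R*x_l + T with R = Rr*Rl^T and
# T = Tr - R*Tl; the patent's own equation (8) is not reproduced in the text.

def transpose(A):
    return [list(row) for row in zip(*A)]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mat_vec(A, v):
    return [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]

def stereo_extrinsics(Rl, Tl, Rr, Tr):
    R = mat_mul(Rr, transpose(Rl))
    RTl = mat_vec(R, Tl)
    T = [Tr[i] - RTl[i] for i in range(3)]
    return R, T

# Toy example: both cameras axis-aligned, right camera offset 120 mm along -X.
I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
R, T = stereo_extrinsics(I3, [0.0, 0.0, 0.0], I3, [-120.0, 0.0, 0.0])
print(R, T)
```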
After camera calibration, the binocular vision projection matrices are available. Let the end feature point P of the parallel mechanism project to image points Pl and Pr in the left and right cameras, whose calibrated projection matrices are Ml and Mr respectively; the projection theorem then gives:
where sl and sr are scale factors. Expanding the above equations gives:
where (u1, v1, 1) and (u2, v2, 1) are the homogeneous coordinates of the projection points Pl and Pr respectively; (X, Y, Z, 1) is the homogeneous coordinate of the end feature point in the world coordinate system; and the subscripted entries denote the element in row i, column j of the left and right projection matrices.
Combining equations (10) and (11) yields the following system:
Letting A and B collect the coefficients and constants of the system above, the three-dimensional coordinates PW of the end position of the parallel mechanism are obtained by the least-squares method as:
Pw = (AᵀA)⁻¹AᵀB  (13)
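A minimal sketch of the closed-form least-squares solve above. The 4×3 coefficient matrix A would normally be assembled from the two projection equations; here a synthetic consistent system stands in for real calibration data so the expected solution is known:

```python
# Sketch of Pw = (A^T A)^-1 A^T B via normal equations, no external libraries.

def transpose(A):
    return [list(row) for row in zip(*A)]

def solve3(M, b):
    # Gaussian elimination with partial pivoting on a 3x3 system M x = b.
    aug = [row[:] + [bi] for row, bi in zip(M, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        for r in range(col + 1, 3):
            f = aug[r][col] / aug[col][col]
            for c in range(col, 4):
                aug[r][c] -= f * aug[col][c]
    x = [0.0, 0.0, 0.0]
    for r in range(2, -1, -1):
        x[r] = (aug[r][3] - sum(aug[r][c] * x[c] for c in range(r + 1, 3))) / aug[r][r]
    return x

def least_squares(A, B):
    At = transpose(A)
    AtA = [[sum(At[i][k] * A[k][j] for k in range(len(A))) for j in range(3)]
           for i in range(3)]
    AtB = [sum(At[i][k] * B[k] for k in range(len(B))) for i in range(3)]
    return solve3(AtA, AtB)   # x = (A^T A)^-1 A^T B

A = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]]   # synthetic 4x3 system
B = [1.0, 2.0, 3.0, 6.0]
Pw = least_squares(A, B)
print([round(v, 6) for v in Pw])  # [1.0, 2.0, 3.0]
```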
The rotation matrix is then expressed as in equation (14):
so that the end attitude angles of the parallel mechanism are given by equation (15):
where α is the pitch angle about the X axis, β the roll angle about the Y axis, and γ the yaw (heading) angle about the Z axis, all in rad.
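Equations (14)-(15) are not reproduced in the extracted text, so the sketch below assumes the common composition R = Rz(γ)·Ry(β)·Rx(α) and recovers the three angles with atan2; the convention is an assumption, not the patent's stated one:

```python
import math

def rot_zyx(alpha, beta, gamma):
    # Build R = Rz(gamma) * Ry(beta) * Rx(alpha).
    ca, sa = math.cos(alpha), math.sin(alpha)
    cb, sb = math.cos(beta), math.sin(beta)
    cg, sg = math.cos(gamma), math.sin(gamma)
    return [
        [cg * cb, cg * sb * sa - sg * ca, cg * sb * ca + sg * sa],
        [sg * cb, sg * sb * sa + cg * ca, sg * sb * ca - cg * sa],
        [-sb,     cb * sa,                cb * ca],
    ]

def euler_from_rot(R):
    alpha = math.atan2(R[2][1], R[2][2])                        # pitch about X
    beta  = math.atan2(-R[2][0], math.hypot(R[2][1], R[2][2]))  # roll about Y
    gamma = math.atan2(R[1][0], R[0][0])                        # yaw about Z
    return alpha, beta, gamma

# Round trip: build a rotation from known angles, then recover them.
R = rot_zyx(0.10, 0.20, 0.30)
print([round(a, 6) for a in euler_from_rot(R)])  # [0.1, 0.2, 0.3]
```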
This completes the solution of the end pose parameters of the parallel mechanism.
Embodiment
The present invention proposes a new binocular vision based method for detecting the end motion pose of a parallel mechanism, addressing the poor real-time performance and limited detection accuracy of conventional binocular vision algorithms in this task. Taking the end-effector of a novel 3-DOF parallel mechanism as the detection object, the method is implemented as follows:
1. Acquire the original images of the parallel mechanism. Images of the novel 3-DOF parallel mechanism are captured with a Microvision binocular camera, model MV-1300FM, fitted with an AFT-0814MP lens. The camera is mounted in front of the parallel mechanism, and its height above the ground and its angle are both freely adjustable. The detection system runs Windows 7 on an Intel(R) Core(TM)2 Duo processor at 2.66 GHz with 2 GB of RAM.
2. Denoise the original images with a wavelet-transform based method. To reduce the adverse effect of noise on end-pose detection, the original images are preprocessed with wavelet-based denoising, which effectively suppresses image noise ahead of the subsequent feature-extraction step; the denoising result is shown in Figure 2.
3. Extract the image features of the mechanism with the Harris corner detector. The novel 3-DOF parallel mechanism has pronounced rigid-body characteristics, and Harris feature detection offers high repeatability under rigid geometric deformation and brightness changes while remaining simple and stable, so the Harris corner detection algorithm is chosen for feature-point extraction. The Harris response of a pixel is:
R = det M - k(trace M)²  (16)
where det M is the determinant of the matrix M and trace M its trace; k is an empirical constant, typically in the range 0.04 to 0.06. A threshold t is set and the response R of each pixel is computed with the Harris operator above; a pixel whose response exceeds t is taken as a feature point. This embodiment uses the empirical constant k = 0.04 and performs the extraction directly by calling the OpenCV library function cornerHarris(); the feature points extracted from the novel 3-DOF parallel mechanism are shown in Figure 3.
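A minimal sketch of the Harris response R = det M - k(trace M)². For brevity one structure tensor is accumulated for a whole small patch rather than per pixel, and gradients are plain central differences; real use would go through cornerHarris() as the embodiment does:

```python
def harris_response(img, k=0.04):
    # Accumulate the structure tensor M over the patch interior, then
    # return R = det M - k * (trace M)^2.
    h, w = len(img), len(img[0])
    ixx = iyy = ixy = 0.0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y][x + 1] - img[y][x - 1]) / 2.0   # central difference
            gy = (img[y + 1][x] - img[y - 1][x]) / 2.0
            ixx += gx * gx
            iyy += gy * gy
            ixy += gx * gy
    det = ixx * iyy - ixy * ixy
    trace = ixx + iyy
    return det - k * trace * trace

# A patch containing a corner scores high; a uniform patch scores zero.
corner = [[255 if (x >= 4 and y >= 4) else 0 for x in range(8)] for y in range(8)]
flat = [[128] * 8 for _ in range(8)]
print(harris_response(corner) > 0, harris_response(flat))  # True 0.0
```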
4. Perform stereo matching of the feature points with the Harris-SIFT algorithm. In this embodiment the Harris-SIFT algorithm matches the image feature points of the novel 3-DOF parallel mechanism. The algorithm replaces the extreme points of SIFT with Harris feature points, locates the feature points precisely by fitting a three-dimensional quadratic function, and rejects noise-sensitive low-contrast points as well as hard-to-localize points on image edges, retaining only stable points as the SIFT feature points of the image. Harris-SIFT stereo matching proceeds in the following steps:
(1) Taylor-expand the DoG function at the feature point x:
where Δx is the offset of the feature point x. Since the first derivative of D(x) vanishes at an extremum of the DoG function, the offset Δx of the precise feature-point position is obtained as:
After several iterations the precise position and scale of the point are obtained; substituting them into equation (17) gives the DoG value at that point. A contrast threshold Tc is set, and low-contrast points whose absolute DoG value falls below Tc are rejected.
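The refinement above can be shown in one dimension: fit a quadratic through three neighbouring DoG samples and solve dD/dx = 0. SIFT performs this in three variables (x, y, scale); one variable is enough to illustrate the idea, and the sample values below are made up:

```python
def refine_1d(d_m1, d_0, d_p1):
    d1 = (d_p1 - d_m1) / 2.0            # first derivative, central difference
    d2 = d_p1 - 2.0 * d_0 + d_m1        # second derivative
    offset = -d1 / d2                   # delta_x = -(D'')^-1 * D'
    value = d_0 + 0.5 * d1 * offset     # interpolated extremum value, the
    return offset, value                # quantity compared against Tc

# Samples of D(x) = 1 - (x - 0.3)^2 at x = -1, 0, +1: the true peak is at 0.3.
offset, value = refine_1d(-0.69, 0.91, 0.51)
print(round(offset, 6), round(value, 6))  # 0.3 1.0
```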
Edge points are removed by comparing the principal curvatures at the DoG extremum with a threshold: DoG extrema lying on edges have a far larger principal-curvature ratio than non-edge points, so points whose principal-curvature ratio exceeds the threshold are discarded. Non-edge points satisfy the relation of equation (19):
where r is the preset threshold, taken as r = 10 in this embodiment, and H is the 2×2 Hessian matrix expressed as in equation (20):
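The edge test of equation (19) keeps a point only when trace(H)² / det(H) < (r+1)²/r, i.e. when its two principal curvatures have similar magnitudes. A small sketch with made-up curvature values:

```python
def is_edge_point(dxx, dyy, dxy, r=10.0):
    # H = [[dxx, dxy], [dxy, dyy]]; reject edge-like (elongated) extrema.
    tr = dxx + dyy
    det = dxx * dyy - dxy * dxy
    if det <= 0:                 # curvatures of opposite sign: reject outright
        return True
    return tr * tr / det >= (r + 1) ** 2 / r

print(is_edge_point(-5.0, -4.0, 0.0))   # False: blob-like point, kept
print(is_edge_point(-50.0, -0.5, 0.0))  # True: one dominant curvature, rejected
```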
(2) Determine the feature-point orientation. Let the gradient magnitude at a point L(x, y) be m(x, y) and its direction θ(x, y); the Gaussian gradient magnitude and direction are then given by equation (21):
The gradient distribution of the pixels in the keypoint's neighbourhood is accumulated into a histogram, and the main peak of the histogram is taken as the keypoint's orientation. Each feature point now carries three essential pieces of information: position, scale, and orientation.
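A sketch of this step: the gradient magnitude and orientation of equation (21) via central differences, followed by a 36-bin orientation histogram whose peak gives the keypoint orientation. The ramp image is a toy stand-in for a Gaussian-smoothed L:

```python
import math

def grad_mag_ori(L, x, y):
    # m(x,y) and theta(x,y) from central differences.
    dx = L[y][x + 1] - L[y][x - 1]
    dy = L[y + 1][x] - L[y - 1][x]
    return math.hypot(dx, dy), math.atan2(dy, dx)

# Intensity grows left to right, so every gradient points along +x.
L = [[float(x * 10) for x in range(5)] for y in range(5)]
m, theta = grad_mag_ori(L, 2, 2)

bins = [0.0] * 36                      # 10-degree orientation bins
for yy in range(1, 4):
    for xx in range(1, 4):
        mag, ang = grad_mag_ori(L, xx, yy)
        b = int(((ang % (2 * math.pi)) / (2 * math.pi)) * 36) % 36
        bins[b] += mag                 # magnitude-weighted histogram

print(m, theta, bins.index(max(bins)))  # 20.0 0.0 0
```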
(3) Generate the feature descriptor. To generate a descriptor, the coordinate axes are rotated to the feature-point orientation, the 16×16 pixel neighbourhood of the feature point is divided evenly into sixteen 4×4 sub-regions, and an 8-direction gradient histogram is computed over each sub-region. Every feature point thus yields 16×8 = 128 values forming a 128-dimensional SIFT feature vector; normalizing the vector to unit length further reduces the influence of illumination changes.
(4) Once the feature descriptors are generated, the feature points are matched by the Euclidean distance of the SIFT matching algorithm. The procedure is: first, compute the Euclidean distance between a feature point p in the left image and every feature point of the right image to be matched; then take the two right-image points p1 and p2 at the nearest and second-nearest Euclidean distances from p; finally, compute the ratio of the nearest to the second-nearest distance and compare it with a threshold. If the ratio is below the threshold the match succeeds and (p, p1) is a matching point pair of the image sequence; otherwise the match fails. This embodiment sets the threshold to 0.5; the matching result for the novel 3-DOF parallel mechanism with the Harris-SIFT algorithm is shown in Figure 4.
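The nearest/second-nearest ratio test described above can be sketched on toy 2-D descriptors (real SIFT descriptors are 128-dimensional; the descriptor values here are made up):

```python
import math

def euclid(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def ratio_match(desc_left, descs_right, ratio=0.5):
    # Sort right-image descriptors by distance to the left descriptor.
    dists = sorted((euclid(desc_left, d), i) for i, d in enumerate(descs_right))
    best, second = dists[0], dists[1]
    if second[0] > 0 and best[0] / second[0] < ratio:
        return best[1]    # index of the accepted right-image match
    return None           # ambiguous nearest neighbour: match rejected

right = [[0.0, 0.1], [5.0, 5.0], [9.0, 1.0]]
print(ratio_match([0.0, 0.0], right))  # 0: clear winner, accepted
print(ratio_match([7.0, 3.0], right))  # None: two equally near candidates
```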
5. Remove the mismatched points from the matching result with the purification algorithm proposed by the present invention. Purification computes a target model for the matching pairs from a data set that contains outliers and then tests the model with the remaining points, yielding a valid data sample. To find candidate model parameters quickly, the image is divided evenly into 16 blocks according to the coordinate bounds of the matching points in the left image; 9 mutually distinct blocks are chosen at random, and one matching point is drawn at random from each. Of these 9 matching pairs, 8 randomly chosen pairs are used to compute the temporary model parameters, while the 9th pair verifies the correctness of the temporary model, so that the target model is determined quickly. The purified matching result is shown in Figure 5; the selection of the target model H and the Sampson distance are expressed as follows:
(1) Selecting the target model H between the two images: 8 matching point pairs are chosen at random by the block-sampling scheme above to compute a temporary candidate model F, and the Sampson distance d of each matching pair under F is computed. A threshold k is set and points satisfying d < k are taken as inliers; the candidate model F with the most inliers is further optimized into the target model H, which is finally used to re-estimate the matching points and reject mismatched pairs. The projective transformation model is expressed as in equation (22):
where (x, y, 1) and (x', y', 1) are the homogeneous coordinates of the matching points m(x, y) and m(x', y') respectively.
(2) Expressing the Sampson distance d: for any matching point pair m(x, y) and m(x', y'), the Sampson distance d under the candidate model F can be written as:
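Since equation (23) is not reproduced in the extracted text, the sketch below uses the standard first-order Sampson distance for an epipolar model, (x'ᵀFx)² divided by the sum of squared epipolar-line coefficients; the F used is the fundamental matrix of an ideal rectified rig, a toy stand-in for an estimated model:

```python
def sampson_distance(F, x, xp):
    # x, xp: homogeneous pixel coordinates (u, v, 1) in the left/right image.
    Fx  = [sum(F[i][k] * x[k]  for k in range(3)) for i in range(3)]   # F x
    Ftx = [sum(F[k][i] * xp[k] for k in range(3)) for i in range(3)]   # F^T x'
    num = sum(xp[i] * Fx[i] for i in range(3)) ** 2                    # (x'^T F x)^2
    den = Fx[0] ** 2 + Fx[1] ** 2 + Ftx[0] ** 2 + Ftx[1] ** 2
    return num / den

F = [[0, 0, 0], [0, 0, -1], [0, 1, 0]]   # ideal rectified (horizontal) rig
good = sampson_distance(F, (100, 50, 1), (80, 50, 1))  # same scanline
bad  = sampson_distance(F, (100, 50, 1), (80, 58, 1))  # 8 px vertical error
print(good, bad)  # 0.0 32.0
```

A pair that satisfies the epipolar constraint exactly scores zero, so thresholding d directly separates inliers from mismatches.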
6. Solve the end motion pose of the novel 3-DOF parallel mechanism. According to the motion characteristics of the mechanism, the fixed and moving coordinate systems of the platform are set up as shown in Figure 7, with {B} = {O-XYZ} the fixed frame and {T} = {O'-X'Y'Z'} the moving frame; the change of the end pose of the parallel mechanism is then expressed as the pose change of the moving-platform centre point O'. This embodiment calibrates the cameras of the binocular vision system with Zhang Zhengyou's method, and the end position of the novel 3-DOF parallel mechanism is obtained from the preceding image processing and the subsequent coordinate transformation as:
Pw = (AᵀA)⁻¹AᵀB  (24)
where (u1, v1) and (u2, v2) are the image coordinates of the end feature point projected into the left and right cameras, and the subscripted entries denote the element in row i, column j of the left and right projection matrices.
Meanwhile, the end attitude angle of the novel 3-DOF parallel mechanism is given by equation (25):
Since this parallel mechanism moves with three degrees of freedom, its translational displacements along the Y and Z axes and its roll angle β about the Y axis can be obtained; the pose parameters of the mechanism are therefore (Y, Z, β).
In this embodiment, 12 images are captured at fixed intervals, the three-dimensional pose parameters of the mechanism are computed for each instant, and the results are compared with the actual end poses measured by an INR-Laser Scanner H laser rangefinder and a Honeywell HMR3100 electronic compass; the tracking errors are shown in Figure 8. Figure 8 shows comparatively large tracking errors for measurement points P1 and P7 in the Y direction and for P4 and P10 in the Z direction. This is related to the symmetry and trajectory of the mechanism: at these measurement points the direction of motion changes, producing large variations in velocity and acceleration that, together with the mechanism's inertia, enlarge the error. The measured end-pose motion of the parallel mechanism is nevertheless consistent with its actual motion. From the tracking errors, the maximum deviations over all measurement points are 0.622 mm in the Y direction, 0.782 mm in the Z direction, and 0.677° in the rotation angle β, showing that the detection achieves high accuracy. The experimental results demonstrate that the binocular vision based pose detection method proposed by the present invention achieves effective real-time detection of the end motion pose of a parallel mechanism.
In summary, the binocular vision based method of the present invention for detecting the end motion pose of a parallel mechanism first preprocesses the captured images of the parallel mechanism with wavelet-transform denoising, removing the influence of image noise on the subsequent feature extraction. The Harris-SIFT algorithm then performs feature matching on the mechanism images: unlike a conventional matching algorithm, it first extracts image feature points with the Harris operator and then matches them with SIFT feature descriptors, so the matching result is both fast and stable. To raise the matching accuracy further, a new purification algorithm refines the matching result; by sampling points block-wise and verifying the temporary model with an extra point drawn in advance, it resolves the time-consuming target-model acquisition and inaccurate model solving of existing purification algorithms. Finally, the purified end feature-point pairs of the parallel mechanism are substituted into the binocular vision model, and the three-dimensional end pose of the mechanism is obtained through coordinate transformation. Because the matching stage extracts feature points with the Harris operator before matching them with SIFT descriptors, the image-processing time is greatly reduced, and purifying the matching result with the proposed algorithm further raises the matching accuracy, so both the real-time performance and the precision of end-pose detection of the parallel mechanism are improved.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "illustrative embodiment", "example", "specific example", or "some examples" means that a particular feature, structure, material, or characteristic described in connection with that embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic expressions of these terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the present invention have been shown and described, those of ordinary skill in the art will understand that various changes, modifications, substitutions, and variations can be made to these embodiments without departing from the principle and spirit of the present invention; the scope of the invention is defined by the claims and their equivalents.
Claims (6)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201610819289.9A CN106408609B (en) | 2016-09-13 | 2016-09-13 | A kind of parallel institution end movement position and posture detection method based on binocular vision |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN106408609A true CN106408609A (en) | 2017-02-15 |
| CN106408609B CN106408609B (en) | 2019-05-31 |
Family
ID=57999227
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | C06 | Publication | |
| | PB01 | Publication | |
| | C10 | Entry into substantive examination | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |
| | CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20190531; Termination date: 20190913 |